text
stringlengths
100
500k
subset
stringclasses
4 values
Why isn't interstellar medium pulled in towards nearest massive object? How does the ISM resist gravity? That's the only force acting on it, and all other particles seem to collect together to form stars. What makes the ISM so special among other particles? interstellar-medium interstellar $\begingroup$ Why do you think that the ISM resists gravity? $\endgroup$ – Walter It is not true that the particles in the interstellar medium (ISM) are only acted upon by gravity. For instance, In many cases a significant part of the ISM is ionized, in which case it interacts with magnetic field which permeates the gas and may in some cases be quite strong. In the vicinity of massive and hence luminous stars, radiation pressure may exert a strong force on the ISM. They also emit copious amounts of cosmic rays (i.e. relativistic particles) that transfer momentum to the surrounding gas. Supernova explosions create hot bubbles that expand and sweep through the ISM, resulting in shock waves and galactic outflows. In most cases, however, what may prevent a gas cloud from collapsing is simply its temperature. Despite all the above processes, and despite gravity being the weakest force, gas clouds do sometimes collapse to form stars. The criterion for doing so is that the gas is dense enough, and that its internal pressure (or thermal energy) is weak enough. This is described by Jeans instability, which formulates the criterion for a cloud of gas to collapse through equating pressure forces, or thermal energy, to gravity. One way to express this is the Jeans mass $M_J$ (Jeans 1902) which is the critical mass of a cloud where thermal energy is exactly balanced by gravitational forces: $$ M_J = \rho \left( \frac{\pi k_B T}{4 \mu m_\mathrm{u} G \rho} \right)^{3/2} \\ \propto \frac{T^{3/2}}{\rho^{1/2}}. $$ Here, $k_B$, $G$, and $m_u$ are Boltzmannn's constant, the gravitational constant, and the atomic mass unit, while $T$, $\mu$, and $\rho$ are the temperature, the mean molecular mass, and the density of the gas. In the second line of the equation it is emphasized that $M_J$ increases with temperature, and decreases with density. In other words, if the gas is too hot, or too dilute, the total mass needed to collapse must be higher. In general, gas will not collapse to form stars if the temperature is above some $10^4\,\mathrm{K}$. If the temperature is higher, the particles simply move too fast. Since various processes may easily heat the ISM to millions of degrees, the gas has to cool before it can collapse. On way to do this is by cooling radiation: Fast-moving atoms collide (either with each other or, more often, with electrons). Some of the kinetic energy of the atoms is spent exciting their electrons to higher levels. When the atoms de-excite, photons are emitted which can leave the system. The net result is that thermal energy is removed from the cloud, until at some point it has cooled enough to collapse. pelapela $\begingroup$ So, ...., this begs the question: "What is the temperature and density of the ISM?". $\endgroup$ – Eric Towers $\begingroup$ @EricTowers: Temperatures in the ISM may, in principle, take any value from a few Kelvin to several (tens of) millions of Kelvin. However, various cooling processes makes the gas reach certain "plateaus" of temperatures. I previously discussed exactly that in an answer to How cold is interstellar space?. Wrt. densities ($n$), the various phases of the ISM tend to be very roughly in pressure equilibrium, such that the product $nT$ is more or less constant. $\endgroup$ $\begingroup$ That is, whereas a warm $T\sim10^4$ K cloud may have a density of $n\sim0.1$–$1$ particles per cm$^3$, a surrounding, hot $T\sim10^6$ K envelope will have $n\sim0.001$–$0.01$ cm$^{-3}$. And a small $T\sim10^2$ K molecular cloud will have densities of $n\sim10$–$10^2$ cm$^{-3}$ (and higher). $\endgroup$ $\begingroup$ @EricTowers: it doesn't beg the question, it raises, or prompts the question. Begging the question, or petitio principii, is a logical fallacy in which the writer or speaker assumes the statement under examination to be true. See grammarist.com/rhetoric/begging-the-question-fallacy $\endgroup$ – Jim421616 $\begingroup$ @Jim421616 : "It's a translation of the Latin phrase petitio principii, and it's used to mean that someone has made a conclusion based on a premise that lacks support." "In modern vernacular usage, 'to beg the question' frequently appears to mean 'to raise the question' (as in, 'This begs the question, whether...')" As this is not the 16th century, I am not limited by the past's mistranslations and usage of the phrase. All I asked for was the support. $\endgroup$ First, consider that gravity is weak. The nearest star system to the Sun is Alpha Centauri, at a distance of about 4 light-years. Consider the acceleration due to the Sun's gravity at half that distance: $$a_S=\frac{GM_\odot}{r^2}\simeq3.7\times10^{-13}\text{ m/s}^2$$ where $M_\odot$ is the mass of the Sun. That's an incredibly small acceleration, meaning that most massive bodies have a very small gravitational influence on the ISM. Particles are attracted to massive objects . . . but very, very weakly. That said, the ISM sometimes does collapse. The number density varies in different parts, from $\sim10^{-3}$ particles per cubic centimeter to $\sim10^6$ particles per cubic centimeter (1, 2). Above about $10^4$ particles per cubic centimeter, though, you venture into the regime of molecular clouds, which sometimes do collapse to form stars. On a final note, gravity isn't the only force acting on the ISM. Galactic magnetic fields, for instance, can influence the ISM's dynamics in various scenarios, including preventing or enabling the collapse of molecular clouds (see Ferrier (2005)). HDE 226868♦HDE 226868 How does the ISM resist gravity? It doesn't. There are two distinct sources of gravity: internal and external. The internal or self-gravity of the ISM may in fact result in collapse and subsequent star formation, as explained in another answer. The external gravitational attraction from any star or gas cloud on the ISM is too weak to be of relevance and can be neglected, as demonstrated in yet another answer. However, the ISM is subjected to the combined gravitational pull from all stars, gas, and dark matter in the Galaxy, i.e.\ the gravity of the Galaxy itself. In response to this pull, the ISM orbits the Galaxy on near-circular orbits, as do most of the stars (in a disc galaxy such as ours). Thus, the ISM is not special in this respect. Why does the ISM not fall into the inner Galaxy (where it is pulled to)? This is simply because it has too much angular momentum. The situation is exactly the same as for the Earth pulled to the Sun, yet orbiting (nearly) on a circle around it. Finally, note that magnetic forces and radiation pressure from nearby stars are much weaker than Galactic gravity and can be neglected when considering the Galactic orbits of the ISM. WalterWalter Not the answer you're looking for? Browse other questions tagged interstellar-medium interstellar or ask your own question. How cold is interstellar space? How does interstellar matter density vary? Why is the interstellar medium so hot? Why do comets come from our local Oort cloud instead of from interstellar sources? Why is [O III] a good density probe in interstellar medium? Do liquids exist in the interstellar medium? Do pulsar beams interact with the interstellar medium? What do these statements about interstellar magnetic fields really mean? Why isn't most hydrogen in the universe molecular (diatomic), instead of atomic (monoatomic)? What are the differences between Intergalactic Medium, Interstellar medium and Intracluster medium?
CommonCrawl
Astronomy Meta Astronomy Stack Exchange is a question and answer site for astronomers and astrophysicists. It only takes a minute to sign up. Astronomy Beta Can we change Earth's orbit with modern technology? [closed] Want to improve this question? Update the question so it's on-topic for Astronomy Stack Exchange. Closed 6 months ago. It is well known that, according to Newton's 3rd law, spacecrafts entering or leaving Earth provoke tiny (insignificant) changes to Earth's orbit (see related question here). My question is about significant changes. For instance, say we would like to shift the Earth's orbit closer to that of Mars (forget about the ecological consequences of this). Do we currently have the technology capable of doing this? For instance, something like a huge nuclear explosion? This question only deals with "natural", non-human ways to change a planet's orbit. I am interested about technology-based changes. orbit earth earths-orbit luchonacholuchonacho $\begingroup$ In short: no, we cannot. You can do the simple math by considering the kinetic energy of earth in its current orbit and the orbit you want to move it to and compare that to the annual world energy generation $\endgroup$ – planetmaker Aug 5 '20 at 9:07 $\begingroup$ @planetmaker why not make than an answer? $\endgroup$ – Steve Linton Aug 5 '20 at 9:20 $\begingroup$ I'm voting to close this question because technological manipulation of planetary orbits is out-of-scope on the Astronomy SE. It may be on-topic on other SE sites, e.g. Worldbuilding. $\endgroup$ – user24157 Aug 5 '20 at 10:30 $\begingroup$ See "The Wandering Earth," by Cixin Liu. Keep in mind that's fiction, so it has semi-magical engineering to produce the energy required. $\endgroup$ – Carl Witthoft Aug 5 '20 at 17:24 $\begingroup$ @antispinwards, pointing out the relative scales involved certainly does fall within the scope of Astronomy. $\endgroup$ – Mark Aug 5 '20 at 22:52 No, we cannot. It's relatively simple math to show that we are a very long shot from changing Earth's orbit by anything significant at all: consider the kinetic energy of Earth in its current orbit, and do the same math for that in the desired orbit: $ E_{kin} = \frac{m_E}{2}v^2 \approx \frac{m_E}{2}\frac{GM_S}{a}$ where $m_E = 6\cdot 10^{24}$kg is Earth's mass, $M_S=2\cdot 10^{30}$kg is the solar mass and $a=150\cdot 10^9$m is the distance from Earth to sun. I used the approximation in calculating orbital velocity $v = \sqrt{\frac{GM_S}{a}}$ that the Earth mass, is much smaller than the Sun's mass. So current orbital velocity is (using numbers as above) $29822$m/s. Now, changing oribital distance outward by 1%, thus by $1.5\cdot 10^9$m, to $151.5\cdot 10^9$m yields a speed of $29674$m/s - and in energy that subsequently gives a difference of $2.64\cdot 10^{31}$J to move Earth's orbit about 1% to further from the Sun. Annual world primary energy production currently is around $10^{20}$J - so we "just" need to produce 100.000.000.000 times the yearly energy production of the entire world to change Earth's orbit about 1%. And that implies that we have a technique to transfer energy 100% into Earth orbital velocity - something which cannot exist either due to simple thermodynamic argument. Edit to add: Another way to argue is the rocket way: typical speed of conventional rocket exhausts are a few km/s, which is less than needed to leave Earth's orbit - that's why you need multi-stage rockets to get anything into orbit. But if you want to move the whole Earth with a rocket motor, you need a propellant which is fast enough that it directly leaves Earth's gravitational field - or you will not be able to create any effective change of momentum and only move around mass in the Earth system itself. So you need to make some substantial technological advances here, too - or some entirely different technique. answered Aug 5 '20 at 9:51 planetmakerplanetmaker $\begingroup$ Effective exhaust velocity isn't the reason for multiple stages to orbit. It's the mass ratio element in the Tsilokovsky rocket equation that's the main culprit. It's mathematically possible to single-stage-to-orbit on a black powder rocket, but good luck building a light enough rocket that can contain all the black powder you'd need for it. $\endgroup$ – notovny Aug 5 '20 at 19:32 $\begingroup$ @notovny, effective exhaust velocity is the reason why we can't use chemical rockets to move the Earth around: in order for a rocket engine to have a net effect on the Earth's momentum, the exhaust needs to be moving faster than escape velocity. If it doesn't, the exhaust will (eventually) come back and return its momentum to Earth. $\endgroup$ – Mark Aug 5 '20 at 22:54 $\begingroup$ This argument is correct but it entirely disregards the possibility of using energy sources outside of Earth, like for example the orbital energy of Jupiter (link.springer.com/article/10.1023%2FA%3A1002790227314). Another important point against your conclusion is the amount of time. If we operate in terms of a hundreths of thousands of years we in fact move Earth with smaller power outputs. $\endgroup$ – Swike Aug 6 '20 at 1:39 $\begingroup$ If you take as long as civilisation is already old (10.000 years), you only need continuously for the next 10.000 years 10.000.000 times our current total energy production. Sources outside earth exist, and can be tapped, but don't change the energy equation. Changing the trajectory of a body of 1/100000 of earth mass drastically and controlled does not solve but mostly moves the energy problem from earth to the asteroid or kuiper belt. Additionally it creates the problem that you willfully create a potential planet buster, an object rated most dangerous on each planetary protection scale. $\endgroup$ – planetmaker Aug 6 '20 at 5:14 $\begingroup$ Interesting study, though. "optimistic minimum energy expenditure is about 10^36 erg" (1.000.000 yearly energy productions) and "this scheme would consume a number of large Kuiper Belt objects". These are 100km-diameter bodies sent to pass Earth repeatedly at 1/40 of moon distance. This makes it more concept than 'modern technology". One should also mind the last sentence of the paper: "The collision of a 100-km diameter object with the Earth at cosmic velocity would sterilize the biosphere most effectively, at least to the level ofbacteria. This danger cannot be overemphasized" $\endgroup$ – planetmaker Aug 6 '20 at 5:51 I'll go out on a speculative limb and say yes, maybe, depending on the definition of "significant". Planetmaker's answer notes the infeasibility of raising Earth's orbit if nothing else changes. But what if we also lower some other body's orbit at the same time? First, let the orbital energy of an object with mass $m$ orbiting the Sun with mass $M_S$ at an average distance of $a$ be its kinetic plus potential energy with respect to the Sun: $E_k + U = \frac{1}{2}m v^2 - \frac{G M_S m}{a} = \frac{1}{2} m (\sqrt{\frac{GM_S}{a}})^2 = \frac{G M_S m}{2a} - \frac{G M_S m}{a} = -\frac{G M_S m}{2a} $ Our goal is to raise the Earth's average orbital radius by 1%. Using the above equation and the specific values listed at the end, we need $2.62 \times 10^{31} \ \mathrm{J}$ to do that. (This is all consistent with planetmaker's calculation, I just want to show my own work.) Where can we get this energy? Let's try stealing Ceres. If we drop its orbital radius down to match Earth's, we gain $2.65 \times 10^{29} \ \mathrm{J}$. That's only 1% of the required energy. But if we instead are content to change Earth's orbit by 0.01% (is that "significant"?) then we have enough energy to do it in Ceres. If not, we need to get more bodies involved. (Since Ceres alone is already about 30% of the asteroid belt mass, they will need to come from elsewhere.) How do we transfer energy from Ceres to Earth? We arrange for a series (no pun intended) of Gravitational slingshots between the two bodies, each time letting Ceres pass just ahead of the Earth, thereby transferring energy to the latter. (As an outline of the encounter plan, my basic idea is we start by lowering its periapsis to match Earth, then all encounters happen at Ceres periapsis, thus preserving the possibility of future encounters.) How do we change the orbit of Ceres to cause these slingshots? We apply the same technique, recursively if needed. Find something else nearby, presumably also in the asteroid belt, whose orbit we can perturb to cause encounters with Ceres, gradually steering it toward the eventual encounter with Earth. The bottom of the recursion is some object small enough to be pushed (perhaps slowly) into an encounter with the next object using existing spacecraft and propulsion technology. This would of course take a long time, at least tens to hundreds of thousands of years, but still well short of the hundred billion years planetmaker cited to move Earth using terrestrial energy sources. At the core of this idea is the observation that N-body gravitational systems are chaotic, meaning that small changes in initial conditions can cause very large changes in later system state. To move the world, Archimedes asked for a lever and a fulcrum. But with modest technology, accurate foresight, and ample patience, in principle we should be able to manipulate the solar system almost at will without either. The Wikipedia Asteroid capture article discusses some related concepts. Specific numbers used in calculations (generally taken from Wikipedia): $G = 6.67 \times 10^{-11} \frac{\mathrm{m}^3}{\mathrm{kg} \ \mathrm{s}^2}$ $M_S = 1.99 \times 10^{30} \ \mathrm{kg}$ $m_\mathrm{Earth} = 5.97 \times 10^{24} \ \mathrm{kg}$ $a_\mathrm{Earth} = 150 \times 10^9 \ \mathrm{m}$ $m_\mathrm{Ceres} = 2.38 \times 10^{20} \ \mathrm{kg}$ $a_\mathrm{Ceres} = 414 \times 10^9 \ \mathrm{m}$ Scott McPeakScott McPeak $\begingroup$ Very interesting approach! Thanks $\endgroup$ – luchonacho Aug 5 '20 at 23:44 $\begingroup$ The actual core idea has been studied by researchers. With a carefully planed strategy we could use a bunch of asteroids to transfer a bit of Jupiter's orbital energy to Earth's. With current technology we would be able to move our planet at a rate that allow us to mantain a priviledged position in the habitable zone of our star as it gets hotter (and thus the habitable zone gets farther away). arxiv.org/pdf/astro-ph/0102126.pdf $\endgroup$ – Swike Aug 6 '20 at 1:41 Not the answer you're looking for? Browse other questions tagged orbit earth earths-orbit or ask your own question. Post your pictures of the Great Conjunction here! Changes to Earth's orbit The consequences and the mechanisms of a shift of the Earth away from the sun How much could Earth's orbit shift without causing an ecological catastrophe? Why doesn't Earth's axis change during the year? Can planets inside the Earth's orbit around the Sun appear to undergo retrograde motion? Jupiter orbiting time in Earth's orbit? As the Sun dies, does the Earth's orbit change? Can orbits change? Is Earth's Surface "In orbit"?
CommonCrawl
Flying Colours Maths Start hereBlogBooksAboutContact How big a lead can a football team have? A reader asks: What's the biggest lead a football team can have in the table after $n$ games? In a typical football league, teams get three points for a win, one for a draw, and none for getting beat. After, for example, one game, if one team wins and all of the other games are draws, the winners will have three points, while everyone except the team they beat will have one point — the winners will be two points ahead. There's not a whole lot more to it — after two games, the biggest possible lead is four points (one team wins both of its games to get six points, and all of the others are draws, leaving everyone else with at most two points). As long as the winning team hasn't played all of the teams, the biggest lead after $n$ games is $2n$ points. But what if they've played everyone? In a four-team group, it's possible to have a seven-point lead after three games, rather than just six: if you beat all three of the other teams, you'll have nine points; if they all draw with each other, they each have two points. Assuming you always win and everyone else always draws, once you've played everyone once, you'll have $3n$ points, and the best of the rest will have $n-1$ points - they'll have drawn every game except for the one they lost to you - giving you a margin of $3n - (n-1) = 2n+1$. In general, if you've played everyone at least $m$ times, your biggest possible margin is $2n + m$. So, when Dunfermline beat the other nine teams in Scottish League One four times, and they all draw with each other, they'll have a lead of $2 \times 36 + 4 = 76$ points. A selection of other posts Ask Uncle Colin: a disguised quadratic How the Mathematical Ninja integrates Lives of the Mathematical Ninja: Blaise Pascal Quotable maths: Krishna [email protected] icecolbeveridge Flying Colours Maths helps make sense of maths at A-level and beyond. This website does not use cookies. We do not store any personally identifiable information about visitors. Some pages contain affiliate links.
CommonCrawl
Rabanovych V. I. Article (Ukrainian) On Decompositions of a Scalar Operator into a Sum of Self-Adjoint Operators with Finite Spectrum ↓ Abstract | Full text (.pdf) Ukr. Mat. Zh. - 2015. - 67, № 5. - pp. 701–716 We consider the problem of classification of nonequivalent representations of a scalar operator $λI$ in the form of a sum of $k$ self-adjoint operators with at most $n_1 , ...,n_k$ points in their spectra, respectively. It is shown that this problem is *-wild for some sets of spectra if $(n_1 , ...,n_k)$ coincides with one of the following $k$ -tuples: $(2, ..., 2)$ for $k ≥ 5,\; (2, 2, 2, 3),\; (2, 11, 11),\; (5, 5, 5)$, or $(4, 6, 6)$. It is demonstrated that, for the operators with points 0 and 1 in the spectra and $k ≥ 5$, the classification problems are *-wild for every rational $λ ϵ 2 [2, 3]$. On the Decomposition of a Diagonal Operator into a Linear Combination of Idempotents or Projectors For a bounded operator that is not a sum of scalar and compact operators and is similar to a diagonal operator, we prove that it is a linear combination of three idempotents. It is also proved that any self-adjoint diagonal operator is a linear combination of four orthoprojectors with real coefficients. Article (Russian) On the identities in algebras generated by linearly connected idempotents Rabanovych V. I., Samoilenko Yu. S., Strilets O. V. We investigate the problem of the existence of polynomial identities (PI) in algebras generated by idempotents whose linear combination is equal to identity. In the case where the number of idempotents is greater than or equal to five, we prove that these algebras are not PI-algebras. In the case of four idempotents, in order that an algebra be a PI-algebra, it is necessary and sufficient that the sum of the coefficients of the linear combination be equal to two. In this case, these algebras are F 4-algebras. On the Decomposition of an Operator into a Sum of Four Idempotents Ukr. Mat. Zh. - 2004. - 56, № 3. - pp. 419-424 We prove that operators of the form (2 ± 2/n)I + K are decomposable into a sum of four idempotents for integer n > 1 if there exists the decomposition K = K 1 ⊕ K 2 ⊕ ... ⊕ K n, \(\sum\nolimits_1^n {K_i = 0} \) , of a compact operator K. We show that the decomposition of the compact operator 4I + K or the operator K into a sum of four idempotents can exist if K is finite-dimensional. If n tr K is a sufficiently large (or sufficiently small) integer and K is finite-dimensional, then the operator (2 − 2/n)I + K [or (2 + 2/n)I + K] is a sum of four idempotents. On Identities in Algebras $Q_{n,λ}$ Generated by Idempotents Ukr. Mat. Zh. - 2001. - 53, № 10. - pp. 1380-1390 We investigate the presence of polynomial identities in the algebras $Q_{n,λ}$ generated by $n$ idempotents with the sum $λe$ ($λ ∈ C$ and $e$ is the identity of an algebra). We prove that $Q_{4,2}$ is an algebra with the standard polynomial identity $F_4$, whereas the algebras $Q_{4,2},\; λ ≠ 2$, and $Q_{n,λ},\; n ≥ 5$, do not have polynomial identities. Scalar Operators Representable as a Sum of Projectors Rabanovych V. I., Samoilenko Yu. S. We study sets \(\Sigma _n = \{ \alpha \in \mathbb{R}^1 |\) there exist n projectors P1,...,Pn such that \(\sum\nolimits_{k = 1}^n {P_k = \alpha I} \}\) . We prove that if n ≥ 6, then \(\left\{ {0,1,1 + \frac{1}{{n - 1}},\left[ {1 + \frac{1}{{n - 2}},n - 1 - \frac{1}{{n - 2}}} \right],n - 1 - \frac{1}{{n - 1}},n - 1,n} \right\} \supset\) \(\Sigma _n \supset \left\{ {0,1,1 + \frac{k}{{k\left( {n - 3} \right) + 2}},k \in \mathbb{N},\left[ {1 + \frac{1}{{n - 3}},n - 1 - \frac{1}{{n - 3}}} \right],n - 1 - \frac{k}{{k\left( {n - 3} \right) + 2}},k \in \mathbb{N},n - 1,n} \right\}\) . Brief Communications (Russian) Singly generatedC *-algebras Ukr. Mat. Zh. - 1999. - 51, № 8. - pp. 1136-1141 We consider a $C*$-algebra $A$ generated by $k$ self-adjoint elements. We prove that, for $n \geqslant \sqrt {k - 1}$ , the algebra $M_n (A)$ is singly generated, i.e., generated by one non-self-adjoint element. We present an example of algebraA for which the property that $M_n (A)$ is singly generated implies the relation $n \geqslant \sqrt {k - 1}$.
CommonCrawl
Adrafinil is Modafinil's predecessor, because the scientists tested it as a potential narcolepsy drug. It was first produced in 1974 and immediately showed potential as a wakefulness-promoting compound. Further research showed that Adrafinil is metabolized into its component parts in the liver, that is into inactive modafinil acid. Ultimately, Modafinil has been proclaimed the primary active compound in Adrafinil. While these two compounds may not be as exciting as a super pill that instantly unlocks the full potential of your brain, they currently have the most science to back them up. And, as Patel explains, they're both relatively safe for healthy individuals of most ages. Patel explains that a combination of caffeine and L-theanine is the most basic supplement stack (or combined dose) because the L-theanine can help blunt the anxiety and "shakiness" that can come with ingesting too much caffeine. The difference in standard deviations is not, from a theoretical perspective, all that strange a phenomenon: at the very beginning of this page, I covered some basic principles of nootropics and mentioned how many stimulants or supplements follow a inverted U-curve where too much or too little lead to poorer performance (ironically, one of the examples in Kruschke 2012 was a smart drug which did not affect means but increased standard deviations). As far as anxiety goes, psychiatrist Emily Deans has an overview of why the Kiecolt-Glaser et al 2011 study is nice; she also discusses why fish oil seems like a good idea from an evolutionary perspective. There was also a weaker earlier 2005 study also using healthy young people, which showed reduced anger/anxiety/depression plus slightly faster reactions. The anti-stress/anxiolytic may be related to the possible cardiovascular benefits (Carter et al 2013). In avoiding experimenting with more Russian Noopept pills and using instead the easily-purchased powder form of Noopept, there are two opposing considerations: Russian Noopept is reportedly the best, so we might expect anything I buy online to be weaker or impure or inferior somehow and the effect size smaller than in the pilot experiment; but by buying my own supply & using powder I can double or triple the dose to 20mg or 30mg (to compensate for the original under-dosing of 10mg) and so the effect size larger than in the pilot experiment. Even though smart drugs come with a long list of benefits, their misuse can cause negative side effects. Excess use can cause anxiety, fear, headaches, increased blood pressure, and more. Considering this, it is imperative to study usage instructions: how often can you take the pill, the correct dosage and interaction with other medication/supplements. The idea of a digital pill that records when it has been consumed is a sound one, but as the FDA notes, there is no evidence to say it actually increases the likelihood patients that have a history of inconsistent consumption will follow their prescribed course of treatment. There is also a very strange irony in schizophrenia being the first condition this technology is being used to target. First was a combination of L-theanine and aniracetam, a synthetic compound prescribed in Europe to treat degenerative neurological diseases. I tested it by downing the recommended dosages and then tinkering with a story I had finished a few days earlier, back when caffeine was my only performance-enhancing drug. I zoomed through the document with renewed vigor, striking some sentences wholesale and rearranging others to make them tighter and punchier. Gibson and Green (2002), talking about a possible link between glucose and cognition, wrote that research in the area …is based on the assumption that, since glucose is the major source of fuel for the brain, alterations in plasma levels of glucose will result in alterations in brain levels of glucose, and thus neuronal function. However, the strength of this notion lies in its common-sense plausibility, not in scientific evidence… (p. 185). The choline-based class of smart drugs play important cognitive roles in memory, attention, and mood regulation. Acetylcholine (ACh) is one of the brain's primary neurotransmitters, and also vital in the proper functioning of the peripheral nervous system. Studies with rats have shown that certain forms of learning and neural plasticity seem to be impossible in acetylcholine-depleted areas of the brain. This is particularly worth mentioning because (as noted above under the Racetams section), the Racetam class of smart drugs tends to deplete cholines from the brain, so one of the classic "supplement stacks" – chemical supplements that are used together – are Piracetam and Choline Bitartrate. Cholines can also be found in normal food sources, like egg yolks and soybeans. Nootropics. You might have heard of them. The "limitless pill" that keeps Billionaires rich. The 'smart drugs' that students are taking to help boost their hyperfocus. The cognitive enhancers that give corporate executives an advantage. All very exciting. But as always, the media are way behind the curve. Yes, for the past few decades, cognitive enhancers were largely sketchy substances that people used to grasp at a short term edge at the expense of their health and well being. But the days of taking prescription pills to pull an all-nighter are so 2010. The better, safer path isn't with these stimulants but with nootropics. Nootropics consist of dietary supplements and substances which enhance your cognition, in particular when it comes to motivation, creativity, memory, and other executive functions. They play an important role in supporting memory and promoting optimal brain function. If you want to make sure that whatever you're taking is safe, search for nootropics that have been backed by clinical trials and that have been around long enough for any potential warning signs about that specific nootropic to begin surfacing. There are supplements and nootropics that have been tested in a clinical setting, so there are options out there. …It is without activity in man! Certainly not for the lack of trying, as some of the dosage trials that are tucked away in the literature (as abstracted in the Qualitative Comments given above) are pretty heavy duty. Actually, I truly doubt that all of the experimenters used exactly that phrase, No effects, but it is patently obvious that no effects were found. It happened to be the phrase I had used in my own notes. The abuse of drugs is something that can lead to large negative outcomes. If you take Ritalin (Methylphenidate) or Adderall (mixed amphetamine salts) but don't have ADHD, you may experience more focus. But what many people don't know is that the drug is very similar to amphetamines. And the use of Ritalin is associated with serious adverse events of drug dependence, overdose and suicide attempts [80]. Taking a drug for another reason than originally intended is stupid, irresponsible and very dangerous. My intent here is not to promote illegal drugs or promote the abuse of prescription drugs. In fact, I have identified which drugs require a prescription. If you are a servicemember and you take a drug (such as Modafinil and Adderall) without a prescription, then you will fail a urinalysis test. Thus, you will most likely be discharged from the military. We included studies of the effects of these drugs on cognitive processes including learning, memory, and a variety of executive functions, including working memory and cognitive control. These studies are listed in Table 2, along with each study's sample size, gender, age and tasks administered. Given our focus on cognition enhancement, we excluded studies whose measures were confined to perceptual or motor abilities. Studies of attention are included when the term attention refers to an executive function but not when it refers to the kind of perceptual process taxed by, for example, visual search or dichotic listening or when it refers to a simple vigilance task. Vigilance may affect cognitive performance, especially under conditions of fatigue or boredom, but a more vigilant person is not generally thought of as a smarter person, and therefore, vigilance is outside of the focus of the present review. The search and selection process is summarized in Figure 2. Federal law classifies most nootropics as dietary supplements, which means that the Food and Drug Administration does not regulate manufacturers' statements about their benefits (as the giant "This product is not intended to diagnose, treat, cure, or prevent any disease" disclaimer on the label indicates). And the types of claims that the feds do allow supplement companies to make are often vague and/or supported by less-than-compelling scientific evidence. "If you find a study that says that an ingredient caused neurons to fire on rat brain cells in a petri dish," says Pieter Cohen, an assistant professor at Harvard Medical School, "you can probably get away with saying that it 'enhances memory' or 'promotes brain health.'" OptiMind - It is one of the best Nootropic supplements available and brought to you by AlternaScript. It contains six natural Nootropic ingredients derived from plants that help in overall brain development. All the ingredients have been clinically tested for their effects and benefits, which has made OptiMind one of the best brain pills that you can find in the US today. It is worth adding to your Nootropic Stack. An entirely different set of questions concerns cognitive enhancement in younger students, including elementary school and even preschool children. Some children can function adequately in school without stimulants but perform better with them; medicating such children could be considered a form of cognitive enhancement. How often does this occur? What are the roles and motives of parents, teachers, and pediatricians in these cases? These questions have been discussed elsewhere and deserve continued attention (Diller, 1996; Singh & Keller, 2010). Let's start with the basics of what smart drugs are and what they aren't. The field of cosmetic psychopharmacology is still in its infancy, but the use of smart drugs is primed to explode during our lifetimes, as researchers gain increasing understanding of which substances affect the brain and how they do so. For many people, the movie Limitless was a first glimpse into the possibility of "a pill that can make you smarter," and while that fiction is a long way from reality, the possibilities - in fact, present-day certainties visible in the daily news - are nevertheless extremely exciting. This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.) Add in an hour for analysis & writeup, that suggests >38 hours of work, and 38 \times 7.25 = 275.5. 12,000 pills is roughly $12.80 per thousand or $154; 120 potassium iodide pills is ~$9, so \frac{365.25}{120} \times 9 \times 5 = 137. 2 commenters point out that my possible lack of result is due to my mistaken assumption that if nicotine is absorbable through skin, mouth, and lungs it ought to be perfectly fine to absorb it through my stomach by drinking it (rather than vaporizing it and breathing it with an e-cigarette machine) - it's apparently known that absorption differs in the stomach. Racetams are the best-known smart drugs on the market, and have decades of widespread use behind them. Piracetam is a leading smart drug, commonly prescribed to seniors with Alzheimer's or pre-dementia symptoms – but studies have shown Piracetam's beneficial effects extend to people of all ages, as young as university students. The Racetams speed up chemical exchange between brain cells. Effects include increases in verbal learning, mental clarity, and general IQ. Other members of the Racetam family include Pramiracetam, Oxiracetam, аnԁ Aniracetam, which differ from Piracetam primarily in their potency, not their actual effects. (I was more than a little nonplussed when the mushroom seller included a little pamphlet educating one about how papaya leaves can cure cancer, and how I'm shortening my life by decades by not eating many raw fruits & vegetables. There were some studies cited, but usually for points disconnected from any actual curing or longevity-inducing results.) Adderall increases dopamine and noradrenaline availability within the prefrontal cortex, an area in which our memory and attention are controlled. As such, this smart pill improves our mood, makes us feel more awake and attentive. It is also known for its lasting effect – depending on the dose, it can last up to 12 hours. However, note that it is crucial to get confirmation from your doctor on the exact dose you should take. In addition, the cognitive enhancing effects of stimulant drugs often depend on baseline performance. So whilst stimulants enhance performance in people with low baseline cognitive abilities, they often impair performance in those who are already at optimum. Indeed, in a study by Randall et al., modafinil only enhanced cognitive performance in subjects with a lower (although still above-average) IQ. The ethics of cognitive enhancement have been extensively debated in the academic literature (e.g., Bostrom & Sandberg, 2009; Farah et al., 2004; Greely et al., 2008; Mehlman, 2004; Sahakian & Morein-Zamir, 2007). We do not attempt to review this aspect of the problem here. Rather, we attempt to provide a firmer empirical basis for these discussions. Despite the widespread interest in the topic and its growing public health implications, there remains much researchers do not know about the use of prescription stimulants for cognitive enhancement. In addition, large national surveys, including the NSDUH, have generally classified prescription stimulants with other stimulants including street drugs such as methamphetamine. For example, since 1975, the National Institute on Drug Abuse–sponsored Monitoring the Future (MTF) survey has gathered data on drug use by young people in the United States (Johnston, O'Malley, Bachman, & Schulenberg, 2009a, 2009b). Originally, MTF grouped prescription stimulants under a broader class of stimulants so that respondents were asked specifically about MPH only after they had indicated use of some drug in the category of AMPs. As rates of MPH prescriptions increased and anecdotal reports of nonmedical use grew, the 2001 version of the survey was changed to include a separate standalone question about MPH use. This resulted in more than a doubling of estimated annual use among 12th graders, from 2.4% to 5.1%. More recent data from the MTF suggests Ritalin use has declined (3.4% in 2008). However, this may still underestimate use of MPH, as the question refers specifically to Ritalin and does not include other brand names such as Concerta (an extended release formulation of MPH). Tuesday: I went to bed at 1am, and first woke up at 6am, and I wrote down a dream; the lucid dreaming book I was reading advised that waking up in the morning and then going back for a short nap often causes lucid dreams, so I tried that - and wound up waking up at 10am with no dreams at all. Oops. I take a pill, but the whole day I don't feel so hot, although my conversation and arguments seem as cogent as ever. I'm also having a terrible time focusing on any actual work. At 8 I take another; I'm behind on too many things, and it looks like I need an all-nighter to catch up. The dose is no good; at 11, I still feel like at 8, possibly worse, and I take another along with the choline+piracetam (which makes a total of 600mg for the day). Come 12:30, and I disconsolately note that I don't seem any better, although I still seem to understand the IQ essays I am reading. I wonder if this is tolerance to modafinil, or perhaps sleep catching up to me? Possibly it's just that I don't remember what the quasi-light-headedness of modafinil felt like. I feel this sort of zombie-like state without change to 4am, so it must be doing something, when I give up and go to bed, getting up at 7:30 without too much trouble. Some N-backing at 9am gives me some low scores but also some pretty high scores (38/43/66/40/24/67/60/71/54 or ▂▂▆▂▁▆▅▇▄), which suggests I can perform normally if I concentrate. I take another pill and am fine the rest of the day, going to bed at 1am as usual. One might suggest just going to the gym or doing other activities which may increase endogenous testosterone secretion. This would be unsatisfying to me as it introduces confounds: the exercise may be doing all the work in any observed effect, and certainly can't be blinded. And blinding is especially important because the 2011 review discusses how some studies report that the famed influence of testosterone on aggression (eg. Wedrifid's anecdote above) is a placebo effect caused by the folk wisdom that testosterone causes aggression & rage! The first night I was eating some coconut oil, I did my n-backing past 11 PM; normally that damages my scores, but instead I got 66/66/75/88/77% (▁▁▂▇▃) on D4B and did not feel mentally exhausted by the end. The next day, I performed well on the Cambridge mental rotations test. An anecdote, of course, and it may be due to the vitamin D I simultaneously started. Or another day, I was slumped under apathy after a promising start to the day; a dose of fish & coconut oil, and 1 last vitamin D, and I was back to feeling chipper and optimist. Unfortunately I haven't been testing out coconut oil & vitamin D separately, so who knows which is to thank. But still interesting. Nootrobox co-founder Geoffrey Woo declines a caffeinated drink in favour of a capsule of his newest product when I meet him in a San Francisco coffee shop. The entire industry has a "wild west" aura about it, he tells me, and Nootrobox wants to fix it by pushing for "smarter regulation" so safe and effective drugs that are currently unclassified can be brought into the fold. Predictably, both companies stress the higher goal of pushing forward human cognition. "I am trying to make a smarter, better populace to solve all the problems we have created," says Nootroo founder Eric Matzner. Using prescription ADHD medications, racetams, and other synthetic nootropics can boost brain power. Yes, they can work. Even so, we advise against using them long-term since the research on their safety is still new. Use them at your own risk. For the majority of users, stick with all natural brain supplements for best results. What is your favorite smart pill for increasing focus and mental energy? Tell us about your favorite cognitive enhancer in the comments below. This article is for informational purposes only and does not constitute medical advice. Quartz does not recommend or endorse any specific products, studies, opinions, or other information mentioned in this article. This article is not intended to be used for, or as a substitute for, professional medical advice, diagnosis, or treatment. Always seek the advice of a physician or other qualified health provider with any questions you may have before starting any new treatment or discontinuing any existing treatment.Reliance on any information provided in this article or by Quartz is solely at your own risk. The nonmedical use of substances—often dubbed smart drugs—to increase memory or concentration is known as pharmacological cognitive enhancement (PCE), and it rose in all 15 nations included in the survey. The study looked at prescription medications such as Adderall and Ritalin—prescribed medically to treat attention deficit hyperactivity disorder (ADHD)—as well as the sleep-disorder medication modafinil and illegal stimulants such as cocaine. A key ingredient of Noehr's chemical "stack" is a stronger racetam called Phenylpiracetam. He adds a handful of other compounds considered to be mild cognitive enhancers. One supplement, L-theanine, a natural constituent in green tea, is claimed to neutralise the jittery side-effects of caffeine. Another supplement, choline, is said to be important for experiencing the full effects of racetams. Each nootropic is distinct and there can be a lot of variation in effect from person to person, says Lawler. Users semi-annonymously compare stacks and get advice from forums on sites such as Reddit. Noehr, who buys his powder in bulk and makes his own capsules, has been tweaking chemicals and quantities for about five years accumulating more than two dozens of jars of substances along the way. He says he meticulously researches anything he tries, buys only from trusted suppliers and even blind-tests the effects (he gets his fiancée to hand him either a real or inactive capsule). Certain pharmaceuticals could also qualify as nootropics. For at least the past 20 years, a lot of people—students, especially—have turned to attention deficit hyperactivity disorder (ADHD) drugs like Ritalin and Adderall for their supposed concentration-strengthening effects. While there's some evidence that these stimulants can improve focus in people without ADHD, they have also been linked, in both people with and without an ADHD diagnosis, to insomnia, hallucinations, seizures, heart trouble and sudden death, according to a 2012 review of the research in the journal Brain and Behavior. They're also addictive. Expect to experience an increase in focus and a drastic reduction in reaction time [11][12][13][14][15][16]. You'll have an easier time quickly switching between different mental tasks, and will experience an increase in general cognitive ability [17][18]. Queal Flow also improves cognition and motivation, by means of reducing anxiety and stress [19][20][21][22][23]. If you're using Flow regularly for a longer period of time, it's also very likely to improve your mental health in the long term (reducing cognitive decline), and might even improve your memory [24][25]. My general impression is positive; it does seem to help with endurance and extended the effect of piracetam+choline, but is not as effective as that combo. At $20 for 30g (bought from Smart Powders), I'm not sure it's worthwhile, but I think at $10-15 it would probably be worthwhile. Sulbutiamine seems to affect my sleep negatively, like caffeine. I bought 2 or 3 canisters for my third batch of pills along with the theanine. For a few nights in a row, I slept terribly and stayed awake thinking until the wee hours of the morning; eventually I realized it was because I was taking the theanine pills along with the sleep-mix pills, and the only ingredient that was a stimulant in the batch was - sulbutiamine. I cut out the theanine pills at night, and my sleep went back to normal. (While very annoying, this, like the creatine & taekwondo example, does tend to prove to me that sulbutiamine was doing something and it is not pure placebo effect.) Most people I talk to about modafinil seem to use it for daytime usage; for me that has not ever worked out well, but I had nothing in particular to show against it. So, as I was capping the last of my piracetam-caffeine mix and clearing off my desk, I put the 4 remaining Modalerts pills into capsules with the last of my creatine powder and then mixed them with 4 of the theanine-creatine pills. Like the previous Adderall trial, I will pick one pill blindly each day and guess at the end which it was. If it was active (modafinil-creatine), take a break the next day; if placebo (theanine-creatine), replace the placebo and try again the next day. We'll see if I notice anything on DNB or possibly gwern.net edits. Analgesics Anesthetics General Local Anorectics Anti-ADHD agents Antiaddictives Anticonvulsants Antidementia agents Antidepressants Antimigraine agents Antiparkinson agents Antipsychotics Anxiolytics Depressants Entactogens Entheogens Euphoriants Hallucinogens Psychedelics Dissociatives Deliriants Hypnotics/Sedatives Mood Stabilizers Neuroprotectives Nootropics Neurotoxins Orexigenics Serenics Stimulants Wakefulness-promoting agents
CommonCrawl
Thoughts: The Transparent Newcomb's Problem Published: 2020-08-18 Tags: Thoughts, Effective altruism Writing my summary of Evidence, Decision, and Causality, I got interested in how EDT might be able to succeed by precommitting, and how different simulation schemes that a predictor might run of the transparent Newcomb's problem might affect the way EDT and CDT reason about the problem. Precommitments hinge on an aspect of sophisticated choice that I haven't been able to find information on. Indexical (or anthropic) uncertainty seems to do the trick unambiguously. Be warned that these things were probably only new to me, so if you know some decision theory you may end up bored, and I wouldn't want that. Precommitments Updatelessness Some Observations Simulation Regimes 2 Sim. 1.5 Sim. (Consistent) Reasoning Under Indexical Uncertainty In the transparent Newcomb's problem, CDT and EDT are both said to two-box and fail to become millionaires. Most decision theorists would call me insane (or maybe they'd be more polite about their disapproval) for wanting to one-box in this situation, but maybe I just really like money. So I wonder whether I need to adopt a UDT-like decision theory to achieve this or whether I have other options, in particular options compatible with following the advice of EDT. Friends of mine said that EDT would update on the nonoptimality of its approach and choose to precommit. For that to work out, it'll need (1) a full understanding of the choice situation and (2) a chance to precommit. It'll need both of these before learning some crucial information, in this case the contents of the boxes. Thereby it would, in effect, implement updateless behavior in individual choice situations. But I somehow can't make that happen. I turn it into a sequential choice problem by adding a chance to precommit when the agent knows the choice situation but hasn't seen the (transparent) boxes. To precommit, the agent puts $1,001 in escrow, which they'll only get back if they one-box. I'll write M and K for the potential money under the boxes ($1,000,000 and $1,000 respectively) and E = $1,001 for the potential amount in escrow. The predictor is known to be perfect, so some cases are impossible. First the myopic case: Pred. to one-box Pred. to two-box Escrow, one-box M – Escrow, two-box – K - E = −$1 No escrow, one-box M – No escrow, two-box – K Myopic EDT may endorse escrow but there's nothing here that makes it recommend it over no escrow. The case where the agent gets the million is, by construction, identical to the case where they get the million and get the escrow money back. In practice, precommitments will come with small costs – transaction costs, lost interest, option value, vel sim. – but may still be very much worth it. Yet myopic EDT would then no longer even endorse them. The sophisticated choice situation is unclear to me. The materials on decision theory that I've read didn't feature a case where the likely payoffs of the final stage of a multi-stage decision problem varied depending on decisions at earlier stages. Usually the preferences changed or nothing changed. EDT normally recommends two-boxing if it encounters the second stage in isolation. If that recommendation were the starting point of the backward induction, it would make the escrow seem highly undesirable. But at least intuitively it seems unproblematic to me (without a textbook to back me up, I'm afraid I may be overlooking some incoherency) to take the bad news into account that two-boxing confers when it comes to the then-empty box. I just don't know if this is in the spirit of sophisticated choice or already touches on policy selection. But if this is compatible with how decision theorists typically understand sophisticated choice and EDT to operate, then EDT recommends escrow. The table above is the result of eliminating "escrow, two-box" because two-boxing at the final stage would violate the agent's preferences, and eliminating "No escrow, one-box" for the same reason and the reason that the actual preferences would make the case inconsistent. What remains is for the agent to choose "escrow, one-box." Yay! I just hope this way of reasoning is consistent. Resolute choice solves the issue, but then resolute choice doesn't require any separate precommitment stage to begin with. There is also the consideration that such thorough reasoners as these idealized agents will consider that they have some degree of indexical (or anthropic1) uncertainty. The predictor may determine the agent's bet conditional on the prediction by running a simulation of them, and the agent may currently be inside that simulation. This consideration would make both EDT and CDT updateless and thereby rich. For concision I'll write "one-boxer" for "an agent that has been found to de facto follow the one-boxer policy." Calling the agent a one-boxer makes it sound like this is an indelible property of the agent rather than an observation. This would be problematic in the standard Newcomb's problem since such an inherent property is likely to be known to the agent themselves, and then the Tickle Defense would apply to the resulting problem. Here I intend "one-boxer" etc. as mere shorthands without all these implications. In the transparent Newcomb's problem, the agents can update on the payoffs in both boxes, so there are four instead of two policies (see the table below): One-boxer: Take one box (the formally opaque one) no matter what. Two-boxer: Take both boxes no matter what. Conformist: One-box if there is a million in the box (M, K); otherwise two-box. Rebel: One-box if the box is empty (0, K); otherwise two-box. M, K 0, K One-boxer M 0 Two-boxer M+K 0+K Conformist M 0+K Rebel M+K 0 It's unclear what box configuration Omega should choose for conformists or rebels, so a default (M, K or 0, K) needs to be chosen for those cases. 0, K somehow strikes me as the more natural default, but see also point 5 below where I suggest a new configuration 0, 0. There may be different simulation regimes (see the table below): Simulating both box configurations multiple times ("n sim.") to capture whether agents follow different policies with some probability. This would need to be real randomness, not some pseudo-randomness that can be simulated. The predictor would then need a separate policy for how to respond to, say, agents that one-box only most of the time. I'll omit this case here. Simulating both box configurations once ("2 sim."). Simulating one box configuration, and only simulating the other one if the agent makes a decision that is inconsistent (or consistent, but that seems almost equivalent and intuitively odd) with seeing that box configuration ("1.5 sim."). Simulating only one box configuration ("1 sim.") and accepting that you can't distinguish the conformists and rebels from one of the other strategies. In cases 1.5 sim. and 1 sim. it is important which scenario is simulated first. The resulting differences are in the table below. A bold choice in the 1.5 sim. table means that the simulation completed after a single run. Regular type means that it took two runs. Encoding perfect predictions in the two configurations of boxed money is no longer possible as there are only two such configurations in the original problem. A rebel bot that always follows the rebel strategy could thwart any such prediction. But you could introduce a 0, 0 payoff combination as default (in the above sense) to discourage some strategies that update on the boxed money: The 0, 0 payoff could await both conformist and rebel. A conformist bot would then see 0, 0 even though they would've one-boxed if the payoffs had been M, K. The predictor could also choose to only use the default for rebels since that is the one policy that would otherwise lead to paradox. The predictor could then randomize what to give to the conformist. Finally, the predictor could use the (perhaps cheaper) 1.5 sim. regime to determine the payoffs in option b. It won't be able to tell the conformist from the one-boxer or the two-boxer, but it will recognize the rebel. If you know which strategy you'll follow, say, because you've precommited to it, and you encounter an inconsistent situation, you still can't exploit your knowledge because, less interestingly, you'll have precommitted and because, more interestingly, you'd need to behave in two different ways depending on whether you're in the simulation or outside, but you can't because you don't which is which: If you've secretly and nonbindingly committed to two-boxing, and you see M, K, it may be because you're in a simulation and should one-box or because you've one-boxed in the simulation should now really two-box. I'll mention another reservation later. In one column the M, K configuration is tested first and then the 0, K configuration. In the other column the order is reversed. It makes no difference in this case – the resulting configurations are all the same. M, K; 0, K 0, K; M, K One-boxer M, K M, K Two-boxer 0, K 0, K Conformist default default Rebel default default In each column, the second configuration is now bracketed because it is only tested if the agent behaved inconsistently in the first configuration. If they behaved consistently right away, the simulation is complete and the resulting configuration is bolded. M, K; (0, K) 0, K; (M, K) Conformist M, K 0, K With a single simulation, the conformist and the one-boxer, and the rebel and the two-boxer behave the same if only the M, K configuration is tested. Otherwise, if only the 0, K configuration is tested, the one-boxer and the rebel, and the two-boxer and the conformist behave the same. Rebel 0, K M, K There are two things that I'm uncertain about: The materials on decision theory that I've read didn't feature a case where the payoffs of the final stage of a multi-stage decision problem varied depending on decisions at earlier stages. So here I hope that a sophisticated agent (as opposed to a myopic or resolute one) will foresee in earlier stages that they'll, at the final stage, prefer to be in one rather than another decision situation. I don't know whether this is actually in accordance with the definitions of the terms. I'm also not clear on whether decision theoretic agents can realize that all their copies will make identical decisions in identical situations (regardless of the stage). But if this is all the case and maybe also if only the first assumption is true, it seems to me that the argument that EDT and CDT one-box if they realize their indexical uncertainty goes through for this transparent Newcomb's problem: First, even if the probability an agent assigns to being in a simulation were small, the opportunity to causally increase the final payoff by a factor of 1,000 by one-boxing would overwhelm even such rather small probabilities. Second, the probabilities are actually greater than 50%, so substantial. The self-sampling assumption (SSA) and the self-indication assumption (SIA) agree in the case of 2 Sim. and assign probability ⅔ to being in a simulation. (See, e.g., Anthropic Decision Theory.) In 1.5 Sim. they diverge: I'll assume that the agents have a flat prior over whether the predictor starts by simulating 0, K or M, K. This uncertainty is indicated by the configurations in bold vs. in square brackets in the graphic below. At each stage you see the content of the boxes, so if it is M, K (without loss of generality), you're either (1) in the bold sim. 1 case at the top, (2) the bracketed sim. 2 case at the top, (3) the bracketed reality case at the top, (4) the bold sim. 1 case at the bottom, or (5) the bold reality case at the bottom. (Hence the SIA probabilities of \(\frac{1}{5}\) each.) A few reservations and simplifications: I ignore "rebel" agents in this example as they could know for certain that they're not in one of Omega's simulations if they see 0, 0. That would make things more complicated. What would also make things complicated are agents who have a strong inclination to one- or two-box and are aware of it. For example, an agent with a strong inclination to one-box may be surprised to see 0, K and so conclude that it's even more likely to be in a simulation than its SSA or SIA prior implies. Conversely, having a strong inclination and seeing a configuration that is consistent with it may boost once confidence that one is not in a simulation. If the inclination is to one-box (with loss of generality), that might make two-boxing more attractive again, thereby undermining the very inclination to one-box. This seems complicated. Agents with no particular initial inclination (or that are not aware of it) seem to be a simpler case.2 Adding up the fractions yields an SSA probability of \(\frac{7}{12}\) and an SIA probability of \(\frac{3}{5}\) in favor of being in a simulation. I've wondered whether there is a way to tweak the payoffs of two-boxing such that an agent that doesn't take into account that they can't act differently depending on which stage they're at can be tricked into making a nonoptimal choice, but I haven't found one yet. My main caveat, as mentioned above, is that I'm uncertain over whether CDT and EDT realize that they'll make the same recommendation in the same situation regardless of the stage. But if that's the case (and maybe also if not), it seems to me that this argument checks out and indexical (or anthropic) uncertainty really lets CDT and EDT one-box in the transparent Newcomb's problem. Is there a difference? I use "indexical" because it seems more self-explanatory to me. ↩ This might be related to the metatickle defence. I haven't read the paper, though. ↩
CommonCrawl
Last 12 months (8) Zygote (83) Journal of Helminthology (34) Visual Neuroscience (20) Parasitology (15) The Journal of Symbolic Logic (6) Journal of Dairy Research (4) Journal of Developmental Origins of Health and Disease (3) Acta Neuropsychiatrica (2) Bulletin of Symbolic Logic (2) Thalamus & Related Systems (2) The Journal of Anatomy (2) The Journal of Laryngology & Otology (2) Chinese Journal of Agricultural Biotechnology (1) European Psychiatry (1) Expert Reviews in Molecular Medicine (1) Neuron Glia Biology (1) Association for Symbolic Logic (8) Nutrition Society (4) Universities Federation for Animal Welfare (4) AMA Mexican Society of Microscopy MMS (3) Royal College of Speech and Language Therapists (2) BSAS (1) Canadian Mathematical Society (1) Developmental Origins of Health and Disease Society (1) EAAP (1) European Psychiatric Association (1) JLO (1984) Ltd (1) The New Zealand Society of Otolaryngology, Head and Neck Surgery (1) A Mouse is Not Just a Mouse F Sluyter, G A Van Oortmerssen Journal: Animal Welfare / Volume 9 / Issue 2 / May 2000 Published online by Cambridge University Press: 11 January 2023, pp. 193-205 In this paper we describe the existence and consequences of subspecific and individual variation in the genetic make-up of house mice. The purpose is to illustrate forms of variation that are often neglected in discussions about animal care and experimental design. Towards this end, different inbred mouse strains as well as genetically selected mouse lines are compared in relation to their ecological origin. Firstly, the behaviour of BALB/c, C57BL/6J and CBA mice is described in relation to different habitats. Furthermore, their aggression is compared, as measured by two paradigms. It appears that some inbred lines (eg BALB/c and C57BL/6J) clearly show behaviour that reflects the functional adaptation to the natural habitats in which their ancestors lived. Other strains (eg CBA) show a lack of such behavioural adaptation and their phenotypes appear to be very unstable over time. Secondly, two fundamentally different characters, both present in populations of wild house mice and under genetic control, are described: on the one hand, active copers are characterized by aggressive behaviour; on the other hand, passive copers are reluctant to attack. The active, aggressive animals (manipulators) are well adapted to an invariant environment like their own territory, whereas the passive, non-aggressive copers (adjustors) are well adapted to a changing environment, eg when roaming. We discuss to what extent these coping styles are present in laboratory strains of mice. The major conclusion with regard to both phenomena is that individual and subspecific variation may have significant implications for experimental design and the welfare of the experimental animals. Fear-Related Behaviour in Two Mouse Strains Differing in Litter Size A M Janczak, B O Braastad, M Bakken Journal: Animal Welfare / Volume 9 / Issue 1 / February 2000 Published online by Cambridge University Press: 11 January 2023, pp. 25-38 The present experiment compared the fear-related behaviour of a mouse strain selected over 101 generations for high litter size with that of a randomly selected strain. The H-strain, selected for large litter size, has a mean (± SD) litter size at birth of 21.5 ± 3.5 pups. The randomly bred C-strain has a mean (± SD) litter size of 9.6 ± 2.2 pups. The elevated plus-maze, the light:dark test and a resident:intruder test were used to measure how the mice responded to novelty. In the elevated plus-maze, a well-validated model of animal anxiety, the H-strain was significantly more anxious (having a lower percentage of entries into open arms) than the C-strain at 9 weeks of age. In the light: dark test, in which the light levels were similar to those in the home environment, the H-strain did not differ significantly from the C-strain in its avoidance of the brightest area. In the resident: intruder test, where aggression-trained, older H-strain males were the residents, 11-week-old intruding mice of the C-strain spent a higher percentage of their time in flight and immobility than intruders of the H-strain. There were clear anxiety- and fear-related differences between the strains, which may be related to their selection history. The results illustrate a need for further studies on the consequences of selection for increased production for the ability of animals to adapt to their home environment and cope with environmental changes. Housing environment alters delayed-type hypersensitivity and corticosterone concentrations of individually housed male C57BL/6 mice GN Neigh, SL Bowers, B Korman, RJ Nelson Journal: Animal Welfare / Volume 14 / Issue 3 / August 2005 Housing conditions can alter both the physiology and behaviour of laboratory animals. Forced-air-ventilated micro-isolation systems increase the efficient use of space, decrease the incidence of disease among laboratory rodents, and provide better working conditions for animal care staff; however, such systems can increase breeding variability and mortality. We examined the possibility that stressors associated with automated housing conditions evoke subtle changes among immune, endocrine, and behavioural parameters in mice housed in a static versus a forced-air-ventilated micro-isolation system. In addition, we assessed the effects of housing in the forced-air-ventilated micro-isolation system both with and without the use of an automatic watering system. Housing in the forced-air-ventilated micro-isolation system, using the automatic watering system, suppressed delayed-type hypersensitivity (DTH) responses, a measure of cell mediated immune function, compared with the responses of mice housed in static cages. Hypothalamic–pituitary–adrenal axis function was also altered by housing in the forced-air-ventilated micro-isolation system with the use of the automatic watering system, such that mice in this housing system had lower resting corticosterone concentrations and increased reactivity to restraint. Despite these changes in corticosterone, housing condition did not alter activity level or exploratory, anxiety-like, or depressive-like behaviours. These results suggest that investigators should carefully consider housing conditions in studies of immune and endocrine function. Behaviour of laboratory mice is altered by light pollution within the housing environment TA Bedrosian, CA Vaughn, ZM Weil, RJ Nelson Journal: Animal Welfare / Volume 22 / Issue 4 / November 2013 Environmental light-dark cycles play an important role in behavioural and physiological processes. It is essential that laboratory vivaria be designed to properly control the light conditions in which laboratory mice are housed; however, this is not universally the case. Some laboratory vivarium doors are designed with windows, which allow light from the hallways to leak into the housing space during the rodents' dark phase. Personnel entering and exiting the housing space during the dark phase can also create excessive light leak from brightly illuminated hallways. In this study, we investigated the hypothesis that exposure to dim light at night, as commonly experienced in many laboratory rodent housing spaces, alters mouse (Mus musculus) behaviour. We specifically analysed patterns of locomotor activity, anxiety- and depressive-like responses. Exposure to dim (5 lux) light at night altered home-cage locomotor activity and increased anxiety and some depressive responses among laboratory mice. These results suggest that light conditions can alter mouse behaviour and potentially influence experimental outcomes. Increased care should be taken to properly control light-dark conditions for laboratory animals. Effects of diets containing proteins from fish muscles or fish byproducts on the circulating cholesterol concentration in rodents: a systematic review and meta-analysis Maria O'Keeffe, Oddrun Anita Gudbrandsen Journal: British Journal of Nutrition / Accepted manuscript Published online by Cambridge University Press: 21 October 2022, pp. 1-56 A high circulating cholesterol concentration is considered an important risk factor for the development of cardiovascular disease. Since lean fish intake and fish protein supplementation have been associated with lower cholesterol concentration in some but not all clinical studies, the main aim of this study was to investigate the effect of diets containing proteins from fish muscles and fish byproducts on the serum/plasma TC concentration in rodents. A systematic literature search was performed using the databases PubMed, Web of Science and Embase, structured around the population (rodents), intervention (type of fish and fraction, protein dose, duration), comparator (casein as control protein) and the primary outcome (circulating total cholesterol). Articles were assessed for risk of bias using the SYRCLE's tool. A meta-analysis was conducted in Review Manager v. 5.4.1 (the Cochrane Collaboration) to determine the effectiveness of proteins from fish on the circulating TC concentration. Thirty-nine articles were included in the systematic review and meta-analysis, with data from 935 rodents. The risk of bias is unclear since few of the entries in the SYRCLE's tool were addressed. Consumption of proteins from fish resulted in a significantly lower circulating TC concentration when compared to control groups (mean difference -0.24 mmol/l, 95% confidence interval -0.34, -0.15, P < 0.00001), with high statistical heterogeneity (I2 = 71%). To conclude, proteins from fish muscles and byproducts show promise as a functional dietary ingredient or supplement by preventing high cholesterol concentration in rodents, thus reducing one of the most important risk factors for developing cardiovascular disease. NEGATIVE RESULTS ON PRECIPITOUS IDEALS ON $\omega _1$ GRIGOR SARGSYAN Journal: The Journal of Symbolic Logic , First View We show that in many extender models, e.g., the minimal one with infinitely many Woodin cardinals or the minimal with a Woodin cardinal that is a limit of Woodin cardinals, there are no generic embeddings with critical point $\omega _1$ that resemble the stationary tower at the second Woodin cardinal. The meaning of "resemble" is made precise in the paper (see Definition 0.3). Dynamic and aberrant patterns of H3K4me3, H3K9me3, and H3K27me3 during early zygotic genome activation in cloned mouse embryos Zhihui Liu, Jing Cui, Weiguo Wang, Mingyang Li, Zhisong Wang, Giorgio Antonio Presicce, Xiuchun (Cindy) Tian, Liyou An, Fuliang Du Journal: Zygote / Volume 30 / Issue 6 / December 2022 Published online by Cambridge University Press: 15 September 2022, pp. 903-909 Somatic cell nuclear transfer (NT) is associated with aberrant changes in epigenetic reprogramming that impede the development of embryos, particularly during zygotic genome activation. Here, we characterized epigenetic patterns of H3K4me3, H3K9me3, and H3K27me3 in mouse NT embryos up to the second cell cycle (i.e. four-celled stage) during zygotic genome activation. In vivo fertilized and parthenogenetically activated (PA) embryos served as controls. In fertilized embryos, maternal and paternal pronuclei exhibited asymmetric H3K4me3, H3K9me3, and H3K27me3 modifications, with the paternal pronucleus showing delayed epigenetic modifications. Higher levels of H3K4me3 and H3K9me3 were observed in NT and PA embryos than in fertilized embryos. However, NT embryos exhibited a lower level of H3K27me3 than PA and fertilized embryos from pronuclear stage 3 to the four-celled stage. Our finding that NT embryos exhibited aberrant H3K4me3, H3K9me3, and H3K27me3 modifications in comparison with fertilized embryos during early zygotic genome activation help to unravel the epigenetic mechanisms of methylation changes in early NT reprogramming and provide an insight into the role of histone H3 in the regulation of cell plasticity during natural reproduction and somatic cell NT. The role of PACAP/PAC1R in PTSD: effects on fear extinction via the ventromedial hypothalamus R. Andero Journal: European Psychiatry / Volume 65 / Issue S1 / June 2022 Published online by Cambridge University Press: 01 September 2022, p. S67 The incidence and severity of posttraumatic stress disorder (PTSD) is higher in women than men because of environmental and biological factors. Specific mechanisms in the PACAP-PAC1R (pituitary adenylate cyclase-activating polypeptide and its type I receptor) system may confer PTSD risk in women. Interestingly, while the PACAP (ADCYAP1) - PAC1R (ADCYAP1R1) system is expressed highly in the hypothalamus, no relationship has been described for this pathway in the hypothalamus with fear processing or in PTSD. We studied whether the estrous/menstrual cycle at the moment of trauma predicts PTSD and the involvement of the PACAP neurons in the amygdala and hypothalamus during traumatic stress. Mice: DREADDs, immunohistochemistry and behavior. Humans: fear-potentiated startle and questionnaires. Here, we show that acute stress immobilization (IMO) produces fear extinction impairments in female mice. Also, IMO elicits Adcyap1 and Adcyap1r1 mRNA upregulation in the hypothalamus, PACAP/c-Fos downregulation in the medial amygdala (MeA), and PACAP/FosB/ΔFosB upregulation in the ventromedial hypothalamus dorsomedial part (VMHdm) after fear extinction. We also found that women with the risk genotype of ADCYAP1R1 rs2267735 SNP show impaired fear extinction. In mice, DREADD-mediated inhibition of the MeA neurons projecting to the VMHdm during IMO rescues both PACAP upregulation in VMHdm and the fear extinction impairment. We ruled out contributions from inherent hormonal states showing that the menstrual or estrous cycle phase at the moment of trauma does not result in a vulnerable phenotype. Our data suggest that the PACAP-PAC1R hypothalamic system may be a novel candidate to treat and prevent PTSD symptoms including fear dysregulations. No significant relationships. Molecular regulation of polycystic ovary syndrome: altered gene expression levels in mouse models pretreatment and post-treatment P. Tulay, T. Onal, S. Vatansever Journal: Zygote / Volume 30 / Issue 3 / June 2022 Published online by Cambridge University Press: 03 November 2021, pp. 352-357 Polycystic ovary syndrome (PCOS) is a complex disorder and genetic factors are believed to play a role. The main aim was to investigate expression levels of genes involved in PI3K/AKT signalling pathway pretreatment and post-treatment. Mouse models of PCOS were generated. Group one included control mice with no polycystic ovaries (n = 4), Group 2 included a PCOS mouse model (n = 8), Group 3 included PCOS mice treated with clomiphene citrate (n = 7) and Group 4 included PCOS mice treated with clomiphene citrate, metformin and pioglitazone (n = 8). Histochemical analyses were performed. Total RNA was extracted and cDNA was synthesized. Irs, Akt1 and Akt2, mTor and Pdpk1 gene expression levels were evaluated by RT-PCR amplification. In Group 1, cortex and medulla were evaluated as normal; in Group 2, ovarian cortex was composed of immature oocytes and cystic follicles with atretic follicles. In Groups 3 and 4, follicles were in the process of normal follicle differentiation. The expression levels of Akt1 and Pi3k were significantly different (P < 0.0001) between Groups 1 and 2. The significant differences in expression levels of Pi3k and Akt1 were also observed between the Group 1 and both Groups 3 and 4 (P < 0.0001). Furthermore, significant variations of the expression levels of mTor between Groups 1 and 4 were observed. The extrapolation of results of this study may imply that follicular development may be regulated by molecular pathways involving Pi3k, Akt1 and mTor expression. Therefore, genes in the PI3K/AKT pathway may have a direct regulatory role in the development of PCOS. Toxic effects of methomyl on mouse oocytes and its possible mechanisms Daohong He, Yongnan Xu, Lina Hou, Jing Wang, Shaoying Yang, Yu Wang, Shurui Zhang, Qingguo Jin, Qingshan Gao Published online by Cambridge University Press: 22 October 2021, pp. 358-364 Methomyl is a broad-spectrum carbamate insecticide that has a variety of toxic effects on humans and animals. However, there have been no studies on the toxicity of methomyl in female mammalian oocytes. This study investigated the toxic effects of environmental oestrogen methomyl exposure on mouse oocyte maturation and its possible mechanisms. Our results indicated that methomyl exposure inhibited polar body extrusion in mouse oocytes. Compared with that in the control group, in the methomyl treatment group, superoxide anion free radicals in oocytes were significantly increased. In addition, the mitochondrial membrane potential of metaphase II stage oocytes in the methomyl treatment group was significantly decreased, resulting in reduced mouse oocyte quality. After 8.5 h of exposure to methomyl, metaphase I stage mouse oocytes displayed an abnormal spindle morphology. mRNA expression of the pro-apoptotic genes Bax and Caspase-3 in methomyl-treated oocytes increased, which confirmed the apoptosis. Collectively, our results indicated that mouse oocyte maturation is defective after methomyl treatment at least through disruption of spindle morphology, mitochondrial function and by induction of oxidative stress. Use of alginate hydrogel to improve long-term 3D culture of spermatogonial stem cells: stemness gene expression and structural features Masoud Hemadi, Vahideh Assadollahi, Ghasem Saki, Afshin Pirnia, Masoud Alasvand, Abolfazl Zendehdel, Mohammadreza Gholami The quality and quantity of a spermatogonial stem-cell (SSC) culture can be measured in less time using a 3D culture in a scaffold. The present study investigated stemness gene expression and the morphological and structural characterization of SSCs encapsulated in alginate. SSCs were harvested from BALB/c neonatal mice testes through two-step mechanical and enzymatic digestion. The spermatogonial populations were separated using magnetic-activated cell sorting (MACS) using an anti-Thy1 antibody and c-Kit. The SSCs then were encapsulated in alginate hydrogel. After 2 months of SSC culturing, the alginate microbeads were extracted and stained to evaluate their histological properties. Real-time polymerase chain reaction (PCR) was performed to determine the stemness gene expression. Scanning electron microscopy (SEM) was performed to evaluate the SSC morphology, density and scaffold structure. The results showed that encapsulated SSCs had decreased expression of Oct4, Sox2 and Nanos2 genes, but the expression of Nanog, Bcl6b and Plzf genes was not significantly altered. Histological examination showed that SSCs with pale nuclei and numerous nucleolus formed colonies. SEM evaluation revealed that the alginate scaffold structure preserved the SSC morphology and density for more than 60 days. Cultivation of SSCs on alginate hydrogel can affect Oct4, Sox2 and Nanos2 expression. Effects of maternal controlled exercise on offspring adiposity and glucose tolerance Kristen M. Platt, Jenna Przybylowski, Richard J. Charnigo, Sara Y. Ngo Tenlep, Leryn J. Reynolds, Kevin J. Pearson Journal: Journal of Developmental Origins of Health and Disease / Volume 13 / Issue 4 / August 2022 While metabolic disorders such as obesity and diabetes are costly and deadly to the current population, they are also extremely detrimental to the next generation. Much of the current literature focuses on the negative impact of poor maternal choices on offspring disease, while there is little work examining maternal behaviors that may improve offspring health. Research has shown that voluntary maternal exercise in mouse models improves metabolic function in offspring. In this study, we hypothesized that controlled maternal exercise in a mouse model will effect positive change on offspring obesity and glucose homeostasis. Female mice were separated into three groups: home cage, sedentary, and exercise. The sedentary home cage group was not removed from the home cage, while the sedentary wheel group was removed from the cage and placed in an immobile wheel apparatus. The exercise group was removed from the home cage and run on the same wheel apparatus but with the motor activated at 5–10 m/min for 1 h/d prior to and during pregnancy. Offspring were subjected to oral glucose tolerance testing and body composition analysis. There was no significant difference in offspring glucose tolerance or body composition as a consequence of the maternal exercise intervention compared to the sedentary wheel group. There were no marked negative consequences of the maternal controlled exercise intervention. Further research should clarify the potential advantages of the controlled exercise model and improve experimental techniques to facilitate translation of this research to human applications. $\operatorname {HOD}$ IN INNER MODELS WITH WOODIN CARDINALS SANDRA MÜLLER, GRIGOR SARGSYAN Journal: The Journal of Symbolic Logic / Volume 86 / Issue 3 / September 2021 Print publication: September 2021 We analyze the hereditarily ordinal definable sets $\operatorname {HOD} $ in $M_n(x)[g]$ for a Turing cone of reals x, where $M_n(x)$ is the canonical inner model with n Woodin cardinals build over x and g is generic over $M_n(x)$ for the Lévy collapse up to its bottom inaccessible cardinal. We prove that assuming $\boldsymbol \Pi ^1_{n+2}$ -determinacy, for a Turing cone of reals x, $\operatorname {HOD} ^{M_n(x)[g]} = M_n(\mathcal {M}_{\infty } | \kappa _{\infty }, \Lambda ),$ where $\mathcal {M}_{\infty }$ is a direct limit of iterates of $M_{n+1}$ , $\delta _{\infty }$ is the least Woodin cardinal in $\mathcal {M}_{\infty }$ , $\kappa _{\infty }$ is the least inaccessible cardinal in $\mathcal {M}_{\infty }$ above $\delta _{\infty }$ , and $\Lambda $ is a partial iteration strategy for $\mathcal {M}_{\infty }$ . It will also be shown that under the same hypothesis $\operatorname {HOD}^{M_n(x)[g]} $ satisfies $\operatorname {GCH} $ . SEALING OF THE UNIVERSALLY BAIRE SETS GRIGOR SARGSYAN, NAM TRANG Journal: Bulletin of Symbolic Logic / Volume 27 / Issue 3 / September 2021 A set of reals is universally Baire if all of its continuous preimages in topological spaces have the Baire property. ${\sf Sealing}$ is a type of generic absoluteness condition introduced by Woodin that asserts in strong terms that the theory of the universally Baire sets cannot be changed by set forcings. The ${\sf Largest\ Suslin\ Axiom}$ ( ${\sf LSA}$ ) is a determinacy axiom isolated by Woodin. It asserts that the largest Suslin cardinal is inaccessible for ordinal definable surjections. Let ${\sf LSA}$ - ${\sf over}$ - ${\sf uB}$ be the statement that in all (set) generic extensions there is a model of $\sf {LSA}$ whose Suslin, co-Suslin sets are the universally Baire sets. We outline the proof that over some mild large cardinal theory, $\sf {Sealing}$ is equiconsistent with $\sf {LSA}$ - $\sf {over}$ - $\sf {uB}$ . In fact, we isolate an exact theory (in the hierarchy of strategy mice) that is equiconsistent with both (see Definition 3.1). As a consequence, we obtain that $\sf {Sealing}$ is weaker than the theory " $\sf {ZFC}$ + there is a Woodin cardinal which is a limit of Woodin cardinals." This significantly improves upon the earlier consistency proof of $\sf {Sealing}$ by Woodin. A variation of $\sf {Sealing}$ , called $\sf {Tower \ Sealing}$ , is also shown to be equiconsistent with $\sf {Sealing}$ over the same large cardinal theory. We also outline the proof that if V has a proper class of Woodin cardinals, a strong cardinal, and a generically universally Baire iteration strategy, then $\sf {Sealing}$ holds after collapsing the successor of the least strong cardinal to be countable. This result is complementary to the aforementioned equiconsistency result, where it is shown that $\sf {Sealing}$ holds in a generic extension of a certain minimal universe. This theorem is more general in that no minimal assumption is needed. A corollary of this is that $\sf {LSA}$ - $\sf {over}$ - $\sf {uB}$ is not equivalent to $\sf {Sealing}$ . Oxidative stress induced by methomyl exposure reduces the quality of early embryo development in mice Daohong He, Guobo Han, Xiaomeng Zhang, Jingyu Sun, Yongnan Xu, Qingguo Jin, Qingshan Gao Journal: Zygote / Volume 30 / Issue 1 / February 2022 Published online by Cambridge University Press: 10 May 2021, pp. 57-64 Methomyl is a widely used carbamate insecticide and environmental oestrogen that has adverse effects on the reproductive system. However, there have been no reports on the effect of methomyl on early embryos in mammals. In this study, we explored the effect of methomyl exposure on the quality of early embryonic development in mice and the possible mechanisms. During in vitro culture, different concentrations of methomyl (10, 20, 30 and 35 μM) were added to mouse zygote medium. The results showed that methomyl had an adverse effect on early embryonic development. Compared with the control group, the addition of 30 μM methomyl significantly reduced the rate of early embryo blastocyst formation. Methomyl exposure can increase oxidative stress and impair mitochondrial function, which may be the cause of blastocyst formation. In addition, we found that methomyl exposure promoted apoptosis and autophagy in mouse blastocysts. The toxic effect of methomyl on early embryos may be the result of oxidative stress induction. Taken together, our results indicate that methomyl can cause embryonic development defects in mice, thereby reducing the quality of early embryo development. Effect of cyanocobalamin on oocyte maturation, in vitro fertilization, and embryo development in mice Tamana Rostami, Fardin Fathi, Vahideh Assadollahi, Javad Hosseini, Mohamad Bagher Khadem Erfan, Asrin Rashidi, Golzar Amiri, Omid Banafshi, Masoud Alasvand Journal: Zygote / Volume 29 / Issue 2 / April 2021 The aim of this study was to investigate the effect of cyanocobalamin supplementation on in vitro maturation (IVM), in vitro fertilization (IVF), and subsequent embryonic development competence to the blastocyst stage, and in vitro development of mouse 2-cell embryos. Cumulus cells were prepared from mouse cumulus–oocyte complexes (COCs) and incubated for 24 h in an in vitro culture (IVC) medium that contained different concentrations of cyanocobalamin (100, 200, 300 or 500 pM). We collected 2-cell embryos from superovulated NMRI mice and cultured them in the same concentrations of cyanocobalamin (100, 200, 300 or 500 pM). After 42 h of IVM, we observed significantly increased oocyte maturation in the 200 pM cyanocobalamin-treated group compared with the control group (P < 0.0001). Mature oocytes cultured in 200 pM cyanocobalamin were fertilized and cultured in IVC medium with cyanocobalamin (100, 200, 300 or 500 pM) during early embryogenesis. The matured oocytes that were cultured in 200 pM cyanocobalamin had significantly higher 2-cell development rates compared with the control oocytes (P < 0.01). Embryos obtained from in vitro mature oocytes and in vivo fertilized oocytes that were cultured in 200 pM cyanocobalamin had significantly greater frequencies of development to the blastocyst stage and a significant reduction in 2-cell blocked and degenerated embryos compared with the control embryos (P < 0.0001). Embryos derived from oocytes fertilized in vivo with 200 pM cyanocobalamin had a higher percentage of blastocyst embryos compared with those derived from matured oocytes cultured in vitro (P < 0.0001). These finding demonstrated that the effects of cyanocobalamin on oocyte maturation, fertilization, and embryo development in mice depend on the concentration used in IVC medium. Dietary phytanic acid-induced changes in tissue fatty acid profiles in mice Tomonori Nakanishi, Kazuhiro Kagamizono, Sayaka Yokoyama, Ryoji Suzuki, Hiroyuki Sakakibara, Kazuhiro Sugamoto, Laurie Erickson, Satoshi Kawahara Journal: Journal of Dairy Research / Volume 87 / Issue 4 / November 2020 The aims of this research communication were to investigate the in vivo tissue accumulation of phytanic acid (PA) and any changes in the tissue fatty acid profiles in mice. Previous in vitro studies have demonstrated that PA is a milk component with the potential to cause both beneficial effects on lipid and glucose metabolism and detrimental effects on neuronal cells. However, there is limited information about its in vivo actions. In this study, mice were fed diets containing either 0.00 or 0.05% 3RS, 7R, 11R-PA, which is the isomer found in milk and the human body. After 4 weeks, adipose tissue, liver and brain were harvested and their fatty acid profiles were determined by gas chromatographic analysis. The results showed that PA and its metabolite pristanic acid accumulated in the adipose tissue of PA-fed mice, and that dietary PA decreased the hepatic compositions of several saturated fatty acids such as palmitic acid while increasing the compositions of polyunsaturated fatty acids including linoleic acid and docosahexaenoic acid. However, dietary PA neither accumulated nor had a high impact on the fatty acid profile in the brain. These results suggested that dietary PA could exert its biological activities in adipose tissue and liver, although the brain is relatively less affected by dietary PA. These data provide a basis for understanding the in vivo physiological actions of PA. Dynamic alterations in H4K12 acetylation during meiotic maturation and after parthenogenetic activation of mouse oocytes Ze Zhang, Baobao Chen, Haoliang Cui, Haixu Gao, Ming Gao, Chenyu Tao Journal: Zygote / Volume 28 / Issue 5 / October 2020 The aim of the study was to investigate the continuous changing pattern of H4K12 acetylation, and the expression levels of histone acetyltransferases (HATs) and histone deacetyltransferases (HDACs) in mouse oocytes during meiosis and after parthenogenetic activation (PA). The immunofluorescence results showed hyperacetylation of lysine-12 on histone H4 (H4K12) in the germinal vesicle (GV) oocytes that then decreased during germinal vesicle breakdown (GVBD), and disappeared in metaphase II (MII). However, it reappeared in the early 1-cell embryos derived after 4 h of PA. The expression levels of some selected HATs and HDACs also validated the changing pattern of H4K12 acetylation during meiosis and PA. In conclusion, H4K12 is deacetylated in GVBD and MII, and re-hyperacetylated after PA. Incorporation of arginine, glutamine or leucine in culture medium accelerates in vitro activation of primordial follicles in 1-day-old mouse ovary Parimah Alborzi, Mohammad Jafari Atrabi, Vahid Akbarinejad, Ramezan Khanbabaei, Rouhollah Fathi In vitro activation of primordial follicles provides cancer patients subjected to oncotherapy with a safe therapeutic strategy for fertility preservation, however a successful protocol for activation of primordial follicles in prepubertal patients has not yet been defined comprehensively. There is evidence that amino acids such as leucine, arginine and glutamine could stimulate the mammalian target of rapamycin (mTOR) pathway, which plays a pivotal role in primordial follicle activation. Nevertheless, there has been no report that elucidates the effect of these amino acids on in vitro development of ovarian follicles. Therefore, the present study was conducted to evaluate the effects of these amino acids and their combination on the formation and activation of primordial follicles in 1-day-old murine ovaries during an 11-day culture period. The experimental groups consisted of base medium (BM), base medium + arginine (ARG), base medium + glutamine (GLU), base medium + leucine (LEU) and base medium + a combination of arginine, glutamine and leucine (AGL). The proportions of different stages of ovarian follicles and gene expression of regulatory factors were assessed using histology and quantitative real-time PCR on days 5 and 11 of culture. The proportion of transitional and primary follicles was greater in all amino acid-treated groups compared with the BM group (P < 0.05). Moreover, leucine resulted in elevated expression of Gdf9 and Bmp15, and glutamine augmented the expression of Pi3k on day 11 of culture. In conclusion, the present study showed that inclusion of leucine, glutamine, arginine or their combination in the culture medium for murine ovarian tissue could accelerate the activation of primordial follicles and alter the expression of the corresponding factors. Hyper-polyploid embryos survive after implantation in mice Hiroyuki Imai, Tokuko Iwamori, Ken Takeshi Kusakabe, Yasuo Kiso, Etsuro Ono, Kiyoshi Kano Published online by Cambridge University Press: 10 March 2020, pp. 247-249 Polyploids generated by natural whole genome duplication have served as a dynamic force in vertebrate evolution. As evidence for evolution, polyploid organisms exist generally, however there have been no reports of polyploid organisms in mammals. In mice, polyploid embryos under normal culture conditions normally develop to the blastocyst stage. Nevertheless, most tetraploid embryos degenerate after implantation, indicating that whole genome duplication produces harmful effects on normal development in mice. Most previous research on polyploidy has mainly focused on tetraploid embryos. Analysis of various ploidy outcomes is important to comprehend the effects of polyploidization on embryo development. The purpose of this present study was to discover the extent of the polyploidization effect on implantation and development in post-implantation embryos. This paper describes for the first time an octaploid embryo implanted in mice despite hyper-polyploidization, and indicates that these mammalian embryos have the ability to implant, and even develop, despite the harmfulness of extreme whole genome duplication.
CommonCrawl
CellFishing.jl: an ultrafast and scalable cell search method for single-cell RNA sequencing Kenta Sato1,2, Koki Tsuyuzaki2, Kentaro Shimizu1 & Itoshi Nikaido2,3 Genome Biology volume 20, Article number: 31 (2019) Cite this article Recent technical improvements in single-cell RNA sequencing (scRNA-seq) have enabled massively parallel profiling of transcriptomes, thereby promoting large-scale studies encompassing a wide range of cell types of multicellular organisms. With this background, we propose CellFishing.jl, a new method for searching atlas-scale datasets for similar cells and detecting noteworthy genes of query cells with high accuracy and throughput. Using multiple scRNA-seq datasets, we validate that our method demonstrates comparable accuracy to and is markedly faster than the state-of-the-art software. Moreover, CellFishing.jl is scalable to more than one million cells, and the throughput of the search is approximately 1600 cells per second. The development of high-throughput single-cell RNA sequencing (scRNA-seq) technology for the past several years has enabled massively parallel profiling of transcriptome expressions at the single-cell level. In contrast to traditional RNA sequencing methods that profile the average of bulk samples, scRNA-seq has the potential to reveal heterogeneity within phenotypes of individual cells as it can distinguish the transcriptome expression of each cell by attaching a distinct cellular barcode [1, 2]. In addition, several protocols have been developed that utilize unique molecular identifiers (UMIs) to more accurately quantify expression by removing duplicated counts resulting from the amplification of molecules [3–8]. The advent of library preparation for multiplexed sequencing with cellular barcoding and the refinement of cDNA amplification method with UMIs lead to a higher throughput and more reliable quantification of single-cell expression profiles. These technologies have opened the door to research that comprehensively sequences and annotates massive numbers of cells to create a cell atlas for organs or multicellular organisms. Shekhar et al. [9] sequenced and performed unsupervised classification of 25,000 mouse retinal bipolar cells and identified novel cell types and marker genes, suggesting that sequencing a large number of cells is an essential factor for detecting underrepresented cell types. Similarly, Plass et al. [10] sequenced more than 20,000 planarian cells and rendered a single lineage tree representing continuous differentiation. We also see collaborative efforts to create a comprehensive catalog covering all cells types composing an organism, such as the Human Cell Atlas [11] and the Tabula Muris [12] project. This trend of sequencing higher numbers of cells is expected to continue until a complete list of cell types is generated. Emergence of these comprehensive single-cell sequencing studies shows a pressing demand for software to find similar cells by comparing their transcriptome expression patterns. Since discrete cell annotations are not always available or are even impossible to generate due to continuous cell state dynamics, software for cell-level searching is useful for comparative analysis. However, finding similar cells based on their transcriptome expression profiles is computationally challenging due to the unprecedented numbers of genes and cells. Recently, Kiselev et al. [13] developed a software package and web service named scmap to perform an approximate nearest neighbor search of cells using a product quantizer [14]. The scmap package contains two variations: scmap-cluster and scmap-cell. Scmap-cluster can be used to search for cell clusters that are defined by discrete cluster labels and hence requires cluster annotations in addition to expression profiles of reference cells. On the contrary, scmap-cell can be used to directly find similar cells only from their expression profiles and is applicable to scRNA-seq data without requiring cluster annotations for cells. The authors of scmap-cell claim that creating a search index is more rapid than employing machine-learning methods. However, the scalability of scmap-cell is limited and is not applicable to extremely large data sets. Srivastava et al. [15] have also developed a web service named CellAtlasSearch that searches existing scRNA-seq experiments using locality-sensitive hashing (LSH) and graphical processing units (GPUs) to accelerate the search. In LSH, expression profiles are hashed into bit vectors, and their similarities are estimated from the Hamming distance between bit vectors calculated by LSH [16]. However, it requires GPUs to extract maximum performance, and its implementation details are neither openly accessible nor well-described in their paper. We are also interested in determining cell state estimation. Although cell type estimation accomplished by matching query cells with similar cells found in annotated data sets provides important information concerning the query cells, relying on a single similarity score may result in overlooking significant differences in their gene expressions. For example, the developmental stages of the hematopoietic lineage from stem cells to completely differentiated cells are often characterized by the expression level of few marker genes. Additionally, using scRNA-seq, Park et al. [17] revealed that genes related to Mendelian disease are differentially expressed in specific cell types. These facts indicate that mutually similar cells of the same type but under different conditions can be further distinguished by noting differentially expressed genes (DEGs) between these cells. In this paper, we present CellFishing.jl [18] (cellfinder via hashing), a novel software package used to find similar cells from a prebuilt database based on their expression patterns with high accuracy and throughput. CellFishing.jl employs LSH, like CellAtlasSearch, to reduce the computational time and space required for searching; however, it does not require dedicated accelerators, and its implementation is freely available as an open-source software package written in the Julia programming language [19]. It also utilizes an indexing technique of bit vectors to rapidly narrow down candidates of similar cells. Moreover, a query cell can be compared with its neighboring cells in the database in order to prioritize noteworthy genes that are differentially expressed between the query and its neighbors, facilitating quick DEG analysis with single-cell resolution. Cell databases once created can be saved to a disk and quickly loaded for later searches. Here, we demonstrate the effectiveness and scalability of our approach using real scRNA-seq data sets, one of which includes more than one million cells. Workflow overview of CellFishing.jl CellFishing.jl first creates a search database of reference cells from a matrix of transcriptome expression profiles of scRNA-seq experiments, and then searches the database for cells with an expression pattern similar to the query cells. The schematic workflow of CellFishing.jl is illustrated in Fig. 1. When building a database, CellFishing.jl uses a digital gene expression (DGE) matrix as an input along with some metadata, if provided. It next applies preprocessing to the matrix, resulting in a reduced matrix, and subsequently hashes the column vectors of this reduced matrix to low-dimensional bit vectors. The preprocessing phase consists of five steps: feature (gene) selection, cell-wise normalization, variance stabilization, feature standardization, and dimensionality reduction. The information provided by these steps is stored in the database, and the same five steps are applied to the DGE matrix of query cells. In the hashing phase, random hyperplanes are generated from a pseudo-random number generator, and the column vectors of the reduced matrix are encoded into bit vectors according to the compartment in which the vector exists. This technique, termed LSH, is used to estimate the similarity between two data points by their hashed representation [16]. The bit vectors are indexed using multiple associative arrays that can be utilized for subsequent searches [20]. The implementation is written in the Julia language; the database object can be saved easily as a file, and the file can be transferred to other computers, which facilitates quick, comparative analyses across different scRNA-seq experiments. Schematic workflow of CellFishing.jl. CellFishing.jl first builds a database (DB) object that stores data preprocessors, indexed bit vectors, and cell metadata, if provided. The metadata can store any information including cell names, cell types, and transcript expressions of marker genes. When building a database, the DGE matrix of reference cells is preprocessed to extract important signals and then hashed into bit vectors by LSH. The preprocessors and the indexed bit vectors are stored in the database object. M, D, and T on the left side of the figure refer to the number of genes, number of reduced dimensions, and length of the bit vectors, respectively. N and N′ above the two DGE matrices represent the number of cells within the reference and query data, respectively. While searching the database for similar cells, the prebuilt preprocessors stored in the database are reused in a similar workflow that is involved in database building up to the hashing phase. The database object can be saved onto a disk and can be loaded from there We selected five data sets as benchmarks from scRNA-seq experiments, each including at least 10,000 cells, and one including more than 1.3 million cells, which was the largest publicly available data set. Cells without a cell type or cluster annotation were filtered out for evaluation. The data sets after filtering are summarized in Table 1. Table 1 Summary of scRNA-seq data sets Wagner et al. [21] recently reported that if there is no biological variation, excessive zero counts within a DGE matrix (dropouts) have not been observed in data generated from inDrop [5], Drop-seq [6], and Chromium [7] protocols. Similarly, Chen et al. [22] conducted a more thorough investigation and concluded that negative binomial models are preferred over zero-inflated negative binomial models for modeling scRNA-seq data with UMIs. We confirmed a similar observation using our control data generated from Quartz-Seq2 [8]. Therefore, we did not take into account the effects of dropout events in this study. Randomized singular value decomposition (SVD) SVD is commonly used in scRNA-seq to enhance the signal-to-noise ratio by reducing the dimensions of the transcriptome expression matrix. However, computing the full SVD of an expression matrix or eigendecomposition of its covariance matrix is time consuming and requires large memory space especially when the matrix contains a large number of cells. Since researchers are usually interested in only a few dozen of the top singular vectors, it is common practice to compute only those important singular vectors. This technique is called low-rank matrix approximation, or truncated SVD. Recently, Halko et al. [23] developed approximated low-rank decomposition using randomization and were able to demonstrate its superior performance compared with other low-rank approximation methods. To determine the effectiveness of the randomized SVD, in this study, we benchmarked the performance of three SVD algorithms (full, truncated, and randomized) for real scRNA-seq data sets and evaluated the relative errors of singular values calculated using the randomized SVD. Full SVD is implemented using the svd function of Julia and the truncated SVD is implemented using the svds function of the Arpack.jl package, which computes the decomposition of a matrix using implicitly restarted Lanczos iterations; the same algorithm is used in Seurat [24] and CellRanger [7]. We implemented the randomized SVD as described in [25] and included the implementation in the CellFishing.jl package. We then computed the top 50 singular values and the corresponding singular vectors for the first four data sets listed in Table 1 and measured the elapsed time. All mouse cells (1886 total) of the Baron2016 data set were excluded because merging expression profiles of human and mouse is neither trivial nor our focus here. The data sizes of the four data sets after feature selection were 2190×8569, 3270×27,499, 3099×21,612, and 2363×54,967 in this order. From the benchmarks, we found that the randomized SVD remarkably accelerates the computation of low-rank approximation for scRNA-seq data without introducing large errors in the components corresponding to the largest singular values (Fig. 2). It must be noted that in our application, obtaining exact singular vectors is not particularly important; rather, computing the subspace with high variability spanned by approximated singular vectors is more important because each data point is eventually projected onto random hyperplanes during hashing. Therefore, evaluating relative errors of singular values suffices to quantify the precision of randomized SVD. Benchmarks of randomized SVD. a Elapsed time of different SVD algorithms. The blue, orange, and green points indicate the elapsed time of the full, truncated, and randomized SVD, respectively. b Relative errors of the randomized SVD. The error bars denote the standard deviation of ten trials. The relative error of the ith largest singular value σi is defined as \(\left |1 - \frac {\hat \sigma _{i}}{\sigma _{i}}\right |\), where \(\hat \sigma _{i}\) is an approximated value of σi. The error bars denote the standard deviation of ten trials. The approximation error for a real matrix A with a low-rank matrix is bounded by a singular value as illustrated in the following formula: minrank(X)≤j||A−X||=σj+1, where ||·|| denotes the operator norm of a matrix Similarity estimation using bit vectors In LSH, the angular distance between two expression profiles can be estimated from the Hamming distance between their hashed bit vectors. Assuming θ is the angle between two numerical vectors representing the expression profiles, the estimator of θ derived from two bit vectors p and q is \(\theta \approx \frac {\pi }{T} d_{H}(p, q)\), where T is the length of a bit vector and dH(·,·) is a function that calculates the Hamming distance between the two bit vectors. This estimator is unbiased but occasionally suffers from its high variance, which can pose a problem. To counter this issue, CellFishing.jl employs an orthogonalization technique that creates more informative hyperplanes from random hyperplanes before hashing data points by orthogonalizing the normal vectors of these random hyperplanes [26]. We confirmed the variance reduction effect of this orthogonalization technique by comparing the estimators with the exact values using randomly sampled expression profiles, as shown in Fig. 3a. The effect was consistent for all other data sets as expected (Additional file 1: Figures 12–14). Locality-sensitive hashing of expression profiles (Shekhar2016). a Distributions of estimation errors for angles. The blue lines show the distributions without orthogonalization, and the orange lines show the distributions with orthogonalization. Five hash values were generated independently, and their estimation errors in radian were computed for 100 cells randomly sampled from Shekhar2016. b Scatter plots of the Hamming distance versus the cosine distance. The rows are four different bit lengths (64, 128, 256, and 512 bits) and the columns are six cells randomly sampled from Shekhar2016. The Hamming distance is normalized to [0,1] for comparison across different bit lengths, and the cosine distance is truncated at 0.3 for brevity. c Two-dimensional embedding of expression profiles with UMAP. The upper-left plot was derived from the cosine distances following dimensionality reduction. The other three plots were derived from the Hamming distances after hashing with 64, 128, and 256 bits. Colors indicate the annotations (18 cell types and doublets/contaminants) of Shekhar2016 As can be seen in Fig. 3a, the estimator of the angular distance becomes less variable as the length of the bit vectors increases due to the central limit theorem. However, using more extended bit vectors requires more computational time and space. To investigate reasonable candidates for the length of bit vectors, we compared the Hamming and cosine distances of 100 random cells (Fig. 3b). The Shekhar2016 data set shows that only 28% of random cells could find their true nearest neighbor in the top ten candidates nominated by 64-bit vectors, while 42%, 72%, and 85% could find their true neighbor by 128-, 256-, and 512-bit vectors, respectively. This result suggests that hashing expression profiles with 64-bit vectors is insufficient to find neighboring cells. We next confirmed that the hashed expression profiles preserve the original differences among cell types by visualizing low-dimensional embedding of the data. Here we used Uniform Manifold Approximation and Projection (UMAP) [27], because it more explicitly preserves the global structure of the input data than t-distributed Stochastic Neighbor Embedding (t-SNE). The two-dimensional embedding of expression profiles of Shekhar2016 is visualized in Fig. 3c. Comparing the embedding derived from the cosine distances (upper left) and the other three embeddings derived from the Hamming distances shows that the hashed expression profiles preserve the original structure of the cell types denoted by different colors. However, some cell-type clusters are more scattered with the 64-bit Hamming distance, which suggests that using 64-bit vectors is insufficient to discriminate cell types by their subtle expression differences. We also observed that the batch effects were considerably mitigated by projecting query cells onto the space spanned by variability derived from the database cells (Additional file 1: Figures 19–22), which is consistent with the observation of Li et al. [28]. CellFishing.jl indexes bit vectors in order to accelerate the cell searching process. The algorithm used in this bit search progressively expands the search space that is centered at the query bit vector; thus, using longer bit vectors is not feasible in practice because the search space rapidly expands as more bits are used. In our preliminary experiments, index searches using longer than or equal to 512 bits often consumed more time than linear searches for a wide range of database sizes due to this expansion of the search space. As a result, we limited the length of the bit vectors to 128 or 256 bits in the following experiments. To further reliably find similar cells with these limited bits, CellFishing.jl generates mutually independent bit indexes from a reduced matrix. When searching a database, CellFishing.jl separately searches these bit indexes within the database and aggregates the results by ranking candidate cells by the total Hamming distance from the query. This requires more time than using a single bit index, but as we show in the following section, it appreciably reduces the risk of overlooking potentially neighboring cells. Self-mapping evaluation To compare the performance of CellFishing.jl with that of scmap-cell, we performed fivefold cross-validations by mapping one fifth of cells randomly sampled from a data set to the remaining four fifth of cells from the same data set, and computed the consistency and Cohen's kappa score [29] of the neighboring cell's label. A value of 1 in the consistency score indicates the perfect agreements of cluster (cell type) assignments and 0 indicates no agreements, while a value of 1 in Cohen's kappa score indicates the perfect agreements and 0 indicates random assignments. We obtained the ten nearest neighbors for each cell, which is the default parameter of scmap-cell, but only the nearest neighbor was used to compute the scores. This evaluation assumes that cells with similar expression patterns belong to the same cell-type cluster, and hence a query cell and its nearest neighbors ought to have the same cluster assignment. In CellFishing.jl, we varied only the number of bits and number of indexes, which control the trade-off between estimation accuracy and computational cost; other parameters (i.e., the number of features and number of dimensions of a reduced matrix) were fixed to the defaults. In scmap-cell, DGE matrices were normalized and log-transformed using the normalize function of the scater package [30], and we varied two parameters: the number of centroids (landmark points calculated by k-means clustering of cells, used to approximate the similarity between cells) and the number of features, in order to find parameter sets that achieve better scores. Figure 4a, b shows the consistency and Cohen's kappa scores of CellFishing.jl and scmap-cell with different parameter sets. The overall scores were high (> 0.94) for both methods, with the exception of the Plass2018 data set. With the default parameters (see Fig. 4 for a description), CellFishing.jl consistently outperformed scmap-cell in both consistency and Cohen's kappa score for all data sets. In CellFishing.jl, using multiple independent hashes significantly improved the scores, suggesting that using 128 or 256 bits is insufficient to reliably estimate the similarity between cells. Instead of log transformation, using the Freeman–Tukey transformation [21, 31] resulted in almost similar scores (Additional file 2). In scmap-cell, increasing the centroids or incorporating more features did not remarkably improve the scores. Comparison between CellFishing.jl and scmap-cell by self-mapping. a Consistency scores. b Cohen's kappa scores. c Index times. d Query times. Each data point corresponds to a fivefold cross-validation. The n-bits and n-lshashes parameter of CellFishing.jl are the number of bits and the number of independent locality-sensitive hashes, respectively (n-bits=128 and n-lshashes=4 are the defaults). The n-centroids-factor and n-features parameter of scmap-cell are the multiplier of the default number of centroids (i.e., \(\lfloor \sqrt {N}\rfloor \), where N refers to the number of reference cells) parameter and the number of features parameter, respectively (n-centroids-factor=1 and n-features=500 correspond to the defaults) In this evaluation, we also measured the elapsed time of indexing and querying. The measured values do not include the cost of reading data from a disk because it varies depending on the file format and the disk type. In both indexing and querying, CellFishing.jl was faster than scmap-cell by a large margin, as shown in Fig. 4c, d. For example, comparing the median of the elapsed time with the default parameters in the TabulaMuris data set, CellFishing.jl was 22 times faster for the indexing time (30.3 vs. 661.9 s) and 118 times faster for the querying time (6.6 vs. 780.6 s). Since-cell types have highly skewed or long-tail distributions (Additional file 1: Figures 1–7); the global evaluation scores used tend to be dominated by large subpopulations and could overlook underrepresented cell types. The cluster-specific consistency scores for each cluster assignment are visualized in Fig. 5. Here, the parameters were fixed to the defaults for both CellFishing.jl and scmap-cell. From this figure, CellFishing.jl evidently includes minor cell types, although the scores are relatively unstable for those cell types. For example, while Baron2016 contains only 18 Schwann cells and seven T cells, CellFishing.jl found these cell types with high consistency (> 0.8). Also, CellFishing.jl shows comparable or better consistency scores than scmap-cell for the majority of the cell types. Cluster-specific consistency scores. a–d Cluster-specific consistency scores of Baron2016 (human), Shekhar2016, Plass2018, and TabulaMuris (Chromium). The blue and orange points denote the consistency scores of CellFishing.jl and scmap-cell, respectively. Cluster labels are ordered by the cluster size, from top to bottom and then left to right in decreasing order The consistency scores of Plass2018 are relatively lower than those of the other data sets. This could be because Plass2018 contains more progenitor or less-differentiated cells, and thus it is more difficult to distinguish these cell types from their expression profiles. As indicated in Fig. 6, a significant number of neoblasts and progenitors are assigned to neoblast 1, which is the largest subpopulation (29.35%) of the data set. These subtypes of neoblasts and progenitors are almost indistinguishable from the t-SNE plot (see [10], Figure 1B), which suggests that these cell types have very slight differences in their expression profiles. Still, when comparing CellFishing.jl and scmap-cell, the former more clearly discriminates these cell types. Proportions of cluster assignments for CellFishing.jl and scmap-cell. The rows are labels of query cells and the columns are labels of their nearest neighbor. Cluster labels are ordered by the cluster size, from top to bottom (rows) or left to right (columns) in decreasing order. The values are derived from a cross-validation with the default parameters and normalized row-wise (i.e., the sum of values in a row is 1) To see the effects of selected features, we evaluated the scores by exchanging features selected by CellFishing.jl and scmap-cell. When CellFishing.jl used features selected by scmap-cell, we could not detect meaningful differences in the consistency scores except within Plass2018, which improved the score by around 2.0% when n-features is 2000. Likewise, when scmap-cell used features selected by CellFishing.jl, we could not detect meaningful differences except within Shekhar2016, which decreased the score by around 5.5% when n-min-features is 500. These results seem to indicate that while scmap-cell selects better features, it has limited effects on the performance of CellFishing.jl. Evaluation of the search outcome and DEG detection CellFishing.jl always retrieves the nearest neighboring cells of a query cell in a Hamming space, and its algorithm does not cease to search until all the nearest cells are found. Although this is an important feature because the user does not need to specify any parameters except the number of neighbors when commencing a search, it also means that CellFishing.jl may retrieve far distant cells that are virtually unrelated to the query cell. To evaluate the search outcomes, CellFishing.jl provides a function estimating the cosine similarity between two cells from their Hamming distance. Here we evaluated the cosine similarities of the nearest neighbors either with or without specific cell types in the database in order to simulate a situation wherein a cell type does not exist in the database. We performed fivefold cross-validations in the same manner as in the previous experiment using seven cell types arbitrarily selected from the minor cell types (comprising approximately 2% or less) of each Shekhar2016 and TabulaMuris. The similarity distributions for the first three nearest neighbors are shown in Fig. 7. In the majority of the cell types, the cosine similarity remarkably drops if cells of the query cell type are removed from the database. Notably, the similarity distribution of amacrine cells from Shekhar2016 (Fig. 7a) and basal cells from TabulaMuris (Fig. 7b) after removal hardly overlap with the distribution before removal. However, the differences are less evident in some cell types, such as immature B cells and kidney cells from TabulaMuris, which is not surprising because these cell types have similar cells with different labels. For example, 9%, 9%, and 81% of immature B cells were mapped to early pro-B cells, late pro-B cells, and B cells, respectively, when immature B cells were removed from the database. Distributions of similarities with or without a specific cell type. a Seven cell types from Shekhar2016. b Seven cell types from TabulaMuris. The x-axis (rank) refers to the rank of the nearest neighbors (e.g., rank=1 refers to the nearest cell of a query), and the y-axis (similarity) refers to the cosine similarity estimated from the Hamming distance. The blue or orange regions denote the distribution of cosine similarities with or without the cell type shown in the title, respectively. The lower dotted lines, dashed lines, and upper dotted lines denote the first, second, and third quartiles, respectively Once the user identifies cells with high similarity, the next question will probably be concerning the differences between the query and its nearest neighbors. To answer this, CellFishing.jl provides a function to detect DEGs of the query cell compared to reference cells in the database. It estimates the average expression of neighboring cells using their raw UMI counts and then computes the probability of observing the count of the query or more extreme values for each gene. Although a CellFishing.jl database requires additional space to store raw counts for DEG detection, it efficiently compresses the raw counts and avoids loading the whole count matrix into memory, thereby saving disk space and memory. Here we focus on the performance evaluation of our DEG detection method, and the cost required for storing raw counts is presented in a following section. For the performance evaluation, we selected immature B cells of TabulaMuris as an example because they are very similar to other subtypes of B cells, as can be seen in Fig. 7b, and the development of B lymphocytes is well characterized. First, all immature B cells were removed from the data set and mapped to the remaining cells. Then, we compared the query cells with their nearest neighbors, and filtered genes with a probability of being less than 10−4 (0.01%) as significant DEGs for each query cell (see "Methods" for details). Out of the 113 immature B cells mapped as queries, 9, 10, and 91 cells were mapped to early pro-B cells, late pro-B cells, and B cells, respectively, and the other 3 cells were mapped to separate subtypes of lymphocytes. We ignored these three cells here because they represent a relatively minor percentage of the results (less than 3%), and it is impossible to deduce meaningful conclusions only from a single sample mapped to a unique cell type. We summarize the detected DEGs in Table 2; the result was reasonably robust to the selection of the number of neighbors being between 5 and 20 (see Additional file 2); thus, here we show the results when the number of neighbors is set to 10 (i.e., the default parameter). Table 2 Top DEGs of immature B cells Next, we carefully examined these genes to validate that biologically meaningful results were obtained using our analysis method. For the nine cells mapped to early pro-B cells, Vpreb1 and Dntt are negatively regulated when compared to immature B cells as shown in Table 2, which is consistent with the annotation by scientific experts because these two genes are used to distinguish immature B cells and early pro-B cells (see [12], Supplementary Information). Although Igll1 was not used as a marker gene in the annotation [12], it is known to play a critical role in B-cell development (see UniProt accession: P20764). For the ten cells mapped to late pro-B cells, Hmgb2 is involved in V(D)J recombination (see UniProt accession: P30681). We detected fewer DEGs from late pro-B cells in comparison with early pro-B cells and B cells with the same threshold, even if the number of cells for each cell type is considered, which may reflect the fact that late pro-B cells more closely resemble immature B cells than the other two cell types. For the 91 cells mapped to B cells, Cd74 is negatively regulated, and was used as a marker gene to discriminate immature from mature B cells. Similarly, H2-Ab1, H2-Eb1, and H2-Aa, which encode components of the major histocompatibility complex present on the surface of antigen-presenting cells, are negatively regulated, suggesting that immature B cells do not actively express these genes as much as mature B cells. We also found that S100a8, Beta-s, S100a9, Fos, and Camp are positively regulated in immature B cells; although these genes appear to be upregulated in a tissue-specific manner rather than in a cell type-specific manner, cytokines encoded by S100a8 and S100a9 were recently reported to regulate B lymphopoiesis in rabbit bone marrow [32]. Overall, our method detected reasonable DEGs in many cases; however, we could not find evidence implying the relationship between Malat1 (a long intergenic noncoding RNA) and B-cell development, and therefore, we hypothesize that this is a false positive due to its high variability regardless of cell type. Mapping cells across different batches Researchers are often interested in searching for cells across different experiments or batches. To verify the robustness of our method in this situation, we performed cell mapping from one batch to other batches and evaluated the performance scores. Shekhar2016 consists of two different batches (1 and 2), which exhibit remarkable differences in their expression patterns (see [9], Figure S1H). We mapped cells from one batch to the other using CellFishing.jl and scmap-cell, and calculated the consistency and Cohen's kappa score. Similarly, we selected two batches (plan1 and plan2, derived from wild-type samples) from Plass2018 and mapped cells from one batch to the remaining batches. This data set exhibits relatively weak batch effects. The default parameters were used in this experiment. Figure 8a shows the results of these two data sets for consistency scores. In both data sets, the consistency scores of CellFishing.jl were close to the mean score obtained from the self-mapping experiment. Moreover, their distances from the mean score in CellFishing.jl were smaller than those in scmap-cell, suggesting that CellFishing.jl is more robust to batch effects. The results of the Cohen's kappa scores were consistent with these results (Additional file 1: Figure 44). We predict that the differences in scores between batches are due to differences in cluster sizes. For example, batch 1 of Shekhar2016 contains many more rod bipolar cells than batch 2, while the latter contains more minor cell types than the former (Fig. 8b). The discrepancy in cluster sizes across batches leads to a difference in scores because each cluster has a different consistency score, as observed in the self-mapping experiment. Cell mapping across batches. a Consistency scores of cell mapping (Shekhar2016 and Plass2018). The red lines denote the mean score of the corresponding self-mapping experiment with the same parameters. b Distribution of cluster sizes (Shekhar2016) Mapping cells across different species Comparing transcriptome expressions across different species provides important information on the function of unknown cell types. Since the Baron2016 data set includes cells derived from human and mouse, we attempted to match cells between both species. To match genes from different species, we downloaded a list of homologous genes between human and mouse from the Vertebrate Homology database of Mouse Genome Informatics and removed non-unique relations from the list. A total of 12,413 one-to-one gene pairs were included. We compared the performance of CellFishing.jl and scmap-cell with the default parameters. In CellFishing.jl, the feature statistics were estimated only from the query cells because they were expected to be considerably different between species. The consistency scores of CellFishing.jl and scmap-cell mapping from human to mouse were 0.681 and 0.563, respectively, and from mouse to human were 0.787 and 0.832, respectively. Cohen's kappa scores of CellFishing.jl and scmap-cell mapping from human to mouse were 0.599 and 0.455, respectively, and from mouse to human were 0.715 and 0.753, respectively. These results show that CellFishing.jl and scmap-cell are roughly comparable for cell mapping accuracy across different species. Mapping cells across different protocols Mapping cells across different sequencing protocols is also important. To validate the robustness of CellFishing.jl in this case, we used TabulaMuris, which consists of two data sets derived from different sequencing platforms, and mapped 44,807 cells sequenced with Smart-Seq2 [33, 34] to 54,967 cells with Chromium. The default parameters were used in this experiment. Because cluster labels are not identical between the two data sets, it is not possible to compute the consistency or Cohen's kappa score. Thus, the matrices of cluster assignments are visualized in Fig. 9. On the whole, these two matrices show very similar assignment patterns, although CellFishing.jl failed to detect a large number of fibroblast cells. Cluster assignments across different protocols (TabulaMuris). The rows are the query's cluster labels (Smart-Seq2) and the columns are its neighbor's cluster labels (Chromium) Saving and loading databases CellFishing.jl is designed to search multiple scRNA-seq experiments, and the database objects can be serialized into a disk and later deserialized. For this purpose, CellFishing.jl provides a pair of functions to save and load a database object into and from a file. To verify the feasibility of this approach, we measured the elapsed time of saving and loading a database object with cell names as metadata, as well as the memory and file size of the object. The results are summarized in Table 3. The memory and file size of database objects are reasonably small, even for current low-end laptop computers. Although the memory and file sizes become several times larger when raw UMI counts are stored in the database, the actual memory usage is usually much smaller because the raw counts are loaded on request as described in "Methods". The elapsed time required for saving and loading a database is also small relative to the time required for querying. From these results, we predict that CellFishing.jl can be used to quickly search multiple scRNA-seq experiments by building and serializing their database objects in advance. Table 3 Performance of saving and loading databases and their memory and file size Scalability for large data sets To check the scalability of our approach for large data sets, we measured the index time, query time, and the memory size of a database by changing the number of cells within the database. In this benchmark, we randomly sampled 10,000 cells from the 1M_neurons data set as queries and then randomly sampled N=213,214,…,220 cells from the remaining cells to create a database (213=8192 and 220=1,048,576, which covers a wide range of high-throughput scRNA-seq experiments); there are no overlapping cells between or within the query and database sets. The number of bit indexes was fixed to the default (i.e., 4) in all cases. For comparison, we also benchmarked the performance of the linear search that scans all hash values in a database instead of using indexes. The elapsed time does not include the time of loading expression profiles from a file. The benchmark results are summarized in Table 4. As for the index search, the query time is sublinear to the database size, while the index time and memory size are roughly linear. For example, when 128-bit vectors are used, the query time becomes only 2.8 times longer in the same range, as the database size becomes 128 times larger from N=213 to 220. The linearities of the index time and memory size are expected, because when building a database all the reference cells need to be scanned and stored, though these are not wholly proportional to the database size because some overhead costs are included in the measured values (e.g., generating and storing projection matrices). Also, the memory usage per cell is only 183.3 bytes for 128-bit vectors and 365.9 bytes for 256-bit vectors when N=220, which is approximately 22 and 11 times smaller than storing UMI counts of 1000 genes in 32-bit integers, respectively. Table 4 Scalability of CellFishing.jl (1M_neurons) The index time remains almost unchanged between the index and the linear search, suggesting that the computational cost of creating hash indexes is, in effect, negligible. In contrast, the gap of the query time between the two search methods expands as the database becomes larger, which can be attributed to the searching phase of bit vectors because the cost of the preprocessing phase is constant (Fig. 10). In addition, even though indexing bit vectors requires extra memory, the relative difference from a database without indexes is approximately equal to or less than three times, and the absolute memory size of a database with indexes is small enough for modern computing environments, including laptops. Computational costs of different phases. The upper and lower plots show the costs of the linear and index searches, respectively. The elapsed time of each phase was measured using the time_ns function and divided by the number of query cells to compute the average time per cell We also evaluated the consistency scores and found that they were slightly improved by incorporating more cells into a database (when using 128-bit vectors, the consistency scores were 0.775 and 0.830 for N=213 and N=220, respectively) (Additional file 1: Figure 51). This result suggests that building a database with more cells plays an important role in identifying cell types. The consistency scores did not vary significantly between the index and linear search as expected, because nearest neighbors found by the index search have the same Hamming distances as those found by the linear search. In summary, indexing bit vectors is effective in reducing the search time for high-throughput scRNA-seq data and is scalable for extremely large data sets containing more than one million cells. Our LSH-based method is particularly suitable for middle- or large-scale scRNA-seq data sets because it circumvents costly brute-force search by using indexes for low-dimensional bit vectors. We considered relatively large data sets consisting of at least ten thousand cells, one of which contains more than one million cells, and confirmed our approach outperforms scmap-cell, the state-of-the-art method for cell searching, in both accuracy and throughput using real scRNA-seq data sets. Searching across multiple experiments will be feasible because our method is reasonably robust to batch differences, and serialized database objects can be loaded quickly. In this paper, we did not compare our method with CellAtlasSearch, mainly because its source code is not freely available, and its algorithm is not well described in the original paper, which makes it difficult to compare the performance fairly. Moreover, CellAtlasSearch requires a GPU to achieve its maximum performance, but this is not always available on server machines. The application of cell searching is not limited to mapping cells between different data sets. The task of finding similar cells within a data set is a subroutine in many analysis methods, such as data smoothing [21], clustering [35], community detection [36], and visualization [37]. As we have demonstrated in the self-mapping experiment, our LSH-based method can find similar cells within a data set with high accuracy and throughput; thus, it would be possible to speed up analysis by utilizing our cell search method in lieu of the currently available method. The feature selection used in CellFishing.jl is relatively simple and rapid. We confirmed that it works well with our search method; however, we also found that the criterion based on the dropout rate used in scmap-cell performed slightly better in a data set. This fact suggests that our simple selection method is not necessarily suitable for all scRNA-seq data sets, and a more careful feature selection, such as adding marker genes selected by domain experts or more careful selection methods such as GiniClust [38], may significantly improve the accuracy of cell typing. For this purpose, CellFishing.jl provides for addition and removal of specific features to or from a feature set. Handling batch effects is still a persistent problem in scRNA-seq. We performed cell mapping experiments across different batches and protocols and confirmed that the performance of our approach is at least comparable to scmap-cell. We consider that the robustness of CellFishing.jl comes from projecting expression profiles to a space with dataset-specific variability, as previously reported by Li et al. [28]. This type of technique is also discussed in the context of information retrieval and is termed folding-in [39]. Although substantial literature exists on the removal of batch effects in scRNA-seq data [24, 40–42], the existing methods require merging raw expression profiles of reference and query cells to obtain their batch-adjusted profiles, and the computational costs are relatively high; these characteristics are not suitable for our low-memory/high-throughput search method. In contrast, folding-in is very affordable because projection matrices can be computed when building a database and reused for any query. The DEG detection method introduced in this paper assumes that the database encompasses enough cells to retrieve a small group of homogeneous neighbors containing no biological differences, and each UMI count follows a Poisson distribution. The former can be justified by considering the high-throughput characteristic of recent scRNA-seq experiments, the feasibility of which we have demonstrated using the TabulaMuris data set; the latter is experimentally verified by several works [8, 43, 44]. However, some highly expressed genes, such as Malat1, seem to be exceptions to these model assumptions, and as a consequence, despite it being unlikely that Malat1 is related to biological differences between cells, it was falsely detected as a DEG in many cells within our experiment. We predict that this problem can be partially mitigated by replacing point estimation of the mean expression with some interval estimation, such as Bayesian inference. In this work, we have focused on unsupervised cell searching: no cellular information is required except their transcriptome expression data. This makes our method even more useful because it is widely applicable to any scRNA-seq data with no cell annotations. However, incorporating cell annotations or prior knowledge of reference cells could remarkably improve our method's performance. For example, if cell-type annotations are available, it would be possible to generate tailored hash values separating cell types more efficiently by focusing on their marker genes. Further research is needed in this direction. In summary, the new cell search method we propose in this manuscript outperforms the state-of-the-art method and is scalable to large data sets containing more than one million cells. We confirmed that our method considers very rare cell types and is reasonably robust in response to differences between batches, species, and protocols. The low-memory footprint and database serialization facilitate comparative analysis between different scRNA-seq experiments. In the preprocessing step, biological signals are extracted from a DGE matrix. When building a database of reference cells, CellFishing.jl takes a DGE matrix of M rows and N columns, with the rows being features (genes) and the columns being cells, along with some metadata (e.g., cell names). In scRNA-seq analysis, it is common practice to filter out low-abundance or low-variance features because these do not contain enough information to distinguish differences among cells [45]. In the filtering step of CellFishing.jl, features that have smaller maximum count across cells than a specific threshold are excluded. We found that this criterion is rapid and sometimes more robust than other gene filtering methods, such as selecting highly variable genes [46]. The optimal threshold depends on various factors such as sequencing protocol and depth. CellFishing.jl uses a somewhat conservative threshold that retains at least 10% of features by default; however, retaining an excessive amount of features would not be detrimental with regard to accuracy because CellFishing.jl uses the principal components of the data matrix in a later step. One parameter can change the threshold, and it is possible to specify a list of features to be retained or excluded if, for example, some marker genes are known a priori. The filtered DGE matrix is then normalized so that the total counts are equal across all the cells, which reduces the differences in the library size among cells [43]. After normalization, each count x is transformed by log transformation log(x+1), which is a common transformation in scRNA-seq, or FTT \(\sqrt {x} + \sqrt {x+1}\) [31]. FTT is a variance-stabilizing transformation assuming the Poisson noise, which is observed as a technical noise in scRNA-seq [21, 43]. Also, the computation of FTT is significantly faster than that of log transformation. The user can specify a preferable transformation, and the choice will be saved in a preprocessor object. In this paper, we used log transformation if not otherwise specified. After transformation, the feature counts are standardized so that their mean and variance are equal to zero and one, respectively. Finally, the column vectors in the matrix are projected onto a subspace to reduce the number of dimensions. The advantages of this projection are threefold: (1) computational time and working space for preprocessing are saved, (2) the number of bits required for hash expression profiles is reduced, and (3) batch effects between the query and the database are mitigated. As recently reported [28], projecting data onto a subspace defined by variability in reference cells greatly reduces unwanted technical variation and improves the accuracy of cell-type clustering. A similar approach is found elsewhere [6]. In CellFishing.jl, a subspace with high variance is calculated by applying the SVD to the reference data matrix. Since the number of cells may be extremely large and singular vectors corresponding to small singular values are irrelevant, CellFishing.jl uses a randomized SVD algorithm [23, 25] that approximately computes singular vectors corresponding to the top D singular values. The dimension of the subspace D is set to 50 by default, but it can be changed easily by passing a parameter. The net result of the preprocessing phase is a matrix of D rows and N columns. The information of the preprocessing phase (e.g., gene names and projection matrices) is stored in the database object, and the same process is applied to the count vectors of query cells. Hashing expression profiles LSH is a technique used to approximately compute the similarity between two data points [16], which hashes a numerical vector \(x \in \mathbb {R}^{D}\) to a bit vector p∈{0,1}T that preserves some similarity or distance between the original vectors. Hashed values (bits) of LSH collide with high probability if two original vectors are similar. This is the fundamental property of hash functions used in LSH, because by comparing binary hash values of numerical vectors their similarity can be estimated, which bypasses the time-consuming distance computation between the numerical vectors. In LSH, it is common to use T hash functions to generate T bits; each hash function returns zero or one for an input vector, and the results of multiple hash functions for the vector are bundled into a bit vector of length T. More formally, given a similarity function \(\text {sim}(\cdot, \cdot): \mathbb {R}^{D} \times \mathbb {R}^{D} \to [0, 1]\) that measures some similarity between two data points, an LSH function \(h(\cdot): \mathbb {R}^{D} \to \{0, 1\}\) is defined as a function that satisfies the following relation: $$ \text{Pr}_{h\in\mathcal{F}}[h(x) = h(y)] = \text{sim}(x, y), $$ where \(\text {Pr}_{h\in \mathcal {F}}[h(x) = h(y)]\) is the probability of hash collision for a hash function h(·) generated from a family of hash functions \(\mathcal {F}\) given a pair of points x and y. The existence of a family of hash functions that satisfies Eq. 1 depends on the similarity function sim(·,·). Similarity, if it exists, it can be approximated by randomly generating hash functions from \(\mathcal {F}\) as follows: $$ \begin{aligned} \text{sim}(x, y) &= \mathbb{E}_{h \in \mathcal{F}}\left[ \mathbb{I}[h(x) = h(y)] \right] \\ & \approx \frac{1}{T} \sum\limits_{i=1}^{T} \mathbb{I}[h_{i}(x) = h_{i}(y)], \end{aligned} $$ where \(\mathbb {E}_{h\in \mathcal {F}}[\cdot ]\) is an expectation over \(\mathcal {F}\), \(\mathbb {I}[\cdot ]\) is the indicator function, and \(\{h_{i}(\cdot)\}_{i=1}^{T}\) is a set of hash functions sampled from \(\mathcal {F}\). In this work, we used a signed random projection LSH (SRP-LSH) [16] to hash expression profiles, as it estimates angular similarity between two numerical vectors, which is reasonable to measure the similarity between expression profiles, and is straightforward to implement. Briefly, SRP-LSH divides a set of data points in a space into two disjoint sets by drawing a random hyperplane on the space, then the data points in a set of the two are hashed to zeros, with the remaining points hashed to ones. This procedure is repeated T times to get a bit vector of length T for each data point. Intuitively, the closer two points are to each other, the more likely it is that they occur in the same half-space with respect to a random hyperplane. Therefore, we can stochastically estimate the similarity of two points by calculating the Hamming distance of their bit vectors. In SRP-LSH, the angular similarity function \(\text {sim}(x, y) = 1 - \frac {\theta (x, y)}{\pi }\), where θ(x,y) refers to the angle between two vectors x and y, is estimated from Eq. 2. In this way, we randomly generate T hyperplanes, calculate their hash values, and bundle their zero or one bits into a bit vector of length T. While SRP-LSH dramatically reduces the computational time and space of the approximate search of neighboring cells, it suffers from its high variance of the similarity estimator. To alleviate the problem, CellFishing.jl orthogonalizes the vectors perpendicular to the random hyperplanes (normal vectors) because it reduces the variance of the estimator by removing the linear dependency among hyperplanes without introducing estimation bias [26]. Specifically, T normal vectors of length D are generated independently from the standard isotropic Gaussian distribution and then orthogonalized using the QR decomposition. If T is larger than D, the T vectors are divided into ⌈T/D⌉ batches of vectors, each of which contains at most D vectors, and the vectors in a batch are orthogonalized separately. Indexing hash values Although comparing bit vectors is much faster than comparing numerical vectors, it is still lengthy process to scan all the bit vectors in a database, especially in a large database. To reduce the computational cost of hash searching, CellFishing.jl creates search indexes in the space of bit vectors. Specifically, CellFishing.jl creates multi-index hash (MIH) [20] tables to find the nearest neighbors quickly in the Hamming space \(\mathcal {H}_{T} := \{0,1\}^{T}\). Briefly, an MIH divides bit vectors into shorter subvectors and indexes them separately using multiple associative arrays (tables); when searching for the nearest neighbors of a query, it divides the query bit vector in the same way and picks candidates of neighbors from the tables of the subvectors. It then computes the full Hamming distances between the query and the candidates by scanning the list of the candidates and finally returns the k-nearest neighbors. The search algorithm progressively expands the search space in \(\mathcal {H}_{T}\) to find all k-nearest neighbors in the database; hence, the result is equivalent to that of the brute-force search, disregarding a possible difference in the order of ties with the same Hamming distance. The main point here is that dividing and separately indexing the bit vectors dramatically reduces the search space that needs to be explored. For example, when attempting to find all bit vectors within r-bit differences from a query using a table, the number of buckets of the table we need to check is \({\sum \nolimits }_{i=0}^{r} \left (\begin {array}{c}T\\i\end {array}\right)\), where T is the length of the bit vectors and \(\left (\begin {array}{c}n\\k\end {array}\right)\) is the number of combinations choosing k distinct items from n. This value rapidly increases even for a small r, which would easily exceed the number of elements in the table. For instance, the total number of combinations for T=128 and r=9 is roughly 20.6 trillion, which is the same order of magnitude as the estimated number of cells in the human body [47]. To avoid the problem, CellAtlasSearch seems to stop searching at some cutoff distance from the query bit vector [15], but choosing a good threshold for each cell is rather difficult. Instead, by dividing a bit vector of length T into m subvectors of the same length (we assume T is divisible by m for brevity), when r<m, it is possible to find at least one subvector that perfectly matches the corresponding subvector of the query with the pigeonhole principle. This partial matching can be used to find candidate bit vectors quickly using a table data structure. Even when there are no perfect matches in subvectors, the search space greatly shrinks by the division. Our implementation uses a direct-address table to index subvectors, and because the buckets of the table are fairly sparse (i.e., mostly vacant), we devised a data structure, illustrated in Fig. 11, in order to exploit the sparsity and CPU caches. In addition, we found that inserting a data prefetch instruction greatly improves the performance of scanning candidates in buckets, because it reduces cache misses when bit vectors do not fit into the CPU caches. Please refer to [20] and our source code for details of the algorithm and the data structure. Data structure indexing subvectors. Given a subvector of the query bit vector, the subindex calculates the locations of the bit vectors in the database that contains the subvector in the same position. The subindex consists of three arrays: filled, offsets, and buckets. The filled array is a bit vector of length 2s, where s is the length of indexed subvectors (in this figure, T=12 and s=4), and supports bit counting in a specific range with a constant time, which is used to calculate the location of the offsets array for a given subvector (highlighted four bits of the query). The offsets and buckets arrays are jointly used to obtain the locations of the database array at which bit vectors with a given subvector are stored The technique of dividing and indexing bit vectors reduces the cost of the search. However, it is still difficult in practice using long bit vectors due to expansion of the search space. To overcome this problem, we use multiple MIH indexes that are independent of each other. We refer to the number of MIH indexes as L; thus, the number of bits stored in a database per cell is TL. The L indexes separately find their own k-nearest neighbors of a given query and thus collect kL possibly duplicated neighboring candidates for each query. These candidates are passed to the next ranking phase. In our method, the two parameters T and L control the trade-off between accuracy and computational cost of the search. Ranking cells After collecting kL candidates from the L indexes, CellFishing.jl orders the kL cells to return only the top k cells. The algorithm computes total Hamming distances from a query and retains the top k candidates with the smallest Hamming distances. Candidates with identical distances, if any, are ordered in an arbitrary but deterministic way. Similarity estimation CellFishing.jl can retrieve all the k-nearest neighbors of a query cell without any cut-off distance. This is particularly important when the query cell and its nearest neighbors in the database are similar but considerably different due to various factors (e.g., batch effects). However, it also means that CellFishing.jl may retrieve unrelated cells with very low similarity. Using hashed bit vectors, we can estimate similarities between the query and its neighbors by their Hamming distances, but the range of this distance varies depending on the parameters T and L, and thus is therefore counter-intuitive. Accordingly, CellFishing.jl provides a utility function to estimate the cosine similarity between cells from their Hamming distance. The cosine similarity is normalized between −1 and 1 and is therefore easier to interpret than the Hamming distance. Single-cell DEG detection CellFishing.jl implements a utility function to detect DEGs between two cells (e.g., a query cell and its nearest neighbor in a database), which can be used to evaluate the search outcome in a post hoc manner. Here we refer to a query and a reference cells as u and v, respectively. The DEG detection function first retrieves the k-nearest neighbors of v from the database and we collectively refer to the set of neighbors as \(\mathcal V = \{v_{1}, v_{2}, \ldots, v_{k}\}\) (note that v will be included in \(\mathcal V\) as it is also in the same database). Then the raw counts of v1,v2,…,vk are normalized so that their cell-wise total counts are equal to the total count of u. The arithmetic mean of the normalized counts for gene i, λi, is used as an estimator of the mean parameter of the gene of u. Finally, the probability of observing a count yi or more extremes for each gene i is calculated as \(\mathrm P(Y \le y_{i}) = \sum _{k \le y_{i}}\text {Pois}(k \mid \lambda _{i})\) (negative) or \(\mathrm P(Y \ge y_{i}) = \sum _{k \ge y_{i}} \text {Pois}(k \mid \lambda _{i})\) (positive), where Pois(·∣λ) is the probability mass function of the Poisson distribution with mean λ. This procedure assumes that all the cells in a local neighborhood, \(\mathcal V\), are not biologically different from each other, and therefore, the differences of their normalized counts are due to randomness. The parameter k controls the bias-variance tradeoff of this model assumption. In our method, k is set to ten by default. The estimator of λi introduced above can be justified as follows. Here, we assume that the UMI count of a gene for u and v follows yu∼Pois(βuλ) and yv∼Pois(βvλ), where βu and βv denote the capture efficiency of u and v and λ denotes the true gene expression (the concentration of mRNA) of the two cells; the index i indicating genes is dropped for brevity. Note that the same expression level λ is shared between u and v, which is the fundamental assumption for DEG detection. We also assume that the ratio of βu and βv is equal to the ratio of the total counts (or the library sizes), denoted by nu and nv. Under these assumptions, we can derive the expectation of yu from the normalized count of yv as follows: $$\begin{array}{*{20}l} \mathbb{E}\left[\frac{n_{u}}{n_{v}} y_{v}\right] = \mathbb{E}\left[\frac{\beta_{u}}{\beta_{v}} y_{v}\right] = \frac{\beta_{u}}{\beta_{v}}\mathbb{E}[y_{v}] = \frac{\beta_{u}}{\beta_{v}} (\beta_{v} \lambda)\\ = \beta_{u} \lambda = \mathbb{E}[y_{u}]. \end{array} $$ Therefore, the arithmetic mean of the normalized counts of nearby cells is an unbiased estimator of the expectation of yu. DEG detection functionality is an optional utility of CellFishing.jl because it requires additional disk space to store the raw UMI counts of the database cells. If the database stores the raw counts, the count matrix is compressed by Blosc (http://blosc.org/), a high performance compressor optimized for binary data. CellFishing.jl uses LZ4HC (https://lz4.github.io/lz4/) as the backend compressor of Blosc with the maximum compression level (i.e., level = 9) and shuffling, achieving high compression ratio and rapid decompression in our preliminary experiments using actual scRNA-seq data. When loading a database object with a count matrix from a file, CellFishing.jl does not directly load the matrix into memory. Instead, the compressed matrix is mapped to the memory space as a memory-mapped file using the mmap system call on a POSIX-compliant platform or its counterpart in Windows, and essential parts of the matrix are decompressed on request. This has several advantages such as reducing data loading time, avoiding unnecessary memory allocation, and sharing the same data among different processes without duplication. CellFishing.jl is an open-source package written in the Julia language [19] and is distributed under the MIT License. Julia is a high-performance dynamic programming language for technical computing, which compiles the source code at run-time and makes it easier to install CellFishing.jl, since the user does not need to compile the source code during installation. The entire code of the package is written in Julia, as it makes the code simpler while its performance is closely comparable to other compiled programming languages such as C. The installation can be done using a package manager bundled with Julia. The source code and the documentation of CellFishing.jl are hosted on GitHub: https://github.com/bicycle1885/CellFishing.jl. The maximum performance of CellFishing.jl is achieved by exploiting the characteristics of modern processors. For example, CellFishing.jl heavily uses the POPCNT instruction (to count the number of 1 bits) and the PREFETCHNTA instruction (to prefetch data into caches from memory) introduced by the Streaming SIMD Extensions to compute the Hamming distance between bit vectors. These instructions are available on most processors manufactured by Intel or AMD. Since Julia compiles the source code at run time, suitable instructions for a processor are automatically selected. Also, the linear algebra libraries included in Julia, such as OpenBLAS and LAPACK, contribute to the performance of the preprocessing and hashing phases. We consider that using accelerators such as GPUs is not particularly important in CellFishing.jl because these phases do not represent a major bottleneck. The script files used in this study are included in Additional file 3. To ensure reproducibility, all experiments were run using Snakemake [48], a Python-based workflow management tool for bioinformatics. We used Julia 1.0.1 to run CellFishing.jl 0.3.0 and R 3.5.0 to run scmap 1.2.0. R and scmap were installed in a Docker image built on top of Bioconductor's Docker image (https://hub.docker.com/r/bioconductor/release_base2/, R3.5.0_Bioc3.7) [49], with the Dockerfile included in the additional file. All the plots and tables in this manuscript were generated in a Jupyter notebook [50], which is also included in the same additional file. We used Linux machines with Intel Xeon Gold 6126 CPU (629.4 GiB of memory, hard disk drive) or Intel Xeon CPU E5-2637 v4 (251.6 GiB of memory, hard disk drive) to benchmark the run-time performance. Performance comparisons between CellFishing.jl and scmap-cell were performed on the same machine. Islam S, Kjällquist U, Moliner A, Zajac P, Fan JB, Lönnerberg P, Linnarsson S. Characterization of the single-cell transcriptional landscape by highly multiplex RNA-seq. Genome Res. 2011; 21(7):1160–7. https://doi.org/10.1101/gr.110882.110. Hashimshony T, Wagner F, Sher N, Yanai I. CEL-Seq: Single-Cell RNA-Seq by Multiplexed Linear Amplification. Cell Rep. 2012; 2(3):666–73. https://doi.org/10.1016/j.celrep.2012.08.003. Kivioja T, Vähärautio A, Karlsson K, Bonke M, Enge M, Linnarsson S, Taipale J. Counting absolute numbers of molecules using unique molecular identifiers. Nat Methods. 2012; 9(1):72–4. https://doi.org/10.1038/nmeth.1778. Islam S, Zeisel A, Joost S, La Manno G, Zajac P, Kasper M, Lönnerberg P, Linnarsson S. Quantitative single-cell RNA-seq with unique molecular identifiers. Nat Methods. 2014; 11(2):163–6. https://doi.org/10.1038/nmeth.2772. Klein AM, Mazutis L, Akartuna I, Tallapragada N, Veres A, Li V, Peshkin L, Weitz DA, Kirschner MW. Droplet barcoding for single-cell transcriptomics applied to embryonic stem cells. Cell. 2015; 161(5):1187–201. https://doi.org/10.1016/j.cell.2015.04.044. Macosko EZ, Basu A, Satija R, Nemesh J, Shekhar K, Goldman M, Tirosh I, Bialas AR, Kamitaki N, Martersteck EM, Trombetta JJ, Weitz DA, Sanes JR, Shalek AK, Regev A, McCarroll SA. Highly parallel genome-wide expression profiling of individual cells using nanoliter droplets. Cell. 2015; 161(5):1202–14. https://doi.org/10.1016/j.cell.2015.05.002. Zheng GXY, Terry JM, Belgrader P, Ryvkin P, Bent ZW, Wilson R, Ziraldo SB, Wheeler TD, McDermott GP, Zhu J, Gregory MT, Shuga J, Montesclaros L, Underwood JG, Masquelier DA, Nishimura SY, Schnall-Levin M, Wyatt PW, Hindson CM, Bharadwaj R, Wong A, Ness KD, Beppu LW, Deeg HJ, McFarland C, Loeb KR, Valente WJ, Ericson NG, Stevens EA, Radich JP, Mikkelsen TS, Hindson BJ, Bielas JH. Massively parallel digital transcriptional profiling of single cells. Nat Commun. 2017; 8:14049. https://doi.org/10.1038/ncomms14049. Sasagawa Y, Danno H, Takada H, Ebisawa M, Tanaka K, Hayashi T, Kurisaki A, Nikaido I. Quartz-Seq2: A high-throughput single-cell RNA-sequencing method that effectively uses limited sequence reads. Genome Biol. 2018; 19:29. https://doi.org/10.1186/s13059-018-1407-3. Shekhar K, Lapan SW, Whitney IE, Tran NM, Macosko EZ, Kowalczyk M, Adiconis X, Levin JZ, Nemesh J, Goldman M, McCarroll SA, Cepko CL, Regev A, Sanes JR. Comprehensive Classification of Retinal Bipolar Neurons by Single-Cell Transcriptomics. Cell. 2016; 166(5):1308–23.e30. https://doi.org/10.1016/j.cell.2016.07.054. Plass M, Solana J, Wolf FA, Ayoub S, Misios A, Glažar P, Obermayer B, Theis FJ, Kocks C, Rajewsky N. Cell type atlas and lineage tree of a whole complex animal by single-cell transcriptomics. Science. 2018; 360(6391):eaa1723. https://doi.org/10.1126/science.aaq1723. Regev A, Teichmann SA, Lander ES, Amit I, Benoist C, Birney E, Bodenmiller B, Campbell P, Carninci P, Clatworthy M, Clevers H, Deplancke B, Dunham I, Eberwine J, Eils R, Enard W, Farmer A, Fugger L, Göttgens B, Hacohen N, Haniffa M, Hemberg M, Kim S, Klenerman P, Kriegstein A, Lein E, Linnarsson S, Lundberg E, Lundeberg J, Majumder P, Marioni JC, Merad M, Mhlanga M, Nawijn M, Netea M, Nolan G, Pe'er D, Phillipakis A, Ponting CP, Quake S, Reik W, Rozenblatt-Rosen O, Sanes J, Satija R, Schumacher TN, Shalek A, Shapiro E, Sharma P, Shin JW, Stegle O, Stratton M, Stubbington MJT, Theis FJ, Uhlen M, Van Oudenaarden A, Wagner A, Watt F, Weissman J, Wold B, Xavier R, Yosef N. The human cell atlas. eLife. 2017; 6:e27041. https://doi.org/10.7554/eLife.27041. The Tabula Muris Consortium. Single-cell transcriptomics of 20 mouse organs creates a Tabula Muris. Nature. 2018; 562(7727):367–372. https://doi.org/10.1038/s41586-018-0590-4. Kiselev VY, Yiu A, Hemberg M. scmap: projection of single-cell RNA-seq data across data sets. Nat Methods. 2018; 15(5):359–62. https://doi.org/10.1038/nmeth.4644. Jegou H, Douze M, Schmid C. Product Quantization for Nearest Neighbor Search Herve. IEEE Trans Pattern Anal Mach Intell. 2011; 33(1):117–28. https://doi.org/10.1109/TPAMI.2010.57. Srivastava D, Iyer A, Kumar V, Sengupta D. CellAtlasSearch: a scalable search engine for single cells. Nucleic Acids Res. 2018; 46(W1):W141–W147. https://doi.org/10.1093/nar/gky421. Charikar MS. Similarity estimation techniques from rounding algorithms. Proceedings of the thiry-fourth annual ACM symposium on Theory of computing - STOC '02. 2002;:380–388. https://doi.org/10.1145/509961.509965. Park J, Shrestha R, Qiu C, Kondo A, Huang S, Werth M, Li M, Barasch J, Suszták K. Single-cell transcriptomics of the mouse kidney reveals potential cellular targets of kidney disease. Science. 2018; 360(6390):758–63. https://doi.org/10.1126/science.aar2131. Sato K. Fast and scalable cell search tool. GitHub. https://github.com/bicycle1885/CellFishing.jl. Bezanson J, Edelman A, Karpinski S, Shah VB. Julia: A Fresh Approach to Numerical Computing. SIAM Rev. 2017; 59(1):65–98. https://doi.org/10.1137/141000671. Norouzi M, Punjani A, Fleet DJ. Fast Exact Search in Hamming Space With Multi-Index Hashing. IEEE Trans Pattern Anal Mach Intell. 2014; 36(6):1107–19. https://doi.org/10.1109/TPAMI.2013.231. Wagner F, Yan Y, Yanai I. K-nearest neighbor smoothing for single-cell RNA-Seq data. bioRxiv. 2017. https://doi.org/10.1101/217737. Chen W, Li Y, Easton J, Finkelstein D, Wu G, Chen X. UMI-count modeling and differential expression analysis for single-cell RNA sequencing. Genome Biol. 2018; 19:70. https://doi.org/10.1186/s13059-018-1438-9. Halko N, Martinsson PG, Tropp JA. Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions. SIAM Rev. 2011; 53(2):217–88. https://doi.org/10.1137/090771806. Butler A, Hoffman P, Smibert P, Papalexi E, Satija R. Integrating single-cell transcriptomic data across different conditions, technologies, and species. Nat Biotechnol. 2018; 36(5):411–20. https://doi.org/10.1038/nbt.4096. Li H, Linderman GC, Szlam A, Stanton KP, Kluger Y, Tygert M. Algorithm 971. ACM Trans Math Softw. 2017; 43(3):28. https://doi.org/10.1145/3004053. Ji J, Li J, Yan S, Zhang B, Tian Q. Super-bit locality-sensitive hashing. Adv Neural Inf Process Syst. 2012;:108–116. McInnes L, Healy J. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv:1802.03426. Li H, Courtois ET, Sengupta D, Tan Y, Chen KH, Goh JJL, Kong SL, Chua C, Hon LK, Tan WS, Wong M, Choi PJ, Wee LJK, Hillmer AM, Tan IB, Robson P, Prabhakar S. Reference component analysis of single-cell transcriptomes elucidates cellular heterogeneity in human colorectal tumors. Nat Genet. 2017; 49(5):708–18. https://doi.org/10.1038/ng.3818. Cohen J. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychol Bull. 1968; 70(4):213–20. https://doi.org/10.1037/h0026256. McCarthy DJ, Campbell KR, Lun ATL, Wills QF. Scater: pre-processing, quality control, normalization and visualization of single-cell RNA-seq data in R. Bioinformatics. 2017; 33(8):1179–86. https://doi.org/10.1093/bioinformatics/btw777. Freeman MF, Tukey JW. Transformations Related to the Angular and the Square Root. Ann Math Stat. 1950; 21(4):607–11. https://doi.org/10.1214/aoms/1177729756. Kennedy DE, Knight KL. Inflammatory Changes in Bone Marrow Microenvironment Associated with Declining B Lymphopoiesis. J Immunol. 2017; 198(9):3471–479. https://doi.org/10.4049/jimmunol. Consortium TM, Webber J, Batson J, Pisco A. Single-cell RNA-seq data from Smart-seq2 sequencing of FACS sorted cells (v2). figshare. https://doi.org/10.6084/m9.figshare.5829687.v8. Picelli S, Björklund Å. K., Faridani OR, Sagasser S, Winberg G, Sandberg R. Smart-seq2 for sensitive full-length transcriptome profiling in single cells. Nat Methods. 2013; 10(11):1096–1098. https://doi.org/10.1038/nmeth.2639. Sinha D, Kumar A, Kumar H, Bandyopadhyay S, Sengupta D. dropClust: efficient clustering of ultra-large scRNA-seq data. Nucleic Acids Res. 2018; 46(6):e36. https://doi.org/10.1093/nar/gky007. Zitnik M, Sosic R, Leskovec J. Prioritizing network communities. Nat Commun. 2018; 9:2544. https://doi.org/10.1038/s41467-018-04948-5. Van Der Maaten L. Accelerating t-SNE using Tree-Based Algorithms. J Mach Learn Res. 2014; 15(Oct):3221–3245. http://jmlr.org/papers/v15/vandermaaten14a.html. Jiang L, Chen H, Pinello L, Yuan GC. GiniClust: Detecting rare cell types from single-cell gene expression data with Gini index. Genome Biol. 2016; 17:144. https://doi.org/10.1186/s13059-016-1010-4. Berry MW, Drmac Z, Jessup ER. Matrices, Vector Spaces, and Information Retrieval. SIAM Rev. 1999; 41(2):335–62. https://doi.org/10.1137/S0036144598347035. Johnson WE, Li C, Rabinovic A. Adjusting batch effects in microarray expression data using empirical Bayes methods. Biostatistics. 2007; 8(1):118–27. https://doi.org/10.1093/biostatistics/kxj037. Haghverdi L, Lun ATL, Morgan MD, Marioni JC. Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors. Nat Biotechnol. 2018; 36(5):421–7. https://doi.org/10.1038/nbt.4091. Risso D, Perraudeau F, Gribkova S, Dudoit S, Vert JP. A general and flexible method for signal extraction from single-cell RNA-seq data. Nat Commun. 2018; 9(1):284. https://doi.org/10.1038/s41467-017-02554-5. Grün D, Kester L, van Oudenaarden A. Validation of noise models for single-cell transcriptomics. Nat Methods. 2014; 11(6):637–40. https://doi.org/10.1038/nmeth.2930. Ziegenhain C, Vieth B, Parekh S, Reinius B, Guillaumet-Adkins A, Smets M, Leonhardt H, Heyn H, Hellmann I, Enard W. Comparative Analysis of Single-Cell RNA Sequencing Methods. Mol Cell. 2017; 65(4):631–643.e4. https://doi.org/10.1016/j.molcel.2017.01.023. Lun ATL, McCarthy DJ, Marioni JC. A step-by-step workflow for low-level analysis of single-cell RNA-seq data. F1000Research. 2016; 5:2122. https://doi.org/10.12688/f1000research.9501.1. Brennecke P, Anders S, Kim JK, Kołodziejczyk Aa, Zhang X, Proserpio V, Baying B, Benes V, Teichmann S. a., Marioni JC, Heisler MG. Accounting for technical noise in single-cell RNA-seq experiments. Nat Methods. 2013; 10(11):1093–5. https://doi.org/10.1038/nmeth.2645. Bianconi E, Piovesan A, Facchin F, Beraudi A, Casadei R, Frabetti F, Vitale L, Pelleri MC, Tassani S, Piva F, Perez-Amodio S, Strippoli P, Canaider S. An estimation of the number of cells in the human body. Ann Hum Biol. 2013; 40(6):463–71. https://doi.org/10.3109/03014460.2013.807878. Koster J, Rahmann S. Snakemake–a scalable bioinformatics workflow engine. Bioinformatics. 2012; 28(19):2520–2. https://doi.org/10.1093/bioinformatics/bts480. Gentleman RC, Carey VJ, Bates DM, Bolstad B, Dettling M, Dudoit S, Ellis B, Gautier L, Ge Y, Gentry J, Hornik K, Hothorn T, Huber W, Iacus S, Irizarry R, Leisch F, Li C, Maechler M, Rossini AJ, Sawitzki G, Smith C, Smyth G, Tierney L, Yang JY, Zhang J. Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004; 5(10):80. https://doi.org/10.1186/gb-2004-5-10-r80. Kluyver T, Ragan-kelley B, Pérez F, Granger B, Bussonnier M, Frederic J, Kelley K, Hamrick J, Grout J, Corlay S, Ivanov P, Avila D, Abdalla S, Willing C. Jupyter Notebooks–a publishing format for reproducible computational workflows. Positioning and Power in Academic Publishing: Players, Agents and Agendas. 2016. :87–90. https://doi.org/10.3233/978-1-61499-649-1-87. Baron M, Veres A, Wolock SL, Faust AL, Gaujoux R, Vetere A, Ryu JH, Wagner BK, Shen-Orr SS, Klein AM, Melton DA, Yanai I. A Single-Cell Transcriptomic Map of the Human and Mouse Pancreas Reveals Inter- and Intra-cell Population Structure. Cell Syst. 2016; 3(4):346–60. https://doi.org/10.1016/j.cels.2016.08.011. Veres A, Baron M. A single-cell transcriptomic map of the human and mouse pancreas reveals inter- and intra-cell population structure. NCBI GEO. https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE84133. Accessed 14 June 2018. Shekhar K. Drop-Seq analysis of P17 FACS sorted retinal cells from the Tg(Chx10-EGFP/cre,-ALPP)2Clc or Vsx2-GFP transgenic line. NCBI GEO. https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE81904. Accessed 14 June 2018. Glažar P. Planaria Single Cell Atlas. website. https://shiny.mdc-berlin.de/psca/. Accessed 14 June 2018. Botvinnik O, Webber J, Batson J, Pisco A. Single-cell RNA-seq data from microfluidic emulsion (v2). figshare. https://doi.org/10.6084/m9.figshare.5968960.v3. 10x Genomics Inc.1.3 Million Brain Cells from E18 Mice. 2017. https://support.10xgenomics.com/single-cell-gene-expression/datasets/1.3.0/1M_neurons. Accessed: 2018-7-5. Sato K, Tsuyuzaki K, Shimizu K, Nikaido I. CellFishing.jl: an ultrafast and scalable cell search method for single-cell RNA-sequencing. Source code. Github. https://github.com/bicycle1885/CellFishing.jl. We thank Hirotaka Matsumoto for helpful discussions. We also thank Mr. Akihiro Matsushima and Mr. Manabu Ishii for their assistance with the IT infrastructure for the data analysis. We are also grateful to all members of the Laboratory for Bioinformatics Research, RIKEN Center for Biosystems Dynamics Research for giving us helpful advice. This work was supported by MEXT KAKENHI Grant Number 16K16152. This work was supported by the Projects for Technological Development, Research Center Network for Realization of Regenerative Medicine by Japan (18bm0404024h0001), the Japan Agency for Medical Research and Development (AMED), and JST CREST grant number JPMJCR16G3, Japan to I.N. CellFishing.jl is implemented in the Julia programming language, and the source code is freely available under the MIT license at https://github.com/bicycle1885/CellFishing.jl(DOI: https://doi.org/10.5281/zenodo.1495440)[57]. All analyses and figures in this paper can be reproduced using the scripts in Additional file 3. The Baron2016 [51, 52] and Shekhar2016 data sets [9, 53], were downloaded from the Gene Expression Omnibus with accession numbers GSE84133 and GSE81904, respectively. The list of homologous genes between human and mouse was downloaded from the Vertebrate Homology database of Mouse Genome Informatics at http://www.informatics.jax.org/homology.shtml. The cluster annotation file of Shekhar2016 was downloaded from https://portals.broadinstitute.org/single_cell/study/retinal-bipolar-neuron-drop-seq. The Plass2018 data set[10,26] was downloaded from https://shiny.mdc-berlin.de/psca/. The TabulaMuris data set was downloaded from https://figshare.com/articles/Single-cell_RNA-seq_data_from_Smart-seq2_sequencing_of_FACS_sorted_cells_v2_/5829687and https://figshare.com/articles/Single-cell_RNA-seq_data_from_microfluidic_emulsion_v2_/5968960. The 1M neurons data set was downloaded from https://support.10xgenomics.com/single-cell-gene-expression/datasets/1.3.0/1M_neurons. Department of Biotechnology, Graduate School of Agricultural and Life Sciences, The University of Tokyo, Bunkyo-ku, Tokyo, 113-8657, Japan Kenta Sato & Kentaro Shimizu Laboratory for Bioinformatics Research RIKEN Center for Biosystems Dynamics Research, Wako, Saitama, 351-0198, Japan Kenta Sato, Koki Tsuyuzaki & Itoshi Nikaido Bioinformatics Course, Master's/Doctoral Program in Life Science Innovation (T-LSI), School of Integrative and Global Majors (SIGMA), University of Tsukuba, Wako, Saitama, 351-0198, Japan Itoshi Nikaido Kenta Sato Koki Tsuyuzaki Kentaro Shimizu KST, KT, KSH, and IN designed the study. KST implemented all the software. KT validated all the R scripts. KST, KT, and IN wrote the manuscript. All authors have read and approved the final manuscript. Correspondence to Kenta Sato or Itoshi Nikaido. Additional file 1 Additional and high-resolution figures. Additional plots referred in the manuscript. (PDF 12,953 kb) Analysis notebook. Compiled Jupyter notebook to run analysis of experiments. (HTML 17,183 kb) Experiment scripts. Archived script files to reproduce the experiments. (TAR 81 kb) Review history. (DOCX 31 kb) Sato, K., Tsuyuzaki, K., Shimizu, K. et al. CellFishing.jl: an ultrafast and scalable cell search method for single-cell RNA sequencing. Genome Biol 20, 31 (2019). https://doi.org/10.1186/s13059-019-1639-x Cell searching Cell typing Locality-sensitive hashing Submission enquiries: [email protected]
CommonCrawl
Materials Research (187) MRS Online Proceedings Library Archive (179) Epidemiology & Infection (15) Acta geneticae medicae et gemellologiae: twin research (6) Journal of Plasma Physics (6) European Journal of Anaesthesiology (4) Radiocarbon (4) The Journal of Agricultural Science (3) The Journal of Laryngology & Otology (3) Journal of Helminthology (2) Materials Research Society Internet Journal of Nitride Semiconductor Research (2) Revue de Métallurgie – International Journal of Metallurgy (2) Materials Research Society (189) International Astronomical Union (16) MSC - Microscopical Society of Canada (10) Testing Membership Number Upload (2) JLO (1984) Ltd (1) MiMi / EMAS - European Microbeam Analysis Society (1) Society for Psychophysiological Research (1) The Australian Society of Otolaryngology Head and Neck Surgery (1) The New Zealand Society of Otolaryngology, Head and Neck Surgery (1) Radial density profile and stability of capillary discharge plasma waveguides of lengths up to 40 cm Target Fabrication 2021 M. Turner, A. J. Gonsalves, S. S. Bulanov, C. Benedetti, N. A. Bobrova, V. A. Gasilov, P. V. Sasorov, G. Korn, K. Nakamura, J. van Tilborg, C. G. Geddes, C. B. Schroeder, E. Esarey Journal: High Power Laser Science and Engineering / Volume 9 / 2021 Published online by Cambridge University Press: 26 April 2021, e17 We measured the parameter reproducibility and radial electron density profile of capillary discharge waveguides with diameters of 650 $\mathrm{\mu} \mathrm{m}$ to 2 mm and lengths of 9 to 40 cm. To the best of the authors' knowledge, 40 cm is the longest discharge capillary plasma waveguide to date. This length is important for $\ge$10 GeV electron energy gain in a single laser-driven plasma wakefield acceleration stage. Evaluation of waveguide parameter variations showed that their focusing strength was stable and reproducible to $<0.2$% and their average on-axis plasma electron density to $<1$%. These variations explain only a small fraction of laser-driven plasma wakefield acceleration electron bunch variations observed in experiments to date. Measurements of laser pulse centroid oscillations revealed that the radial channel profile rises faster than parabolic and is in excellent agreement with magnetohydrodynamic simulation results. We show that the effects of non-parabolic contributions on Gaussian pulse propagation were negligible when the pulse was approximately matched to the channel. However, they affected pulse propagation for a non-matched configuration in which the waveguide was used as a plasma telescope to change the focused laser pulse spot size. Nightlife clusters of coronavirus disease in Tokyo between March and April 2020 S. Takaya, S. Tsuzuki, K. Hayakawa, A. Kawashima, A. Okuhama, K. Kanda, T. Suzuki, Y. Akiyama, Y. Miyazato, S. Ide, K. Nakamura, H. Nomoto, T. Nakamoto, S. Hikida, J. Tanuma, K. Ohara, T. Ito, T. Baba, K. Yamamoto, M. Ujiie, S. Saito, S. Morioka, M. Ishikane, N. Kinoshita, S. Kutsuna, N. Ohmagari Journal: Epidemiology & Infection / Volume 148 / 2020 Published online by Cambridge University Press: 13 October 2020, e250 We analysed associations between exposure to nightlife businesses and severe acute respiratory syndrome coronavirus 2 PCR test results at a tertiary hospital in Tokyo between March and April 2020. A nightlife group was defined as those who had worked at or visited the businesses. We included 1517 individuals; 196 (12.9%) were categorised as the nightlife group. After propensity score matching, the proportion of positive PCR tests in the nightlife group was significantly higher than that in the non-nightlife group (nightlife, 63.8%; non-nightlife, 23.0%; P < 0.001). An inclusive approach to mitigate risks related to the businesses needs to be identified. P01-305 - Differences in the Treatment Response to Antithyroid Drugs Versus Electroconvulsive Therapy in a Case of Recurrent Catatonia With Graves' Disease T. Saito, R. Saito, H. Suwa, F. Yakushiji, K. Takezawa, M. Nakamura Journal: European Psychiatry / Volume 25 / Issue S1 / 2010 Published online by Cambridge University Press: 17 April 2020, p. 1 Treatment of medical catatonia sometimes requires management of presenting psychiatric syndrome as well as treatment of causative medical illnesses. Treatment response to these approaches may, however, differ even in the same patient suffering a relapse of a similar clinical picture, and this presents a therapeutic challenge for physicians. We therefore report a case of medical catatonia due to Graves' disease which responded to antithyroid therapy in the first episode but required electroconvulsive therapy (ECT) when relapsed, then discuss the possible mechanism which caused different treatment response. We describe the clinical course with the laboratory findings of recurrent catatonic episodes of a patient with Graves' disease. A 25-year-old woman manifested stupor, catalepsy and echolalia accompanying goiter, proptosis and tachycardia. Laboratory findings on thyroid function were consistent with Graves' disease. Treated with thyamazole (TMZ), her psychiatric and physical symptoms gradually subsided. She had remained euthyroid and symptom free for 10 years on TMZ 5 mg every other day. However, at age 35, she exhibited stupor with mannerism and stereotypy. Laboratory findings revealed mild exercerbation of thyroid function. Though antithyroid therapy regained euthyroid, her psychiatric symptoms persisted. Further treatment with neuroleptics did not ameliorate her symptoms. She was thus administered bitemporal ECT and recovered after 7th trial. Aggravation of thyroid function associated with catatonic symptoms when relapsed was milder compared to the first episode. This suggested that dysregulation of thyroid homeostasis in the central nervous system might have persisted despite the peripheral euthyroid and could affect the treatment response. PW01-147 - Diminished Limbic Activation Associated With Face Perception In Patients With Schizophrenia Y. Kawasaki, A. Furuichi, K. Nakamura, T. Takahashi, M. Suzuki The authors evaluated cerebral blood flow response in schizophrenia patients during face perception to test the hypothesis of diminished limbic activation related to emotional relevance of facial stimuli. Thirteen patients with schizophrenia and 17 comparison subjects viewed facial displays of happiness, sadness, surprise, anger, fear, and disgust as well as neutral faces using the Japanese and Caucasian Facial Expressions of Emotion and Neutral Faces (Matsumoto and Ekman, 1988). Functional magnetic resonance imaging was used to measure blood-oxygen-level-dependent signal changes as the subjects alternated between tasks of discriminating sex with an interleaved reference condition. The groups did not differ in performance on the task. Healthy participants showed activation in the bilateral fusiform gyrus, medial temporal structures, occipital lobe, and inferior frontal cortex relative to the baseline condition. The increase was greater these regions in the right hemisphere than those in the left hemisphere. In the patients with schizophrenia, minimal focal response in the right fusiform gyrus, medial temporal structures, and occipital lobe was observed for the facial perception task relative to the baseline condition. Contrasting patients and comparison subjects revealed voxels in the left medial temporal structures, occipital lobe in which the healthy comparison subjects had significantly greater activation. Impaired activation was seen in patients with schizophrenia for detection of facial attributes such as sex. Impairment in the medial temporal structure such as amygdale may lead to misunderstanding of social communication and may underlie difficulties in social adjustment experienced by people with schizophrenia. P02-98 - Improving Cognitive Functions of Seniors by Reading and Solving Arithmetic Problems K. Nakamura, M. Taira, R. Kawashima In general, recent cognitive training has received increasing interest as a solution to age-related cognitive decline. Although the general public's interest in cognitive or brain training is increasing, the generalizing or transfer effect of such training remains unclear. We previously introduced a new cognitive intervention program for senile dementia, named learning therapy. The principle of learning therapy is to activate the prefrontal cortex by solving simple mathematical and language problems. The purpose of this study was to examine the beneficial effects of a new cognitive intervention program designed for normal aged people, the concepts of which are is derived from learning therapy. We applied a similar daily cognitive training program to community-dwelling seniors to determine the effects of training on cognitive functions, particularly on the function of the prefrontal cortex, by a single-blind randomized controlled trial and a large scale cohort study. In these studies, neuropsychological measures were determined prior to and after six months of the intervention (post-test) by mini-mental state examination (MMSE), frontal assessment battery at bed side (FAB), and a digit-symbol substitution test (DST) of WAIS-R. The results indicate that the transfer effect of cognitive intervention by reading and solving arithmetic problems on non-targeted cognitive functions was demonstrated in both studies, and are convincing evidence that cognitive training has the beneficial effects of maintaining and improving cognitive functions. The psychological stimulations made possible by expertise and technology in brain science can improve the higher cognitive functions of seniors. P02-336 Longitudinal voxel - based morphometry to evaluate progressive gray mater changes in first - episode schizophrenia Y. Kawasaki, M. Suzuki, T. Takahashi, K. Nakamura Journal: European Psychiatry / Volume 26 / Issue S2 / March 2011 Published online by Cambridge University Press: 16 April 2020, p. 932 Although longitudinal magnetic resonance imaging (MRI) studies have shown that various brain regions undergo progressive tissue loss during the early phases of schizophrenia, regional pattern of these changes remain unclear. Longitudinal MRI data were obtained from 18 (12 males and 6 females) patients with first-episode schizophrenia and 20 (11 males and 9 females) healthy controls and at baseline and follow-up with mean scan interval of 2.7 years. To compare gray matter changes over time between patients and controls were evaluated with voxel-based morphometry (VBM) using SPM8 following the longitudinal DARTEL protocol. In both groups of patient and control longitudinal gray mater reduction was observed in various brain regions including lateral and medial frontal regions and superior temporal region. Excessive decrease in gray matter was found in patients as compared to healthy controls in the left superior temporal region and right inferior frontal region. Our findings suggest that there are differing longitudinal gray matter changes in patients with schizophrenia during the early phases of the illness as compared to healthy individuals. Neuropsychiatric and Neuropsychologic Typology of Fronto-Temporal Dementia S. Yoshida, M. Nakanishi, S. Nakaaki, H. Nakamura, T. Hamanaka, K. Hadano Published online by Cambridge University Press: 16 April 2020, p. 217s P0165 - Neuroradiologic evidence of dopaminergic involvement in idiopathic basal ganglia calcification T. Saito, M. Nakamura, Y. Shima, T. Shimizu, S. Murayama, K. Oda, K. Ishiwata, K. Ishii, K. Isse Journal: European Psychiatry / Volume 23 / Issue S2 / April 2008 Published online by Cambridge University Press: 16 April 2020, pp. S240-S241 Background and Aims: Idiopathic basal ganglia calcification (IBGC) is a neuropathological finding known to manifest motor disturbance, cognitive impairment and psychiatric symptoms. Pathophysiology of psychiatric symptoms, however, remains controversial. Previous biochemical study suggests that dopaminergic impairment is involved in IBGC. We therefore performed positron emission tomography (PET) to elucidate the pre- and postsynaptic dopaminergic function and glucose metabolism in two IBGC patients. Case 1 is a 44 years old woman presented with disorganized thought, echolalia, verbigeration and parkinsonism. She was administered bitemporal electro-convulsive therapy (ECT). Case 2 is a 35 years old woman with persecutory delusion. Computed tomography showed bilateral symmetric calcification of striatum, globus pallidus and dentate nucei. Other causes of intracranial calcification were excluded. PET scans were obtained using [11C]-labeled 2β-carbomethoxy-3β-(4-flurophenyl)-tropane, [11C]-labeled raclopride and [18F] fluorodeoxyglucose. The decreased binding potential was severe in bilateral head of caudate nuclei and anterior putamen. In case 1, the decline was also found in posterior putamen. There were widespread decreases of glucose uptake in frontal, temporal and parietal cortices bilaterally in case 1. Significant hypometabolism was observed in the right frontal, temporal and parietal cortices. After the ECT session, the previous areas of significant hypometabolism in the right hemisphere had improved. In case 2, there was no significant change of glucose metabolism in cerebral cortex. The difference in affected region within basal ganglia might be associated with the diverse clinical pictures in IBGC. Particularly, in the psychiatric manifestation, dopaminergic dysfunction in caudate nucleus and anterior putamen could be participated. Comparison of brain N-acetylaspartate levels and serum brain-derived neurotrophic factor (BDNF) levels between patients with first-episode schizophrenia psychosis and healthy controls N. Goto, R. Yoshimura, S. Kakeda, J. Moriya, K. Hayashi, A. Ikenouchi-Sugita, W. Umene-Nakano, H. Hori, N. Ueda, Y. Korogi, J. Nakamura Journal: European Psychiatry / Volume 26 / Issue 1 / January 2011 Published online by Cambridge University Press: 16 April 2020, pp. 57-63 N-acetylaspartate (NAA) levels and serum brain-derived neurotrophic factor (BDNF) levels in patients with first-episode schizophrenia psychosis and age- and sex-matched healthy control subjects were investigated. In addition, plasma levels of homovanillic acid (HVA) and 3-methoxy-4-hydroxyphenylglycol (MHPG) were compared between the two groups. Eighteen patients (nine males, nine females; age range: 13–52 years) were enrolled in the study, and 18 volunteers (nine males, nine females; age range: 15–49 years) with no current or past psychiatric history were also studied by magnetic resonance spectroscopy (MRS) as sex- and age-matched controls. Levels of NAA/Cr in the left basal ganglia (p = 0.0065) and parieto-occipital lobe (p = 0.00498), but not in the frontal lobe, were significantly lower in patients with first-episode schizophrenia psychosis than in control subjects. No difference was observed between the serum BDNF levels of patients with first-episode schizophrenia psychosis and control subjects. In regard to the plasma levels of catecholamine metabolites, plasma MHPG, but not HVA, was significantly lower in the patients with first-episode psychosis than in control subjects. In addition, a significantly positive correlation was observed between the levels of NAA/Cr of the left basal ganglia and plasma MHPG in all subjects. These results suggest that brain NAA levels in the left basal ganglia and plasma MHPG levels were significantly reduced at the first episode of schizophrenia psychosis, indicating that neurodegeneration via noradrenergic neurons might be associated with the initial progression of the disease. P-1313 - Altered Brain Activation During Self-referential Processing in Schizophrenia: an Fmri Study A. Furuichi, Y. Kawasaki, T. Takahashi, K. Nakamura, M. Suzuki Impaired self-awareness has been noted as a core feature of schizophrenia. Recent neuroimaging studies examining self-referential process in schizophrenia have yielded inconsistent results. We aimed to examine the self-referential neural network using the self- and other-evaluation tasks in schizophrenia. Fifteen schizophrenia patients and fifteen age-, sex- and parental education-matched healthy subjects underwent functional magnetic resonance imaging. Subjects were required to make a decision whether the sentence described their own personal trait (self-evaluation) and that of their close friends (other-evaluation). Both patients and healthy groups showed significant activation in multiple brain regions including the medial- and lateral-prefrontal, temporal and parietal cortices during self- and other-evaluation tasks. The control subjects showed higher activations in left posterior cingulate and parahippocampal gyri during self-evaluation than other-evaluation, whereas there was no difference in activated regions between self- and other-conditions in the patients. As compared with the controls, the patients showed higher activations in the right superior frontal and right supramarginal gyri during self-evaluation. These findings provide evidence for neural basis for deficits in self-awareness in schizophrenia and may underlie core clinical symptomatology of schizophrenia. Transnasal inferior meatal antrostomy with a mucosal flap for post-Caldwell–Luc mucoceles in the maxillary sinus M Suzuki, T Matsumoto, M Yokota, K Toyoda, Y Nakamura Journal: The Journal of Laryngology & Otology / Volume 133 / Issue 8 / August 2019 Transnasal inferior meatal antrostomy is increasingly used for the treatment of post-Caldwell–Luc mucoceles in maxillary sinus. This study aimed to report the outcomes after inferior meatal antrostomy with a mucosal flap for recurrent mucoceles. The records of patients who had undergone transnasal inferior meatal antrostomy with or without a mucosal flap were reviewed. Transnasal endoscopic inferior meatal antrostomy with or without a mucosal flap was performed in 21 and 49 patients, respectively. No complications were observed. A closing of the antrostomy was found in 9 (18.4 per cent) of the 49 patients who underwent antrostomy without a mucosal flap. No closings were observed in the 21 patients who underwent antrostomy with a mucosal flap. There was a significant difference in the rate of closing for surgery with and without the mucosal flap. Transnasal endoscopic inferior meatal antrostomy with a mucosal flap is a safe method for the treatment of post-Caldwell–Luc maxillary mucoceles that effectively prevents recurrence. 2 - Terrestrial Atmospheres By Jonathan L. Mitchell, Thomas Birner, Guillaume Lapeyre, Noboru Nakamura, Peter L Read, Gwendal Riviére, Agustín Sánchez-Lavega, Geoffrey K. Vallis Edited by Boris Galperin, University of South Florida, Peter L. Read, University of Oxford Book: Zonal Jets Print publication: 28 February 2019, pp 9-45 Novel host immune evasion strategy of the endoparasitoid Drino inconspicuoides K. Yamashita, K. Zhang, R.T. Ichiki, S. Nakamura, S. Furukawa Journal: Bulletin of Entomological Research / Volume 109 / Issue 5 / October 2019 The tachinid fly Drino inconspicuoides (Diptera: Tachinidae) is an ovolarviparous endoparasitoid whose larvae develop in the host haemocoel and avoids the host immune system. In this study, we investigated the immune evasion mechanisms of this species during infestation in the host Mythimna separata (Lepidoptera: Noctuidae). We discovered a unique 'cloak' that surrounded D. inconspicuoides larvae that penetrated into the host and determined through genomic polymerase chain reaction analysis that this structure originated from the host rather than the tachinid. The 'cloak' contained both haemocytes and fat body cells from the host, with the haemocytes assembling around the larvae first and the fat body cells then covering the haemocyte layer, following which the two mixed. Living D. inconspicuoides larvae that were wrapped in the 'cloak' were not melanized whereas encapsulated dead larvae were melanized, suggesting that this structure contributes to the avoidance of host immune reactions. The CAESAR New Frontiers Comet Sample Return Mission K. Nakamura-Messenger, S. W. Squyres, L. F. Pace, S. Messenger, D. F. Mitchell, D. S. Lauretta, D. P. Glavin, M. Houghton, A. G. Hayes, T. Nakamura, J. P. Dworkin, A. Nguyen, S. Clemett, Y. Furukawa, Y. Kimura, A. Takigawa, G. Blake, T. J. Zega, M. Mumma, S. Milam, C. D. K. Herd, J. Mitchell, and the CAESAR Project Team Three Dimensional Structure Analysis of Cell Nuclei in Mice Cerebellar Cortex Using Array Tomography M. Suga, H. Nisioka, M. Nakamura, K. Suzuki, K. Konishi, T. Nonaka, S. Kume, M. Maeda, Y. Kataoka, K. Ohta FM14 Session 1: Bridging the Astronomy Research and Outreach Communities - Recent Highlights, Emerging Collaborations, Best Practices and Support Structures Sze-leung Cheung, Sylvie D. Vauclair, Chenzhou Cui, Shanshan Li, Yoichiro Hanaoka, Sharon E. Hunt, Shio K. Kawagoe, Nobuhiko Kusakabe, Shigeru Nakamura, Grigoris Maravelias, Emmanouel Vourliotis, Krinio Marouda, Ioannis Belias, Emmanouel Kardasis, Pierros Papadeas, Iakovos D. Strikis, Eleftherios Vakalopoulos, Orfefs Voutyras, Lucia Marchetti, Thomas H. Jarrett, Franck Marchis, Arnaud Malvache, Laurent Marfisi, Antonin Borot, Emmanuel Arbouch, I. Villicaña-Pedraza, F. Carreto-Parra, S. Prugh, K. Lopez, J. Nuss, D. Cadena, V. Lopez, Priya Shah Journal: Proceedings of the International Astronomical Union / Volume 14 / Issue A30 / August 2018 Section 1 of the FM14 focus on bridging the astronomy research and outreach communities - recent highlights, emerging collaborations, best practices and support structures. This paper also contains supplementary materials that point to contributed talks and poster presentations that can be found online. Environmental Real Time Imaging with 200kV FE Aberration-corrected Analytical Scanning Transmission Electron Microscope (ESTEM) System with an Open Window Gas Injection H. Inada, H. Kikuchi, A. Hanawa, Y. Suzuki, M. Shirai, K. Nakamura Web survey-based selection of controls for epidemiological analyses of a multi-prefectural outbreak of enterohaemorrhagic Escherichia coli O157 in Japan associated with consumption of self-grilled beef hanging tender Y. Yahata, N. Ohshima, F. Odaira, N. Nakamura, H. Ichikawa, H. Ichikawa, K. Matsuno, J. Shuri, T. Toyozawa, J. Terajima, H. Watanabe, K. Nakashima, T. Sunagawa, K. Taniguchi, N. Okabe Journal: Epidemiology & Infection / Volume 146 / Issue 4 / March 2018 An outbreak of enterohaemorrhagic Escherichia coli O157 occurred in multiple prefectures of Japan in November 2009. We conducted two case–control studies with trace-back and trace-forward investigations to determine the source. The case definition was met by 21 individuals; 14 (66.7%) were hospitalised, but no haemolytic uraemic syndrome, acute encephalopathy or deaths occurred. Median age was 23 (range 12–48) years and 14 cases were male (66.7%). No significant associations with food were found in a case–control study by local public health centres, but our matched case–control study using Internet surveys found that beef hanging tender (or hanger steak), derived from the diaphragm of the cattle, was significantly associated with illness (odds ratio = 15.77; 95% confidence interval, 2.00–124.11). Pulsed-field gel electrophoresis analysis of isolates from patients and the suspected food showed five different patterns: two in faecal and food samples, and another three in patient faecal samples only, although there were epidemiological links to the meat consumed at the restaurants. Trace-back investigation implicated a common food processing company from outside Japan. Examination of the logistics of the meat processing company suggested that contamination did not occur in Japan. We concluded that the source of the outbreak was imported hanging tender. This investigation revealed that Internet surveys could be useful for outbreak investigations. Turbulence Heating ObserveR – satellite mission proposal A. Vaivads, A. Retinò, J. Soucek, Yu. V. Khotyaintsev, F. Valentini, C. P. Escoubet, O. Alexandrova, M. André, S. D. Bale, M. Balikhin, D. Burgess, E. Camporeale, D. Caprioli, C. H. K. Chen, E. Clacey, C. M. Cully, J. De Keyser, J. P. Eastwood, A. N. Fazakerley, S. Eriksson, M. L. Goldstein, D. B. Graham, S. Haaland, M. Hoshino, H. Ji, H. Karimabadi, H. Kucharek, B. Lavraud, F. Marcucci, W. H. Matthaeus, T. E. Moore, R. Nakamura, Y. Narita, Z. Nemecek, C. Norgren, H. Opgenoorth, M. Palmroth, D. Perrone, J.-L. Pinçon, P. Rathsman, H. Rothkaehl, F. Sahraoui, S. Servidio, L. Sorriso-Valvo, R. Vainio, Z. Vörös, R. F. Wimmer-Schweingruber Journal: Journal of Plasma Physics / Volume 82 / Issue 5 / October 2016 Published online by Cambridge University Press: 22 September 2016, 905820501 The Universe is permeated by hot, turbulent, magnetized plasmas. Turbulent plasma is a major constituent of active galactic nuclei, supernova remnants, the intergalactic and interstellar medium, the solar corona, the solar wind and the Earth's magnetosphere, just to mention a few examples. Energy dissipation of turbulent fluctuations plays a key role in plasma heating and energization, yet we still do not understand the underlying physical mechanisms involved. THOR is a mission designed to answer the questions of how turbulent plasma is heated and particles accelerated, how the dissipated energy is partitioned and how dissipation operates in different regimes of turbulence. THOR is a single-spacecraft mission with an orbit tuned to maximize data return from regions in near-Earth space – magnetosheath, shock, foreshock and pristine solar wind – featuring different kinds of turbulence. Here we summarize the THOR proposal submitted on 15 January 2015 to the 'Call for a Medium-size mission opportunity in ESAs Science Programme for a launch in 2025 (M4)'. THOR has been selected by European Space Agency (ESA) for the study phase. Polyandry increases reproductive performance but does not decrease survival in female Brontispa longissima K. Kawazu, W. Sugeno, A. Mochizuki, S. Nakamura Journal: Bulletin of Entomological Research / Volume 107 / Issue 2 / April 2017 The costs and benefits of polyandry are still not well understood. We studied the effects of multiple mating on the reproductive performance of female Brontispa longissima (Coleoptera: Chrysomelidae), one of the most serious pests of the coconut palm, by using three experimental treatments: (1) singly-mated females (single treatment); (2) females that mated 10 times with the same male (repetition treatment); and (3) females that mated once with each of 10 different males (polyandry treatment). Both multiple mating treatments resulted in significantly greater total egg production and the proportion of eggs that successfully hatched (hatching success) than with the single mating treatment. Furthermore, the polyandry treatment resulted in greater total egg production and hatching success than with the repetition treatment. Thus, mate diversity may affect the direct and indirect benefits of multiple mating. Female longevity, the length of the preoviposition period, the length of the period from emergence to termination of oviposition, and the length of the ovipositing period did not differ among treatments. The pronounced fecundity and fertility benefits that females gain from multiple mating, coupled with a lack of longevity costs, apparently explain the extreme polyandry in B. longissima.
CommonCrawl
Research | Open | Published: 22 February 2018 Spatiotemporal patterns and environmental drivers of human echinococcoses over a twenty-year period in Ningxia Hui Autonomous Region, China Angela M. Cadavid Restrepo1, Yu Rong Yang2,3, Donald P. McManus3, Darren J. Gray1,3, Tamsin S. Barnes4,5, Gail M. Williams6, Ricardo J. Soares Magalhães4,7, Nicholas A. S. Hamm8 & Archie C. A. Clements1 Parasites & Vectorsvolume 11, Article number: 108 (2018) | Download Citation Human cystic (CE) and alveolar (AE) echinococcoses are zoonotic parasitic diseases that can be influenced by environmental variability and change through effects on the parasites, animal intermediate and definitive hosts, and human populations. We aimed to assess and quantify the spatiotemporal patterns of human echinococcoses in Ningxia Hui Autonomous Region (NHAR), China between January 1994 and December 2013, and examine associations between these infections and indicators of environmental variability and change, including large-scale landscape regeneration undertaken by the Chinese authorities. Data on the number of human echinococcosis cases were obtained from a hospital-based retrospective survey conducted in NHAR for the period 1 January 1994 through 31 December 2013. High-resolution imagery from Landsat 4/5-TM and 8-OLI was used to create single date land cover maps. Meteorological data were also collected for the period January 1980 to December 2013 to derive time series of bioclimatic variables. A Bayesian spatio-temporal conditional autoregressive model was used to quantify the relationship between annual cases of CE and AE and environmental variables. Annual CE incidence demonstrated a negative temporal trend and was positively associated with winter mean temperature at a 10-year lag. There was also a significant, nonlinear effect of annual mean temperature at 13-year lag. The findings also revealed a negative association between AE incidence with temporal moving averages of bareland/artificial surface coverage and annual mean temperature calculated for the period 11–15 years before diagnosis and winter mean temperature for the period 0–4 years. Unlike CE risk, the selected environmental covariates accounted for some of the spatial variation in the risk of AE. The present study contributes towards efforts to understand the role of environmental factors in determining the spatial heterogeneity of human echinococcoses. The identification of areas with high incidence of CE and AE may assist in the development and refinement of interventions for these diseases, and enhanced environmental change risk assessment. Cystic (CE) and alveolar (AE) echinococcoses, caused by Echinococcus granulosus and E. multilocularis, respectively, are the two forms of human echinococcosis of major public health importance worldwide [1]. Both diseases are distributed widely and potentially life threatening if left untreated [2,3,4]. Within China, E. granulosus and E. multilocularis are responsible for approximately 0.6–1.3 million human cases, with transmission occurring predominantly in central and western areas. Based on reports from the Chinese Ministry of Health, more than 98% of patients with human echinococcoses originate from Gansu, Qinghai and Sichuan Provinces and from Xinjiang Uygur, Ningxia Hui and Inner Mongolia Autonomous Regions [5]. Although these regions constitute highly endemic areas for these diseases in East Asia, significant differences in parasite prevalences have been demonstrated at regional and local levels [6,7,8]. On the Qinghai-Tibet Plateau, where there is high transmission of Echinococcus spp., the prevalence of both CE and AE ranges between 0.4–9.5%, being higher in communities where pastoralism and poor socio-economic conditions are predominant [9, 10]. The patchy AE endemicity distribution has been associated with landscape characteristics and climatic factors that determine habitat suitability for the definitive and intermediate hosts [11,12,13,14,15,16,17]. Hence, understanding how environmental and social factors interact to determine parasite transmission is essential for the design and implementation of effective strategies against echinococcosis, and to target resources to the communities most in need. Echinococcus spp. are maintained primarily through complex domestic and sylvatic life-cycles that involve a wide range of intermediate and definitive hosts and a free-living egg stage. Humans are accidental hosts, that acquire the infection through direct contact with definitive hosts or through a contaminated environment [2]. In the sylvatic and semi-domestic (E. multilocularis) and domestic (E. granulosus) life-cycles of the parasites, distinct socio-demographic and environmental factors modulate the parasite-host-human interplay at different spatial scales [18, 19]. Therefore, different processes of environmental change have the potential to modify the transmission pathways of these parasites [18]. Various land reform policies and incentive programmes have been developed in China to recover degraded lands and promote sustainable development in rural areas [20]. The Grain for Green Project (GGP), also called the Sloping Land Conversion Programme, implemented since 1999, is one of the largest payment for ecosystems services schemes in China [21]. The main focus of the programme is to rehabilitate the ecological environment by promoting three different types of land conversion on steep slopes: cropland to grasslands, cropland to forest and wasteland to forest [21]. The GGP also advocates for small ruminant enclosure and grazing prohibition. In highly endemic areas for echinococcoses, the anthropogenic-driven land cover modifications that resulted apparently from the implementation of the GGP and other reforestation programmes might have favoured the transmission of E. multilocularis. Evidence on the impact of deforestation [13, 22], afforestation [11] and fencing/agricultural practices [23,24,25] on the population density and distribution of small mammals is increasing. Recognizing the public health and economic significance of human echinococcoses, and the potential risk of parasite range expansion, the National Health and Family Planning Commission (NHFPC) launched a national action plan for echinococcosis control in 2005 [26]. This initiative aims to decrease the seropositivity rate in children aged < 12 years and to reduce infestation rates in dogs. To achieve these goals, five interventions were designed to reduce the impacts of these infections in 217 endemic counties: community-based epidemiological surveys involving serological, abdominal ultrasound and chest X-ray screening for early detection of cases; treatment and surveillance of patients diagnosed with the disease; education campaigns to enhance awareness among local people and health officials; regular anti-helmintic treatment for deworming of dogs; and improved control of slaughter practices [27]. In general, the coordination of these efforts has proven difficult, especially in rural areas [26, 27]. In order to improve the establishment and monitoring of realistic targets for control, it is necessary to estimate the real impact of these infections and the permissive factors for transmission at local and regional scales [28]. Using geographical information systems (GIS), Earth observation data and a Bayesian statistical framework, the present study describes the spatio-temporal patterns of CE and AE in NHAR between January 1994 and December 2013. The aims were to identify highly endemic areas for these infections in the autonomous region, and to determine the environmental covariates that are shaping their local geographical distributions, in particular those that may be indicators of the potential impact of the GGP on the NHAR land cover profile. The findings may help the targeting of resources to communities most in need of echinococcosis control, and by contributing to environmental risk assessments of major landscape regeneration programmes such as the GGP. NHAR is a province-level autonomous region located in Northwest China between latitudes 35°26′N and 39°30′N, and between longitudes 104°50′E and 107°40′E. The provincial territory covers an area of 66,400 km2 and is bordered by the Inner Mongolia Autonomous Region to the North, Gansu Province to the South and West and Shaanxi Province to the East. Administratively, NHAR is divided into 5 prefectures that are subsequently subdivided into counties/districts/county-level cities, townships and villages. The population reached about 6.6 million people in 2014, of whom the majority were living in urban areas (53.6%) compared to rural areas (46.4%) [29]. From those living in rural areas, 54.7% belonged to the Hui minority ethnic group and 45.3% were Han Chinese [29]. Internal migration, movement of people from one area (a province, prefecture, county or township) to another within one country [30], is particularly high in NHAR with 54.6% of households reporting at least one migrant in 2001 [31]. Also, a report from the Beijing Normal University and Hitotsubashi University in 2009 indicated that internal migration in NHAR increased from 17.2% in 2002 to 28.3% in 2008, among the working-age population (aged between 16 and 65 years) who participated in the GGP (participation period between 3 and 6 years), and decreased from 24% in 2002 to 17.6% in 2008 among people who did not participate in the programme [32]. The report also demonstrated that migration decision depends on various demographic and socioeconomic factors. In NHAR, migrants are young men with an education level of about 6–9 years, which coincides with the population with high tendency towards migration in China [33]. Variations in migration propensity between Han and Hui nationality groups or between married and non-married people were not found [32]. NHAR lies in a temperate continental monsoon climate zone that is characterized by large seasonal variation in temperature, rainfall and humidity. About 80% of the annual rainfall occurs during the summer and autumn months and generally increases from North to South. Elevation increases from North to South with the highest peak at 3556 m (Fig. 1) [16]. Map and elevation of NHAR and location of the province within China (insert). The blue lines divide the three major natural regions Data on human CE and AE Data on the number of human CE and AE cases were derived from a hospital-based retrospective survey. Hospital medical records for the period between January 1 1992 and December 31 2013 were reviewed in 25 public hospitals in NHAR: 1 hospital from each county (n = 21), three hospitals from the capital city, Yinchuan, and 1 hospital from Guyuan Prefecture. Data collection was conducted during two different time periods, 2002–2003 and 2012–2013 and both involved the same number of hospitals. These hospitals were selected because they provide clinical and surgical care for most echinococcosis patients from rural and urban areas in the province. When patients with a presumptive echinococcosis diagnosis are admitted to local rural medical centres, they are usually referred to the county hospital for confirmation, treatment and follow-up examination. All patients whose diagnoses of CE and AE infection were established during the study period were included in the analysis. Inclusion criteria required that the diagnosis of a CE or AE case was confirmed based on imaging, serological, surgical and/or histopathologic findings. A standard form was used to extract individual information on relevant clinical, pathological and demographic data for all confirmed cases. Data were geo-referenced to the township in which each patient resided: this was assumed to be the geographical area where the infection occurred. The day of diagnosis was considered to be the date of primary surgical and confirmatory procedures. If a confirmed case was readmitted to hospital with the same diagnosis, only the initial admission was included in the analysis. The design and methods of the hospital survey for the period 1992–2002 have been described in detail elsewhere [34, 35]. The review of medical records for the period 2003–2013 followed the same protocol. Because the data collected between 1992 and 1993 had considerable gaps, the CE and AE cases derived from these years were excluded from the analysis. For the purpose of our analyses the time period for the study was set from January 1 1994 to December 31 2013. To conduct the analysis, CE and AE cases were aggregated by township and year. Data on population for the year 2010 were downloaded from the WorldPop Project website [36]. A grid (i.e. raster surface) was available for the area of China at the resolution of 100 m. Population counts were extracted for each township using the ArcGIS software [37] and an administrative map of NHAR. In addition, data on population at the prefecture level were also obtained for the years 1990 and 2000 from the national censuses [38]. These data were used to calculate an average annual population growth rate for each prefecture between the years as follows: r = (P 2 -P 1 /P 1 )/t; where r is the average rate of growth, P 1 and P 2 are the population totals for the first and second reference years, respectively, and t is the number of years between the two census counts. Applying a Taylor series approximation to remove non-linear terms [39], the growth rate estimates were then used to calculate population counts for each township and year based on the 2010 population values derived from the WorldPop grid, (P 2 = P 1 e(rt)) [40]. However, it should be noted that the approximation becomes increasingly erroneous as t increases (Additional file 1) [39]. Climate and physical environment data The independent variables included in the analysis were derived from the following datasets: land cover maps, elevation, monthly mean temperature and precipitation. Because data on human echinococcoses were collected for the period 1994–2013, the environmental datasets were derived from 1980 to 2013 to investigate environmental conditions that could account for exposures during the long incubation period that characterises echinococcosis infections (5–15 years) [41]. Single date land cover maps were created for the years 1991, 1996, 2000, 2005, 2010 and 2015. The scientific background and processing steps have already been published [9] so are only outlined in brief here. These maps were produced using images retrieved from the Landsat Surface Reflectance Climate Data Record available in Earth Explorer [42]. Four scenes processed from Landsat 4–5 Thematic Mapper and Landsat 8 Operational Land Imager and Thermal Infrared Sensor were collected for each year. Most scenes were retrieved from the summer and autumn season that correspond to the period June to November [43, 44]. When there were no scenes available for the selected months, the closest-in-time and most cloud-free scenes available were downloaded for the analyses. Minnaert topographic correction, cloud and cloud shadow removal were performed using the Landsat package in the R language and environment for statistical computing [45, 46]. Images were mosaicked and classified by applying the maximum likelihood algorithm in ENVI version 5.3 [47]. Reference datasets for land cover classification (training) were produced by random sampling of a combination of relatively fine-scale global maps, the GlobeLand30 and the global forest/non-forest maps (FNF) [48, 49] using the ArcGIS software version 10.3.1 [37]. Six land cover classes were identified: water bodies, artificial surfaces, bare or sparsely vegetated areas, herbaceous vegetation, cultivated land, shrubland and forest. Due to substantial similarities between the spectral values of artificial surfaces and bare or sparsely vegetated areas, these two classes were merged and represented as a single land cover category called bareland/artificial surfaces (Table 1). Table 1 Land cover classification scheme and definitions Sets of space- and time-referenced photographs from the website Panoramio [50] were downloaded for each year to produce datasets for accuracy assessments of the land cover classes. In order to reduce the level of uncertainty due to the use of this type of data [51], all selected photographs were labelled manually based on visual interpretation, and cross-checked against historical imagery from Google Earth Pro (GEP) version 7.1.5.1557 [52]. The overall classification accuracies of all maps were higher or equal to 80% and the total Kappa coefficients were greater than 0.7. These results represent a substantial agreement between the reference datasets and the classified maps. The six land cover maps and more specific and detailed information about the process of land cover classification and validation is available elsewhere [53]. Monthly averages of temperature and precipitation data for the period January 1 1980 to December 31 2013 were provided by the Chinese Academy of Sciences. Data were first collected from 16 local weather stations and interpolated using the Inverse Distance Weighting (IDW) method. ESRI grids including the monthly data were obtained at the resolution of 1 km (approximately 30 arc-seconds) grid (Additional files 2, 3 and 4). Elevation data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Model (GDEM) version 2 were downloaded from the USGS Earth Explorer website [54]. The ASTER GDEM is available globally in GeoTIFF format at the resolution of 1 arcsecond (approximately 30 m) (Fig. 2). Environmental variables and variable selection process for the spatiotemporal analysis of human echinococcosis in NHAR for the period 1 January 1994 to 31 December 2013 A township-level shapefile (boundary map) of NHAR was produced using MapInfo Pro software version 15.0 [55] and a scanned and geo-referenced administrative map of NHAR provided by the Bureau of Geology and Mineral Resource. The administrative boundary map included 227 township-level areas. The small area of the townships (median 154 km2, interquartile range 73.5–297.5 km2) permitted an analysis of human echinococcoses at a high level of spatial disaggregation. The spatial datasets including human echinococcosis cases, demographic and environmental data were imported into the ArcGIS software [37] and projected to the Universal Transverse Mercator (UTM) coordinate system zone 48 N. The datasets were linked according to location to the administrative boundary map of NHAR to summarise and extract the data by township area and define parameters for subsequent statistical analyses. The spatial mean values of elevation and the spatial extent (as percentage of area) of each land cover class for the years 1991, 1996, 2000, 2005, 2010 and 2015 were calculated for all the townships. Because there were only six land cover maps, data extracted by class were then used to impute change rates at the township level for the periods 1991–1996, 1996–2000, 2000–2005, 2005–2010, and 2010–2015. In this way, it was possible to estimate the spatial extent of all land cover classes in each township for all years between 1980 and 2015. Annual series of bioclimatic variables were calculated at the township level from the climate datasets. Monthly temperature and precipitation records were summed in the GIS to provide annual, summer (June-August) and winter (December-February) averages. Other variables that were calculated include maximum, minimum, standard deviation, range values and precipitation of the driest and wettest quarters of each year. Crude standardised morbidity ratios (SMRs) for each administrative area were calculated for the periods 1994–1998, 1999–2003, 2004–2008 and 2009–2013. SMRs were computed by dividing the observed number of cases by the expected number of cases in the study population (overall incidence rate of human echinococcoses for the whole province from 1994 to 2013 multiplied by the population of each township). To account for the long incubation period of CE and AE, temporal lags in the effects of land cover and bioclimatic variables were incorporated in the analysis by calculating cross-correlation coefficients between the CE and AE counts in a given year and the value of each environmental predictor at time t (t–0, t–1, t–2… t–34 years). From each bivariate time-lagged correlation, only the lag with the highest correlation value was selected for the analysis. A moving average (MA) technique was also applied to generate temporally smoothed estimates of the land cover and climate data. In this way, it was possible to capture the interplay between the parasite, hosts and the environment over an extended period of time rather than at a single point in time. In order to examine different short-, intermediate-and long-term exposure windows, the MAs were calculated by aggregating the environmental data in 5-year lagged periods as follows: MA 1 (t–0, t–1, t–2, t–3, t–4) MA 4 (t–10, t–11, t–12, t–13, t–14) MA 5 (t–6, t–7, t–8, t–9, t–10) Univariate Zero-inflated negative binomial regression models were developed using the R software version 3.2.2. [45]. In this way, it was possible to assess separately the association of the response variables, CE and AE counts, with the environmental factors with the highest lagged correlation and all MAs. Zero-inflated negative binomial regression models were preferred over Poisson, negative binomial and zero-inflated Poisson models based on the results of the Vuong test [56]. Pearson correlation analyses were applied to assess collinearity among all environmental predictors. If the correlation coefficient between a pair of variables was > 0.9, the variable with the highest value of the Akaike information criterion (AIC) in the univariate regression model was excluded from the multivariate analysis. Nonlinear associations between the environmental covariates and CE/AE counts were also examined using quadratic terms (Fig. 2). A Bayesian framework was used to construct three different Poisson regression models of the observed incidence data of CE and AE using the OpenBUGS software 3.2.3 rev 1012 [57]. The first model (Model I) was based on the assumption that spatial autocorrelation was not present in the relative risk of these infections. This model was developed incorporating time in years, the selected explanatory variables and an unstructured random effect for township; the second model (Model II) included the explanatory variables and a spatially structured random effect; the third model (Model III) was constructed without explanatory variables and incorporating only a spatially structured random effect (enabling an assessment of the degree to which the explanatory variables characterised spatial clustering of infections). The mathematical notation for Model II is provided below, and contains all of the components of Model I and Model III. Model II, assumed that the observed counts of the infection (CE or AE), Y, for the ith township (i = 1.. .227) in the jth year (1994–2013) followed a Poisson distribution with mean (μij), that is, $$ {Y}_i\sim Poisson\left({\mu}_{ij}\right) $$ $$ \log \left({\mu}_{ij}\right)=\log \left({Exp}_{ij}\right)+{\theta}_{ij} $$ $$ {\theta}_{ij}=\alpha +{\mathrm{Year}}_j\mathsf{x}\ \gamma +\sum \limits_{z=1}^z{\beta}_z\ \mathsf{x}\ {\uplambda}_{zij}+{s}_i $$ where Exp ij is the expected number of cases in township i in year j (acting as an offset to control for population size) and θ ij is the mean log relative risk (RR); α is the intercept, γ is the coefficient for temporal trend, β is a vector of z coefficients, λ is a matrix of z environmental covariates, and s i is the spatially structured random effect with mean zero and variance σs2. Standardization of environmental variables was used to allow comparability of the effects and provide a more meaningful interpretation on the results. Standardization, involved subtracting the mean from each environmental variable and the difference was divided by the standard deviation, which resulted in a standard deviation of one. The spatially structured random effect (Models II and III) was modelled using a conditional autoregressive (CAR) prior structure [58]. This approach uses an adjacency weights matrix to determine spatial relationships between townships. If two townships share a border, it was assumed the weight = 1 and if they do not, the weight = 0. The adjacency matrix was constructed using the 'Adjacency Tool' of the OpenBUGS software 3.2.3 rev 1012 [57]. A flat prior distribution was specified for the intercept, whereas a normal prior distribution was used for the coefficients (with a mean = 0 and a precision = 0.001). The priors for the precision (1/σ t 2) of spatially structured random effects were specified using non-informative gamma distributions (0.5, 0.0005) (Additional files 5 and 6). The first 1000 iterations were run as a burn-in period and discarded. Subsequent sets of 20,000 iterations were run and examined for convergence. Convergence was determined by visual inspection of posterior density and history plots and by examining autocorrelation plots of model parameters. Convergence occurred at approximately 100,000 iterations for each model. The last 20,000 values from the posterior distributions of each model parameters were stored and summarised for the analysis. The deviance information criterion (DIC) was used to compare the goodness-of-fit between models, where lower DIC indicates a better model fit. An α-level of 0.05 was used in all analyses to indicate statistical significance (as indicated by 95% credible intervals (95% CrI) for relative risks (RR) that excluded 1). Choropleth maps were created using the ArcGIS software [37] to visualise the geographical distribution of crude SMRs for the 227 townships in NHAR. The relative risks of infection were expressed as a percentage by multiplying by 100. The posterior means of the random effects obtained from the models were also mapped. Descriptive analysis Summary statistics for annual mean numbers of human echinococcoses in NHAR for the period 1 January 1994–31 December 2013 were calculated (Table 2). A total of 4472 cases were diagnosed in the hospitals during the study period. From the total number of cases, 4402 cases (98.4%) were CE and 72 (1.6%) were AE. Two patients were diagnosed with both diseases. The number of annual cases of CE increased slightly from 1994 to 2013 (Additional file 7). Apart from the peak in the annual number of AE cases in 2007 and 2008, the annual human echinococcosis cases remained relatively stable during the study period (Additional file 8). While the number of annual CE cases by township ranged between 0 and 32 with a mean of 0.9, the annual number of AE cases ranged between 0 and 5 with a mean of 0.02. Annual maximum and minimum temperatures for the townships in NHAR were 26.3 °C and − 13.9 °C, respectively, with a mean of 8.7 °C between 1980 and 2013. In the same period, annual maximum precipitation was 19,981.3 mm and annual minimum precipitation was 0.01 mm with a mean of 255.6 mm (Additional files 9 and 10). The mean elevation of the administrative areas was 1506.3 m above sea level. Township area covered by each land cover class in NHAR for the period 1 January 1980 to 31 December 2013 is presented in Additional file 11. Table 2 Numbers of total echinococcosis cases in Ningxia Hui Autonomous Region by year from 1994 to 2013 The maps of SMRs for the number of CE infections by township in the four time periods show some degree of spatial variability across the province (Fig. 3). In general, higher incidence rates of CE were observed in townships located in the northern Yellow River Irrigated District and the southern mountainous and loess hilly district, whereas lower incidence rates were recorded in the central desertified district of NHAR. The maps of AE incidence show that this infection was mainly distributed in the South with occasional foci identified in the North (Fig. 4). Raw standardised morbidity ratios for cystic echinococcosis by township in NHAR for four different periods: a 1994–1998; b 1999–2003; c 2004–2008; d 2009–2013 Raw standardised morbidity ratios for alveolar echinococcosis by township in NHAR for four different periods: a 1994–1998; b 1999–2003; c 2004–2008; d 2009–2013 Bayesian spatio-temporal models of human CE and AE Based on the DIC estimates, Models II of CE and AE had the best parsimonious characterization of the data among all the models examined (Tables 3, 4). The higher DIC for Model I and III than Model II indicates that the addition of spatial structure to the random effects improved model fit. In model II of CE, winter mean temperature at 10-year lag had a statistically significant association with the incidence of cases (Additional file 12). There was an estimated increase of 15.0% (95% CrI: 10.8–19.3%) in the risk of CE for a 1 °C increase in winter mean temperature 10 years prior to the diagnosis of the infection. Conversely, there was a decrease of 2.2% (95% CrI: 1.2–3.4%) in the risk of CE for every year during the study period. The quadratic term for annual mean temperature was also significant, indicating that the association between this variable and the outcome was nonlinear (Additional file 13). The MA2 of annual mean temperature, the MA4 of annual mean precipitation and the MAs calculated for the percentage of township area covered by the land cover types were not significant. The difference in the variance of the spatially structured random effect between Model III (9.1; 95% CrI: 7.4–11.6) and Model II (8.9; 95% CrI: 7.1–11.1) indicates that the covariates accounted for only a small proportion of the spatial variability in the data (Table 3 and Fig. 5a, b). Table 3 Regression coefficients, RRs and 95% CrI from Bayesian spatial and non-spatial models for cystic echinococcosis in NHAR from 1 January 1994 to 31 December 2013 Table 4 Regression coefficients, RRs and 95% CrI from Bayesian spatial and non-spatial models for alveolar echinococcosis in NHAR from 1 January 1994 to 31 December 2013 Spatial distribution of the posterior means of random effects for cystic and alveolar echinococcoses in NHAR. Spatially structured random effects of Models II (a) and III (b) of cystic echinococcosis, and spatially structured random effects of Models II (c) and III (d) of alveolar echinococcosis Model II of AE showed that the MA1 of winter mean temperature (Additional file 14), the MA6s of annual mean temperature (Additional file 15) and the percentage of township area covered by bareland/artificial surfaces, had a significant negative association with AE cases. There was a decrease of 65.7% (95% CrI: 19.6–85.4%) in the risk of AE for a 1 °C increase in the average of winter temperature calculated for the 5-year period previous to the diagnosis of the disease (0–4 years). Also, the decrease in the risk of AE was 97.4% (95% CrI: 70.8–99.8%) and 5.0% (95% CrI: 0.9–9.3%) for an increase of 1 °C in annual mean temperature and 1% increase in MA6 of township area covered by bareland/artificial surfaces, respectively. There was a statistically significant increasing temporal trend in the risk of AE. The difference between the DIC of Model II, 184.8, and that of model III, 486.7, indicates that the inclusion of the environmental covariates improved model parsimony. The variance of the spatially structured random effect decreased from 10.6 (95% CrI: 5.5–25.0) in Model III to 9.5 (95% CrI: 4.6–23.8) in Model II. These results may suggest that, unlike the findings in the model of CE, the selected environmental covariates characterised a higher proportion of the spatial variation in the risk of AE (Fig. 5c, d). The maps of the residual spatial variation of CE, before (Model III) and after (Model II) accounting for the environmental covariates, show almost identical spatial patterns without clear evidence of disease clustering (Fig. 5a, b). Conversely, the maps of the distribution of the residual spatial variation of AE risk demonstrated evidence of clustering when the model did not incorporate the environmental covariates (Model III). The degree of clustering decreased when the effect of these variables was included (Model II), suggesting that the covariates contributed to clustering in the south of NHAR (Fig. 5c, d). Maps of the raw relative risks were generated for CE and AE by township and year (Additional files 16 and 17). These maps show that the risk of CE was distributed across all geographic regions in NHAR during the entire study period, while the risk of AE was confined to the south. However, based on the environmental factors associated with AE risk in NHAR, it was also possible to identify an area at high risk of AE in the northeastern part of the central desertified district (Additional file 17). The results indicate that winter mean temperature and annual mean temperature, 10 and 13 years prior to diagnosis, respectively, have influenced the incidence of E. granulosus at the township level in NHAR. Temperature is a major determinant of the survival and longevity of Echinococcus spp. eggs in the external environment [59, 60]. In vivo studies have concluded that the eggs of E. granulosus remain viable and infective after 41 months of exposure to an inferior arid climate, which is characterised by large thermal amplitude (from − 3 to 37 °C) and low precipitation (under 300 mm/year) [59]. The present study revealed a positive association of CE cases with winter temperature at 10-year lag and a non-linear association with annual mean temperature at 13-year lag. These findings indicate that the number of CE cases may have increased progressively when eggs were exposed to optimal temperatures but decreased with extreme temperatures that fell outside the optimal range. The relationship between E. granulosus infection and these two variables was significant after a time lag of more than 10 years. This is in agreement with the long incubation period of this parasite that has been reported to be between 5 and 15 years [41]. Of note, we do not suggest that the specific lag periods for each variable are important, but that the general pattern of lags indicate environmental conditions in the range of 10 to 15 years previously influence current patterns of disease. CE cases were distributed across all the three biogeographical areas of NHAR: the northern Yellow River Irrigated District, the central desertified district and the southern mountainous and loess hilly district (Fig. 1). A higher risk of infection was observed in townships located in the North in close geographical proximity to Yinchuan. Urban areas may provide better job prospects and higher salaries for rural migrants who were exposed in their home township. In the cities, people who contracted the infection in their rural areas of origin may have had an improved access to healthcare services and the confirmation of the diagnosis of echinococcosis and management [61]. These findings raise the need for further studies to determine how access to healthcare may affect the incidence of the infection. The risk of CE was found to be higher in townships from the southern mountainous and loess hill district. This part of NHAR is one of the three poorest areas in China where almost half the population belong to the Hui minority ethnic group [16]. Livestock and arable farming, which are common practices among these communities, represent higher risk of exposure to Echinococcus spp. [62, 63]. The Provincial technical standards for livestock slaughtering and antemortem and post-mortem meat inspection in NHAR are in agreement with the recommendations proposed by Food and Agriculture Organization of the United Nations [64, 65]. However, government-run abattoirs in NHAR are scarce, particularly in the South, where livestock slaughter is carried out mostly at rural market places or in domestic environments that are not legally compliant [66]. Unrestricted post-slaughter offal disposal is common in the region and has been identified as a potential local factor increasing the risk of CE [67]. Under similar circumstances in Qinghai Province, previous studies have suggested that domestic dogs may have a higher probability of access to livestock viscera in early winter and spring [68]. The prevalence of CE in sheep was estimated to be 52% in NHAR in 2008, and between 0 and 9% according to more recent reports of studies conducted at small spatial scale (no larger than county level) [66, 69, 70]. The variance in these prevalence estimates may be related to local or individual conditions that favour hotspots of high transmission within discrete patches of CE endemicity. Also, in the Autonomous Region, 3% of goats, 81% of cattle, 24% of pigs and 19% of camels were reported infected with E. granulosus in 2008 [71]. Although there is evidence of spatial clustering within the central desertified district, lower risk of CE was observed in this biogeographical area where communities are more scattered in isolated settlements. The environmental covariates accounted for a relatively small proportion of the spatiotemporal variation in CE risk in NHAR. These findings suggest that there may be other local unmeasured factors that influence the spatial distribution of E. granulosus in the province. Some local socio-economic and behavioural drivers that have also been found to be positively related to CE in this hyperendemic area include low income, limited education, poor hygiene practices, female gender and Hui ethnicity. In contrast, tap water consumption has been identified as a factor that can protect against E. granulosus infection [35]. Although infection control in dogs has been identified as a key measure against echinococcoses in China, dog ownership still remains as an important risk factor for the infection in NHAR [35, 72]. The western China echinococcosis control programme recommends supervised treatment of all owned dogs four to eight times a year with praziquantel [73]. However, this is a measure that has been hard to apply, monitor and sustain in the remote-settled communities of the Autonomous Region [74]. The findings of the model of AE concur with previous studies conducted in different regions in echinococcosis-endemic countries that indicated that E. multilocularis has a focal spatial distribution [6,7,8]. The study also concurs with previous evidence that indicates that land cover and temperature influence AE incidence [22, 60, 75]. AE risk was spatially clustered in an area covered by Xiji, Guyuan and Haiyuan Counties, located in the southern mountainous and loess hill district (Fig. 1). This part of NHAR has been greatly transformed by the implementation of the GGP. Forest growth has primarily occurred in the northern and southern parts, in the Helan and Liupan mountains in the North and South, respectively (Fig. 1) [76, 77]. An increase in herbaceous vegetation has also been described in the central arid area of NHAR, and around the margin of the forestland [76, 77]. Hence, the distribution of AE risk observed in the current study concurs with the spatial patterns of the GGP land conversion processes that have been described in this autonomous region. Echinococcus multilocularis is transmitted in semi-domestic and sylvatic life cycles that involve dogs and foxes as main definitive hosts, respectively, and small mammals as intermediate hosts [6, 78]. It has been demonstrated that landscape structure may influence the transmission patterns of this parasite by influencing the suitability and location of ecological habitats for its hosts [11]. With regards to land cover, it was found that the merged category of bareland/artificial surfaces was not associated with the transmission of E. multilocularis at the township level in NHAR. This observation suggests that the life-cycle of the parasite is supported in vegetated areas (i.e. forest, shrubland and cropland). These findings raise the need for further studies to determine the association of the GGP and other potential drivers of land cover change with the risk of human AE. The impact of forest fragmentation on small mammals assemblages has now been demonstrated and explained by the interaction between forest patch metrics and small mammal species richness, abundance and composition [21, 31, 79,80,81]. In Xiji County, in the South of NHAR, a previous study indicated that the abundance of degraded lowland pasture was related to higher prevalence of AE in humans [14]. In the same area, a small-mammal survey conducted in relation to different land cover types at the community level revealed the presence of 16 species of small mammals [11]. That study indicated that in areas that experienced afforestation, the diversity of assemblages was lower than that of assemblages in areas where deforestation occurred [11]. However, species richness did not seem to be affected by these land conversion processes [11]. Trapping success of potential E. multilocularis intermediate hosts such as, Cricetulus longicaudatus and Ochotona daurica, was higher in recently afforested set-aside fields and abandoned grasslands, and Spermophilus alashanicus/dauricus in young plantations [11]. Therefore, it can be assumed that landscape transformation process that is taking place in NHAR may have disturbed rodent assemblages, and therefore affect the risk of E. multilocularis transmission. In Zhang County, Gansu Province, a study revealed that the process of land cover change that occurred in this endemic area affected the transmission dynamics of the parasite. There, the increase in grassland/shrubland that followed a process of deforestation favoured the creation of peri-domestic habitats of intermediate host species, and the development of peri-domestic cycles involving dogs [13, 82]. Similarly, the percentage of area covered by grassland and E. multilocularis infection in humans and foxes had a positive relationship in Eastern France [13, 83, 84]. In this area various studies also reported regular outbreaks of Microtus arvalis and Arvicola terrestris, key intermediate hosts for E. multilocularis [13, 17]. However, the picture is complex, given that in Sichuan Province, a negative cross-sectional association was observed between Ochonta spp. and Enhanced Vegetation Index, and previous evidence showed that this recognised intermediate host of E. multilocularis is more common in areas with low vegetation cover [16, 85, 86]. The negative association between AE cases and winter temperatures may be due to the influence of temperature exposure on eggs of E. multilocularis, and potentially the influence of temperature on small mammal population dynamics and fox/dog/small mammal predator-prey relationships [60, 87]. Evidence indicates that temperature affects the geographical range and changes the composition of small mammal communities [88, 89]. Also, climate has been identified as a factor contributing to changes in the distribution and abundance of the red and Arctic foxes, which are sylvatic definitive hosts for E. multilocularis in Arctic Canada [90, 91]. Reports of infection with E. multilocularis in red foxes in NHAR are only available for the mid-1980s [92]. At that time, 15% of trapped red foxes were infected with E. multilocularis in Xiji and Guyuan Counties [92]. Although there is not current evidence on how the local environment fluctuations influence the ecology of this type of vertebrates in the Autonomous Region, it can be thought that variations in climate and land cover have the potential to affect the dynamics and predator-prey interactions of potential hosts for E. multilocularis in NHAR. Also, climate and the landscape may favour the presence of other potential definitive hosts for this parasite in NHAR. Infection with E. multilocularis has also been detected in wolves (Canis lupus) and corsac foxes (Vulpes corsac) in other parts of the P.R. China [82]. Since the latent phase of AE in humans has been estimated to be between 5 and 15 years, the associations between AE incidence and land cover and temperature are plausible [93]. However, in this study there was also a significant association between AE cases with the average of winter temperature for the years 0, 1, 2, 3 and 4 prior to diagnosis. This finding may suggest other effects of temperature on the health-seeking behaviour of the inhabitants of NHAR, rather than on exposure to, or infection with the parasite. The high cost of medical care and the lack of health insurance have been identified previously as primary factors for the under-utilization of health care services in China [94, 95]. Therefore, seasonal rural-urban migration and temporary employment in NHAR could be related to this association between winter temperature and the risk of human AE. As initiatives to address the high burden of human echinococcoses in China have already been established [27], there is a current need to identify high-risk areas for undetected infection to provide guidance to local authorities in implementation of the surveillance and control interventions [27]. The present study provides important evidence that indicates that populations living in southern mountainous and loess hilly district of NHAR were at greatest risk of acquiring CE or AE during the study period. Hence, these findings can be used to inform a decision to prioritise screening surveys in communities from Xiji, Guyuan and Haiyuan Counties which areas heavily affected by both forms of the infection. In this way, it will be possible to provide better estimates of the real impact of human echinococcoses in the autonomous region and to monitor the patterns of E. granulosus and E. multilocularis transmission [96]. To further improve the predictive performance of our models, particularly in remote areas with limited access to health services, the surveillance data should be analysed with other socio-demographic data [18]. The use of GIS technologies, Earth observation data and spatial statistical analysis for the study of the spatio-temporal dynamics of CE and AE cases may help to monitor trends in echinococcosis occurrence in hyperendemic regions. This information is relevant particularly in areas where ecological projects that alter local ecosystems are currently being implemented. Therefore, these technologies may be used to estimate future requirements for scaling up and targeting of essential control strategies, and to provide risk assessments for future landscape planning and ecosystem management and protection initiatives [19]. The limitations of this study include that it relied mainly on data collected from selected county hospitals, which overlooks the contribution of CE and AE cases that were not referred to these health care institutions for confirmation of diagnosis treatment and follow-up. Human echinococcoses are not legally notifiable diseases in China. Most patients are commonly identified in clinical records and mass screening surveys conducted in the most affected areas to help reduce the medical, social and economic burden of the infections. Therefore, further work needs to be carried out to evaluate and improve the surveillance and provide real estimates of the number echinococcosis cases in the country. Also, in this study, data on the number of patients who were immunosuppressed at the time of diagnosis were not available. Among these patients, CE and AE behave differently and may develop during a relatively short period of time [97]. Therefore, it is recommended that future studies to identify environmental risk factors for transmission also involve indices of individual biological condition that may be associated with progression and times of infection and diagnosis of the disease. In the study, the township in which patients resided at the time of diagnosis was assumed as the place where acquisition of infection occurred. Although the patient's place of residence may be a reliable indicator for establishing the geographical origin of the infections, this may not apply for all cases. The human labour migration that has characterised NHAR in past decades may have had an impact on the observed trends of infection and results need to be interpreted with caution. Here, we explored the spatio-temporal patterns of echinococcosis infection in NHAR, and the association of environmental variables with the transmission of Echinococcus spp. at the township level. Hence, the results do not allow for inferences to be made about the risk of human echinococcoses at the commune or individual levels. More detailed information about the local structure of these infections may be further included to improve the CE and AE models. The impact of the GGP and other ecological restoration projects was not formally tested in this study. Therefore, it is necessary to establish evidence for the impact of such projects to facilitate environmental risk assessments of future ecosystem management and protection programmes. [98]. The use of interpolated surfaces for the estimation of climatic and land cover variables also represented a challenge for the interpretation of the findings. The precision of the interpolated values at point locations may vary considerably over time and over the entire study area. Also, the IDW interpolation method used by the Chinese Academy of Sciences is a simple and intuitive deterministic method based on the assumption that sample values closer to the prediction location have more influence on prediction value than sample values farther apart. However, IDW has sensitivity to outliers or sampling configuration and does not allow the incorporation of ancillary data [99, 100]. We believe that a meaningful assessment of the associations between human echinococcosis risk and the environment can only be achieved with the use of consistent and long-term climate and land cover records that allow to capture significant spatial variability. In this study, maps of the geographical distribution of CE and AE for a highly endemic area of China (NHAR) have been produced, and some of the environmental factors that are associated with the transmission patterns of E. granulosus and E. multilocularis at the township level were quantified. Selected environmental covariates characterised a large proportion of the spatiotemporal variation in the risk of AE. However, the CE appears to be less spatially variable and the geographical distribution is likely determined by other unmeasured factors. Evidence on the potential effects of the GGP on the risk of AE was presented due to the association with vegetated areas and a decrease in bareland. By mapping the distribution of the risk of these infections, we provide evidence that can be used to scale-up and target essential control strategies, and to inform risk assessment of large-scale landscape regeneration initiatives. McManus DP, Gray DJ, Zhang W, Yang Y. Diagnosis, treatment, and management of echinococcosis. BMJ. 2012;344(7861):39–44. Eckert J, Deplazes P. Biological, epidemiological, and clinical aspects of echinococcosis, a zoonosis of increasing concern. Clin Microbiol Rev. 2004;17(1):107–35. McManus DP, Zhang W, Li J, Bartley PB. Echinococcosis. Lancet. 2003;362(9392):1295–304. Moro P, Schantz PM. Echinococcosis: a review. Int J Infect Dis. 2009;13(2):125–33. Ministry of Health. Report on the national survey of current status of major human parasitic diseases in China. Beijing: People's Health Publishing House; 2008. p. 3–257. Eckert J. WHO/OIE manual on echinococcosis in humans and animals: a public health problem of global concern 2001. http://apps.who.int/iris/bitstream/10665/42427/1/929044522X.pdf. Accessed 2 June 2015. Giraudoux P, Pleydell D, Raoul F, Quéré J-P, Wang Q, Yang Y, et al. Transmission ecology of Echinococcus multilocularis: what are the ranges of parasite stability among various host communities in China? Parasitol Int. 2006;55:S237–46. Giraudoux P, Raoul F, Afonso EVE, Ziadinov I, Yang Y, Li LI, et al. Transmission ecosystems of Echinococcus multilocularis in China and central Asia. Parasitology. 2013;140(13):1655–66. Sen-Hai Y, Hu W, Xian-Hong W, Xiao M, Pei-Yun L, Yu-fang L, et al. Cystic and alveolar echinococcosis: an epidemiological survey in a Tibetan population in southeast Qinghai, China. Jpn J Infect Dis. 2008;61(3):242. Zhang W, Zhang Z, Wu W, Shi B, Li J, Zhou X, et al. Epidemiology and control of echinococcosis in central Asia, with particular reference to the People's Republic of China. Acta Trop. 2015;141(Pt B):235–43. Raoul F, Pleydell D, Quéré J-P, Vaniscotte A, Rieffel D, Takahashi K, et al. Small-mammal assemblage response to deforestation and afforestation in central China. Mammalia. 2008;72(4):320–32. Raoul F, Deplazes P, Rieffel D, Lambert J-C, Giraudoux P. Predator dietary response to prey density variation and consequences for cestode transmission. Oecologia. 2010;164(1):129–39. Giraudoux P, Craig P, Delattre P, Bao G, Bartholomot B, Harraga S, et al. Interactions between landscape changes and host communities can regulate Echinococcus multilocularis transmission. Parasitology. 2003;127(S1):S121–31. Pleydell DR, Yang YR, Danson FM, Raoul F, Craig PS, McManus DP, et al. Landscape composition and spatial prediction of alveolar echinococcosis in southern Ningxia, China. PLoS Negl Trop Dis. 2008;2(9):e287. Wang Q, Vuitton DA, Xiao Y, Budke CM, Campos-Ponce M, Schantz PM, et al. Pasture types and Echinococcus multilocularis, Tibetan communities. Emerg Infect Dis. 2006;12(6):1008–10. The Ecosystems Services for Poverty Alleviation (ESPA) Programme. China ecosystem services and poverty alleviation situation analysis and research strategy. Ningxia case study. London: Department for International Development; 2008. Giraudoux P, Delattre P, Habert M, Quéré JP, Deblay S, Defaut R, et al. Population dynamics of fossorial water vole (Arvicola terrestris Scherman): a land use and landscape perspective. Agric Ecosyst Environ. 1997;66(1):47–60. Atkinson J-AM, Gray DJ, Clements ACA, Barnes TS, McManus DP, Yang YR. Environmental changes impacting Echinococcus transmission: research to support predictive surveillance and control. Glob Chang Biol. 2013;19(3):677–88. Cadavid Restrepo AM, Yang Y, McManus D, Gray D, Giraudoux P, Barnes T, et al. The landscape epidemiology of echinococcoses. Infect Dis Poverty. 2015;5(1):13. Liu C, Wu B. Grain for Green Programme in China: Policy making and implementation. Policy Briefing Series. Nottingham: China Policy Institute; 2010. p. 10. Vieira MV, Olifiers N, Delciellos AC, Antunes VZ, Bernardo LR, Grelle CE, et al. Land use vs. fragment size and isolation as determinants of small mammal composition and richness in Atlantic Forest remnants. Biol Conserv. 2009;142(6):1191–200. Giraudoux P, Quéré J-P, Delattre P, Bao G, Wang X, Shi D, et al. Distribution of small mammals along a deforestation gradient in southern Gansu, central China. Acta Theriol (Warsz). 1998;43(4):349–62. Raoul F, Quéré J-P, Rieffel D, Bernard N, Takahashi K, Scheifler R, et al. Distribution of small mammals in a pastoral landscape of the Tibetan plateaus (western Sichuan, China) and relationship with grazing practices. Mammalia. 2006;70(3/4):214–25. Wang Q, Vuitton DA, Qiu J, Giraudoux P, Xiao Y, Schantz PM, et al. Fenced pasture: a possible risk factor for human alveolar echinococcosis in Tibetan pastoralist communities of Sichuan, China. Acta Trop. 2004;90(3):285–93. Wang Q, Raoul F, Budke C, Craig PS, Xiao Y-F, Vuitton DA, et al. Grass height and transmission ecology of Echinococcus multilocularis in Tibetan communities, China. Chin Med J. 2010;123 1:61. Yang G-J, Liu L, Zhu H-R, Griffiths SM, Tanner M, Bergquist R, et al. China's sustained drive to eliminate neglected tropical diseases. Lancet Infect Dis. 2014;14(9):881–92. Ministry of Health of the People's Republic of China. National control program of key parasitic diseases in 2006–2015. Beijing: People's Health Publishing House; 2006. Wang Q, Huang Y, Huang L, Yu W, He W, Zhong B, et al. Review of risk factors for human echinococcosis prevalence on the Qinghai-Tibet plateau, China: a prospective for control options. Infect Dis Poverty. 2014;3(1):3. National Bureau of Statistics. Population data. 2016. http://data.stats.gov.cn/english/easyquery.htm?cn=C01. United Nations Educational, Scientific and Cultural Organization. Migrant/Migration. 2017. http://www.unesco.org/new/en/social-and-human-sciences/themes/international-migration/glossary/migrant/. Accessed 2 Oct 2017. Du Y, Park A, Wang S. Migration and rural poverty in China. J Comp Econ. 2005;33(4):688–709. Démurger S, Wan H. Payments for ecological restoration and internal migration in China: the sloping land conversion program in Ningxia. IZA J Migr. 2012;1(1):10. Zhao Z. Migration, labor market flexibility, and wage determination in China: a review. Dev Econ. 2005;43(2):285–312. Yang YR, Sun T, Li Z, Li X, Zhao R, Cheng L, et al. Echinococcosis, Ningxia, China. Emerg Infect Dis. 2005;11(11):1314–6. Yang YR, Sun T, Li Z, Zhang J, Teng J, Liu X, et al. Community surveys and risk factor analysis of human alveolar and cystic echinococcosis in Ningxia hui autonomous region, China. Bull World Health Organ. 2006;84(9):714–21. The WorldPop population mapping program. The WorldPop Project. 2017. http://www.worldpop.org.uk/. Accessed 21 Oct 2015. ESRI: Environmental Systems Research Institute. ArcGIS Software version 10.3.1. http://www.esri.com/arcgis/about-arcgis. National Bureau of Statistiscs of China. The year book-Population 2014. http://www.stats.gov.cn/english/. Accessed 12 Sept 2016. Wachter KW. Essential demographic methods. Cambridge: Harvard University Press; 2014. Malthus TR. An essay on the principle of population, as it affects the future improvement of society. Vol. 2. Washington: Roger Chew Weightman; 1809. Ammann RW, Eckert J. Cestodes. Echinococcus. Gastroenterol Clin North Am. 1996;25(3):655–89. The United States Geological Survey (USGS). EarthExplorer. 2016. http://earthexplorer.usgs.gov/. Accessed 21 Oct 2016. Department of the Interior - The United States Geological Survey (USGS). Landsat 4–7 Climate Data Record (CDR) Surface Reflectance, Version 6.4. Product Guide. 2016. https://landsat.usgs.gov/landsat-surface-reflectance-data-products. Accessed 2 May 2016. Department of the Interior - The United States Geological Survey (USGS). Provisional Landsat 8 Surface Reflectance Product. https://landsat.usgs.gov/landsat-surface-reflectance-data-products. Accessed 2 May 2016. R Core Team. R. A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2015. Goslee SC. Analyzing Remote sensing data in R: the landsat package. J Stat Softw. 2011;43(4):1–25. Exelis Visual Information Solutions I. ENVI software version 5.3. http://www.harrisgeospatial.com/ProductsandSolutions/GeospatialProducts/ENVI.aspx. National Geomatics Center of China. GlobeLand30. A 30-meter Global Land Cover Dataset. 2010. http://www.globallandcover.com/user/login.aspx?para=1. Accessed 12 Dec 2015. Global 25m Resolution PALSAR-2/PALSAR Mosaic and Forest/Non-Forest Map. 2010. http://www.eorc.jaxa.jp/ALOS/en/palsar_fnf/fnf_index.htm. Accessed 12 Nov 2015. Google Inc. Panoramio. Photos of the world 2005. https://www.panoramio.com/. Accessed 15 June 2015. Fonte CC, Bastin L, See L, Foody G, Lupia F. Usability of VGI for validation of land cover maps. Int J Geogr Inf Sci. 2015;29(7):1269–91. Google Earth Pro version 7.1.5.1557. https://www.google.com.au/earth/. Accessed 15 June 2015. Cadavid Restrepo AM, Yang YR, Hamm NA, Gray DJ, Barnes TS, Williams GM, et al. Land cover change during a period of extensive landscape restoration in Ningxia hui autonomous region, China. Sci Total Environ. 2017;598:669–79. The National Aeronautics and Space Administration (NASA) and Ministry of Economy Trade and Industry (METI). The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Model (GDEM). Version 2. ASTER GDEM is a product of NASA and METI. 2011. https://asterweb.jpl.nasa.gov/gdem.asp. Accessed 16 Nov 2015. Pitney Bowes Software Inc. MapInfo, Group 1 Software and MapInfo Professional. https://www.pitneybowes.com/us/location-intelligence/geographic-information-systems/mapinfo-pro.html. Vuong QH. Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica. 1989:307–33. Members of OpenBUGS Project Management Group. OpenBUGS software version 3.2.2 rev 1012. 2014. http://www.openbugs.net/w/Downloads. Besag J, York J, Mollié A. Bayesian image restoration, with two applications in spatial statistics. Ann Inst Stat Math. 1991;43(1):1–20. Thevenet PS, Jensen O, Drut R, Cerrone GE, Grenóvero MS, Alvarez HM, et al. Viability and infectiousness of eggs of Echinococcus granulosus aged under natural conditions of inferior arid climate. Vet Parasitol. 2005;133(1):71–7. Veit P, Bilger B, Schad V, Schäfer J, Frank W, Lucius R. Influence of environmental factors on the infectivity of Echinococcus multilocularis eggs. Parasitology. 1995;110(01):79–86. Hesketh T, Jun YX, Lu L, Mei WH. Health status and access to health care of migrant workers in China. Public Health Rep. 2008;123(2):189–97. Cringoli G, Rinaldi L, Musella V, Veneziano V, Maurelli MP, Di Pietro F, et al. Geo-referencing livestock farms as tool for studying cystic echinococcosis epidemiology in cattle and water buffaloes from southern Italy. Geospat Health. 2007;2(1):105–11. Carmona C, Perdomo R, Carbo A, Alvarez C, Monti J, Grauert R, et al. Risk factors associated with human cystic echinococcosis in Florida, Uruguay: results of a mass screening study using ultrasound and serology. Am J Trop Med Hyg. 1998;58(5):599–605. Food and Agriculture Organization of the United Nations. Manual on meat inspection for developing countries. 1994. http://www.fao.org/docrep/003/t0756e/T0756E00.htm. Accessed 21 June 2017. Food and Agriculture Organization of the United Nations. Guidelines for humane handling, transport and slaughter of livestock. 2001. http://www.fao.org/3/a-x6909e.pdf. Accessed 21 June 2017. Cleary E, Barnes TS, Xu Y, Zhao H, Clements AC, Gray DJ, et al. Impact of "grain to green" Programme on echinococcosis infection in Ningxia hui autonomous region of China. Vet Parasitol. 2014;205(3):523–31. Yang YR, Clements AC, Gray DJ, Atkinson J-AM, Williams GM, Barnes TS, McManus DP. Impact of anthropogenic and natural environmental changes on Echinococcus transmission in Ningxia hui autonomous region, the People's Republic of China. Parasit Vectors. 2012;5:146. Wang H, Ma S, Cao D, Zhao H, Liu F, Schantz P, et al. An epidemiological survey on human hydatidosis in southern Qinghai Plateau. Chin J Parasit Dis Con. 2000;13(1):37–41. Ma T, Wu X, Marvin F, Yun L. Infection prevalence study of the main hosts of Echinococcus in Ningxia. Ning Med J. 2014;4:376e378. Wu X. Investigation of Ningxia livestock hydatid infection status in 2012. Ning Med J. 2015;37:277–8. Zhenghuan W, Xiaoming W, Xiaoqing L. Echinococcosis in China, a review of the epidemiology of Echinococcus spp. EcoHealth. 2008;5(2):115–26. Yang YR, Cheng L, Yang SK, Pan X, Sun T, Li X, et al. A hospital-based retrospective survey of human cystic and alveolar echinococcosis in Ningxia Hui Autonomous Region, PR China. Acta Trop. 2006;97(3):284–91. World Health Organization and World Organisation for Animal Health. Report of the WHO informal working group on cystic and alveolar echinococcosis surveillance, prevention and control, with the participation of the Food and Agriculture Organization of the United Nations and the World Organisation for Animal Health. 2011. http://apps.who.int/iris/bitstream/10665/44785/1/9789241502924_eng.pdf. Accessed 15 Dec 2014. Craig P, Hegglin D, Lightowlers M, Torgerson P, Wang Q. Chapter 2 Echinococcosis. Control and prevention Adv Parasitol. 2017;96:55–158. Danson FM, Craig PS, Man W, Shi D, Giraudoux P. Landscape dynamics and risk modeling of human alveolar echinococcosis. Photogramm Eng Remote Sensing. 2004;70(3):359–66. Li J, Zheng G, Liu H, Wang L, Tang Z, Shi H, et al. Situation analysis of Ningxia Province. In: China Climate Change Partnership Framework - Enhanced strategies for climate-proofed and environmentally sound agricultural production in the Yellow River Basin (C-PESAP) 2008. http://www.fao.org/fileadmin/templates/cpesap/Data/Ningxia/SASNingxiawp.pdf. Accessed 22 Mar 2016. Li Y, Conway D, Wu Y, Gao Q, Rothausen S, Xiong W, et al. Rural livelihoods and climate variability in Ningxia, Northwest China. Clim Chang. 2013;119(3–4):891–904. Rausch R. Life cycle patterns and geographic distribution of Echinococcus species. Echinococcus and hydatid disease. Wallingford: CAB International; 1995. Kelt DA. Small mammal communities in rainforest fragments in central southern Chile. Biol Conserv. 2000;92(3):345–58. Cox MP, Dickman CR, Hunter J. Effects of rainforest fragmentation on non-flying mammals of the eastern Dorrigo Plateau, Australia. Biol Conserv. 2004;115(2):175–89. Pardini R. Effects of forest fragmentation on small mammals in an Atlantic Forest landscape. Biodivers Conserv. 2004;13(13):2567–86. Craig P, Giraudoux P, Shi D, Bartholomot B, Barnish G, Delattre P, et al. An epidemiological and ecological study of human alveolar echinococcosis transmission in south Gansu, China. Acta Trop. 2000;77(2):167–77. Viel J-F, Giraudoux P, Abrial V, Bresson-Hadni S. Water vole (Arvicola terrestris Scherman) density as risk factor for human alveolar echinococcosis. Am J Trop Med Hyg. 1999;61(4):559–65. Raoul F, Deplazes P, Nonaka N, Piarroux R, Vuitton DA, Giraudoux P. Assessment of the epidemiological status of Echinococcus multilocularis in foxes in France using ELISA coprotests on fox faeces collected in the field. Int J Parasitol. 2001;31(14):1579–88. Pleydell DR, Chrétien S. Mixtures of GAMs for habitat suitability analysis with overdispersed presence/absence data. Comput Stat Data Anal. 2010;54(5):1405–18. Marston CG, Danson FM, Armitage RP, Giraudoux P, Pleydell DR, Wang Q, et al. A random forest approach for predicting the presence of Echinococcus multilocularis intermediate host Ochotona spp. presence in relation to landscape characteristics in western China. Appl Geogr. 2014;55:176–83. Federer K, Armua-Fernandez MT, Hoby S, Wenker C, Deplazes P. In vivo viability of Echinococcus multilocularis eggs in a rodent model after different thermo-treatments. Exp Parasitol. 2015;154:14–9. Moritz C, Patton JL, Conroy CJ, Parra JL, White GC, Beissinger SR. Impact of a century of climate change on small-mammal communities in Yosemite National Park, USA. Science. 2008;322(5899):261–4. Myers P, Lundrigan BL, Hoffman SM, Haraminac AP, Seto SH. Climate-induced changes in the small mammal communities of the northern Great Lakes region. Glob Chang Biol. 2009;15(6):1434–54. Hersteinsson P, Macdonald DW. Interspecific competition and the geographical distribution of red and arctic foxes Vulpes vulpes and Alopex lagopus. Oikos. 1992:505–15. Jenkins EJ, Schurer JM, Gesy KM. Old problems on a new playing field: helminth zoonoses transmitted among dogs, wildlife, and people in a changing northern climate. Vet Parasitol. 2011;182(1):54–69. Hamm NA, Magalhaes RJS, Clements AC. Earth observation, spatial data quality, and neglected tropical diseases. PLoS Negl Trop Dis. 2015;9(12):e0004164. Eckert J, Deplazes P. Alveolar echinococcosis in humans: the current situation in central Europe and the need for countermeasures. Parasitol Today. 1999;15(8):315–9. Hong Y, Li X, Stanton B, Lin D, Fang X, Rong M, Wang J. Too costly to be ill: health care access and health seeking behaviors among rural-to-urban migrants in China. World Health Popul. 2006;8(2):22. Xu B, Fochsen G, Xiu Y, Thorson A, Kemp J, Jiang Q. Perceptions and experiences of health care seeking and access to TB care - a qualitative study in rural Jiangsu Province, China. Health Policy. 2004;69(2):139–49. Deplazes P, Rinaldi L, Rojas CA, Torgerson P, Harandi M, Romig T, et al. Global Distribution of Alveolar and Cystic Echinococcosis. Adv Parasitol. 2017;95:315-493. Vuitton DA. The ambiguous role of immunity in echinococcosis: protection of the host or of the parasite? Acta Trop. 2003;85(2):119–32. Stehman SV. Sampling designs for accuracy assessment of land cover. Int J Remote Sens. 2009;30(20):5243–72. Hofstra N, Haylock M, New M, Jones P, Frei C. Comparison of six methods for the interpolation of daily, European climate data. J Geophys Res Atmos. 2008;113(D21) Ashraf M, Loftis JC, Hubbard K. Application of geostatistics to evaluate partial weather station networks. Agric For Meteorol. 1997;84(3–4):255–71. The authors are grateful to the Chinese Academy of Sciences for providing us with the climate data from 1980 to 2013. We acknowledge financial support by the National Health and Medical Research Council (NHMRC) of Australia of a NHMRC Project Grant (APP1009539). AMCR is a PhD Candidate supported by a Postgraduate Award from The Australian National University; ACAC is a NHMRC Senior Research Fellow; DPM is a NHMRC Senior Principal Research Fellow; and DJG is a NHMRC Career Development Fellow. We acknowledge financial support by the National Health and Medical Research Council (NHMRC) of Australia (APP1009539). AMCR is a PhD Candidate supported by a Postgraduate Award from The Australian National University, ACAC is a NHMRC Senior Research Fellow, DPM is a NHMRC Senior Principal Research Fellow and DJG is a NHMRC Career Development Fellow. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The data used in the present study are available from the corresponding author upon reasonable request. Research School of Population Health, The Australian National University, Canberra, ACT, Australia Angela M. Cadavid Restrepo , Darren J. Gray & Archie C. A. Clements Ningxia Medical University, Yinchuan, Ningxia Hui Autonomous Region, People's Republic of China Yu Rong Yang Molecular Parasitology Laboratory, QIMR Berghofer Medical Research Institute, Brisbane, QLD, Australia , Donald P. McManus & Darren J. Gray School of Veterinary Science, The University of Queensland, Gatton, QLD, Australia Tamsin S. Barnes & Ricardo J. Soares Magalhães Queensland Alliance for Agriculture and Food Innovation, The University of Queensland, Gatton, QLD, Australia School of Public Health, The University of Queensland, Brisbane, QLD, Australia Gail M. Williams Children's Health and Environment Program, Child Health Research Centre, The University of Queensland, Brisbane, QLD, Australia Ricardo J. Soares Magalhães Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Enschede, The Netherlands Nicholas A. S. Hamm Search for Angela M. Cadavid Restrepo in: Search for Yu Rong Yang in: Search for Donald P. McManus in: Search for Darren J. Gray in: Search for Tamsin S. Barnes in: Search for Gail M. Williams in: Search for Ricardo J. Soares Magalhães in: Search for Nicholas A. S. Hamm in: Search for Archie C. A. Clements in: AMCR and ACAC designed the study. AMCR and YRY collected, standardised and geo-referenced the clinical and environmental data. AMCR and ACAC developed the models with input from RJSM. ACAC, YRY, DPM, DJG, RJSM, TSB, GMW and NASH provided critical comments and helped in drafting the manuscript. AMCR and ACAC finalized the manuscript. All authors read and approved the final manuscript. Correspondence to Angela M. Cadavid Restrepo. The protocol for this study was reviewed and approved by the Human Research Ethics Committees of Ningxia Medical University, QIMR Berghofer Medical Research Institute and The Australian National University. Percent population change in NHAR for the periods 1980–1990, 1991–2001 and 2002–2013. (JPEG 3042 kb) Spatial distribution of the average annual mean temperature in °C in NHAR for the period 1980–2013. (DOCX 209 kb) Spatial distribution of the average annual mean precipitation in mm in NHAR for the period 1980–2013. (DOCX 182 kb) Maps of the spatial distribution of a annual, b summer and c winter temperature trends, and d annual, e summer and f winter precipitation trends in NHAR for the period 1 January 1980 to 31 December 2013. Note, the values presented in the figure are relative to the provincial average per decade. (DOCX 244 kb) OpenBUGS code used to develop the Bayesian spatial model (Model II) for cystic echinococcosis in NHAR from 1 January 1994 to 31 December 2013. (DOCX 13 kb) OpenBUGS code used to develop the Bayesian spatial model (Model II) for alveolar echinococcosis in NHAR from 1 January 1994 to 31 December 2013. (DOCX 13 kb) Number of observed and expected number of CE cases by year (1994–2013) in NHAR for the period 1 January 1994 to 31 December 2013. (DOCX 78 kb) Number of observed and expected number of AE cases by year (1994–2013) in NHAR for the period 1 January 1994 to 31 December 2013. (DOCX 72 kb) Annual temperature in NHAR for the period 1 January 1980 to 31 December 2013 and number of cases of CE and AE for the period 1 January 1994 to 31 December 2013. (DOCX 98 kb) Additional file 10: Annual precipitation in NHAR for the period 1 January 1980 to 31 December 2013 and number of cases of CE and AE for the period 1 January 1994 to 31 December 2013. (DOCX 119 kb) Township area covered by each land cover class in NHAR for the period 1 January 1980 to 31 December 2013 and number of cases of CE and AE for the period 1 January 1994 to 31 December 2013. (DOCX 208 kb) Scatterplots of number of CE cases by township against winter mean temperature at a 10-year lag. (DOCX 135 kb) Scatterplots of number of CE cases by township against annual mean temperature at 13-year lag. (DOCX 145 kb) Scatterplots of number of AE cases by township against winter mean temperature for the period 0–4 years before diagnosis. (DOCX 75 kb) Scatterplots of number of AE cases by township against annual mean temperature calculated for the period 11–15 years before diagnosis. (DOCX 72 kb) Spatial distribution of annual raw relative risks for CE in NHAR for the period 1994 to 2013. (JPEG 7859 kb) Spatial distribution of annual relative risks for AE in NHAR for the period 1994 to 2013. (JPEG 7722 kb) Cystic echinococcosis Alveolar echinococcosis
CommonCrawl
On the use of aggregated human mobility data to estimate the reproduction number Real-time tracking and prediction of COVID-19 infection using digital proxies of population mobility and mixing Kathy Leung, Joseph T. Wu & Gabriel M. Leung Heterogeneous interventions reduce the spread of COVID-19 in simulations on real mobility data Haotian Wang, Abhirup Ghosh, … Jie Gao How the individual human mobility spatio-temporally shapes the disease transmission dynamics Suttikiat Changruenngam, Dominique J. Bicout & Charin Modchang A single-agent extension of the SIR model describes the impact of mobility restrictions on the COVID-19 epidemic Matteo Paoluzzi, Nicoletta Gnan, … Andrea Crisanti Modelling the propagation of infectious disease via transportation networks Anupriya, Prateek Bansal & Daniel J. Graham Estimating, monitoring, and forecasting COVID-19 epidemics: a spatiotemporal approach applied to NYC data Vinicius V. L. Albani, Roberto M. Velho & Jorge P. Zubelli After the lockdown: simulating mobility, public health and economic recovery scenarios Alessandro Spelta, Andrea Flori, … Fabio Pammolli Superspreading quantified from bursty epidemic trajectories Julius B. Kirkegaard & Kim Sneppen A novel geo-hierarchical population mobility model for spatial spreading of resurgent epidemics Alexandru Topîrceanu & Radu-Emil Precup Fabio Vanni1,2,3, David Lambert3,4, Luigi Palatella5 & Paolo Grigolini3 Scientific Reports volume 11, Article number: 23286 (2021) Cite this article Statistical physics The reproduction number of an infectious disease, such as CoViD-19, can be described through a modified version of the susceptible-infected-recovered (SIR) model with time-dependent contact rate, where mobility data are used as proxy of average movement trends and interpersonal distances. We introduce a theoretical framework to explain and predict changes in the reproduction number of SARS-CoV-2 in terms of aggregated individual mobility and interpersonal proximity (alongside other epidemiological and environmental variables) during and after the lockdown period. We use an infection-age structured model described by a renewal equation. The model predicts the evolution of the reproduction number up to a week ahead of well-established estimates used in the literature. We show how lockdown policies, via reduction of proximity and mobility, reduce the impact of CoViD-19 and mitigate the risk of disease resurgence. We validate our theoretical framework using data from Google, Voxel51, Unacast, The CoViD-19 Mobility Data Network, and Analisi Distribuzione Aiuti. Understanding the effectiveness of public service announcements and large-scale physical distancing interventions is critical for managing the short and long-term phases of spread of the epidemic, as in the case of the CoViD-19 epidemic. Many countries have reacted via intervention strategies based on mobility and physical lockdowns together with regional and international border restrictions1,2,3,4. Many of these intervention policies are based on assessing the risk of an outbreak through compartmental disease models5,6,7,8,9. We intend our model to be complementary to other well-assessed estimates of the reproduction number. These estimates are based on phenomenological models which provide a starting point for estimation of key transmission parameters, such as the reproduction number, and forecasts of epidemic impact10,11,12,13,14. From a practical point of view, it is fundamental to understand which approach best permits one to forecast epidemic dynamics in the presence of incomplete data. This is especially true when a country's healthcare system is overwhelmed and data collection becomes sporadic. It is also important during the early phases of disease spread, when testing is incomplete or non-existent. For CoViD-19 there is the additional problem of undocumented cases15,16. In our analysis, we focus our attention on the contribution of asymptomatic or undiagnosed (and thus undocumented) individuals to the propagation of the contagion, assuming that these hidden infectious agents have the ability to spread the disease in an environment where susceptible agents are present and all the individuals have uniform mobility and physical proximity parameters. Consequently, we evaluate the impact of physical distancing policies in response to the CoViD-19 epidemic in Italy, the US, and selected European locations. Our model is a renewal SIR model with a time-dependent contact rate. We provide an expression for the contact rate using real-world mobility and social distancing data from Google and other providers. Our approach is complementary to typical (fixed contact-rate, non-renewal) compartmental SIR models, with two essential differences: the time dependence of \(\beta \) (contact rate in the SIR model) and the fact that we split \(\beta \) into several factors. In our model we formulate a specific factorization of the time-dependent contact rate into variables directly related to mobility and social distancing behaviors from real world data, together with other epidemiological and clinical variables. In particular, we believe that the advantage of our model is that it is parsimonious in capturing the aggregated reproduction trends, splitting the contributions of different factors of a disease diffusion framework. Our model posits that undiagnosed individuals, captured by the variable \(\lambda \), drive and sustain the infection process through a contact web disentangled into two aspects: mobility as movement trends and proximity as proxy for interpersonal distance. This framework takes into account a natural delay between time of contacts and the triggering of new infection chains, making the model a forward estimate for the values of \(R_t\) to come. This perspective can provide useful insight for policy makers and regulators planning mobility restrictions or other strategies for mitigating the diffusion of an epidemic like CoViD-19. We interpret this approach in terms of a macroscopic collision theory of infected individuals in a region with a given susceptible population, taking into account the mobility of individuals as well as their radii of interaction as reliable proxies of physical distancing measures (as explained in the "Methods" section): $$\begin{aligned} R_{t}= {R}_0 {\tilde{S}}_t\tilde{\mathscr {B}}_t. \end{aligned}$$ Here, the tildes indicate that the variable is to be evaluated with a delay of \(\tau _g\), a variable that accounts for the typical time it takes to observe newly generated positive tests (see17,18 and the "Calibration" section). \({\hat{R}}_0\) represents the reproduction number calculated in a given period of time which also embodies the constant contribution over that time. Next, \(S_t\) is the fraction of individuals that are susceptible. Finally, \({\mathscr {B}}_{t}\) is the transmission rate function, which depends on average contact frequency, the virus's infectiousness, and the infectious age of individuals in the contagion process. In this way, it is a generalization of the interaction variable in compartmental models. The model of Eq. (1) is distinguished from other estimates in literature by its forward rather than backward-looking estimation procedure. We name the model proposed in this paper as the social distancing based model, aka SDM. Imagine that some infectious individuals have not been detected and isolated. We wish to evaluate a measure of risk of exposure for a given susceptible individual. We take a kinetic approach to the evaluation of this risk. We imagine unobserved spreaders are free to infect other individuals and that the contagion acts within a certain radius r of an infected individual. We imagine an environment in which two types of individuals are present at a calendar time t. \(n_s\) is the density of susceptible individuals in a region, while j is the density of actual new infected individuals where diagnosed and undocumented cases are both taken into consideration, as discussed in the "Methods" section. We consider the regional mobility, \(\nu \), to be the average distance explored by each individual during the time interval, \(\Delta t\), (usually daily). We define the distance, r, to be the maximum distance that an infected person can be from a susceptible person (in the model) and still cause them to become infected. This distance depends, for example, on the virus' infectiousness as a function of distance and on the use of personal protective equipment, which can create a physical barrier so increasing effective distances. Physical distancing regulations, personal protection devices (such as mask wearing), and hygienic norms will result in a decrease in r, as also assessed in19,20. The interpersonal proximity \(\rho \) has an inverse effect on infectivity to that of the interaction radius and it is defined to be the inverse of the square root of the average density of individuals in a region as discussed in the S.I. in more detail. As specified in Table 1, \(\lambda \) represents the efficiency of detecting real cases of infection, and takes on values between 0 and 1. In particular, \(\lambda \) would have a value equal to 1 if testing and contact tracing technologies were maximally efficient, and its value approaches 0 if very few in a large infected population are detected, as shown by16,21. The value of \(\lambda \) changes with infection age as well as t during the disease outbreak. These changes might depend, for example, on the ability to detect and isolate individuals, or the efficiency of contact tracing during the epidemic. Contact tracing efficiency varies with the characteristics of the infection and the speed and coverage of the tracing process. Centralized manual testing and tracing may become an impractical strategy and a lockdown may become a more efficient and effective means of controlling an epidemic. However, lockdowns are not sustainable in the long term because of their social, economic, physical, and psychological effects. Lockdown policies have reduced the spread of CoViD-19, but as restrictions are relaxed transmission often goes up again. Finally, the number of people at risk (susceptible individuals) is $$\begin{aligned} n_s = n-\sum _{i=\tau _g}^\infty j(t-i) =n - \sum _{i=\tau _g}^\infty \frac{j_o(t-i) }{ \lambda (t-i)}, \end{aligned}$$ where n is initial fraction of susceptible population respect to the total population, and \(j_0\) is the density of only the new diagnosed cases (official data) which is only a fraction of the total actual cases of new infections, note that \(j_o(t)=j(t)=0\) for \(t<0\). There are various factors which contribute to the transmission of a disease. The biological and environmental properties are accounted for in the transmissivity variable \(\eta \), as explained in the "Methods" section. Physical proximity, viral load, and environmental conditions determine the infectious dose necessary to trigger the infection in a new host. For example, enclosed environments such as workplaces and schools correspond to higher \(\eta \) values in the model as compared to an outdoor space. A summary of all the variables in the model appears in Table 1. Table 1 Parameters of kinetic approach to infectious contacts. Now let us recall the actual (or effective) reproduction number which represents the average number of secondary infections generated by each new infectious case (assuming \(n_s\) and other environmental variables retain their current values forever). The actual reproduction number can be used as a predictive tool to track the epidemic's evolution. It is also a measure of epidemic risk, in the sense that if it is significantly above one for long enough, then an outbreak will occur. Thus, by linking a dynamical model with time-series data, one obtains a measure of epidemic risk. This risk is derived (see "Methods" section) leading to the effective reproduction number: $$\begin{aligned} {R(t)}\sim R(t_0) \dfrac{n_s(t-\tau _g)}{n_s(t_0-\tau _g)} \dfrac{\rho (t_0-\tau _g)}{\rho (t-\tau _g)} \dfrac{\nu (t-\tau _g)}{\nu (t_0-\tau _g)} , \end{aligned}$$ where \(t_0\) is an initial (or calibration) time, and we have taken the testing efficiency \(\lambda \) and the transmissibility \(\eta \) constant during the lockdown periods, as discussed in the "Methods" section. We call the reproduction number as expressed in the previous equation \(R_t\) SDM. The above equation represents the change in the average number of secondary cases caused by a single primary case throughout the course of infection at calendar time t calibrated at an initial value (for example, before the lockdown). In the present section, we apply Eq. (2) to data from various sources in order to validate our modeling framework. We assume the spatial homogeneity of every variable. In particular, \(\rho \) to be the average proximity between individuals, and \(\nu (t)\) to be their average mobility. Moreover we consider the fraction of missed cases, \(\lambda \), to be constant with respect to infection age. Additionally, we define a typical time interval, the generative time \(\tau _g\) as the average infection age at which positive test result is generated. Reproduction number estimates for two US states. Comparison between the reproduction number calculated from symptom onset data as in literature22 (dashed red line) and the reproduction number computed according our kinetic SDM approach, using data from23 for mobility24, for social proximity and25 for epidemic data. Ribbons are the \(90\%\) credible interval obtained via bootstrapping. Insets represent the single components of the reproduction number as in Eq. (1), specifically solid black and gray line is R(t) using only mobility and interpersonal proximity variables respectively, and dashed black line is R(t) due to the depletion of susceptibles only. The scale of the insets are the same the main plot. Calibration coefficients in the two examples are \(c=1.21\) and \(c=1.05\) respectively (see "Calibration" subsection of Methods). The changing trends of the reproduction number may be due to several interrelated reasons apart from physical distancing policies. These reasons can be collected into two groups. The first has to do with the virus itself and its capacity to spread. Favorable environmental conditions or the emergence of less dangerous strains can decrease the effective infectiousness of the contagion. The other group of reasons is connected to the decrease in the susceptible population. On the other hand, physical distancing (also known as social distancing) is a practice recommended by public health officials to stop or slow down the spread of contagious diseases. It requires maintaining physical space between individuals who may spread certain infectious diseases. The data repositories used to obtain our results are listed in the Supplementary Information. As proxies for mobility we consider both the changes in movement fluxes and percent change in average distance traveled, as released respectively in Google's mobility report23 and Unacast's scoreboard26. We take the mobility to represent the average relative speed of the individuals with respect to each other. The fact that we use relative velocity is important, as it properly accounts for situations in which people move rapidly in a coordinated way. We infer a measure of proximity from the active population density, i.e., the number of people per unit area moving about in selected locations, as variously reported by Voxel51's proximity index24 and Unacast's human encounters26. Effective reproduction number for Italy during the lockdown period (March 9th to May 18th). We compare the \(R_t\) (dashed red) estimate by well established approaches with \(R_t\) SDM (blue) from the method we propose using human mobility data. Includes depletion of the susceptible population, individual mobility and physical proximity. The left inset compares the \(R_t\) (dashed) with the \(R_t\) SDM (solid) by using mobility data only and the right inset compares the \(R_t\) (dashed) with the \(R_t\) SDM (solid) by using physical proximity data only. The scale of the insets are the same as the main plot. Calibration coefficient \(c=1.14\) (see "Calibration" subsection of Methods). In the general analysis of epidemic data we refer to reported infected persons by their dates of diagnosis via laboratory test. However, some countries also report infections by the date of first symptoms reported by patients. In particular, we have used the latter type of data when possible (Italy) and inferred it in the case of the USA and the UK via an analysis of the effective reproduction number assessed by22,27,28. We use epidemiological data at the level of states and mobility data at the level of cities for US locations and at the level of state for EU countries. We have studied and analyzed these regions during the period in which lockdown policies were in action as reported in29. Finally, for US states, we use22 as estimation of the reproduction number as well as the estimation of susceptible population considered. When analyzing other countries, we use various sources, averaging Epiforecast27 and Covid19 projections28 so as to have an ensemble calculation of the actual reproduction number R(t). In Fig. 1, we show the hardest hit states in the US as of June 2020: New York and Florida. Note the good agreement between the theory of this paper using mobility and proximity and independent estimates of the reproduction number. Note that for New York state an important cause for the reduction in R(t) is due to the depletion of the susceptible population, while physical distancing has a smaller impact. Meanwhile, in Florida, the behavior of R(t) is mainly due to physical distancing restrictions taken up at the end of the shelter-at-home policy. Effective reproduction number for New York (a) and Florida (b) state in USA for a more extended period of the epidemic, by using data from Google23 mobility trend. The use of appropriate face coverings should reduce the transmission of CoViD-19 by individuals who do not have symptoms and may reinforce physical distancing. Public health officials also caution that face coverings may increase risk if users reduce their use of other efficacious measures such as physical distancing and frequent hand washing. In the singular case of Italy as report in Fig. 2, we take the number of face masks distributed to the population as a proxy for physical proximity (i.e., we assume the number distributed is effectively equivalent to a certain interpersonal distance), since at the beginning of the outbreak Italy has reported the number of distributed face mask in the country30. In Fig. 3, we show the two derivations of the effective reproduction number R(t). The first is found using RtLive22. We use this to study the diffusion of the second wave of CoViD-19 in USA. We have sourced mobility and proximity data from the Data for Good program31 (see also the SI section B for further discussions and results). The analysis covers the period from March to November 2020 in two US states (New York and Florida). Our analysis closely matches the epidemic risk trend by using mobility data and new cases yielding an R(t) value 6 days sooner than typical R(t) estimations in literature. National Effective reproduction number for Italy during the period from March 2020 to May 2021. Here, \(R_t literature\) is calculated using the method of32. After the vertical dash line this estimate is not based on completed data. Calibration coefficients are \(c=1.15\) for the \(R_t SDM\) estimate based on Google data and \(c=1.17\) for the \(R_t SDM\) estimate based on Facebook data, and \(\tau _g=12\) (see "Calibration" subsection of Methods). Alternatively we perform a further analysis comparing our \(R_t\) SDM with the reproduction number computed by a direct renewal equation where we use the number of cases by the onset of symptoms, as in Fig. 4, data from33 for Italy. In this figure we have used the renewal estimate of the reproduction number as in16 and the social distancing based estimation using two dataset for human mobility trend from Google23 and31. We notice that the anomalous peak observed in the epidemiological estimate in October is not present in the social estimate. This effect is largely due to an abrupt increase of number of performed tests in that period. Finally, we call attention to the fact that mobility alone is not sufficient to explain the dynamics of epidemics, as discussed in34. We see that physical proximity is crucial in resolving why a relatively stable R(t) below 1 has persisted, despite an increase in mobility after the end of the lockdown period. On the other hand, one should subtract from the susceptible population the number of asymptomatic or undocumented infected individuals, which are not counted in official reports. We provide an estimate for this number in the "Methods" section. The effects of vaccination on \(R_t\) are difficult to estimate. In Fig. 4 we present a preliminary analysis of these effects. Taking into account vaccination is necessary to accurately estimate the SDM reproduction number once a significant fraction of the population has been vaccinated. We assume that vaccination reduces the fraction of the population that is susceptible by a factor of \(1-\nu (t)\), where \(\nu (t)\) is the fraction of the population that has received the vaccine at time t. It is outside the scope of this paper to explore the effects of vaccination; however, we stress the importance of using this information to properly assess the reproduction number via mobility data. In a follow-up paper, the authors intend to analyze the effects of different types of vaccines on \(R_t\) at the regional (and higher) level. The outbreak of the CoViD-19 pandemic has pushed many countries towards a response that relies on the policy of social distancing, the implementation of which has important social and economic impacts on the organization of production and on the work process. In response to the CoViD-19 pandemic, countries have introduced various levels of 'lockdown' to reduce the number of new infections. From Eq. (2) it is evident that as the epidemic evolves the force of infection is reduced for various reasons, primarily due to physical distancing policies adopted by most countries in the form of a lockdown of human mobility. Since it is not practical to reduce physical distancing beyond a certain socially and economically acceptable level, the only foreseeable reasons for the end of an epidemic are the depletion of susceptible population (immunization), a change in the intrinsic infectiousness of the virus, a sustained change in public hygiene habits (mask wearing, physical distancing, etc.), or innovation in contact tracing, testing, and isolation, see35 for a discussion. Mechanistic models of disease transmission are often used to forecast disease trajectories and likely disease burden, but are hampered by substantial uncertainty in disease epidemiology in the presence of significant social feedback. Models of disease transmission dynamics are hindered by uncertainty in the role of asymptomatic transmission, the length of the incubation period, the generation interval, and the contribution of different modes of transmission. Infectiousness depends on the frequency of contacts and on the level of infection within each individual. In airborne infections, the former can be decomposed as a product of mobility and physical proximity, interpreted broadly as an effective distance measure which also includes the amount and type of physical protection used. The latter involves an internal micro-scale competition between the virus and the immune system which depends on environmental factors like pollution levels and repeated viral exposure, which can modify the viral load shed by infectious individuals. We have mainly focused our study on the spread of a contagion in a homogeneous population, using a renewal collisional equation which has proven to be a powerful tool for analyzing and modeling epidemic data along side other well established measurements of the reproduction number. We have found it to be both practically and conceptually useful. This analysis has focused on the lockdown, but the same theoretical tools along with additional technology and data resources show promise for the analysis of the post-lockdown response and further mitigation of this disease. At this stage, we do not investigate the dynamics of the severity of the disease. In order to examine these dynamics, we would need to focus our attention on the microscale corresponding to viral particles and immune cells. Since these agents induce the dynamics of the varying intensities of the disease observed at the macroscopic scale of the human population. Furthermore, to assess the severity of an epidemic in a population, one should take into account both the reproduction number R(t) and the absolute number of cases. A high R(t) is manageable in the very short run as long as there are not many people sick to begin with. An important aspect of R(t) is that it represents only an average across a region. This average can miss regional clusters of infection. Another subtlety not captured by R(t) is that many people never infect others, but a few 'superspreaders' pass on the disease many more times than average, perhaps because they mingle in crowded, indoor events where the virus spreads more easily. This means that bans on certain crowded indoor activities could have more benefit than blanket restrictions introduced whenever the R(t) value hits one. In conclusion, in addition to R(t) one should look at trends in numbers of new infections, deaths, hospital admissions, and cohort surveys to see how many people in a population currently have the disease, or have already had it. Fatality rates and intensive care hospitalization rates are related to disease severity. In our collisional kinetic framework we have considered contacts among individuals to be random. In addition to these erratic contacts , one can consider structured contacts occurring at home, in hospitals, workplaces, and schools, just to mention a few of the possibilities. For structured contacts, we should consider the use of a different approach than collision theory. One example of a situation in which interactions are more structured is in the theory of random growth of surfaces. In the model considered by36, the growing surface is represented as a set of columns, which can be thought of as the individuals of a society that interact. These individuals influence each other and self-organize in the presence of noise so that anomalous scaling and long-range correlations are produced, which are a manifestation of the cooperation among individuals. Since people interact in correlated ways37, an extension of the collisional model of the present paper to include correlations among the movements of individuals would be more realistic (and likely important for small population sizes or parameter values near \(R=1\) or \(R=0\)). For a simulation of the interplay between the social and epidemiological effects in a two-layer network, see38. We stress some advantages of using \(R_t\) SDM alongside the well-established estimations in literature. First, the social estimate is available a couple of weeks earlier then the epidemiological estimate. Second, if a deviation is observed between these two estimates, it may be a sign of a change in the transmissibility of the virus. Furthermore, the approach we have presented allow us to disentangle the effects of population mobility, physical proximity, and depletion of susceptibles on the progression of the epidemic. Knowing the effects of each of these components of the response of the government and society to the CoViD-19 epidemic should allow for less costly and more effective strategies for defeating and mitigating epidemics. In particular, this collision model approach to estimation of infection spread should help policy makers and governments to better assess the continuing threat of CoViD-19 to the public welfare. The renewal equation was introduced in the context of population dynamics studies. Later it was reinterpreted along the lines of stochastic processes, as in39, where transmission occurred via a Poisson infection process. This process is such that the probability that, between time t and \(t+\delta t\), someone infected a time \(\tau \) ago successfully infects someone else is \(A(t, \tau )\delta t\) , where \(\delta t\) is a very small time interval. As a consequence, the predicted mean infectious incidence at time t follows the so-called renewal equation: $$\begin{aligned} j(t)=-\frac{d}{dt}n_s(t)=\int _{0}^{\infty }A(t,\tau )j(t-\tau )d\tau +i(t) \end{aligned}$$ where \(\tau \) is known as the infection age and j(t) is the rate of production of infectious individuals. The kernel \(A(t,\tau )\) is the average rate at which an individual infected \(\tau \) time units earlier generates secondary cases. In other words, \(A(t,\tau )\) is the expected infectivity of an individual with infection-age \(\tau \), it can be interpreted as the reproduction function for new infections at time t. A practical issue concerns the extrinsic dynamics (e.g., public health interventions) of time inhomogeneities in the number of cases highlighting the depletion of susceptible individuals when contact tracing, quarantine, and isolation are implemented during the course of an epidemic. Finally, i(t) is a function that describes the effects of an external source of infected persons. For the special case \(i(t) = A\delta (t)\), it encodes the initial number of imported infected individuals. Let us notice that one could completely disregard the external source of infectious individuals, by modelling an infinitely old epidemic where \(\tau \in [0,\infty )\) in the renewal integral so disregarding the imported cases. The kernel A can be factorized as $$\begin{aligned} A(t,\tau )=n_s(t)\beta (t,\tau )\Gamma (t,\tau ), \end{aligned}$$ where \(\beta (t,\tau )\) is the product of the contact rate and the risk of infection (i.e., the effective contact rate), and \(\Gamma (t,\tau )\) is the probability of being infectious at infection age \(\tau \). So, reduction in contact frequency with calendar time t affects \(\beta (t, \tau )\) while early removal of infectious individuals at calendar time t changes the form of \(\Gamma (t, \tau )\). An earlier average infection age at first transmission of the disease will result from contact tracing and isolation. However, the classic approach to renewal equations for epidemics assumes, as in12,40,41,42, that the non-linearity of an epidemic is characterized by the depletion of susceptible individuals alone, so that $$\begin{aligned} A(t,\tau )=n_s(t)\beta (\tau )\Gamma (\tau ). \end{aligned}$$ Finally, the proportion of persons who have the ability to infect at a given calendar time is given by the number of infected individuals which is called prevalence, $$\begin{aligned} p(t)=\int _{0}^{\infty } \Gamma (\tau ) j(t-\tau ) d\tau . \end{aligned}$$ Notice that p(t) is not the number of active infected individuals generally reported in epidemic data published by different national health services. This is because the officially detected cases are actively confined (in hospitals or at home) and so their contribution to the spread of the epidemic is not so relevant. On the contrary p(t) represents the infected people that are still conducting their lives as usual, possibly infecting other people. The most important assumptions in our use of phenomenological models are (1) Short time scale of the epidemic (much shorter than the characteristic birth and death time scales of the population) (2) Well mixed population (force of infection homogeneously the same for all ages, sexes, etc.) (3) closed population (no immigration or emigration) (4) initial small shock (the initial infected group extremely small with respect to the size of the susceptible population). Using the collision theory for chemical reactions in solution with two types of molecules, we can write down the rate of contacts between the two types in a given volume, per unit time \(z= n_s j_{\text {x}} 2\pi r \nu \). Where we have assumed that all agents are ideal point particles that do not interact directly, and travel through space in straight lines. We further investigate the assumption of such collision model in43. However, not all contacts will result in secondary infectious, rather only those contacts that have sufficient viral load so as to surmount a certain threshold for triggering the infection. Such transmission efficacy should depend inversely on the physical distance between individuals. Moreover, the collision rate, in reality, depends on time and, in general, on the epidemic's evolution. This is because the total number of agents changes over time. As an approximation, we embed all of these complexities in the choice of the radius r, so to maintain the simplest form of crossection z. Suppose that during an outbreak only a certain fraction of infectious persons are observed through direct testing, other infectious individuals are not observed, e.g., because of lack of symptoms or the mildness of their illness. In particular, asymptomatic secondary transmissions, caused by those who have been infected and have not developed symptoms yet, and also by those who have been infected and will not become symptomatic throughout the course of infection, must be considered. At a given calendar time, t, we imagine that the important new cases are not the observed newly infected (which are quarantined or self-isolate), but rather the fraction of newly infected that are not observed. Some of these unobserved infected spread the disease. The observed cases are a fraction \(\lambda _t\) of all cases, i.e., $$\begin{aligned} j_o(t)=\lambda _t \,j(t), \end{aligned}$$ where \(\lambda _t\) is the rate of detection which can change over time depending on the details of and degree of adherence to testing protocols and medical screenings. Moreover, the observed cases together with the undocumented cases constitute all cases so that $$\begin{aligned} j(t)=j_o(t)+j_{\text {x}}(t). \end{aligned}$$ Thus, the relation between undocumented infected and documented infected individuals is $$\begin{aligned} j_{\text {x}}(t)= j_o(t)\frac{1-\lambda _t}{\lambda _t}. \end{aligned}$$ If the population screening procedure is effective, we have \(\lambda =1\). This could happen, for example, if the infected group is made up of only symptomatic persons which are infectious only after the onset of symptoms. As a first approximation, we have considered \(\eta \) to be constant over the time periods we considered, and \(\lambda \) to be a slowly changing function (over a time scale of \(\tau _A\) with respect to the calendar time t) so that \(\lambda (t-\tau )\approx \lambda (t) \). Finally, the actual (or effective) reproduction number can be written as the incidence-prevalence ratio $$\begin{aligned} R(t)= D \frac{j(t)}{p(t)}, \end{aligned}$$ where prevalence is the proportion of persons who have the ability to infect at a given calendar time. This ratio indicates the propensity of currently infected individuals to infect susceptibles and $$\begin{aligned} D:=\int _{0}^{\infty }\Gamma (\tau )d\tau \end{aligned}$$ is the average infectious period (or mean generation time). Therefore the actual reproduction number written as incidence persistence ratio is: $$\begin{aligned} {R(t+\tau _g)}\sim R(t_0+\tau _g) \dfrac{n_s(t)}{n_s(t_0)}\dfrac{\eta (t)}{\eta (t_0)}\dfrac{\rho (t_0)}{\rho (t)} \dfrac{\nu (t)}{\nu (t_0)} \dfrac{ 1-\lambda (t)}{ 1-\lambda (t_0)} \equiv {R}_0 {\tilde{S}}_t\tilde{\mathscr {B}}_t, \end{aligned}$$ where we have also considered some practical issues in the calculation of the reproduction number as given in18. Note that, R(t) does not depend explicitly on \(\Gamma (\tau )\), except through its integral over all possible values of \(\tau \). Thus, to a an adequate degree of approximation, it only depends on the typical time between infection and detection. Indeed, one can replace the \(\Gamma \) distribution with any distribution with the same mean recovery time (i.e., time to become non-infectious). As a consequence, the most changing \(\Gamma \) (as a function of \(\tau \)) can do is change R(t) by a re-scaling. However, effectively, the infectious age distribution depends on t. Since contact tracing, testing, and isolation (as well as treatments) will tend to reduce the active infectious period (and their use depends on t). On the other hand, the absolute scale of R(t) is also important, since one would like to maintain a value of R below 1. We discuss some important points for the calibration of the social distancing estimate of \(R_t\). First, calibration is required since we need to align the SDM reproduction number we compute with the reproduction number derived using the epidemiological data obtained by estimation methods in literature. Regressing the two variables, we find a constant c which is then embedded in the the \({\hat{R}}_0\). The second point is essentially due to possible misalignment between the two different estimation procedures because of intrinsic discrepancies in data we use. For this purpose, we evaluate the multiplicative scaling factor in the reproduction number Eq. (4), using a zero-intercept linear regression between the two time series of the reproduction number. Additionally, we have set the generative time \(\tau _g=6\), which takes into account the typical time to generate positive test results. This time scale is interpreted as detection period which is the time between exposure (contact) and detection (isolation)44,45,46. The generative time has been estimated through the synchronization between the signals in the calibration steps. This is equivalent to the delay estimate used by RtLive and Epiforecast and other reconstruction infection estimates as17,18,47,48 as in Fig. 3. Note that for Fig. 4 we used the method of Ref.32 to estimate R(t). Since this estimate was not adjusted for the delay in reporting symptoms, it was necessary to add an additional 6 days to the value of \(\tau _g\) for a value of \(\tau _g=12\). The values chosen produce a good alignment between the features of R(t) derived as in the literature and corresponding features of R(t) as given by Eq. (4). A last step in the calibration consists in the estimation of the fraction of the population that is infected, which is particularly important for a longer-term analysis has. This is accomplished by studying the dependence of the reproductive number R(t) on the ratio, c(t), between the official number of people infected and the total population of the region (Italy) or state (US)16. The value \(\lambda \) has changed over time throughout the epidemic and after the end of the spring 2020 lockdowns it increased, possibly due to the increased number of tests performed. Finally, when plotting the reproduction number, to visualize the trend, we use non-parametric regression analysis with LoWeSS (Locally Weighted Scatterplot Smoothing) surrounded by a \(90\%\) confidence interval obtained through bootstrapping. Kishore, N. et al. Measuring mobility to monitor travel and physical distancing interventions: A common framework for mobile phone data analysis. Lancet Digit. Health (2020). Flaxman, S. et al. Estimating the effects of non-pharmaceutical interventions on covid-19 in Europe. Nature 584, 257–261 (2020). Article ADS CAS Google Scholar Nouvellet, P. et al. Reduction in mobility and covid-19 transmission. Nat. Commun. 12, 1–9 (2021). Gatalo, O., Tseng, K., Hamilton, A., Lin, G. & Klein, E. Associations between phone mobility data and covid-19 cases. Lancet Infect. Dis. 21, e111 (2021). Badr, H. S. et al. Association between mobility patterns and covid-19 transmission in the USA: A mathematical modelling study. Lancet Infect. Dis. 20, 1247–1254 (2020). Arenas, A. et al. Modeling the spatiotemporal epidemic spreading of covid-19 and the impact of mobility and social distancing interventions. Phys. Rev. X 10, 041055 (2020). Oka, T., Wei, W. & Zhu, D. The effect of human mobility restrictions on the covid-19 transmission network in China. PLoS ONE 16, e0254403 (2021). Aleta, A. et al. Modelling the impact of testing, contact tracing and household quarantine on second waves of covid-19. Nat. Hum. Behav. 4, 964–971 (2020). Chang, S. et al. Mobility network models of covid-19 explain inequities and inform reopening. Nature 589, 82–87 (2021). Yan, P. & Chowell, G. Quantitative Methods for Investigating Infectious Disease Outbreaks, vol. 70 (Springer, 2019). Chowell, G., Hyman, J. M., Bettencourt, L. M. & Castillo-Chavez, C. Mathematical and Statistical Estimation Approaches in Epidemiology (Springer, 2009). Breda, D., Diekmann, O., De Graaf, W., Pugliese, A. & Vermiglio, R. On the formulation of epidemic models (an appraisal of kermack and mckendrick). J. Biol. Dyn. 6, 103–117 (2012). Article MathSciNet Google Scholar Nishiura, H. Correcting the actual reproduction number: A simple method to estimate r0 from early epidemic growth data. Int. J. Environ. Res. Public Health 7, 291–302 (2010). Metz, J. The epidemic in a closed population with all susceptibles equally vulnerable; some results for large susceptible populations and small initial infections. Acta Biotheor. 27, 75–123 (1978). Li, R. et al. Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (sars-cov-2). Science 368, 489–493 (2020). Palatella, L., Vanni, F. & Lambert, D. A phenomenological estimate of the true scale of covid-19 from primary data. Chaos Solitons Fract. 110854 (2021). Abbott, S. et al. Estimating the time-varying reproduction number of sars-cov-2 using national and subnational case counts [version 1; peer review: awaiting peer review]. Wellcome Open Research 5 (2020). Gostic, K. M. et al. Practical considerations for measuring the effective reproductive number, r t. PLoS Comput. Biol. 16, e1008409 (2020). Zhang, R., Li, Y., Zhang, A. L., Wang, Y. & Molina, M. J. Identifying airborne transmission as the dominant route for the spread of covid-19. Proc. Natl. Acad. Sci. (2020). MacIntyre, C. R. & Wang, Q. Physical distancing, face masks, and eye protection for prevention of covid-19. Lancet (2020). Ferretti, L. et al. Quantifying sars-cov-2 transmission suggests epidemic control with digital contact tracing. Science 368 (2020). Systrom, K., Vladek, T. & Krieger, M. Project title. https://github.com/rtcovidlive/covid-model (2020). Google, L. Google covid-19 community mobility reports (2020). Data retrieved from, https://www.google.com/covid19/mobility. Voxel51. Measuring the social impact of the coronavirus pandemic (2020). https://pdi.voxel51.com. CovidTracking. The covid tracking project (2020). https://covidtracking.com/. Unacast. Unacast social distancing dataset (5/27/2020). https://www.unacast.com/data-for-good. Epiforecast. Covid working group, cmmid (2020). https://epiforecasts.io/covid/. Covid19Projections. Covid-19 projections using machine learning (2020). https://covid19-projections.com/. ACAPS. Government measures dataset (2020). https://www.acaps.org/covid19-government-measures-dataset. DPC. Analisi distribuzione aiuti (ada) (2020). Sito del Dipartimento della Protezione Civile - Presidenza del Consiglio dei Ministri, http://www.protezionecivile.gov.it/attivita-rischi/rischio-sanitario/emergenze/coronavirus/materiali-distribuiti-alle-regioni. Facebook. The covid-19 mobility data network (12/20/2020). https://research.fb.com/blog/2020/06/protecting-privacy-in-facebook-mobility-data-during-the-covid-19-response/. Cori, A., Ferguson, N. M., Fraser, C. & Cauchemez, S. A new framework and software to estimate time-varying reproduction numbers during epidemics. Am. J. Epidemiol. 178, 1505–1512 (2013). ISS. Integrated surveillance of covid-19 in Italy (2020). Scientific coordination by Centro Nazionale per la Prevenzione delle malattie e la Promozione della Salute, CNAPPS - ISS. https://www.epicentro.iss.it/en/coronavirus/. Cintia, P. et al. The relationship between human mobility and viral transmissibility during the covid-19 epidemics in Italy. arXiv preprint arXiv:2006.03141 (2020). Nepomuceno, M. R. et al. Besides population age structure, health and other demographic factors can contribute to understanding the covid-19 burden. Proc. Natl. Acad. Sci. 117, 13881–13883 (2020). Failla, R., Grigolini, P., Ignaccolo, M. & Schwettmann, A. Random growth of interfaces as a subordinated process. Phys. Rev. E 70, 010101 (2004). Song, C., Koren, T., Wang, P. & Barabási, A.-L. Modelling the scaling properties of human mobility. Nat. Phys. 6, 818–823 (2010). Grigolini, P., Lambert, D., Mahmoodi, K. & Piccinini, N. From social to epidemic criticality and back 2103, 13288 (2021). Fraser, C. Estimating individual and household reproduction numbers in an emerging epidemic. PLoS ONE 2 (2007). Nishiura, H. Time variations in the generation time of an infectious disease: Implications for sampling to appropriately quantify transmission potential. Math. Biosci. Eng. 7, 851–869 (2010). Champredon, D., Dushoff, J. & Earn, D. J. Equivalence of the erlang-distributed seir epidemic model and the renewal equation. SIAM J. Appl. Math. 78, 3258–3278 (2018). Nishiura, H. & Chowell, G. The effective reproduction number as a prelude to statistical estimation of time-dependent epidemic trends. In Mathematical and Statistical Estimation Approaches in Epidemiology, 103–121 (Springer, 2009). Vanni, F. & Lambert, D. On the regularity of human mobility patterns at times of a pandemic. arXiv preprint arXiv:2104.08975 (2021). Lehtinen, S., Ashcroft, P. & Bonhoeffer, S. On the relationship between serial interval, infectiousness profile and generation time. J. R. Soc. Interface 18, 20200756 (2021). Scarabel, F., Pellis, L., Ogden, N. H. & Wu, J. A renewal equation model to assess roles and limitations of contact tracing for disease outbreak control. R. Soc. Open Sci. 8, 202091 (2021). Article ADS Google Scholar Alene, M. et al. Serial interval and incubation period of covid-19: A systematic review and meta-analysis. BMC Infect. Dis. 21, 1–9 (2021). Zhao, S. et al. Estimating the generation interval and inferring the latent period of covid-19 from the contact tracing data. Epidemics 36, 100482 (2021). Steinegger, B. et al. Retrospective study of the first wave of covid-19 in Spain: Analysis of counterfactual scenarios. medRxiv (2021). Fabio Vanni acknowledges support from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No.822781 GROWINPRO - Growth Welfare Innovation Productivity. Paolo Grigolini thanks U.S. Army Research Office Grant Number W911NF1910104 for the support of his research work. CNRS, GREDEG, Université Côte d'Azur, Nice, France Fabio Vanni Sciences Po, OFCE, Campus de Sophia Antipolis, Nice, France Center for Nonlinear Science, University of North Texas, Denton, TX, USA Fabio Vanni, David Lambert & Paolo Grigolini Department of Mathematics, University of North Texas, Denton, TX, USA David Lambert Liceo Scientifico Statale "C. De Giorgi", Lecce, Italy Luigi Palatella Paolo Grigolini F.V. and D.L. conceived the theory, F.V. and L.P. collected data, F.V. D.L. and L.P. analyzed the results. All authors reviewed the manuscript. Correspondence to Fabio Vanni. Supplementary Information 1. Vanni, F., Lambert, D., Palatella, L. et al. On the use of aggregated human mobility data to estimate the reproduction number. Sci Rep 11, 23286 (2021). https://doi.org/10.1038/s41598-021-02760-8 Quantifying the information in noisy epidemic curves Kris V. Parag Christl A. Donnelly Alexander E. Zarebski Nature Computational Science (2022) By submitting a comment you agree to abide by our Terms and Community Guidelines. If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate. About Scientific Reports Guide to referees Scientific Reports (Sci Rep) ISSN 2045-2322 (online) Close banner Close Sign up for the Nature Briefing newsletter — what matters in science, free to your inbox daily. I agree my information will be processed in accordance with the Nature and Springer Nature Limited Privacy Policy. Get the most important science stories of the day, free in your inbox. Sign up for Nature Briefing
CommonCrawl
RSA Algorithm 11-10-2019 11-10-2019 article 17 minutes read (About 2551 words) 0 visits RSA (Rivest–Shamir–Adleman) algorithm is an asymmetric cryptographic algorithm that is widely used in the modern public-key cryptosystems. We have been hearing RSA algorithm all the time, but some of us actually did not know what it really is and how it works. In this article, I will systematically discuss the theory behind the RSA algorithm. The theory guarantees that the cryptosystems built on the top of the RSA algorithm are relatively safe and hard to crack, which is fundamentally interesting. Euler's Totient Function In number theory, Euler's totient function, also called Euler's phi function, denoted as $\varphi(n)$, counts the positive integers up to a given integer $n$ that are relatively prime to $n$. In other words, it is the number of integers $k$ in the range $1 \leq k \leq n$ for which the greatest common divisor $\gcd(n, k)$ is equal to 1. Euler's totient function is a multiplicative function, meaning that if two numbers $m$ and $n$ are relatively prime, then, \varphi(mn) = \varphi(m)\varphi(n) If $k$ numbers, $\{m_1, m_2, \cdots, m_k\}$, are pairwise relatively prime, then \varphi(\prod_{i=1}^{k}m_i) = \prod_{i=1}^{k} \varphi(m_i) A concrete proof of this property could be found here, which requires to use the Chinese remainder theorem. When $n$ is prime number, according to the definition of prime, $\varphi(n) = n-1$. If $m$ and $n$ are different prime numbers, because $m$ and $n$ are relatively prime, we have \varphi(mn) &= \varphi(m)\varphi(n) \\ &= (m-1)(n-1) Euler's Theorem If $m$ and $n$ are relatively prime, then, m^{\varphi(n)} \equiv 1 \pmod n where $\varphi(n)$ is Euler's totient function. This theorem is very famous, and there a couple of different proofs to it. One of the proofs could be found here. Multiplicative Inverse Theorem Let $n$ and $x$ be positive integers. Then $x$ has a multiplicative inverse modulo $n$ if and only if $\gcd(n, x) = 1$. Moreover, if it exists, then the multiplicative inverse is unique. Equivalently, that is to say, let $n$ and $x$ be positive integers, xy \equiv 1 \pmod n $y \bmod n$ exists if and only if $\gcd(n, x) = 1$, and $y \bmod n$ is unique. Note that as long as the multiplicative inverse $y \bmod n$ exists, all the integers that has the same $y \bmod n$ satisfy $xy \equiv 1 \pmod n$. But there is only one such integer that is $0 \leq y \leq n-1$. For instance, if there is a $y^\ast$ such that $xy^\ast \equiv 1 \pmod n$, then $xy^\ast - 1 = kn$ for some integer $k$. Any other $y$ where $y=y^\ast + tn$, for any integer $t$, also satisfies $xy - 1 = k^\prime n$ for some integer $k^\prime$ (This is easy to show). Therefore, we also have $xy \equiv 1 \pmod n$. Because there could be infinite number of $y$ which satisfies $xy \equiv 1 \pmod n$, we consider the multiplicative inverse to be $y \bmod n$ which is unique if it exists. To prove for the sufficient condition and the uniqueness. There are $n$ possibilities of $y \pmod n$, $0, 1, 2, \cdots, n-1$. Then the value of $xy$ could be $0x, 1x, 2x, \cdots, (n-1)x$. We are going to show that $0x \bmod n, 1x \bmod n, 2x \bmod n, \cdots, (n-1)x \bmod n$ are distinct if $\gcd(n, x) = 1$. Suppose there are two distinct integer $a, b$, $0 \leq a, b \leq n-1$, and $ax \equiv bx \pmod n$. Then $(a-b)x = kn$ for some integer $k$. Because $\gcd(n, x) = 1$, $a-b = hn$ for some integer $h$. However, since $a-b$ is in a range of $[-n+1, n-1]$ and $a-b \neq 0$ because $a$ and $b$ are distinct, there is no integer $h$ which could satisfy $a-b = hn$. Thus, $0x \bmod n, 1x \bmod n, 2x \bmod n, \cdots, (n-1)x \bmod n$ have to be distinct if $\gcd(n, x) = 1$. Since there are $n$ possible values for $0x \bmod n, 1x \bmod n, 2x \bmod n, \cdots, (n-1)x \bmod n$ which are distinct, there is only one of them which must be 1. Therefore, $y \bmod n$ exists if $\gcd(n, x) = 1$ and $y \bmod n$ is unique. To prove for the necessary condition. Given non-negative integer $y$ such that $xy \equiv 1 \pmod n$, we have $xy - 1 = kn$ for some integer $k$. Suppose $\gcd(n, x) > 1$, we divide $\gcd(n, x)$ at both sides of the equation. \begin{gather} \frac{xy - 1}{\gcd(n, x)} = \frac{kn}{\gcd(n, x)} \\ \frac{xy}{\gcd(n, x)} - \frac{1}{\gcd(n, x)} = \frac{kn}{\gcd(n, x)} \end{gather} Because $\frac{xy}{\gcd(n, x)}$ and $\frac{kn}{\gcd(n, x)}$ are integers, but $\frac{1}{\gcd(n, x)}$ is not an integer because $\gcd(n, x) > 1$, there is no way for the equivalence. This raises a contradiction. Therefore, $\gcd(n, x) = 1$ if $y \bmod n$ exists. This concludes the proof. $\square$ Lemma 1 m^{k\varphi(n)+1} \equiv m \pmod n where $\varphi(n)$ is Euler's totient function, and $k$ could be any integer. Using the compatibility with scaling in modular arithmetic properties, we multiply $m$ at both sides of $m^{\varphi(n)} \equiv 1 \pmod n$ from Euler's theorem, we have m^{\varphi(n)+1} \equiv m \pmod n We further multiply $m^{\varphi(n)}$ at the both sides of $m^{\varphi(n)+1} \equiv m \pmod n$, we have m^{2\varphi(n)+1} \equiv m^{\varphi(n)+1} \pmod n By induction, we could show that for any non-negative integer $k$. Similarly, we multiply $m^{-\varphi(n)}$ at the both sides of $m^{\varphi(n)+1} \equiv m \pmod n$, we have m \equiv m^{-\varphi(n)+1} \pmod n m \equiv m^{-k\varphi(n)+1} \pmod n for any negative integer $-k$. This concludes the proof that the congruence is valid for any integer $k$. $\square$ Basic Features of Public-Key Cryptosystems The RSA algorithm is used as a typical public-key cryptosystem. Therefore, it has to match the four basic features for public-key cryptosystems. I am copying the descriptions for the features for the completeness of this article. The encryption and decryption functions using the public key and private key in the RSA algorithm are denoted by $E$ and $D$, respectively. $M$ is used to represent the message to be encrypted and sent. The four basic features of a public-key cryptosystem, as well as the RSA algorithm, are: Decrypting an encrypted message gives you the original message. D(E(M)) = M Encrypting a decrypted message gives you the original message. E(D(M)) = M $E$ and $D$ are easy to compute. The publicity of $E$ does not compromise the secrecy of $D$. RSA Basic Principle A basic principle behind RSA is the observation that it is practical to find three very large positive integers $e$, $d$ and $n$ such that with modular exponentiation for all integers $m$ (with $0 \leq m < n$): (m^e)^d \equiv m \pmod n Here, the tuple $(n, e)$ is usually called the public key for encryption, and the tuple $(n, d)$ is usually called the private key for decryption. $m$ is the message because you could always represent a message using an integer uniquely. If somehow the message is too long and $m$ exceeds $n$, we dissect the messages into trunks and encrypt separately. RSA Key Generations We would show how the $e$, $d$, and $n$ were generated in the RSA algorithm to satisfy the RSA basic principle. In the RSA algorithm, n = p \times q where $p$ and $q$ are some large distinct prime numbers. Because of the Euler's theorem in the prerequisite, if the message $m$ and $n$ are relatively prime, then where $\varphi(n)$ is Euler's totient function. There is an extremely rare case that when $m$ and $n$ are not relatively prime, that is only when $m = p$ or $m = q$, the decryption of the encrypted message would not recover the original content of the message. If we were that lucky, we would have cracked the RSA encryption system. I am not sure people have set rules to eliminate this extremely rare corner case. If we really want to do so, in each encryption, we could encrypt the same message several times, say three times, using $E(m)$, $E(m+1)$, and $E(m+2)$. We also have the digital signature for each of the messages. If the messages were from the authentic author, there is no message content modification, and we did not hit $p$ and $q$ by chance, the three digital signatures should all pass the digital signature verifications. We would recover the three messages, $m$, $m+1$, and $m+2$, and further recover the three messages to the exact same message $m$, $m$ and $m$. However, if we somehow hit $p$ or $q$ by chance, some of the digital signatures would fail the verification. We just have to extract the message information from the messages that passed the digital signature verification. After all, the three messages contain the exact same information. According to the pigeonhole principle, the three distinct messages could not be all relatively prime to $p$ or $q$. Based on the property of the Euler's totient function in the prerequisite, computing the Euler's totient function for the product of two distinct prime numbers is actually very easy. \varphi(n) &= \varphi(pq) \\ &= \varphi(p)\varphi(q) \\ &= (p-1)(q-1) Based on the Lemma 1 in the prerequisite, for any integer $k$, We immediately found that, based on the RSA basic principle, $ed = k\varphi(n)+1$. Although $n$ is public, factorizing $n$ to $p$ and $q$ is almost impossible using an modern computer, computing $\varphi(n)$ using mathematical definition and the equation $\varphi(n) = (p-1)(q-1)$ are almost not possible neither. Therefore, releasing $e$ as the public key does not lead to the disclosure of $d$ easily. Then the question becomes how to choose appropriate integers $e$ and $d$. It seems that $e$ and $d$ could be any values as long as the equivalence $ed = k\varphi(n)+1$ is satisfied for some integer $k$. This is equivalent to say we need to satisfy ed \equiv 1 \pmod {\varphi(n)} Based on the multiplicative inverse theorem in the prerequisite, as long as $\gcd(e,\varphi(n)) = 1$, there must be a unique $d$ which satisfies the above congruence. Getting such $e$ might not be hard. Although $e$ does not have to be prime, for our convenience, we could simply select a prime number from a corpus of prime numbers and verify whether $\gcd(e,\varphi(n)) = 1$ since verifying relatively primeness is easy if one of the numbers are known to be prime. A typical $e$ generally used could be 65537, which is a prime number. Once $e$ is determined, $d \bmod \varphi(n)$ could be determined using the Extend Euclidean algorithm which takes $O((\log\varphi(n))^2)$ to run. Note that it is not necessary to make $d$ infinitely large large to make the private key $d$ less suspectable to cracking. Using any $d$s that have the same remainder $d \bmod \varphi(n)$ decrypt the encrypted message exactly the same. With such $e$ and $d$, the RSA basic principle is satisfied. Message Encryption and Decryption With the orchestrated $e$ and $d$, and the RSA basic principle, it is not hard to find that the encryption function $E$ is c \equiv m^e \pmod n where $c$ is the encrypted message. In practice, c = m^e \bmod n The decryption function $D$ is c^d \equiv (m^e)^d \equiv m \pmod n In the above congruences, the first congruence is due to the compatibility with exponentiation in modular arithmetic properties(https://en.wikipedia.org/wiki/Modular_arithmetic#Properties), the second congruence is because of the RSA basic principle. similarly, in practice, m = c^d \bmod n Without any doubt, using such encryption and decryption, the first feature of the public-key cryptosystem, decrypting an encrypted message gives you the original message, is satisfied. If we swap the positions of $e$ and $d$ in the RSA basic principle, surprisingly (or not?), the congruences and equivalences still hold, meaning that the second feature of the public-key cryptosystem, encrypting an decrypted message gives you the original message, also holds. How about the third feature, $E$ and $D$ are easy to compute? $E$ and $D$ both involve exponential computations which would be extremely slow and memory consuming (actually no memory so far could fit a number with not very large exponents) using trivial algorithms if the exponents $e$ and $d$ are large. However, given $e$, it is meaningless for $d$ to be "infinitely" large to satisfy the congruence $ed \equiv 1 \pmod {\varphi(n)}$. Note that only $e$ and $d \bmod \varphi(n)$ are the actual keys. In addition, there are actually fast modular exponentiation algorithms which take $O(\log e)$ or $O(\log d)$ (i.e., $O(\log (d \bmod \varphi(n)))$) to run and are memory saving. We would not elaborate on it here, and would take it for granted that the third feature is satisfied. The fourth feature of the public-key cryptosystems has also been satisfied. We have seen the related information in the earlier sections and will see more in the following sections. Cracking the RSA Cryptosystem Modern Computer Cracking the RSA encryption system using brute force is not practically feasible. If the private key $d$ is large, it would take an extremely large number of iterations to guess the correct private key $d$, not even mention in each iteration, since you usually don't know the message content, there is hardly any way to verify whether the decrypted message using the guessed private key $d$ is the original message that you have never seen. A better way is to factorize the public $n$. If somehow you know the value of $\varphi(n)$, with the public key $e$, you would derive the value of $d \bmod \varphi(n)$. Remember what is important in the RSA algorithm is $d \bmod \varphi(n)$ instead of the actual value of $d$. The ordinary algorithm to do integer factorization takes sub-exponential time according to the Wikipedia. This is the fundamental reason which makes the RSA cryptosystem so reliable. Quantum computer, however, is good at integer factorization. Using Shor's algorithm, quantum computer does integer factorization in polynomial time, which makes cracking the RSA cryptosystem possible. The RSA Algorithm What are Digital Signatures and How Do They Work? https://leimao.github.io/article/RSA-Algorithm/ Cryptography, RSA Bilingual Evaluation Understudy (BLEU) Public-Key Cryptosystems and Digital Signatures 2Prerequisites 2.1Euler's Totient Function 2.2Euler's Theorem 2.3Multiplicative Inverse Theorem 2.4Lemma 1 3RSA Algorithm 3.1Basic Features of Public-Key Cryptosystems 3.2RSA Basic Principle 3.3RSA Key Generations 3.4Message Encryption and Decryption 4Cracking the RSA Cryptosystem 4.1Modern Computer 4.2Quantum Computer
CommonCrawl
RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PACKAGE AMSBIB Search references What is RSS TMF: Personal entry: TMF, 2001, Volume 126, Number 2, Pages 214–227 (Mi tmf426) This article is cited in 10 scientific papers (total in 10 papers) Asymptotic behavior in the trailing edge domain of the solution of the KdV equation with an initial condition of the "threshold type" V. B. Baranetskii, V. P. Kotlyarov B. Verkin Institute for Low Temperature Physics and Engineering, National Academy of Sciences of Ukraine Abstract: We derive a new integral equation that linearizes the Cauchy problem for the Korteweg–de Vries equation for the initial condition of the threshold type, where the initial function vanishes as $x\to-\infty$ and tends to some periodic function as $x\to+\infty$. We also expand the solution of the Cauchy problem into a radiation component determined by the reflection coefficient and a component determined by the nonvanishing initial condition. For the second component, we derive an approximate determinant formula that is valid for any $t\ge 0$ and $x\in(-\infty,X_N)$, where $X_N\to\infty$ with the unboundedly increasing parameter $N$ that determines the finite-dimensional approximation to the integral equation. We prove that as $t\to\infty$, the solution of the Cauchy problem in the neighborhood of the trailing edge decays into asymptotic solitons, whose phases can be explicitly evaluated in terms of the reflection coefficient and other parameters of the problem. DOI: https://doi.org/10.4213/tmf426 Full text: PDF file (274 kB) References: PDF file HTML file Theoretical and Mathematical Physics, 2001, 126:2, 175–186 Bibliographic databases: Received: 23.06.2000 Citation: V. B. Baranetskii, V. P. Kotlyarov, "Asymptotic behavior in the trailing edge domain of the solution of the KdV equation with an initial condition of the "threshold type"", TMF, 126:2 (2001), 214–227; Theoret. and Math. Phys., 126:2 (2001), 175–186 Citation in format AMSBIB \Bibitem{BarKot01} \by V.~B.~Baranetskii, V.~P.~Kotlyarov \paper Asymptotic behavior in the trailing edge domain of the solution of the KdV equation with an initial condition of the ``threshold type'' \jour TMF \mathnet{http://mi.mathnet.ru/tmf426} \crossref{https://doi.org/10.4213/tmf426} \elib{https://elibrary.ru/item.asp?id=5032921} \jour Theoret. and Math. Phys. \crossref{https://doi.org/10.1023/A:1005291626477} Linking options: http://mi.mathnet.ru/eng/tmf426 https://doi.org/10.4213/tmf426 http://mi.mathnet.ru/eng/tmf/v126/i2/p214 This publication is cited in the following articles: de Monvel, AB, "Soliton asymptotics of rear part of non-localized solutions of the Kadomtsev-Petviashvili equation", Journal of Nonlinear Mathematical Physics, 9:1 (2002), 58 Egorova, I, "On the Cauchy problem for the Korteweg-de Vries equation with steplike finite-gap initial data: I. Schwartz-type perturbations", Nonlinearity, 22:6 (2009), 1431 Kotlyarov V., Minakov A., "Riemann–Hilbert problem to the modified Korteveg-de Vries equation: Long-time dynamics of the steplike initial data", J Math Phys, 51:9 (2010), 093506 A. Minakov, "Asymptotics of rarefaction wave solution to the mKdV equation", Zhurn. matem. fiz., anal., geom., 7:1 (2011), 59–86 Minakov A., "Long-time behavior of the solution to the mKdV equation with step-like initial data", J. Phys. A: Math. Theor., 44:8 (2011), 085206 Egorova I., Teschl G., "On the Cauchy Problem for the Kortewegde Vries Equation With Steplike Finite-Gap Initial Data II. Perturbations With Finite Moments", J Anal Math, 115 (2011), 71–101 V. Kotlyarov, A. Minakov, "Step-initial function to the mKdV equation: hyper-elliptic long-time asymptotics of the solution", Zhurn. matem. fiz., anal., geom., 8:1 (2012), 38–62 Samoilenko V.H., Samoilenko Yu.I., "Two-Phase Solitonlike Solutions of the Cauchy Problem For a Singularly Perturbed Korteweg-de-Vries Equation With Variable Coefficients", Ukr. Math. J., 65:11 (2014), 1681–1697 Alexander E. Elbert, Sergey V. Zakharov, "Dispersive rarefaction wave with a large initial gradient", Ural Math. J., 3:1 (2017), 33–43 Zhu J., Wang L., Qiao Zh., "Inverse Spectral Transform For the Ragnisco-Tu Equation With Heaviside Initial Condition", J. Math. Anal. Appl., 474:1 (2019), 452–466 This page: 282 Full text: 118 References: 22 First page: 1 Terms of Use Registration Logotypes © Steklov Mathematical Institute RAS, 2021
CommonCrawl
A tag is a keyword or label that categorizes your question with other, similar questions. Using the right tags makes it easier for others to find and answer your question. Popular Name New arduino× 6362 Be sure to use the Arduino Stack Exchange for questions that are more Arduino and less electronics. 17 asked this week, 71 this month microcontroller× 6244 A device which includes a central processing unit (CPU), memory, and (generally) an assortment of I/O peripherals (UART, ADC, DAC, general-purpose I/O, I2C, etc.) in a tightly-coupled standalone packa… 6 asked today, 14 this week power-supply× 5612 An electronic device which supplies electrical energy to a load. Can be AC or DC input. Typically DC output. voltage× 4860 Voltage, otherwise known as electrical potential difference (denoted ∆V and measured in volts) is the difference in electric potential between two points (adapted from Wikipedia). Voltage can be const… operational-amplifier× 4456 Questions relating to the construction and applications of operational amplifiers, power× 4279 a primary concern for the design under discussion. Use the "low-power" tag when that applies. transistors× 4218 a semiconductor device that can amplify signals and switch power. Most used types are bipolar (BJT, for Bipolar Junction Transistor), UJT (Unijunction transistor) and MOSFET (FET, fo… led× 4139 a light-emitting diode. Lighting an LED is considered the "Hello world" of a circuit design, and it can be as simple as putting a series resistor or can get more complicated, involving PWM a… batteries× 3765 a device consisting of one or more electrochemical cells that convert stored chemical energy into electrical energy capacitor× 3631 A fundamental electronic component that stores energy in an electric field, commonly used in filtering applications. 8 asked this week, 43 this month mosfet× 3613 An transconductance (using voltage to control current) electronic component used for switching and amplification. Acronym for Metal-Oxide-Semiconductor Field-Effect Transistor. (from http://en.wikiped… pcb× 3365 the acronym to Printed Circuit Board. A PCB is a carrier for the circuit's components and their electrical connections. current× 3310 Flow of electric charge - typically movement of charge carriers, such as electrons. Measured in amperes (A). circuit-analysis× 3257 the process of finding the voltages across, and the currents through, every component in the network. amplifier× 2948 adapt the range of the signal to a requirement, to make it more robust for transmission, or to satisfy interface requirement (like input/output impedance) digital-logic× 2849 Digital electronics treats discrete signals, unlike analog electronics that treat continuous signals. Digital logic is used to perform arithmetic operations with electric signals, and constitutes the … resistors× 2727 A resistor obeys Ohm's law (V=IR); the current through it is equal to the voltage across it divided by the resistance (equivalently \$I=\frac{V}{R}\$) 21 asked this month, 431 this year pic× 2338 a brand of 8, 16, and 32 bit RISC microcontrollers manufactured by Microchip. "PIC" originally was an acronym for "Peripheral Interface Controller". rf× 2283 Short for Radio-Frequency. Frequencies at which radiation (intentional or not) plays a role. Typically associated with wireless communications, but also relevant for high-speed PCB design. usb× 2225 Universal Serial Bus. If your question relates to a specific chip, please mention it in your question. motor× 2224 An electrical actuator that converts electrical energy into rotational motion or sustained linear motion (linear motor). There are many types of electric motors. If the specific type of motor is known… switches× 2160 Devices to interrupt or route a signal or power one of several ways. pcb-design× 2146 About designing the boards which carry the components of an electronic circuit. For questions about getting them built instead use PCB-fabrication. If your question is specific to a certain CAD tool, … voltage-regulator× 2142 an analog circuit that produces a stable output voltage that doesn't vary with input voltage or load changes. Switching regulators are much more efficient than linear ones. sensor× 2126 Sensors convert a physical quantity (e.g. temperature, pressure) into an electrical signal. diodes× 2094 semiconductor components made from a P-type and N-type silicon material, that allows current to only flow in one direction. audio× 2073 Questions about designing electronics for measuring, processing, and amplifying audio signals. transformer× 2067 A transformer couples two or more AC signals through a magnetic field. Often used as galvanic isolation and to transform one AC voltage to another. ac× 2035 refers to alternating current mains power. It usually applies to voltages with >100V RMS, but can also be used for ex. 24V industrial AC power. fpga× 2016 a logic chip that is configured by the customer after manufacturing—hence "field-programmable". analog× 1909 Analog circuits have a range of voltages, rather than just two as in digital logic. relay× 1904 an electrically controlled switch. Electromechanical relays use an electromagnet to activate mechanical contacts, solid-state relays use semiconductor switches. integrated-circuit× 1825 an electronic circuit built onto a single plate of a semiconductor material, normally silicon. Modern ICs may contain billions of transistors and they have played a major… adc× 1824 an Analog to Digital Converter. This device converts analog signals into digital form. It is mainly used by the digital circuitry to take analog measurements. dc× 1810 DC stands for Direct Current, which means the flow of electric charge in a single direction. Examples of a DC source are batteries, solar panels, dynamos. battery-charging× 1791 Please specify the battery type in your question. Include chemistry (e.g. lead-acid), voltage, number of cells and how they are connected (series or parallel), capacity (in A·h or W·h). tag synonyms
CommonCrawl
solving systems of linear equations by graphing and career path that can help you find the school that's right for you. There are three possibilities: The lines intersect at zero points. :) https://www.patreon.com/patrickjmt !! In two variables So, a slope of -3 means that you should go down 3 units in the y direction for every 1 unit you go over in the x direction. {{courseNav.course.topics.length}} chapters | Consider the system. Improve your math knowledge with free questions in "Solve a system of equations by graphing" and thousands of other math skills. x + y = –3 2x + y = 1. Each point on the line is a solution to the equation. Identify the solution to the system. Step 1. step 1 answer. Get the unbiased info you need to find the right school. 2 In this chapter we will use three methods to solve a system of linear equations. Not sure what college you want to attend yet? The first line has a slope of Use the graphing tool to graph the system. + first two years of college and save thousands off your degree. (The two equations represent the same line. To unlock this lesson you must be a Study.com Member. Did you know… We have over 220 college Betsy teaches college physics, biology, and engineering and has a Ph.D. in Biomedical Engineering. Use graphs of linear systems to solve problems. For the second equation, remember that in y = mx + b, m is the slope of the line and b is the y-intercept. Solve the system of linear equations by graphing. (4, 7) courses that prepare you to earn and a , | {{course.flashcardSetCount}} ©y n2M0E1N2x VKQumt6aX xSxo6f MtNwuarhe 0 bLTLjC e.D g gA ql0l e XroiNguh9t Msn lr ceyspeTrhv4e Md5.L 3 WMPaOd EeZ AwFift Xh6 HIQnMf1i qnOi Btfe 3 MAGlLg9e hb Dr9aI H1R.3 Worksheet by Kuta Software LLC - Definition, Equation & Examples, Finding Equations of Horizontal & Vertical Lines, The Axiomatic System: Definition & Properties, Slope-Intercept Form: Definition & Examples, Independent & Dependent Variables in Math, Identifying Linear & Nonlinear Functions Using Graphs & Tables, Comparing Theoretical & Experimental Probability, Congruency of Right Triangles: Definition of LA and LL Theorems, High School Geometry: Homework Help Resource, PSSA - Mathematics Grade 8: Test Prep & Practice, Saxon Algebra 1 Homeschool: Online Textbook Help, NES Middle Grades Mathematics (203): Practice & Study Guide, High School Precalculus: Homework Help Resource, High School Trigonometry: Tutoring Solution, Intermediate Algebra for College Students, MEGA Elementary Education Mathematics Subtest: Practice & Study Guide, McDougal Littell Geometry: Online Textbook Help, Prentice Hall Algebra 2: Online Textbook Help, TExMaT Master Mathematics Teacher 8-12 (089): Practice & Study Guide, Glencoe Math Connects: Online Textbook Help. Earn Transferable Credit & Get your Degree, How to Solve a System of Equations by Substitution, How to Solve a System of Linear Equations in Two Variables, Linear Inequality: Solving, Graphing & Problems, Elimination Method in Algebra: Definition & Examples, System of Linear Equations: Definition & Examples, Factoring By Grouping: Steps, Verification & Examples, What Is Slope Intercept Form? Give the network (draw chart with nodes) model and create an LP to fin. Graphing systems of equations involves graphing each individual linear equation in the system. To solve the system of equations, you need to find the exact values of x and y that will solve both equations. First go to the Algebra Calculator main page. First, a linear equation is one that forms a line when graphed. It compares examples of systems with 1 solution (x, y), infinite, and no solutions. Then, look at the slope. 2 Sciences, Culinary Arts and Personal ), The lines intersect at exactly one point. y = 2x + 1. 1 If you did, you are right! A system of equations is a set of equations with the same variables. The solution is where the two lines intersect, the point Working Scholars® Bringing Tuition-Free College to the Community. 2 Now, why don't you try it? Welcome to The Solve Systems of Linear Equations by Graphing (Standard) (A) Math Worksheet from the Algebra Worksheets Page at Math-Drills.com. What's a System of Linear Equations? y = 2x - 5 \\x - y = 6 What is the solution to the system of equations? In this chapter, I'll show you three different ways to solve these. Graph the second equation on the same rectangular coordinate system. There are always three ways to solve a system of equations. The x and y coordinates of the intersection point will be the solution to the system of equations! By using this website, you agree to our Cookie Policy. Lists: Family of … You da real mvps! In this lesson, you will learn how to solve a system of linear equations by graphing. Select a subject to preview related courses: Let's look at an example. . What's a System of Linear Equations? Names of standardized tests are owned by the trademark holders and are not affiliated with Varsity Tutors LLC. ( - 960662 12345678900 12345678900 02/08/2016 Mathematics High School Solve the system of linear equations by graphing. credit by exam that is accepted by over 1,500 colleges and universities. Solving Systems of Equations by Graphing. Solving systems of equations by graphing is one method to find the point that is a solution to both (or all) original equations. Power Point includes animated step-by-step instructions for creating a foldable for solving systems of linear equations using graphing, substitution, and elimination. Here's an example: The goal is to find an x guy and a y guy that work in both equations.. − The first method is graphing. Linear Systems: SOLVE WITH GRAPHING Guided Notes Vocabulary: System of Linear Equations two or more linear equations Solution of a System of Linear Equations an ordered pair that makes all of the equations in a system true; the point of intersection Solutions to Systems: One Solution: (-2, 2) (Where the lines intersect.) To solve a system of linear equations by graphing. -intercept of Use graphing to identify systems with no solution or infinitely many solutions. Learn how to use the Algebra Calculator to graph equations. . This tutorial will introduce you to these systems. All other trademarks and copyrights are the property of their respective owners. Let's review the steps for each method. study First, you want to rearrange the equation so it's in slope-intercept form. In two variables ( x and y ) , the graph of a system of two equations is a pair of lines in the plane. Enrolling in a course lets you earn progress by passing quizzes and exams. The slope is a ratio of how far the line goes up in the y direction divided by how far it goes over in the x direction. The coordinates of the point of intersection would be the solution to the system of equations. As a member, you'll also get unlimited access to over 83,000 To learn more, visit our Earning Credit Page. Study.com has thousands of articles about every To solve a system of linear equations by graphing, which are systems made up of two linear equations, first, make sure that you have two linear equations, or equations that form lines when graphed. So, in this equation, the slope is 2 and the y-intercept is 16. A Statistics: 4th Order Polynomial. Section 8.1: Solving Systems by Graphing A1.3.12 Represent and solve problems that can be modeled using a system of linear equations and/or inequalities in two variables, sketch the solution sets, and interpret the results within the context of the problem; For a system of linear equations in two variables, we can determine both the type of system and the solution by graphing the system of equations on the same set of axes. Log in here for access. Remember, you need to graph both lines and see where they intersect to find the solution. © copyright 2003-2020 Study.com. From the graph, you can see that the lines intersect each other at (-3,-3), so the solution to the system of linear equations is x = -3 and y =-3. ) y This algebra video tutorial explains how to solve systems of equations by graphing. Linear equations (ones that graph as straight lines) are simpler than non-linear equations, and the simplest linear system is one with two equations and two variables. − Create an account to start this course today. Why is this intersection point the solution to the system of equations? That is, and Elementary School Math Worksheets & Printables, 8th Grade Life Science: Enrichment Program, Social Psychology for Teachers: Professional Development, Political Science for Teachers: Professional Development, AEPA History (NT302): Practice & Study Guide, OUP Oxford IB Math Studies Chapter 4: Mathematical Models, Quiz & Worksheet - Properties of Atonal Music, Quiz & Worksheet - Accreditation of Schools, Quiz & Worksheet - Comte's Stages of Society & Theory of Positivism, Quiz & Worksheet - Methods to Presenting The Self, Central Tendency: Dot Plots, Histograms & Box Plots, Exposure Therapy for Phobias: Definition & Concept, Creative Writing Competitions for High School Students, Companies That Offer Tuition Reimbursement, Tech and Engineering - Questions & Answers, Health and Medicine - Questions & Answers, Canning Transport is to move goods from three factories to three distribution centers. No more than 1,400 calories can be starch, no fewer than 400 can be protein, an. The objective is to feed the students at the least cost, but the diet must have between 1,800 and 3,600 calories. Step 4. -intercept of x Graph the first equation. y=0.5x+2 and y=x-2. Besides solving systems of equations by graphing, other methods of finding the solution to systems of equations include substitution, elimination and matrices. There are three ways to solve systems of linear equations: substitution, elimination, and graphing. imaginable degree, area of 4.9/5.0 Satisfaction Rating over the last 100,000 sessions. 1 Flipped Classrooms | What is a Flipped Classroom? Get access risk-free for 30 days, Systems, 2 x 2 in this case, are when you have 2 equations and 2 unknowns (letters). Let's look at how to find the solution by graphing. Materials needed for each student: scissors, rul Solving a System of Equations with Two Unknowns, Quiz & Worksheet - Using Graphs to Solve Systems of Linear Equations, Over 83,000 lessons in all major subjects, {{courseNav.course.mDynamicIntFields.lessonCount}}, Solving Problems Involving Systems of Equations, Big Ideas Math Common Core 8th Grade: Online Textbook Help, Biological and Biomedical If the lines intersect, identify the point of intersection. y The solution of such a system is the ordered pair that is a solution to both equations. High School Math Solutions – Systems of Equations Calculator, Elimination A system of equations is a collection of two or more equations with the same set of variables. ... For graphing the lines: Since the x-intercept and y-intercept of the line 2x + … List the coordinates of the intersection. Determine whether the lines intersect, are parallel, or are the same line. Examples: 1. To solve a system of equations, you need to figure out the variable values that solve all the equations involved. Do It Faster, Learn It Better. If the … This means that the solution to the system of linear equations is x = -2 and y = 12. Thanks to all of you who support me on Patreon. = Let's see how to do that with this equation: Now, you can tell that the slope of the line (m) is -3 and the y intercept (b) is 6. The second line has a slope of . Solving Linear Systems by Graphing The systems in this section will consist of two linear equations and two unknowns Given linear equations, we are asked to find out if they have simultaneous solutions. Solve a System of Linear Equations by Graphing. *See complete details for Better Score Guarantee. It may be printed, downloaded or saved and used in your classroom, home school, or other educational environment to help someone learn math. Anyone can earn This is the only point that falls on both lines, so it's the only combination of x and y values that will make each equation true. The places where the lines intersect represent solutions where two or more of the linear equations share a common solution, and that point is regarded as the solution to the entire system. You can test out of the Then, graph the line represented by each equation and see where the two lines intersect each other. flashcard set{{course.flashcardSetCoun > 1 ? Step 1. Try to do this before you scroll down to see the answer. Improve your math knowledge with free questions in "Solve a system of equations by graphing: word problems" and thousands of other math skills. Step 3. 2 Log in or sign up to add this lesson to a Custom Course. When you are solving systems of equations (linear or otherwise), you are, in terms of the equations' related graphed lines, finding any intersection points of those lines. (The lines are parallel. $1 per month helps!! (b) What is the solution to the system of equations? just create an account. Media outlet trademarks are owned by the respective media outlets and are not affiliated with Varsity Tutors. x = One good way to do this is to graph each line and see where they intersect. Solve the following system of simultaneous equations: 6x_1 + 4x_2 = 40; 2x_1 + 3x_2 = 20. Varsity Tutors connects learners with experts. Loading... Graphing A System of Linear Equations Graphing A System of Linear Equations ... Statistics: Linear Regression. Solving Systems Of Equations By Graphing And Substitution - Displaying top 8 worksheets found for this concept.. Should I Major in Math? credit-by-exam regardless of age or education level. Think back to linear equations. system of y Get a variable by itself in one of the equations. Then, classify the system as consistent or inconsistent and as dependent or independent. How to solve a system of linear equations by graphing. 2 Graph the linear equation by creating a table of values. − Learn vocabulary, terms, and more with flashcards, games, and other study tools. ). The solution of a linear system is the ordered pair that is a solution to all equations in the system. example. Varsity Tutors © 2007 - 2020 All Rights Reserved, SAT Subject Test in United States History Courses & Classes, MOS - Microsoft Office Specialist Courses & Classes, CISM - Certified Information Security Manager Training, Exam FM - Financial Mathematics Test Prep, NMLS - Nationwide Mortgage Licensing System Courses & Classes, SHRM-SCP - Society for Human Resource Management- Senior Certified Professional Courses & Classes, Computer Science Tutors in San Francisco-Bay Area, Calculus Tutors in San Francisco-Bay Area, The lines intersect at zero points. Solving Systems of Linear Equations Using Graphing A system of linear equations is just a set of two or more linear equations. = A system of equations is a set of equations with the same variables. y example. ) Already registered? Before you can graph a linear equation, you need to make sure that it is written in slope-intercept form: The slope-intercept form of a linear equation is: y = mx + b. Determine if each ordered pair is a solution of the system. Alright, let's take a moment to review what we've learned! As of 4/27/18. and a The solution is the point of intersection of the two graphs. is just a set of two or more linear equations. − − Step 2. step 1 answer. Here are two linear equations that form a system of equations: Graph both of these lines and then see where they intersect each other. In slope-intercept form, m is the slope of the line and b is the y-intercept, so in the equation we looked at earlier, y = 3x + 5, the slope would be 3 and the y-intercept would be 5. This math worksheet was created on 2015-04-16 and has been viewed 158 times this week and 771 times this month. This tutorial will introduce you to these systems. (Most cases. x Solve the system of equation by graphing. y How to solve systems lines (2 variable linear equations) by graphing explained with pictures, examples, ... Use the graph method to solve the system of equations below $$ y = 2x +1 \\ y = 4x -1 $$ Show Answer. methods and materials. succeed. There are multiple methods of solving systems of linear equations. Graph the second equation on the same rectangular coordinate system. All rights reserved. y - Quiz & Self-Assessment Test, Universities with Master's Degrees in Math: How to Choose, Learn Math in the Blogosphere: 10 Top Math Blogs, Register for the 2010 American Math Challenge, Tau Day Generates Controversy Among Math Scholars, Stationary Engineer: Job Description & Career Info, Childrens Literature Degree and Training Program Information, Consumer Psychology Training and Degree Program Info, Master of Science MS Holistic Nutrition Degree Overview, Medical Decoding Certificate and Diploma Program Information, Big Ideas Math 8th Grade Chapter 1: Equations, Big Ideas Math 8th Grade Chapter 2: Transformations, Big Ideas Math 8th Grade Chapter 3: Angles & Triangles, Big Ideas Math 8th Grade Chapter 4: Graphing & Writing Linear Equations, How to Solve Systems of Linear Equations by Graphing, Big Ideas Math 8th Grade Chapter 5: Systems of Linear Equations, Big Ideas Math 8th Grade Chapter 6: Functions, Big Ideas Math 8th Grade Chapter 7: Real Numbers & the Pythagorean Theorem, Big Ideas Math 8th Grade Chapter 8: Volume & Similar Solids, Big Ideas Math 8th Grade Chapter 9: Data Analysis & Displays, Big Ideas Math 8th Grade Chapter 10: Exponents & Scientific Notation, SAT Subject Test Mathematics Level 1: Tutoring Solution, Praxis Mathematics - Content Knowledge (5161): Practice & Study Guide, Congruent Polygons: Definition & Examples, How to Solve Problems with the Elimination in Algebra: Examples, Factoring the Sum of Cubes: Formula & Examples, Quiz & Worksheet - Angle Addition Postulate, Quiz & Worksheet - Perpendicular Transversal Theorem & Converse, Quiz & Worksheet - How to Find Parallel Lines, Quiz & Worksheet - Proving Angle Relationships, Glencoe Geometry Chapter 3: Parallel and Perpendicular Lines, Glencoe Geometry Chapter 4: Congruent Triangles, Glencoe Geometry Chapter 5: Relationships in Triangles, Glencoe Geometry Chapter 6: Proportions and Similarity, Glencoe Geometry Chapter 7: Right Triangles and Trigonometry, CPA Subtest IV - Regulation (REG): Study Guide & Practice, CPA Subtest III - Financial Accounting & Reporting (FAR): Study Guide & Practice, ANCC Family Nurse Practitioner: Study Guide & Practice, Advantages of Self-Paced Distance Learning, Advantages of Distance Learning Compared to Face-to-Face Learning, Top 50 K-12 School Districts for Teachers in Georgia, Finding Good Online Homeschool Programs for the 2020-2021 School Year, Coronavirus Safety Tips for Students Headed Back to School, Soraya in The Kite Runner: Description & Character Analysis, The Pit and the Pendulum: Theme & Symbolism, Hassan in The Kite Runner: Description & Character Analysis, Congruence Properties of Line Segments & Angles, Quiz & Worksheet - World Literature & Social Issues, Quiz & Worksheet - Renaissance Period Furniture, Quiz & Worksheet - Reducing Negative Fractions, Quiz & Worksheet - Data Modeling in Software Engineering, Flashcards - Real Estate Marketing Basics, Flashcards - Promotional Marketing in Real Estate. If the equations are all linear, then you have a system of linear equations! Graphing is one of the simplest ways to solve a system of linear equations. Determine whether the lines intersect, are parallel, or are the same line. Solve the system by graphing: y= \frac{1}{2}x +3 \\y = 2, A diet is being prepared for the University of Arizona dorms. Just graph the two lines, and see where they intersect! 0.5 The two equations are in slope-intercept form. , the graph of a system of two equations is a pair of lines in the plane. Statistics: Anscombe's Quartet. Substitution. Solve the system by graphing y = 5x - 2 \\y = 2x + 1. y =\frac{2}{3}x -8 \\ y= -2x + 8 (a) Graph the system of equations. − Varsity Tutors does not have affiliation with universities mentioned on its website. 4. Start studying Solving Systems of Linear Equations: Graphing. Plus, get practice tests, quizzes, and personalized coaching to help you Using this information to graph the lines, you can see that the lines intersect at the point (-2,12). The x and y coordinates of the intersection will be the solution to the system of equations! Graph the first equation. Identify the solution to the system. Instructors are independent contractors who tailor their services to each client, using their own style, and Create your account. For a system of two equations, we will graph two lines. You can use that to plot a second point and then use a ruler to connect the points and make a straight line. Step 2. First, find the slope and y-intercept for each equation using the y = mx + b formula. To solve systems of equations or simultaneous equations by the graphical method, we draw the graph for each of the equation and look for a point of intersection between the two graphs. Determine whether the two lines are parallel, perpendicular, coinciding, or only intersecting. ), The lines intersect at infinitely many points. 3. Free linear equation calculator - solve linear equations step-by-step This website uses cookies to ensure you get the best experience. The graph of a linear equation is a line. This method is useful when you just need a rough answer, or you're pretty sure the intersection happens at integer coordinates. We already saw that for the first equation, -3 is the slope, while the y-intercept is 6. The solution to the system will then be in the point in which the two equations intersect. Solve the system by graphing 3x - y =1 and 3x - y = 2. 0.5 Example Problem Graph the following equation: y=2x+1 How to Graph the Equation in Algebra Calculator. To solve a system of linear equations graphically we graph both equations in the same coordinate system. Visit the Big Ideas Math Common Core 8th Grade: Online Textbook Help page to learn more. A system of linear equations contains two or more equations e.g. To solve a system of equations, you need to figure out the variable values that solve all the equations involved. ( {{courseNav.course.mDynamicIntFields.lessonCount}} lessons . One way of solving a linear system is by graphing. x y = -12; -4y = -12. Award-Winning claim based on CBS Local and Houston Press awards. 2 3 To solve a system of linear equations by graphing, you will graph both lines and then see where they intersect each other. lessons in math, English, science, history, and more. A system of linear equation comprises two or more linear equations. Before you can do that, though, you need to know how to recognize a system of linear equations. Find the solution to the system of equations shown below. It has two variables, usually x and y, and typically looks like this: A system of linear equations is a system made up of two linear equations. Services. First, the y-intercept is the point where the line crosses the y axis, so you can plot this point first. Did you get the solution to be x = -3 and y = -3? Try refreshing the page, or contact customer support. To solve a system of linear equations by graphing, which are systems made up of two linear equations, first, make sure that you have two linear equations, or equations … linear equations example. a. 3. = Information about the move is given below. Now that we know how to recognize a linear equation, let's review how to graph a line. The first method we'll use is graphing. Type the following: y=2x+1; Try it now: y=2x+1 Clickable Demo Try entering y=2x+1 into the text box. 's' : ''}}. In this lesson, learn one way to find the solution of a system of linear equations by graphing. To graph this line, you can use a graphing calculator or computer, but you can also do it by hand on paper. Systems of linear equations are important in many branches of math and science, so knowing how to solve them is important. An error occurred trying to load this video. How to Solve a System of Equations Using the Graphing Method. Some of the worksheets for this concept are Systems of equations substitution, Practice solving systems of equations 3 different, Solving systems of equations by graphing, Systems of two equations, Students thinking about continuing solving systems of, Solving nonlinear systems … Solve systems of linear equations by graphing. For example, consider the following system of linear equations containing the variables x andy: y = x + 3 y = -1x - 3 All you have to do is graph each equation as a line and find the point(s) where the lines intersect. Math Homework. A "system" of equations is a set or collection of equations that you deal with all together at once. Graphing A System of Linear Equations. If the equations are all linear, then you have a system of linear equations! Kirkland Seaweed Nutrition, Music Theory Exercises, Journal Of Mental Health Counseling Impact Factor, Campbell Run Apartments, L'oreal Hair Color Remover Vs Color Oops, Best Dual Fuel Ranges With Downdraft, Weather In Santorini In November, Lion Guard Deluxe Figure Set, solving systems of linear equations by graphing 2020
CommonCrawl
Nondestructive estimation of potato yield using relative variables derived from multi-period LAI and hyperspectral data based on weighted growth stage Shanjun Luo1,2, Yingbin He ORCID: orcid.org/0000-0002-7091-80901, Qian Li3, Weihua Jiao4, Yaqiu Zhu3 & Xihai Zhao1 The accurate estimation of potato yield at regional scales is crucial for food security, precision agriculture, and agricultural sustainable development. In this study, we developed a new method using multi-period relative vegetation indices (rVIs) and relative leaf area index (rLAI) data to improve the accuracy of potato yield estimation based on the weighted growth stage. Two experiments of field and greenhouse (water and nitrogen fertilizer experiments) in 2018 were performed to obtain the spectra and LAI data of the whole growth stage of potato. Then the weighted growth stage was determined by three weighting methods (improved analytic hierarchy process method, IAHP; entropy weight method, EW; and optimal combination weighting method, OCW) and the Slogistic model. A comparison of the estimation performance of rVI-based and rLAI-based models with a single and weighted stage was completed. The results showed that among the six test rVIs, the relative red edge chlorophyll index (rCIred edge) was the optimal index of the single-stage estimation models with the correlation with potato yield. The most suitable single stage for potato yield estimation was the tuber expansion stage. For weighted growth stage models, the OCW-LAI model was determined as the best one to accurately predict the potato yield with an adjusted R2 value of 0.8333, and the estimation error about 8%. This study emphasizes the importance of inconsistent contributions of multi-period or different types of data to the results when they are used together, and the weights need to be considered. Potato (Solanum tuberosum L.), a mixed grain, forage, and vegetable crop [1], is the fourth most important crop in the world [2, 3]. Since the launch of the potato staple food strategy in 2015 in China, potato has become another major staple food crop after rice, wheat, and corn [4]. Timely forecasting potato yield data is a vital reference index for variety breeding determined by the combination of genes and growth environment [5]. The accurate prediction of potato yield, especially at the regional level, is of great significance for ensuring food security and promoting the sustainable development of agriculture, which is related to the formulation of major policies and guidelines of the national economy and people's livelihood. The method of crop growth models (CGM), costly, time-consuming, and not always accurate, is often used in conventional yield estimation, which relies on a large amount of data collection [6, 7]. It is reported that there are approximately 32 types of CGM combining multiple data sources and methods to monitor the potato yield under conditions of water, nitrogen fertilizer, and CO2 atmospheric levels [8]. However, the difficulty of obtaining large amounts of input data is one of the major limitations of the widespread employment of models due to their complexity [9,10,11]. Furthermore, field investigation, another traditional method, is a destructive estimation way. Although the accuracy of the final results can be guaranteed by comprehensive surveys, it is undoubtedly a laborious and time-intensive work [12, 13]. Remote estimation of yield is an approach to establish the relationship between crop spectra and yield data [14]. Remote sensing (RS), an emerging technique, can be used to effectively obtain spectral data of vegetation canopy from space in a non-destructive manner, which carries much valuable information indicating the interaction between canopy and solar radiation such as vegetation absorption and scattering [15]. Vegetation canopy spectrum is closely related to crop growth, especially the visible range affected by pigment and the near-infrared (NIR) bands affected by cell tissue and canopy structure [16, 17]. Therefore, the vegetation index (VI) calculated by these bands has been widely used for the monitoring and estimation of vegetation characteristic parameters, such as leaf area index (LAI) [18], biomass [19], chlorophyll content [20], nitrogen content and carbon content [21], and achieved high accuracy. In addition, various VIs showed great differences when applied in diverse scenarios. For example, when the fractional vegetation cover (FVC) is over 50%, the ratio vegetation index (RVI) has a high sensitivity for vegetation [22]. The normalized difference vegetation index (NDVI) is commonly used to research the vegetation growth and distinguish vegetation from non-vegetation with eliminating most of the radiation errors, but it is prone to saturation [23]. Not only that, VIs also have many applications in the yield estimation of different crops on account of the sensitivity to plant photosynthesis. Gong et al. [24] found that NDVI is of great help to the prediction of rapeseed yield using unmanned aerial vehicle (UAV) imagery. Moreover, VI also contributes significantly to yield estimation for crops such as rice [25, 26], maize [27, 28], and wheat [29, 30]. The simulation results of crop characteristic parameters can be obtained by constructing the linear or nonlinear empirical relationship [31] or by machine learning methods [32] like support vector machine (SVM), random forest (RF), partial least squares (PLS) and artificial neural network (ANN) between VIs and these parameters. So far, the VI-based parameter statistics is the simplest and most widely studied estimation method, which has been extensively applied in crop growth monitoring [33]. And the crop growth status monitored by RS directly determines the final crop yield. Hence, remote estimation of crop yield based on VI exhibits good potential, especially in a large-scale domain of estimation scenarios [34]. LAI is one of the vital parameters of crop canopy structure that related to photosynthesis, respiration, and transpiration [35]. Peng et al. [36] proved that LAI can be applied to estimate yield in oilseed rape using UAV data with the estimation error below 15%. Liu et al. [37] calculated the canopy density (Chl) using LAI and then constructed the simple linear prediction model of rice yield with an R2 value of 0.81. Therefore, LAI can be determined for yield estimation. Employing RS technique including UAV, satellite, and ground measurement, massive data of multiple time series can be obtained. However, there are still some issues worthy of our attention. The atmospheric environment, soil background, and solar radiation conditions all will change during the process of obtaining data for many times [38]. Actually, eliminating the interference caused by illumination, aerosol, and background environment among multi-period data is a prerequisite for accurate yield estimation. For example, some reference whiteboards can be used for radiometric correction of remotely sensed images, but it is still difficult to obtain absolutely accurate data [39]. Therefore, we try to use the method of relative variables by subtraction to reduce the differences of data caused by the external environment. In our experiment, the whole-stage canopy spectra of potato field were remotely measured from ground platforms, which had the advantage of reflecting the field variations well. Meanwhile, the LAI data in the same period were obtained. With potato grown under different water and nitrogen fertilizer treatments, our objectives are (1) to determine the optimal VI for single-stage yield estimation of potato; (2) to determine the optimal single stage for potato yield prediction, and (3) to compare the performance of rVI-based and rLAI-based models using single-stage and weighted stage data, and determine the final potato yield prediction model. The experiment area (Fig. 1a) was located at the experimental base of Economic Plants Research Institute (43.45°N, 124.99°E), Jilin Academy of Agricultural Sciences, Gongzhuling City, Jilin Province, China. The greenhouse experiment (Fig. 1c) was conducted in May–September 2018. Shepody [40], a widely planted potato variety in Jilin Province, was selected as the experimental object. Potatoes were sown on May 2nd and harvested on September 10th including the whole growth stages. Through the combination of nitrogen fertilizer and water, 27 plots (Fig. 1d) including three nitrogen levels (N1: half of the normal nitrogen fertilizer, N2: normal nitrogen fertilizer, and N3: two times normal nitrogen fertilizer) and three water levels (EM: excessive moisture, NM: normal moisture, and IM: insufficient moisture) were set up. The water–nitrogen combination experiment was divided into 9 treatments, and each treatment was repeated 3 times randomly. To ensure that there is no water interference between the treatments, two partitions between IM and NM, and three partitions between EM and NM were set. The same potato variety was planted in the field experiment (Fig. 1b) to avoid the influence of sampling on the greenhouse experiment. The field experiment was used to study the change of dry weight with time to simulate the growth of potato, while the greenhouse experiment was applied to estimate potato yield by measuring hyperspectral and LAI. Study area location and plot distribution: a location of the study area; b field experiment; c greenhouse experiment; d water and nitrogen combination diagram of greenhouse The experimental area was located in the middle of the Songliao Plain, with a temperate continental monsoon climate, an average temperature from May to August of 18–20 °C, and abundant natural resources. It is a key commodity grain base in China and a demonstration area for potato cultivation. Data collection and multi-period data processing The collection of data covered five key stages of potato growth: seeding stage (SS), tuber formation stage (TFS), tuber expansion stage (TES), starch accumulation stage (SAS), and harvest stage (HS). The field data, including LAI and hyperspectral, were collected for five times from SS (14 June), TFS (28 June), TES (23 July), SAS (9 August), to HS (27 August). The SUNSCAN Canopy Analysis System (Delta-T Devices, Ltd., Burwell, Cambridge, UK) [41] was used to acquire the potato LAI data under conditions of windless and stable light. Since potatoes were planted following the ridges, our measurements were made 5 times parallel to the ridges and perpendicular to the ridges, respectively. Five different places were selected for measurement in each plot, and the mean values of 25 measurements in total were taken as canopy LAI values of the plot. The USB 2000 spectrometer (Ocean Optics, Inc., Dunedin, Florida, United States) [42] was adopted to collect potato canopy hyperspectral under cloudless and windless conditions, with a spectral sampling interval of 0.46 nm. The spectral measurement was performed daily from 10:00 to 14:00 with the field-of-view angle of 25°, the probe vertically downward and about 1 m away from the top of the potato canopy. The observation was repeated five times for each plot, and the average value was regarded as the canopy spectral reflection. The reference whiteboard (chemical composition is BaSO4) was used for relative radiometric correction prior to measurement. Dry weight measurements of potato plants (including stems, leaves, roots, flowers, etc.) were conducted by destructive sampling. In each growth stage, the sampling interval is 3–6 days. Ten points were randomly selected for each measurement. The collected plants were dried in the laboratory after drying in the field until their weights remained unchanged when weighing again. The average value was taken as the dry weight data of this measurement. In total, 17 times of sampling were taken on June 14 (SS), June 22, June 25, June 28, July 1 (TFS), July 9, July 13, July 16, July 19, July 23 (TES), July 31, August 3, August 6, August 9, August 16, August 21 (SAS), and August 27 (HS), respectively. At HS, the potatoes in all plots were harvested manually. Then plot-level potatoes were weighed immediately. For LAI and VI of multiple periods, the utilization of relative VI (rVI) and relative LAI (rLAI) is expected to reduce the limitation of uncertain information about background, light and atmospheric conditions at different growth stages. Firstly, plot-level rVI and rLAI were proposed under the premise of the hypothesis that solar radiation, atmospheric conditions, and field background were similar at each data acquisition. A standard plot can then be selected as a reference to help diminish the difference caused by time. In this study, rVI, rLAI, and relative yield were calculated based on a reference of an appropriate plot. The calculation of rVI, rLAI, and relative yield was carried out through the differences of VI, LAI, and yield between the study plot and reference plot (Eqs. 1–3). The method of eliminating the influence of external factors by subtraction can keep the correlation between original data unchanged. $$rLAI = {LAI}_{\left(mea\right)} - {LAI}_{\left(Ref\right)}$$ where rLAI is the plot-level relative LAI, LAI(mea) is the measured LAI of a study plot, LAI(Ref) is the measured LAI of reference plot. $$rVI = {VI}_{\left(mea\right)} - {VI}_{\left(Ref\right)}$$ where rVI is the plot-level relative VI, VI(mea) is the plot-level VI calculated by measured spectra, VI(Ref) is the VI calculated by measured spectra of reference plot. $$relative\, yield = {yield}_{\left(mea\right)} - {yield}_{\left(Ref\right)}$$ where yield(mea) is the measured yield of a study plot, yield(Ref) is the measured yield of the reference plot. Vegetation index selection Many scholars have determined that the optimal bands for studying the relationship between vegetation spectra and biophysical parameters lie in the visible and near-infrared ranges [43, 44]. According to this, VIs of NDVI, CIred edge, CIgreen, EVI2, NDRE, and MTCI (Table 1) calculated by the green (550 nm), red (670 nm), red edge (720 nm), and near-infrared (800 nm) bands were built. The reason why these six VIs were selected is that many scholars have achieved good results in relevant studies. Table 1 Vegetation indices used in this study Algorithms for determining the weights of growth stages Slogistic model The curve expression of the Slogistic model is shown as Eq. (4). With the increase of independent variable, the value of the dependent variable increases slowly at first, but rapidly in a certain range later. When the independent variable reaches a certain limit, the growth of the dependent variable tends to be slow, and the whole curve shows a shape of flat "S". This equation is extensively used in epidemiology and agrometeorology [50]. $$y= \frac{a}{1+b{e}^{-kx}}$$ where a refers to the maximum value of the dependent variable, b and k are the characteristic parameters of the Slogistic curve equation. The first-order and second-order partial derivatives of the independent variable of Eq. (4) were calculated to obtain Eqs. (5) and (6). According to the trend of curve change, the Slogistic model can be divided into three parts: the range of \([0 \sim [\mathrm{lnb}-\mathrm{ln}(2+\sqrt{3})]/\mathrm{k}]\) is the gradually increasing stage, \([[\mathrm{lnb}-\mathrm{ln}(2+\sqrt{3})]/\mathrm{k }\sim [\mathrm{lnb}+\mathrm{ln}(2+\sqrt{3})]/\mathrm{k}]\) is the rapidly increasing stage, and \([[\mathrm{lnb}+\mathrm{ln}(2+\sqrt{3})]/\mathrm{k }\sim \mathrm{ \infty }]\) is the slowly increasing stage. When the independent variable is lnb/k, the increasing speed of the dependent variable reaches the maximum value. The establishment of the model is helpful to judge the potato growth stages and determine their weights. $$\frac{dy}{dx}= \frac{{abk}^{-kt}}{{(1+{bk}^{-kt})}^{2}}$$ $$\frac{{d}^{2}y}{d{x}^{2}}= \frac{{abk}^{-kt}({abk}^{-kt}-k)}{{(1+b{e}^{-kt})}^{3}}$$ Improved analytic hierarchy process Analytic hierarchy process (AHP) is a system analysis method that combines qualitative and quantitative analysis, which was put forward by T.L. Saaty, a famous American operational research scientist in the early 1970s [51]. The judgment matrix of the traditional AHP adopted a nine-scale method (1–9). The subjective factors of experts play a leading role, which will lead to the deviation of the evaluation results. In addition, if the judgment matrix is not consistent in the consistency test, it will destroy the main function of the AHP's scheme optimization and sorting, with a large amount of calculation and low accuracy. The improved analytic hierarchy process (IAHP) developed a new three-scale method (0–2), which made it easy for experts to make a comparison of the relative importance of the two factors, without the need for a consistency test. Moreover, IAHP can greatly reduce the number of iterations, improve the convergence speed, and meet the requirements of calculation accuracy [52]. The specific calculation steps are as follows: Construction of comparison matrix A(aij). As shown in Eq. (7), according to the relative importance of potato growth stages, a comparison matrix A(aij)5×5 was constructed. $$A\left({a}_{ij}\right)= \left(\begin{array}{ccccc}1& 0& 0& 0& 0\\ 2& 1& 0& 0& 2\\ 2& 2& 1& 2& 2\\ 2& 2& 0& 1& 2\\ 2& 0& 0& 0& 1\end{array}\right)$$ where 0 indicates that the stage i is not as important as stage j; 1 indicates that the stage i is as important as stage j; 2 indicates that the stage i is more important than stage j. Construction of judgment matrix B(bij). Firstly, the importance coefficients (\({\mathrm{r}}_{j}= \sum_{i=1}^{5}{b}_{ij}\)) of five potato growth stages were calculated, and then the judgment matrix B(bij) was constructed as shown in Eq. (8): $$B\left({b}_{ij}\right)= \left\{\begin{array}{l}\frac{{r}_{i}-{r}_{j}}{{r}_{max}-{r}_{min}} \times \left(k-1\right)+1 {r}_{i}\ge {r}_{j}\\ {\left[\frac{\left|{r}_{i}-{r}_{j}\right|}{{r}_{max}-{r}_{min}} \times \left(k-1\right)+1\right]}^{-1} {r}_{i}<{r}_{j}\end{array}\right.$$ where \({r}_{max}=\mathrm{max}\left\{{\mathrm{r}}_{j}\right\}\), \({r}_{min}=\mathrm{min}\left\{{\mathrm{r}}_{j}\right\}\), \(\mathrm{k}= {\mathrm{r}}_{max}/{\mathrm{r}}_{min}\). Calculation of transfer matrix C(cij) and quasi-optimal uniform matrix C* (cij*). The elements in transfer matrix C(cij) and quasi-optimal uniform matrix C* (cij*) need to meet Eqs. (9) and (10). $$C\left({c}_{ij}\right)= \frac{1}{5}\sum_{t=1}^{5}\left(\mathrm{lg}\frac{{b}_{it}}{{b}_{jt}}\right)$$ $${C}^{*}\left({{c}_{ij}}^{*}\right)= {10}^{{c}_{ij}}$$ Weight determination. The maximum eigenvalue and the maximum eigenvector of the quasi-optimal matrix C* were calculated, and the weight of each growth stage can be obtained after normalization. Entropy weight method The entropy weight method (EW) determines the index weight according to the variation degree of each index value. It is an objective weighting method, which has been widely used in the fields of economy, engineering, and finance [53]. The advantage of this method is that it can avoid the influence of human factors, but it ignores the importance of the index itself. Sometimes the weight of the index determined is far from the expected result, and the dimension of the evaluation index cannot be reduced [54]. The data matrix of G(gij)5×5 was constructed based on the potato characteristic parameters of different plot-level in different stages, then the entropy value (ej) and the difference coefficient (dj) of each growth stage were calculated as shown in Eqs. (11) and (12). $${e}_{j}= -\frac{1}{\mathrm{ln}n}\sum_{i=1}^{n}\left(\frac{{\mathrm{g}}_{ij}}{\sum_{i=1}^{n}{\mathrm{g}}_{ij}}\mathrm{ln}\frac{{\mathrm{g}}_{ij}}{\sum_{i=1}^{n}{\mathrm{g}}_{ij}}\right)$$ $${d}_{j}= \frac{1-{\mathrm{e}}_{j}}{5-\sum_{j=1}^{5}{\mathrm{e}}_{j}}$$ The weight wj of the growth stage j can be obtained by normalizing the difference coefficient dj as shown in Eq. (13). $${w}_{j}= \frac{{\mathrm{d}}_{j}}{\sum_{j=1}^{5}{\mathrm{d}}_{j}} (0<{w}_{j}<1, \sum_{j=1}^{5}{w}_{j}=1)$$ Optimal combination weighting method An optimal combination weighting method (OCW) was employed to solve the proportion of weights in the combined decision-making based on obtaining subjective and objective weights, then the decision weights considering both subjective will and objective existence were obtained [55]. To select a set of weights with the largest total distance (R) between the subjective weights and objective weights, the weight determined by the subjective weighting method was written as \({\mathrm{W}}_{1}=({\mathrm{w}}_{1}^{1}, {\mathrm{w}}_{2}^{2}, {\mathrm{w}}_{3}^{3}, {\mathrm{w}}_{4}^{4})\), the weight determined by the objective weighting method was written as \({\mathrm{W}}_{2}=({\mathrm{w}}_{1}^{2}, {\mathrm{w}}_{2}^{2}, {\mathrm{w}}_{3}^{2}, {\mathrm{w}}_{4}^{2})\), and the combined weight determined by OCW was written as \(\mathrm{W}=({\mathrm{w}}_{1}, {\mathrm{w}}_{2}, {\mathrm{w}}_{3}, {\mathrm{w}}_{4})\). The optimal combination weight can be obtained by constructing the optimization model of Eq. (14) below. $$\left\{\begin{array}{l}maxR= \sum_{m=1}^{2}(1 - \sqrt{\frac{1}{5}\sum_{j=1}^{5}{({w}_{j} - {w}_{j}^{m})}^{2}}) \\ \sum_{j=1}^{5}{w}_{j}=1\end{array}\right.$$ Leave-one-out cross-validation The technical flow chart (Fig. 2) demonstrates the experimental methodology in this study, including experimental design, data collection, data processing, methods, and writing logic. The estimation and validation models of potato yield were established using leave-one-out cross-validation (LOOCV). This method is widely employed in model construction and validation to reduce the dependence on a single random part of the calibration and validation datasets [56]. Firstly, the original population samples were divided into K mutually exclusive sets (K = 26 in this study), of which K − 1 sets were used iteratively as training data for calibrating the coefficients (Coefi) of the algorithm, and then the remaining single sample was retained as the validation to obtain R2i and the estimation error (E(yi) − yi). The whole training and validation process should be repeated K times until each sample participates in the validation process. After K iterations, the coefficients and precision of the final algorithm can be expressed as follows: Technical flow chart in this study $$Coef = \frac{{\sum }_{i=1}^{K}{Coef}_{i}}{K}$$ $${R}^{2} = \frac{{\sum }_{i=1}^{K}{R}_{i}^{2}}{K}$$ $$RMSE = \sqrt{\frac{{\sum }_{i=1}^{K}{(E\left({y}_{i}\right)-{y}_{i})}^{2}}{K}}$$ where E(y) is the actual observed value, and y is the predicted value simulated by the model. Determination of the optimal rVI Each VI in this study was converted to rVI through the transformation of Eq. (2). The correlation coefficients between rVIs of different growth stages and relative yield are shown in Fig. 3. It can be seen that the correlation coefficients between each rVI and its corresponding relative yield showed an overall trend of increasing first (SS to SAS) and then decreasing (SAS to HS) during the whole growth stage. Correlated with relative yield, correlation coefficients of all selected rVIs in different stages exhibited consistent changes: reaching maximum values at SAS (with a correlation coefficient of 0.867 for rCIred edge, 0.860 for rEVI2, 0.845 for rNDRE, 0.841 for rNDVI, 0.817 for rCIgreen, and 0.803 for rMTCI.) and showing smaller values at SS and HS. At each potato growth stage, there was the strongest correlation between rCIred edge and relative yield. Therefore, it can be concluded that SAS is the most effective stage for potato yield estimating using VI, and rCIred edge has the best performance. When using rVI to construct yield models, only rCIred edge will be considered. Comparison of correlation coefficients between different relative vegetation indices and yield Simulation of potato growth based on Slogistic model As shown in Fig. 4, the dry weight data of the whole growth stage were used to construct the Slogistic model to characterize the growth process of potatoes. It can be found that the simulation accuracy is high, with the adjusted R2 close to 0.9. Generally speaking, the growth speed of potato is relatively slow in the early and late stages, and faster in the middle stage. Equations (5) and (6) were utilized to calculate the length of the growth stage, and the time nodes of the gradually increasing stage, the rapidly increasing stage, and the slowly increasing stage were 60th and 86th days, respectively. Based on these three stages, the five growth stages of this study can be obtained by increasing the seeding stage and harvest stage. The importance degree of each growth stage relative to yield can be sorted according to the growth rate of different stages. Combined with the actual planting experience, the final importance ranking was determined as TES > SAS > TFS > HS > SS. This result can provide a reference for the determination of the weights of different growth stages. Variation of potato dry weight with time during the whole growth stage Estimation of potato yield based on a single developmental stage The new VI (rVI) and LAI(rLAI) datasets were compared with the relative yield data at five different developmental stages respectively. Adjust coefficient of determination (R2) and root mean square error (RMSE) of all estimation models of single-stage rLAI and rCIred edge at each growth stage are shown in Table 2. At the same time, F-test was conducted on the whole regression models at 0.01 probability level, and the results were measured by P-value. Potato rLAI and rCIred edge at TES closely related to the relative yield having the adjusted R2 above 0.7, much lower correlations were found at SS and HS. From the perspective of different stages, TES is the optimal stage when using rVI and rLAI to estimate potato yield, and the models' expressions are shown in Eqs. (18) and (19). In this stage, the prediction performance of VI is better than that of LAI (Adjusted R2 of 0.7415 vs. 0.7034, RMSE of 0.2671 vs. 0.2864). Table 2 Potato yield estimation models for single stage y is the relative yield $${yield}_{(VI)} = {0.4421 \times r{CI}_{red\,edge}}_{[TES]} - {0.1823 +\mathrm{yield}}_{\left(Ref\right)}$$ where yield(VI) is the estimated potato yield using single-stage rVI (rCIred edge at TES in this study). $${yield}_{(LAI)} = {0.4305 \times rLAI}_{[TES]} - {0.1747 +\mathrm{ yield}}_{\left(Ref\right)}$$ where yield(LAI) is the estimated potato yield using single-stage rLAI. Estimation of potato yield based on weighted growth stage Three weight calculation methods (subjectivity, objectivity, and their combined form) were used to determine the weights of potato growth stages (Table 3). The results showed that the weights of each growth stage determined by EW were very close. The weights determined by IAHP and OCW were the largest at TES, followed at SAS and TFS, and the smallest at HS and SS. Based on the weighting results of the three methods (IAHP, EW, and OCW), the rVI and rLAI data of the potato's critical growth stages in the study area were calculated (ie, the weighted rCIred edge and rLAI), and then the linear regression models between the weighted relative variables and the relative potato yield were obtained. It can be found that the correlation between the potato yield and the weighted variables (rCIred edge and rLAI) obtained by the three weighting methods was very significant (P < 0.001). For the three different weighting method models, the EW-based and OCW-based methods had the lowest and the highest model accuracy, respectively. But the results obtained by these three methods were significantly improved compared to the single-stage models. By comparing the fitting models of the two relative variables, the results obtained by the three weighting methods all showed that the weighted rLAI-based models had higher accuracy than the weighted rCIred edge-based models. The optimal estimation models of potato yield can be determined as Eqs. (20) and (21). As the final estimation models of potato yield were based on the relative yield model by adding the yield of the reference spot, their prediction ability remains unchanged (Table 3). Table 3 Potato yield estimation models based on weighted growth stage $${yield}_{(VI)} = {0.4775 \times r{CI}_{red\,edge}}_{[weighted\,stage]} - {0.1788 +\mathrm{yield}}_{\left(Ref\right)}$$ where yield(VI) is the estimated potato yield using rVI based on the weighted growth stage. $${yield}_{(LAI)} = {0.5169 \times rLAI}_{[weighted\,stage]} - {0.2330 +\mathrm{ yield}}_{\left(Ref\right)}$$ where yield(LAI) is the estimated yield using rLAI based on the weighted growth stage. Accuracy assessment using leave-one-out cross-validation The leave-one-out cross-validation (LOOCV) method was utilized to obtain the potato yield validation models (Fig. 5). R2, RMSE, and mean relative error (MRE) were taken as evaluation indices. The results indicated that the accuracy of all models was acceptable (R2 > 0.75 and RMSE < 0.26). In general, models with high simulation accuracy also have high verification accuracy, with the minimum error less than 9%. Based on the combination models of three weighting methods and two different variables, the EW-based LAI model has the lowest accuracy, while the OCW-based LAI model has the highest accuracy (R2 = 0.8234, RMSE = 0.2267, MRE = 0.0833), explaining 82% of the variability. Therefore, combining the estimation and the verification models, the LAI model based on the OCW method to determine the weights of different growth stages is the optimal model for potato yield estimation. Validation models of potato yield estimation based on weighted growth stages and different variables: a IAHP-CIred edge; b IAHP-LAI; c EW-CIred edge; d EW-LAI; e OCW-CIred edge; f OCW-LAI For potato yield estimation, most scholars used to employ some crop growth models derived from general crop growth models or from gramineous (rice, wheat, corn, etc.) crop growth models [57]. Based on the principle and structure of the original model, the corresponding parameters were modified to conform to the growth characteristics of potato, and the growth process of potato was simulated, so as to output the physiological characteristic parameters and yield data and realize the model simulation function. Quiroz et al. [3] proposed that the incorporation of remotely sensed data in crop growth models with different temporal resolutions and levels of complexity could help to improve the yield estimation in potato. Moreover, it was identified that LAI at the initiation of stem elongation stage was closely related to yield, thus the remote estimation of LAI at this stage could be used to indicate the yield in oilseed rape [36]. Sharma et al. [58] tested Trimble GreenSeeker® (TGS) and Holland Scientific Crop Circle™ ACS-430 (HCCACS-430) wavebands to predict potato yield using LAI and NDVI with R2 reaching 0.7. These studies indicate that both remote sensing and LAI data have potential for yield prediction. Therefore, spectra and LAI data were selected in this paper to estimate potato yield. Six VIs of NDVI, CIred edge, CIgreen, EVI2, NDRE, and MTCI were utilized to non-destructively estimate potato yield in this study (Table 1) and CIred edge showed the most excellent performance in the correlation with potato yield (Fig. 3). Gong et al. [24] also proved that CIred edge had a good effect on the estimation of rapeseed yield using UAV data. Ma et al. [43] pointed out that at the seeding and bolting stage, the CIred edge exhibited good performance compared to the other VIs. These conclusions are consistent with the results of this study, proving the credibility of this study. In this study, the concepts of rVI and rLAI were proposed to solve the problem that the data acquired in different stages would be affected by solar radiation, aerosol, and soil background. Under the assumption of constant external conditions, subtraction can effectively remove these interferences, so that multi-period data can be used in combination. Furthermore, this method has the advantage of not changing the degree of aggregation and preserving the deviation of the original data. Wang et al. [59] used division to construct several relative vegetation indices (ΔVI) to estimate rice yield with hyperspectral imagery. Although the influence of external conditions such as background can be eliminated to some extent, the problem of changing the aggregation degree of data is ignored, resulting in lower RMSE and larger R2. The dry weight data of the whole growth period were used to fit the Slogistic model (Fig. 4) by analyzing the growth process of potato (the growth rate is slow in the early and late stages, and fast in the middle stage). According to the model, we can not only divide the different growth stages of potato but also provide the basis for determining the weights of each growth stage. There are few systematic and specific divisions of potato growth stages in the existing literature. The main reason is that potato tubers are buried in the soil, and the changes can not be observed directly by the eyes. Therefore, the joint utilization of potato multi-period data is subject to certain restrictions [60]. With a clear division of growth stages, more refined research can be carried out like crops such as rice [61] and wheat [62]. At present, there are many problems about the joint use of multiple-period or various kinds of data. To improve the accuracy of the research results, many scholars blindly used the data of multiple-period or diversified data directly. For example, Zhou et al. [63] predicted rice grain yield using multiple linear regression (MLR) with multi-temporal VIs derived from the multi-spectral and digital images to improve the estimation accuracy. Obviously, the contributions of different developmental stages to yield estimation are not consistent, so it can not be directly used for MLR, ignoring the weights of growth stages. Wang et al. [64] estimated LAI of paddy rice using MLR, partial least squares (PLS) regression, and least squares support vector machines (LS-SVM) regression with 15 optimal hyperspectral bands to product more accuracy. No research has shown that the contributions of these 15 bands to LAI estimation are the same, so these data can not be directly used together. Of course, if multiple data are obtained in the same period, it can be used directly in combination. For example, Duan et al. [35] predicted rice LAI using SVM regression with spectral features and the texture features to determine the texture feature effective. In this study, IAHP, EW, and OCW methods were employed to confirm the weights of different stages of potato. From the perspective of subjectivity, objectivity, and the combination of them, the most suitable method (OCW) was selected, which solved the problem of joint use of multi-period data. The weighting results (Table 3) of different potato growth stages determined by EW are relatively close, thus they cannot reflect the degree of impact of different growth stages on yield. The calculation results of IAHP and OCW are in accordance with the actual situation. When the spectra and LAI data of a single stage were used to predict the potato yield, the estimation accuracy of each stage basically met (1) TES > SAS > TFS > HS > SS for the same variable, which is consistent with the ranking of weights determined by IAHP, EW, and OCW; (2) VI > LAI for the same stage (Table 2). At HS, the simulation accuracy of VI is lower than that of LAI (adjusted R2 of 0.4692 vs. 0.5174), and when using the variable of VI, the accuracy at HS is lower than at SS (adjusted R2 of 0.4692 vs. 0.5912). The reason for this result is probably that the withering of potato leaves at HS resulted in the change of canopy spectra and the decrease of yield prediction ability. To compare the accuracy of yield estimation, the linear regression models were constructed based on plot-level weighted variables (rVI and rLAI) and relative yield (Table 3). The accuracy of different models was shown in Fig. 6. It can be found that OCW-based models have the highest accuracy. Unlike the single-stage results, in the OCW-based models, the accuracy of the rLAI model is higher than that of the rVI model. Because the appearance of saturation phenomenon in yield estimation using spectral index will limit the accuracy of models to some extent [25]. The LAI data is the three-dimensional (3D) information of the crop, and the limitations will be reduced. Adjusted R2 and RMSE of the different models To improve the suitability of the model, this experiment was set as water and fertilizer conditions, which can meet the current situation of water stress in potato planting areas in China and even the world [65]. Our future work will contain more data from different platforms for analysis, especially the UAV and satellite data because they can well express data at the spot-level. In addition, we will conduct experiments in more regions to verify the robustness of the models. And a new instrument of the LI-3100C table leaf area meter, (LI-COR Inc., LincoIn, Nebraska, USA) [66] will be used to avoid the impacts of the stems and flowers on the output of LAI, and more realistic LAI data will be obtained to improve and validate the accuracy of potato yield estimation by ground-measurement data. In this study, we developed a technique to improve the estimation of potato yield using weighted relative variables at plot-level derived from multi-period LAI and hyperspectral data. Plot-level relative vegetative index and LAI (rVI and rLAI) were proposed to eliminate the influence of external factors (solar radiation, aerosol, and soil background). The weights of different growth stages of potato were determined based on the Slogistic model and three weight calculation methods (IAHP, EW, and OCW). The linear regression was performed to estimate potato yield using single-stage and weighted multiple-stage variables respectively. The results indicated that rCIred edge was the optimal index for the potato yield estimation among all the test rVIs. TES is most suitable for potato yield estimation using a single growth stage. When multi-period data were applied to estimate the potato yield, the accuracy was greatly improved. The estimation model of LAI using the OCW-based method combining subjectivity and objectivity (OCW-LAI) showed the best performance with the estimation error about 8%. Although the idea of weighted developmental stage based on the Slogistic model and weighting calculation methods proposed in this study were tested in potato yield estimation, this work may offer a theoretical reference for other key parameters retrieving in crops that have an apparent division of growth stages. In future work, we will attempt to apply this technique to predict other growth parameters in potato and other crops. The remotely sensed and yield data used in this study is available upon the approval of Dr. Yingbin He from the Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences, China. Luo SJ, He YB, Duan DD, Wang ZZ, Zhang JK, Zhang YT, et al. Analysis of hyperspectral variation of different potato cultivars based on continuum removed spectra. Spectrosc Spec Anal. 2018;38:3231–7. Sulli M, Mandolino G, Sturaro M, Onofri C, Diretto G, Parisi B, et al. Molecular and biochemical characterization of a potato collection with contrasting tuber carotenoid content. PLoS ONE. 2017;12:e0184143. Quiroz R, Loayza H, Barreda C, Gavilan C, Posadas A. Ramirez DA Linking process-based potato models with light reflectance data: does model complexity enhance yield prediction accuracy? Eur J Agron. 2017;82:104–12. Duan DD, He YB, Luo SJ, Wang ZZ. Analysis on the ability of distinguishing potato varieties with different hyperspectral parameters. Spectrosc Spec Anal. 2018;38:3215–20. Al-Gaadi KA, Hassaballa AA, Tola E, Kayad AG, Madugundu R, Alblewi B, et al. Prediction of potato crop yield using precision agriculture techniques. PLoS ONE. 2016;11:e0162219. Reynolds CA, Yitayew M, Slack DC, Hutchinson CF, Huete A, Petersen MS. Estimating crop yields and production by integrating the FAO crop specific water balance model with real-time satellite data and ground-based ancillary data. Int J Remote Sens. 2000;21:3487–508. Campos I, Neale CMU, Arkebauer TJ, Suyker AE, Goncalves IZ. Water productivity and crop yield: a simplified remote sensing driven operational approach. Agric Forest Meteorol. 2018;249:501–11. Raymundo R, Asseng S, Cammarano D, Quiroz R. Potato, sweet potato, and yam models for climate change: a review. Field Crops Res. 2014;166:173–85. Setiyono TD, Quicho ED, Holecz FH, Khan NI, Romuga G, Maunahan A, et al. Rice yield estimation using synthetic aperture radar (SAR) and the ORYZA crop growth model: development and application of the system in South and South-east Asian countries. Int J Remote Sens. 2019;40:8093–124. Novelli F, Vuolo F. Assimilation of sentinel-2 leaf area index data into a physically-based crop growth model for yield estimation. Agronomy. 2019;9:255. Luo SJ, He YB, Wang ZZ, Duan DD, Zhang JK, Zhang YT, et al. Comparison of the retrieving precision of potato leaf area index derived from several vegetation indices and spectral parameters of the continuum removal method. Eur J Remote Sens. 2019;52:155–68. Li SY, Ding XZ, Kuang QL, Ata-Ul-Karim ST, Cheng T, Liu XJ, et al. Potential of UAV-based active sensing for monitoring rice leaf nitrogen status. Front Plant Sci. 2018;9:1934. Yao YJ, Liu QH, Liu Q, Li XW. LAI retrieval and uncertainty evaluations for typical row-planted crops at different growth stages. Remote Sens. 2008;112:94–106. Battude M, Al Bitar A, Morin D, Cros J, Huc M, Sicre CM, et al. Estimating maize biomass and yield over large areas using high spatial and temporal resolution Sentinel-2 like remote sensing data. Remote Sens Environ. 2016;184:668–81. Liu NF, Budkewitsch P, Treitz P. Examining spectral reflectance features related to Arctic percent vegetation cover: implications for hyperspectral remote sensing of Arctic tundra. Remote Sens Environ. 2017;192:58–72. Woolley JT. Reflectance and transmittance of light by leaves. Plant physiol. 1971;47:656–62. Gausman HW, Allen WA, Cardenas R. Reflectance of cotton leaves and their structure. Remote Sens Environ. 1969;1:19–22. Towers PC, Strever A, Poblete-Echeverria C. Comparison of vegetation indices for leaf area index estimation in vertical shoot positioned Vine canopies with and without grenbiule hail-protection netting. Remote Sens. 2019;11:1073. Wu JD. Developing general equations for urban tree biomass estimation with high-resolution satellite imagery. Sustain. 2019;11:4347. Zhang XH, He Y, Wang C, Xu F, Li XH, Tan CW, et al. Estimation of corn canopy chlorophyll content using derivative spectra in the O-2-A absorption band. Front Plant Sci. 2019;10:1047. Chen JX, Li F, Wang R, Fan YF, Raza MA, Liu QL, et al. Estimation of nitrogen and carbon content from soybean leaf reflectance spectra using wavelet analysis under shade stress. Comput Electron Agric. 2019;156:482–9. Anderson GL, Hanson JD, Haas RH. Evaluating landsat thematic mapper derived vegetation indexes for estimating aboveground biomass on semiarid rangelands. Remote Sens Environ. 1993;45:165–75. Miller JR, Hare EW, Wu J. Quantitative characterization of the vegetation red edge reflectance. 1. An inverted-gaussian reflectance model. Int J Remote Sens. 1990;11:1755–73. Gong Y, Duan B, Fang SH, Zhu RS, Wu XT, Ma Y, et al. Remote estimation of rapeseed yield with unmanned aerial vehicle (UAV) imaging and spectral mixture analysis. Plant Methods. 2018;14:70. Duan B, Fang SH, Zhu RS, Wu XT, Wang SQ, Gong Y, et al. Remote estimation of rice yield with unmanned aerial vehicle (UAV) data and spectral mixture analysis. Front Plant Sci. 2019;10:204. Shiu YS. Chuang YC Yield estimation of paddy rice based on satellite imagery: comparison of global and local regression models. Remote Sens. 2019;11:111. Joshi VR, Thorp KR, Coulter JA, Johnson GA, Porter PM, Strock JS, et al. Improving site-specific maize yield estimation by integrating satellite multispectral data into a crop model. Agronomy. 2019;9:719. Sakamoto T, Gitelson AA, Arkebauer TJ. Near real-time prediction of US corn yields based on time-series MODIS data. Remote Sens Environ. 2014;147:219–31. Mateo-Sanchis A, Piles M, Munoz-Mari J, Adsuara JE, Perez-Suay A, Camps-Valls G. Synergistic integration of optical and microwave satellite data for crop yield estimation. Remote Sens Environ. 2019;234:12. Becker-Reshef I, Justice C, Sullivan M, Vermote E, Tucker C, Anyamba A, et al. Monitoring global croplands with coarse resolution earth observations: the global agriculture monitoring (GLAM) project. Remote Sens. 2010;2:1589–609. Dong TF, Liu JG, Shang JL, Qian BD, Ma BL, Kovacs JM, et al. Assessment of red-edge vegetation indices for crop leaf area index estimation. Remote Sens Environ. 2019;222:133–43. Li SY, Yuan F, Ata-Ui-Karim ST, Zheng HB, Cheng T, Liu XJ, et al. Combining color indices and textures of UAV-based digital imagery for rice LAI estimation. Remote Sens. 2019;11:1763. Verrelst J, Camps-Valls G, Munoz-Mari J, Rivera JP, Veroustraete F, Clevers JGPW, et al. Optical remote sensing and the retrieval of terrestrial vegetation bio-geophysical properties—a review. ISPRS J Photogramm Remote Sens. 2015;108:273–90. Sun L, Gao F, Anderson MC, Kustas WP, Alsina MM, Sanchez L, et al. Daily mapping of 30 m LAI and NDVI for grape yield prediction in California Vineyards. Remote Sens. 2017;9:317. Duan B, Liu YT, Gong Y, Peng Y, Wu XT, Zhu RS, et al. Remote estimation of rice LAI based on Fourier spectrum texture from UAV image. Plant Methods. 2019;15:124. Peng Y, Zhu TE, Li YC, Dai C, Fang SH, Gong Y, et al. Remote prediction of yield based on LAI estimation in oilseed rape under different planting methods and nitrogen fertilizer applications. Agric For Meteorol. 2019;271:116–25. Liu XJ, Zhang K, Zhang ZY, Cao Q, Lv ZF, Yuan ZF, et al. Canopy chlorophyll density based index for estimating nitrogen status and predicting grain yield in rice. Front Plant Sci. 2017;8:1829. Wang ZX, Liu C, Huete A. From AVHRR-NDVI to MODIS-EVI: advances in vegetation index research. Acta Ecol Sin. 2003;23:979–87. Du Y, Teillet PM, Cihlar J. Radiometric normalization of multitemporal high-resolution satellite images with quality control for land cover change detection. Remote Sens Environ. 2002;82:123–34. Xu F, Liu W, Huang YJ, Liu QN, Zhang CJ, Hu HH, et al. Screening of potato flour varieties suitable for noodle processing. J Food Process Preserv. 2020;44:e14344. Zhao P, Fan WJ, Liu Y, Mu XH, Xu XR, Peng JJ. Study of the remote sensing model of FAPAR over rugged terrains. Remote Sens. 2016;8:309. Delgado AJ, Castellanos EM, Sinhoreti MAC, Oliveira DC, Abdulhameed N, Geraldeli S, et al. The use of different photoinitiator systems in photopolymerizing resin cements through ceramic veneers. Oper Dent. 2019;44:396–404. Ma Y, Fang SH, Peng Y, Gong Y, Wang D. Remote estimation of biomass in winter oilseed rape (Brassica napus L.) using canopy hyperspectral data at different growth stages. Appl Sci. 2019;9:545. le Maire G, Francois C, Soudani K, Berveiller D, Pontailler JY, Breda N, et al. Calibration and validation of hyperspectral indices for the estimation of broadleaved forest leaf chlorophyll content, leaf mass per area, leaf area index and leaf canopy biomass. Remote Sens Environ. 2008;112:3846–64. Rouse JW, Haas RH, Schell JA, Deering DW. Monitoring vegetation systems in the great plains with ERTS. NASA Spec Publ. 1974;309–317. Gitelson AA, Vina A, Ciganda V, Rundquist DC, Arkebauer TJ. Remote estimation of canopy chlorophyll content in crops. Geophys Res Lett. 2005;32:L08403. Jiang ZY, Huete AR, Didan K, Miura T. Development of a two-band enhanced vegetation index without a blue band. Remote Sens Environ. 2008;112:3833–45. Gitelson AA, Merzlyak MN. Remote estimation of chlorophyll content in higher plant leaves. Int J Remote Sens. 1997;18:2691–7. Dash J, Curran PJ. The MERIS terrestrial chlorophyll index. Int J Remote Sens. 2004;25:5403–13. van Smeden M, Moons KGM, de Groot JAH, Collins GS, Altman DG, Eijkemans MJC, et al. Sample size for binary logistic prediction models: beyond events per variable criteria. Stat Methods Med Res. 2019;28:2455–74. Sun HY, Wang SF, Hao XM. An improved analytic hierarchy process method for the evaluation of agricultural water management in irrigation districts of north China. Agric Water Manag. 2017;179:324–37. Geng ZQ, Yang X, Han YM, Zhu QX. Energy optimization and analysis modeling based on extreme learning machine integrated index decomposition analysis: application to complex chemical processes. Energy. 2017;120:67–78. Zou ZH, Yun Y, Sun JN. Entropy method for determination of weight of evaluating indicators in fuzzy synthetic evaluation for water quality assessment. J Environ Sci. 2006;18:1020–3. Zhang ML, Li BZ. How to improve regional innovation quality From the perspective of green development? Findings from entropy weight method and Fuzzy-Set qualitative comparative analysis. IEEE ACCESS. 2020;8:32575–86. Wang JJ, Jing YY, Zhang CF. Fuzzy multi-criteria evaluation model of HVAC schemes in optimal combination weighting method. Build Serv Eng Res Technol. 2009;30:287–304. Fielding AH, Bell JF. A review of methods for the assessment of prediction errors in conservation presence/absence models. Environ Conserv. 1997;24:38–49. Oliveira JS, Brown HE, Gash A, Moot DJ. An explanation of yield differences in three potato cultivars. Agron J. 2016;108:1434–46. Sharma LK, Bali SK, Dwyer JD, Plant AB, Bhowmik A. A case study of improving yield prediction and sulfur deficiency detection using optical sensors and relationship of historical potato yield with weather data in maine. Sensors. 2017;17:1095. Wang FL, Wang FM, Zhang Y, Hu JH, Huang JF, Xie JK. Rice yield estimation using parcel-level relative spectral variables from UAV-based hyperspectral imagery. Front Plant Sci. 2019;10:453. Li B, Xu XM, Zhang L, Han JW, Bian CS, Li GC, et al. Above-ground biomass estimation and yield prediction in potato by using UAV-based RGB and hyperspectral imaging. ISPRS J Photogramm Remote Sens. 2020;162:161–72. Nemoto M, Hamasaki T, Matsuba S, Hayashi S, Yanagihara S. Estimation of rice yield components with meteorological elements divided according to developmental stages. J Agric Meteorol. 2016;72:128–41. Fu ZP, Jiang J, Gao Y, Krienke B, Wang M, Zhong KT, et al. Wheat growth monitoring and yield estimation based on multi-rotor unmanned aerial vehicle. Remote Sens. 2020;12:508. Zhou X, Zheng HB, Xu XQ, He JY, Ge XK, Yao X, et al. Predicting grain yield in rice using multi-temporal vegetation indices from UAV-based multispectral and digital imagery. ISPRS J Photogramm Remote Sens. 2017;130:246–55. Wang FM, Huang JF, Lou ZH. A comparison of three methods for estimating leaf area index of paddy rice from optimal hyperspectral bands. Precision Agric. 2011;12:439–47. Rodríguez PL, Sanjuanelo CD, Ñústez LCE, Moreno-Fonseca LP. Growth and phenology of three Andean potato varieties (Solanumtuberosum L.) under water stress. Agron Colomb. 2016;34:141–54. Brandao ZN, Zonta JH. Hemispherical photography to estimate biophysical variables of cotton. Rev Bra Eng Agric Ambient. 2016;20:789–94. We thank the Potato Science Institute of Jilin Academy of Vegetables and Flower Sciences for preparing the seed and planting for the experiments, and Professor Shengli Zhang, Dr. Fei Xu, and Mr. Zhongcai Han for designing the trail. This study was supported by the National Natural Science Foundation of China "Study on temporally and spatially precise assessment on potato cultivation suitability based on dynamic process-oriented mode" (41771562) and "Innovation Project" of the Chinese Academy of Agricultural Sciences (2016–2020, IARRP). Institute of Agricultural Resources and Regional Planning, Chinese Academy of Agricultural Sciences, Beijing, 100081, China Shanjun Luo, Yingbin He & Xihai Zhao School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, 430079, China Shanjun Luo School of Economics and Management, Tiangong University, Tianjin, 300387, China Qian Li & Yaqiu Zhu Center for Agricultural and Rural Economic Research, Shandong University of Finance and Economics, Jinan, 250014, China Weihua Jiao Yingbin He Qian Li Yaqiu Zhu Xihai Zhao SL and YH conceived and designed the experiments. QL and YZ performed the experiments. SL analyzed the data and wrote this manuscript. WJ and XZ checked the language. YH, QL, WJ, and YZ contributed to the discussion. All authors read and approved the final manuscript. Correspondence to Yingbin He. All authors agreed to publish this manuscript. Luo, S., He, Y., Li, Q. et al. Nondestructive estimation of potato yield using relative variables derived from multi-period LAI and hyperspectral data based on weighted growth stage. Plant Methods 16, 150 (2020). https://doi.org/10.1186/s13007-020-00693-3 Yield estimation Relative variables Weighted growth stage
CommonCrawl
Covariant derivative of determinant of the metric tensor Let $(M,g)$ be a Riemannian manifold and $g$ the Riemannian metric in coordinates $g=g_{\alpha \beta}dx^{\alpha} \otimes dx^{\beta}$, where $x^{i}$ are local coordinates on $M$. Denote by $g^{\alpha \beta}$ the inverse components of the inverse metric $g^{-1}$. Let $\nabla$ be the Levi-Civita connection of the metric $g$. Consider, locally, the function $\det((g_{\alpha \beta})_{\alpha \beta})$. It is known that $\nabla \det((g_{\alpha \beta})_{\alpha \beta}) = 0$ by using normal coordinates etc... I would like to show this fact without using normal coordinates. Just by computation. Here is what I have so far: $\nabla \det((g_{\alpha \beta})_{\alpha \beta}) = \left [ g^{\gamma \delta} \partial_{\delta} \det((g_{\alpha \beta})_{\alpha \beta}) \right ] \partial_{\gamma} = \left [ \det((g_{\alpha \beta})_{\alpha \beta}) g^{\gamma \delta} g^{\beta \alpha} \partial_{\delta} g_{\alpha \beta}\right ] \partial_{\gamma}.$` Here: the first equality sign follows from the definition of the gradient of a function and the second equality sign is the derivative of the determinant. Question: How do I continue from here without using normal coordinates? Or are there any mistakes? If yes, where and which? Greetings, Phil dg.differential-geometry gt.geometric-topology real-analysis mg.metric-geometry riemannian-geometry PhillipPhillip $\begingroup$ The determinant of $g$ as a function is not intrinsic; that is, it depends on the coordinates you choose. If you choose normal coordinates, then it becomes clear that its derivative (in the usual sense) vanishes. Now, you can consider an intrinsic quantity which is the volume form $\mathrm{vol}(g)$ associated with $g$; it satisfies $\nabla \mathrm{vol}(g)=0$ as you may expect, using the same computation essentially. $\endgroup$ – Henri Apr 14 at 10:57 $\begingroup$ Is this statement also true locally? By this I mean consider only the locally defined function $\det(g_{\alpha \beta})$ and not the volume form (and continuing the above computation without using normal coordinates)? $\endgroup$ – Phillip Apr 14 at 11:13 $\begingroup$ I do not see why the component $g^{\gamma \delta}g^{\beta \alpha} \partial_{\delta} g_{\alpha \beta}$ should vanish ? $\endgroup$ – Phillip Apr 14 at 11:15 $\begingroup$ @Phillip You need to use $\nabla g = 0$ at some point. $\endgroup$ – Vít Tu?ek Apr 14 at 13:02 $\begingroup$ The point is that $\nabla\det g$ need not be zero. What miust be zero is the covariant derivative of the volume form. $\endgroup$ – Deane Yang Apr 14 at 13:22 The determinant is a quantity associated to a linear operator not to a symmetric bilinear form. On the other hand, given an inner product on a vector space $V$ we can identify a bilinear form (which is an element of $V^*\otimes V^*$) with a linear operator (which is an element of $V\otimes V^*$). Classically, this identification was called raising the indices. If we use the metric $g$ to identify it with an endomorphism of $TM$ you obtain the identity endomorphism whose determinant is $1$. The derivative is clearly zero. If instead you are thinking of the volume $dV_g$ form (assuming the manifold is oriented) then $\nabla dV_g=0$; see Proposition 4.1.44 in this book. If the manifold is not oriented and you're thinking of the volume density $|dV_g|$ then again $\nabla|dV_g|=0$. Liviu NicolaescuLiviu Nicolaescu As you say, if $x^1, \dots, x^n$ are local coordinates, there is a symmetric matrix $[g_{\alpha\beta}]$ such that $g = g_{\alpha\beta}dx^\alpha dx^\beta$. This matrix depends on local coordinates and therefore so does the scalar function $\det [g_{\alpha\beta}]$. If you examine how this function changes when you change coordinates, you can see that, even if $\nabla (\det [g_{\alpha\beta}])$, whose definition does not use the Levi-Civita connection, vanishes in one set of coordinates, it will not in another. On the other hand, the density $\Phi = |\det [g_{\alpha\beta}]|^{1/2}\,dx^1\wedge\cdots\wedge dx^n$ is invariant under all orientation preserving changes of coordinates. The definition of its covariant derivative $\nabla\Phi$ does use the Levi-Civita connection, and if you carry out the calculations, you find that the definition of the Levi-Civita connection in terms of $g$ (which is equivalent to the property $\nabla g = 0$) implies that $\nabla\Phi = 0$. Deane YangDeane Yang $\begingroup$ Don't you want a square root of the determinant? And even then I think it's only invariant under orientation-preserving changes of coordinates. $\endgroup$ – José Figueroa-O'Farrill Apr 14 at 20:09 $\begingroup$ @JoséFigueroa-O'Farrill, thanks. Yes. $\endgroup$ – Deane Yang Apr 14 at 20:24 The determinant of a metric makes perfectly good sense, but it is not a function, rather a $2$-density. Formally, this means that it transforms as a section of the bundle associated with the frame bundle and a particular nontrivial character of the general linear group. Let $\phi:V \to W$ be a linear map between $n$-dimensional vector spaces $V$ and $W$ (for simplicity, over a field of characteristic $0$). The induced map $\wedge^{n}\phi:\wedge^{n}V \to \wedge^{n}W$ can be identified with multiplication by a scalar when a generator is chosen for each of $\wedge^{n}V$ and $\wedge^{n}W$. When $V = W$, and the same generator is used on either side, the resulting scalar does not depend on the choice, and it is reasonable to call this scalar $\det \phi$ because it agrees with the usual determinant of an endomorphism. A symmetric bilinear form $g$ on $V$ is identified with a linear map $V \to V^{\ast}$ to the dual vector space. The determinant $\det g$ is defined as in the preceding paragraph, as the induced map $\wedge^{n}V \to \wedge^{n}V^{\ast}$. Consider the standard action of $GL(n) = GL(V)$ on $V$ and the induced actions on $V^{\ast}$ and tensor powers of $V$ and $V^{\ast}$. So the action of $\gamma \in GL(n)$ on $g$ is given by $(\gamma \cdot g)(u, v) = g(\gamma^{-1}\cdot u, \gamma^{-1}\cdot v)$ for $u,v \in V$. It follows straightforwardly that $\det (\gamma \cdot g) = (\det \gamma)^{-2}\det g$ for $\gamma \in GL(n)$. Suppose the base field is $\mathbb{R}$. Thus $\det:S^{2}V^{\ast} \to \mathbb{R}$ is a $GL(n)$-equivariant map for the standard action of $GL(n)$ on $S^{2}V^{\ast}$ and the $1$-dimensional representation $\chi:GL(n) \to GL(1)$ given by $\chi(\gamma) = (\det \gamma)^{-2}$. Now let $M$ be a smooth $n$-dimensional manifold with frame bundle $F\to M$. With each $GL(n)$ module $(\rho, W)$ there is associated a bundle of weighted tensors $F \times_{\rho}W$ whose fibers are linearly isomorphic to $W$. Applying this construction to the representations of the preceding paragraph one obtains a map $\det$ associating with a section of $S^{2}T^{\ast}M$ a section of the line bundle associated with the representation $\chi$, which can be interpreted as the tensor square of the top exterior power $\wedge^{n}T^{\ast}M$ (such a section is often called a $2$-density). Let $g$ be a Riemannian metric on $M$ with Levi-Civita connection $\nabla$. Let $h$ be a section of $S^{2}T^{\ast}M$. Picking a local frame $\{E_{1}, \dots, E_{n}\}$ in $TM$ determines a local trivialization of $\wedge^{n}T^{\ast}M$ so also of all its tensor powers. With respect to this trivialization, $\det h$ equals the determinant of the matrix $h(E_{i}, E_{j})$. The connection determined by $g$ determines a convariant derivative on any associated bundle of the frame bundle and the covariant derivative $\nabla \det h$ is a section of the same line bundle as is $\det h$. If $h$ is everywhere full rank and $h^{-1}$ is the section of $S^{2}TM$ inverse to $h$, then $\nabla \det h = (h^{-1}\nabla h)\det h$ (the notation requires interpretation; in abstract index notation $h^{-1}\nabla h$ means $h^{pq}\nabla_{i}h_{pq}$). In particular, $\nabla \det g = 0$ because $\nabla g = 0$. Dan FoxDan Fox $\begingroup$ Is this formula $\nabla \det (h) = h^{-1}\nabla h$ correct? Shouldn't it be $\nabla \det (h) = \det(h) \text{tr}(h^{-1}\nabla h)$ ? $\endgroup$ – Phillip Apr 17 at 8:41 $\begingroup$ Do you know any reference where the formula for $\nabla \det(h)$ is proven? $\endgroup$ – Phillip Apr 17 at 8:52 $\begingroup$ @Phillip: indeed, a $\det h$ was missing from the right-hand side. Corrected now. $\endgroup$ – Dan Fox Apr 17 at 14:16 $\begingroup$ Do you know any reference for this formula in the language of tensors? $\endgroup$ – Phillip Apr 17 at 17:09 Not the answer you're looking for? Browse other questions tagged dg.differential-geometry gt.geometric-topology real-analysis mg.metric-geometry riemannian-geometry or ask your own question. Riemannian manifold of bounded geometry has a normal bundle of bounded geometry Geometrical meaning of the Ricci Tensor and its Symmetry Taylor expansion of the determinant of a Riemannian metric Vector field built from connection and metric Diffeomorphism variation of the Christoffel symbol Existence of left-invariant metric on the cotangentbundle of homogeneous spaces? Reference for connection of a Hessian metric Is a symmetric, parallel (0,2)-tensor a metric? Relating the components of the Riemann curvature tensor to the second partials of the components of the metric Covariant derivative of the Monge-Ampere equation on Kähler manifolds 特工简.布隆德归来电子游艺 内蒙古11选5任3选号 骑士vs热火 09025期22选5开奖结果 塞维利亚√皇家贝蒂斯 天涯明月刀 摩洛哥和摩纳哥 外星战记送彩金 彩票走势图双色球 双色球关联选号技巧
CommonCrawl
Behavior Research Methods pp 1–15 | Cite as A robust method for calibration of eye tracking data recorded during nystagmus William Rosengren Marcus Nyström Björn Hammar Martin Stridh Eye tracking is a useful tool when studying the oscillatory eye movements associated with nystagmus. However, this oscillatory nature of nystagmus is problematic during calibration since it introduces uncertainty about where the person is actually looking. This renders comparisons between separate recordings unreliable. Still, the influence of the calibration protocol on eye movement data from people with nystagmus has not been thoroughly investigated. In this work, we propose a calibration method using Procrustes analysis in combination with an outlier correction algorithm, which is based on a model of the calibration data and on the geometry of the experimental setup. The proposed method is compared to previously used calibration polynomials in terms of accuracy, calibration plane distortion and waveform robustness. Six recordings of calibration data, validation data and optokinetic nystagmus data from people with nystagmus and seven recordings from a control group were included in the study. Fixation errors during the recording of calibration data from the healthy participants were introduced, simulating fixation errors caused by the oscillatory movements found in nystagmus data. The outlier correction algorithm improved the accuracy for all tested calibration methods. The accuracy and calibration plane distortion performance of the Procrustes analysis calibration method were similar to the top performing mapping functions for the simulated fixation errors. The performance in terms of waveform robustness was superior for the Procrustes analysis calibration compared to the other calibration methods. The overall performance of the Procrustes calibration methods was best for the datasets containing errors during the calibration. Eye tracking Nystagmus Calibration Eye tracking is a useful tool to record and study eye movements. However, the nystagmus eye movements disturb the calibration procedure for individual recordings, causing comparisons of waveforms between recordings unreliable. For example, the calibration protocol assumes an ability to fixate the gaze, which is limited in people with nystagmus. Using the default calibration protocol may lead to unreliable eye tracker data, which in turn may misrepresent or even invalidate data analysis. In this paper, we explore the problems associated with calibration and propose a method that secures a repeatable and reliable gaze estimation, referred to as point-of-regard (PoR), which is crucial for detailed computer based nystagmus diagnostics and objective evaluation of treatment effects between recordings. Description of nystagmus Nystagmus could be a symptom of an underlying oculomotor disorder, which causes involuntary movements of the eye(s) and the condition may lead to decreased visual acuity (Hertle, 2010; Hussain, 2016). There are two broad types of nystagmus: early-onset nystagmus and acquired nystagmus (Hussain, 2016; McLean, Proudlock, Thomas, Degg, & Gottlob, 2007), where the former condition is developed in the months after birth and the latter is developed later in life (Dunn, 2014). The eye movement pattern, sometimes referred to as a waveform, can be classified into different categories and there are at least 12 different types of nystagmus waveforms according to a classification study (Hussain, 2016; Theodorou & Clement, 2016; Dell'Osso & Daroff, 1975). Different treatments strategies, for instance drug treatment (McLean et al., 2007) and surgery (Kumar, Shetty, Vijayalakshmi, & Hertle, 2011), have been suggested to improve the visual acuity in people with nystagmus. In order to evaluate the different strategies, eye movements before and after the treatment can be studied. Treatment effects are difficult to asses in detail without an objective evaluation of the eye movements, since people with nystagmus are often considered to be hard to diagnose by clinicians (Hussain, 2016). Nystagmus can also be found in visually healthy subjects. Optokinetic nystagmus (OKN) is a reflex found in humans (Naegele & Held, 1982), which causes oscillatory eye movements similar to the oscillations found in some forms of nystagmus such as pure jerk nystagmus. It can easily be elicited by keeping the head still in a moving environment (Naegele & Held, 1982). Calibration of a camera based eye tracker Nystagmus eye movements can be studied in detail with the use of an eye tracker. The video-based eye tracker, referred to as video-oculography (VOG) (Holmqvist et al., 2011), records eye movements using eye images captured by an infrared camera. The data from the VOG system are in this work obtained by finding the pupil center (PC) and the reflection off the cornea caused by an infrared illuminator, called the corneal reflection (CR). The vector between the PC and CR positions is a measure called the pupil-corneal reflection vector (PCRV). This measure is unique for each eye orientation and can therefore be used to estimate the PoR. In order to do this estimation from the PCRV, a relationship between the PCRV data and the corresponding PoR data is needed. The process to identify this relationship is referred to as calibration, which is dependent on the geometry of the experiment as well as the individual eye anatomy of each participant (Holmqvist et al., 2011). The goal of the calibration is to find a mapping function (MF), e.g. a polynomial, which describes the relationship between the PCRV data and the PoR data. By presenting targets at known positions during an experiment, referred to as calibration targets, and simultaneously recording the corresponding PCRV data, it is possible to estimate the mapping function parameters. The number of calibration targets can vary, but common choices are 2, 5, 9, 13 and 16 targets (Holmqvist et al., 2011). The structure of the mapping function needs to be determined before its parameters can be estimated. The selection of the structure is difficult and the choice may significantly affect the resulting PoR estimation. This is illustrated in Fig. 1, where three different polynomial structures are used to estimate the same eye movement. The PoR estimations are not the same, which means that one would have to decide which of these is most likely to represent the actual eye movement. PoR estimation examples. Illustration of three different PoR polynomial estimations (see Barot, McLean, Gottlob, & Proudlock, 2013; McLean et al., 2007; Sheena & Borah, 1981 and Eqs. 5, 8 and 7), of the same recorded PCRV data. The data is obtained from a healthy participant viewing an OKN-stimulus, generating oscillatory eye movements. When comparing the three PoR estimations, it is apparent that \(\mathcal {A}_{4}\) is different from the other two Several calibration polynomials for video-based eye tracking have previously been studied. One study investigated more than 400,000 polynomials and evaluated their performance based on the average error (accuracy), maximum error, standard deviation of the estimated PoR, number of polynomial parameters and head movement tolerance (Cerrolaza, Villanueva, & Cabeza, 2008). Another study tested polynomial structures based on accuracy and the number of calibration targets (Blignaut & Wium, 2013). The two studies were using simulated data or data from participants with no visual impairments. In both Cerrolaza et al., (2008) and Blignaut and Wium (2013), accuracy was used to evaluate the calibration MFs. As is pointed out perfect accuracy, or goodness of fit, can be achieved by using the same model order as the number of calibration targets (Blignaut & Wium, 2013). The calibration polynomial is, however, used also for other gaze positions and should be tested also for these (Blignaut & Wium, 2013). Previous work on nystagmus calibration Different approaches for calibration data selection for nystagmus applications have previously been published. This is an important part of the calibration since the selected calibration data should represent that the participant looked at the displayed calibration target. If the selected calibration data do not represent the "correct" fixation, there is a risk of misrepresenting eye movement data. A method to find saccades in eye movement data based on adaptive acceleration thresholds was presented in Behrens, Mackeben, and Schröder-preikschat (2010). The intent of the method was not calibration of nystagmus data, but it served as the basis for the development of a method designed for the nystagmus case. The nystagmus specific version identified the slowest eye movement velocities, referred to as foveation periods (Dunn, 2014). The method is based on an algorithm for saccade detection in uncalibrated data, which is used to divide the waveform into fast and slow eye movements. The foveations are found in the slow phase of the data. Another approach to find foveations was presented in (Dell'Osso, 2005), where manual annotation to mark the start and end times of the foveations, was used. While there has been some work on how calibration data are selected, literature on the suitability of various polynomials for nystagmus recording purposes is sparse. Many papers concerning nystagmus and eye tracking do not report how calibration was performed and evaluated. In Table 1, nine different studies are summarised. As can be seen from the table, only three of the nine studies (McLean et al., 2007; Dunn, 2014; Barot et al., 2013) report any sort of calibration model structure, although the calibration MF details are not explicitly presented in any of the papers. Only three of the studies (Dunn, 2014; Abel, Wang, & Dell'Osso, 2008; Dell'Osso et al., 2011) report any type of data quality measure or accuracy. In the two first, the reported accuracies are taken from the manufacturer's specification sheet and therefore reveals no information about the accuracy for participants in these particular studies. Summary of nine different studies, their calibration and validation protocols, the calibration methods and the calibration method performance Calibration positions Data selection method Calibration polynomial Reported data quality McLean et al., (2007) (101) SMI Eye Link, 250 Hz 1: 3X3 grid, 0° and ± 20° Horizontal, ± 15° Vertical; 2: 3° steps from − 24° to 24°. Start point (− 24°, − 24°), Stop point (24°, 24°) 1:Information Missing (U); 2:Fixation (U) 1: Information Missing; 2: Fourth Order Poynomial Information missing Tai et al., (2010) (6) EyeLink 1000, 500 Hz 0° and ± 10° Horizontal and Vertical Not explicitly specified (U) Abel et al., (2008) (11) EyeLink II Foveation Periods (U) 0.5°–1.0° (Manufacturer Numbers) Barot et al., (2013) (16) 30° Left to 30° Right in steps of 3° Foveation Periods (A) Best line of fit Dell'Osso et al., (2011) (24) EyeLink II, 500 Hz (Hertle et al., 2011) (19) Ober 2 or EyeLink, 500 Hz or 1000 Hz 1° targets or 3° pictures End of fast phase (U) Taibbi et al., (2008) (28) Thomas et al., (2008) (56) EyeLink 250 Hz Dunn (2014) (1) EyeLink 1000, (include sampling frequency) ± 5° Horizontally, ± 3° and (0°, 0°) Automatic Foveation Algorithm (Dunn, 2014) (A) Regression with cross term. Degree unspecified. Self Validation mean and standard deviation for horizontal and vertical values All studies used an EyeLink system (except (McLean et al., 2007) which used the SMI EyeLink) and were concerned with technical, analytical or clinical applications of eye tracking and nystagmus. The columns represent from left to right: the reference to the study (with number of citations as of December 7, 2017), the eye tracker used in the study, the calibration target positions, the calibration data selection strategy, the calibration polynomial structure (note that no explicit equations are written), the validation protocol and the reported data quality. The calibration data strategy includes manually selection (M), automatic selection (A) or not explicitly stated (U) Calibration polynomials There are various references to calibration polynomials used in nystagmus eye tracking research. Four of these polynomials are evaluated in this paper. As described above, calibration data are used to estimate the polynomial coefficients where the input to the calibration polynomial is PCRV, denoted PC in the equations, data and the output is PoR data. Table 2 summarises the characteristics of the four selected polynomials previously used in the nystagmus eye tracking literature. Summary of the calibration polynomials found in eye tracking and nystagmus related studies Polynomial [P] Eye tracking data vector [uPC] **Barot et al., (2013) \( \mathcal {A}_{1}\) (5) [1 xPCyPC]T Linear mapping (Linear) **Dunn (2014) \(\mathcal {B}\) (6) [1 xPCyPCxPCyPC]T Linear mapping + Rotation (non-linear) * Stampe (1993) \(\mathcal {G}\) (7) \([1 \quad x_{PC} \quad y_{PC} \quad x_{PC}^{2} \quad y_{PC}^{2} \quad x_{PC}y_{PC}]^{T}\) Quadratic mapping + Rotation (non-linear) **McLean et al., (2007) \(\mathcal {A}_{4}\) (8) \([1 \quad x_{PC} \quad x_{PC}^{2} \quad x_{PC}^{3} \quad x_{PC}^{4} \quad y_{PC} \quad y_{PC}^{2} \quad y_{PC}^{3} \quad y_{PC}^{4}]^{T}\) Fourth order (non-linear) *:The polynomial suggested in Stampe (1993) has been slightly changed compared to the original proposal. The corner correction terms m[q] and n[q] used in Stampe (1993) are not estimated for each quadrant but rather for the entire plane. There are not any direct references in nystagmus research to this method in the literature presented in this work. Since it is a common calibration polynomial it was included. **: Polynomials which were not explicitly stated. Instead they have been interpreted from the context The PoR estimation, pPoR = [xPoRyPoR]T, is computed using a polynomial, P, and eye tracker data, uPC, as, $$ \boldsymbol{p}_{PoR} = \boldsymbol{P}\boldsymbol{u}_{PC}. $$ where uPC = [xPCyPC]T. The selected structure of P determines the structure of uPC (see Table 2). The purpose of the calibration is to estimate the coefficients of the polynomial $$ \boldsymbol{P} = \left[\begin{array}{l} \boldsymbol{p}_{\boldsymbol{h}} \\ \boldsymbol{p}_{\boldsymbol{v}} \end{array}\right], $$ where ph and pv are the horizontal and vertical polynomials respectively. The coefficients are estimated using a least squares solution according to $$ \boldsymbol{p}_{\boldsymbol{d}} = (\boldsymbol{U}_{PC}\boldsymbol{U}_{PC}^{T})^{-1}\boldsymbol{U}_{PC}^{T} \boldsymbol{t}_{d}, $$ where d is either the horizontal or the vertical direction, UPC is a matrix containing the calibration data vectors for each calibration target, $$ \boldsymbol{U}_{PC} = \left[\begin{array}{l} \boldsymbol{u}_{PC}(1) \\ {\vdots} \\ \boldsymbol{u}_{PC}(n) \end{array}\right], $$ td is a vector with calibration targets of direction d, and n is the number of calibration targets. The different polynomials evaluated in this work are given in the equations below: $$ \mathcal{A}_{1} = \left[\begin{array}{lll} a_{0, c} & a_{0, x} & 0 \\ a_{1, c} & 0 & a_{1, y} \end{array}\right], $$ $$ \mathcal{B} = \left[\begin{array}{llll} b_{0, c} & b_{0, x} & b_{0, y} &b_{0, xy} \\ b_{1, c} & b_{1, y} & b_{1, y} & b_{1, xy} \end{array}\right]. $$ $$ \mathcal{G} = \left[\begin{array}{llllll} g_{0, c} & g_{0, x} & g_{0, y} & g_{0, x^{2}} & g_{0, y^{2}} & g_{0, xy}\\ g_{1, c} & g_{1, x} & g_{1, y} & g_{1, x^{2}} & g_{1, y^{2}} & g_{1, xy} \end{array}\right], $$ $$ \mathcal{A}_{4} = \left[\begin{array}{ll} a_{0, c} & a_{1, c} \\ a_{0, x} & 0\\ a_{0, x^{2}} & 0 \\ a_{0, x^{3}} & 0 \\ a_{0, x^{4}} & 0 \\ 0 & a_{1, y} \\ 0 & a_{1, y^{2}}\\ 0 & a_{1, y^{3}} \\ 0 & a_{1, y^{4}} \end{array}\right]^{T}. $$ Aim of this paper The aims of this paper are to propose and evaluate a new calibration MF generating consistent PoR estimations across recording sessions and participants and compare it to other calibration mapping functions previously used in nystagmus research. The main objective is to find an MF which reliably can be used to evaluate the effects of different nystagmus treatments, even when the participant fails to accurately fixate the calibration target. Proposed method In this section a new calibration method is proposed. It is developed for video-based eye trackers using a nine-point calibration and a geometrical setup similar to that of an EyeLink 1000 Plus in desktop mode. The method consists of two parts: First, an outlier correction algorithm aimed at correcting inaccuracies in the recorded calibration data. Second, a linear mapping function based on Procrustes analysis is proposed. The method is based on 5 s of data recorded for each calibration target, as will be presented in more detail in "Calibration method evaluation". The outlier correction algorithm For the recommended setup of the eye-tracker used in this work, the horizontal data typically have the following structure; the horizontal PoR data are dependent only on the horizontal PCRV data, and not on the vertical PCRV data. Thus, horizontal PCRV for a horizontal gaze position is approximately the same, regardless of the vertical gaze position. This characteristic is used to create an algorithm to reduce errors in the calibration dataset. The algorithm is based on nine calibration targets distributed in a 3 × 3 grid where the calibration data for each calibration target are mapped to one coordinate pair. In this case there are 9 two-dimensional coordinates; one for each two-dimensional calibration target. The outlier correction algorithm consists of two stages. Stage I Divide the data into six groups with three adjacent data points in each. Half of the groups share a horizontal calibration target value (see Fig. 2a) and the other half share the vertical calibration target value (see Fig. 2b). Fit a line to the three data points in each of the six groups. Compute the angle between each of the vertically fitted lines and each of the horizontally fitted lines (3 × 3 computations). If the angle deviates more than 25° from the expected 90°, the vertical line is considered to contain an outlier. The value of 25° was chosen empirically. If one or more outliers were found during Stage I, Stage II is initiated. Stage II An outlier is detected by finding the datapoint with the largest horizontal deviation from the vertical line. Corrected coordinates of the outlier are computed as the average of the other data points on each of the intersecting horizontal and vertical lines, i.e., the new horizontal data point value is computed as the average of the corresponding horizontal data points of the vertical line, and the new vertical data point value is computed as the average of the corresponding the vertical data points of the horizontal line. Group Division. The data points divided after the horizontal target values Fig. 2a and vertical target values Fig. 2b. All data points of the same colour and shape belong to the same horizontal group Fig. 2a or same vertical group Fig. 2b An example of calibration data points before and after outlier correction is shown in Fig. 3. Illustration of the outlier correction algorithm where 6 out of 9 calibration data positions have been removed from the data set. The red squares represent the known calibration data positions, the green filled circles represent the known data before the algorithm estimation, and the black circles represent the estimated calibration data positions Procrustes calibration In the calibration process, a set of n (here n = 9) two-dimensional data points (calibration data) are fitted to another set of n two-dimensional data points (calibration targets). Both of these data sets can be viewed as two-dimensional shapes, and the objective of the calibration is to identify the best transformation from the calibration data shape to the calibration target shape. In this work, Procrustes analysis (Gower, 1975) is used to compare and align the two datasets. Three steps are involved in the Procrustes analysis: translation, scaling and rotation. Once they have been estimated, they can be used to compute the gaze positions from PCRV data. The three transformations have been implemented in the following way: Construct the calibration data matrix \(\boldsymbol {D} =\left [\begin {array}{ll}\boldsymbol {x}_{d} & \boldsymbol {y}_{d} \end {array}\right ]^{T} \) as a 2 × n matrix where n is the number of calibration targets, and the calibration target matrix \(\boldsymbol {T} = \left [\begin {array}{ll}\boldsymbol {x}_{t} & \boldsymbol {y}_{t} \end {array}\right ]^{T}\) contains the corresponding calibration targets. Center both the calibration data and calibration target datasets by subtracting their respective horizontal and vertical averages from each data set to create Dμ and Tμ. $$ \boldsymbol{D}_{\mu} = \left[\begin{array}{ll}\boldsymbol{x}_{d} - \bar{x}_{d}\\ \boldsymbol{y}_{d} - \bar{y}_{d} \end{array}\right] = \left[\begin{array}{ll}\boldsymbol{x}_{d, c}\\ \boldsymbol{y}_{d, c} \end{array}\right], $$ $$ \boldsymbol{T}_{\mu} = \left[\begin{array}{ll}\boldsymbol{x}_{t} - \bar{x}_{t}\\ \boldsymbol{y}_{t} - \bar{y}_{t} \end{array}\right] = \left[\begin{array}{ll}\boldsymbol{x}_{t, c}\\ \boldsymbol{y}_{t, c} \end{array}\right], $$ where \(\bar {x}_{d}\) is the average of xd, \(\bar {y}_{d}\) is the average of yd, \(\bar {x}_{t}\) is the average of xt and \(\bar {y}_{t}\) is the average of yt. Compute the norms, ND and NT, using $$ N_{D} = \sqrt{\sum\limits_{i = 1}^{n}x_{d, c}^{2}(i) + \sum\limits_{i = 1}^{n}y_{d, c}^{2}(i)} $$ where xd, c(i) ∈xd, c and yd, c(i) ∈yd, c, $$ N_{T} = \sqrt{\sum\limits_{i = 1}^{n}x_{t, c}^{2}(i) + \sum\limits_{i = 1}^{n}y_{t, c}^{2}(i)} $$ and xt, c(i) ∈xt, c and yt, c(i) ∈yt, c. The datasets are scaled according to: $$ \boldsymbol{D}_{N} = \frac{\boldsymbol{D}_{\mu}}{N_{D}} $$ $$ \boldsymbol{T}_{N} = \frac{\boldsymbol{T}_{\mu}}{N_{T}} $$ The rotation, R, is computed using singular value decomposition (SVD). In general, the SVD decomposes a matrix M into two orthonormal matrices U and V and a diagonal matrix S that contains the singular values σl, l ∈ [1, k]. In Procrustes analysis, \(\boldsymbol {M} =\boldsymbol {D}^{T}_{N}\boldsymbol {T}_{N}\). $$ \boldsymbol{D}^{T}_{N}\boldsymbol{T}_{N} = \boldsymbol{U}\boldsymbol{S}\boldsymbol{V}^{H}, $$ $$ \boldsymbol{R} = \boldsymbol{U}^{H}\boldsymbol{V}. $$ $$ \boldsymbol{S} = diag(\sigma_{1}, \ldots, \sigma_{k}). $$ Once the translation, scaling and rotation parameters have been estimated, the PoR estimation, pPoR, is computed as follows: $$ \boldsymbol{p}_{PoR} = \kappa \boldsymbol{R} \boldsymbol{p}_{PC} - \boldsymbol{L} $$ $$ \kappa = \frac{N_{T}} {N_{D}}\sum\limits_{i = 1}^{k}\sigma_{i}, $$ $$ \boldsymbol{L} = \kappa\left[\begin{array}{l} \bar{x}_{d} \\ \bar{y}_{d} \end{array}\right]\boldsymbol{R} - \left[\begin{array}{l} \bar{x}_{t} \\ \bar{y}_{t} \end{array}\right], $$ $$ \boldsymbol{p}_{PC} = \left[\begin{array}{l} x_{PC} \\ y_{PC} \end{array}\right]. $$ This method is denoted as \(\mathcal {P}\). Calibration method evaluation In this Section the evaluation strategy of the proposed method is presented. The Section consists of three main parts; the recording of nystagmus data ("The nystagmus data experiment (NDE)"), the recording of control data ("The control data experiment (CDE)") and the performance evaluation measures ("Comparing calibration methods"). Binocular, raw pupil and CR data were recorded with an EyeLink 1000 Plus (desktop mode) with a sampling frequency of 1000 Hz using the host software v. 5.09 and the DevKit 1.11.571. The center of mass tracking mode was used. The eye tracker camera was placed in accordance with the recommendations of the manufacturer (SR-Research, 2010). PsychoPy (version 1.83) (Peirce, 2007) was used to present all stimuli. The stimulus was presented on an ASUS VG248QE monitor with a resolution of 1920 × 1080 pixels, with dimensions 53 cm × 30 cm. The participant to monitor distance was 80 cm. A chin and forehead rest was used for all participants. The analysis software was written in Python (version 2.7). The nystagmus data experiment (NDE) The nystagmus data experiment was performed with patients diagnosed with nystagmus. The diagnosis was performed by Björn Hammar (MD), senior consultant at the neuro-ophthalmology unit at Skåne University Hospital in Lund, Sweden. This dataset is denoted NDE data. A total of eight patients with nystagmus were recorded, two of which were recorded twice totalling ten separate recordings. Two of the participants were female and six were male. Out of the ten recordings, four were excluded from the data set; one due to lack of validation data, two due to loss of calibration data (too many blinks during the recording of calibration data) and one due to too small oscillations. For this participant, only the data from one out of the nine calibration targets consisted of oscillations with an amplitude larger than 1° and a frequency higher than 2 Hz. Out of the six remaining recordings, from five different participants, all were diagnosed with infantile nystagmus (M = 35.3 [year], SD = 15.9[year]). Data recording The experiment included calibration and validation data recordings. Both calibration and validation data were recorded monocularly for both eyes by covering one eye and recording the other eye. Nine calibration targets were presented to each patient in a randomised order. The calibration targets were placed in a 3 × 3 grid. The horizontal target positions were 0° and ± 18° and the vertical target positions were 0° and ± 10°. The validation targets were placed in a 2 × 2 grid where the horizontal and vertical validation target positions were (± 5°,± 5°) respectively. The calibration target was a black circle with radius of 0.6° with a red circle of radius 0.15° in the center. The targets were presented on a grey background. The presentation duration of each calibration target and validation target was decided manually. The goal duration for each target was 5 s (M = 5.02 [s], SD = 1.24 [s]). The experiment also included fixation, smooth pursuit, saccade and optokinetic nystagmus tasks which were not included in this work. Calibration data selection The calibration data selection algorithm presented in Dunn (2014) was implemented. Some adjustments were made to the original algorithm: Instead of computing saccade velocity thresholds for the entire calibration data set, the thresholds were computed for each calibration target. The saccade acceleration threshold was not implemented, due to too heavy saccade rejection. The adaptive filter to find foveations was not implemented. Instead, each slow phase longer than 50 ms was considered as a potential foveation. The first 50 ms directly after the onset of the slow phase were considered to be the most likely foveation candidate. The observed waveforms in the NDE database are illustrated in Fig. 4. Various waveform recordings from the NDE data set for different participants. These are representative of the waveforms found in the dataset from the six participants. The length of blue scale bar at the right side of each signal is 4°. The calibration was preformed using the Procrustes calibration method The control data experiment (CDE) A total of eight participants were included in the dataset, one female and seven male (M = 37.0 [year], SD = 7.7 [year]). This data set is denoted CDE data and was divided into two subsets, see "Two CDE subsets". Data from one participant was excluded due to data loss (too many blinks during the recording of calibration data). The calibration protocol consisted of 81 calibration targets using a standard 3 × 3 grid with nine possible positions for each calibration target, one reference position and eight offset positions. The participants were recorded binocularly. The distribution of the targets is shown in Fig. 5. The vertical and horizontal offset amplitudes were ± 0.5° and ± 2.0°. Each target was shown for 1.5 s. The positions of the calibration targets were evenly distributed between − 10° and 10° in the horizontal direction and − 5° and 5° in the vertical direction, not counting the offsets. The calibration target was a white circle with radius of 0.6° with a black circle of radius 0.15° in the center. The targets were presented on a grey background. The calibration targets were presented in a randomised order and the offset magnitude at each calibration target was also randomised. Since no nystagmus is present in the CDE data a different method for calibration data selection was needed, see "Calibration data selection". The 81 calibration targets used for the CDE calibration are shown. The blue diamonds represent calibration targets and the red triangles represents targets simulating fixation inaccuracies. The targets were presented in random order. Each target was shown for 1.5 seconds Two OKN tasks were recorded for this dataset. A black and white striped pattern, see Fig. 6, was used to elicit OKN. In the first OKN task, the pattern was moving horizontally. In the second task, the pattern was rotated 90° and moved vertically. In both cases, the temporal frequency of the pattern was 8 cycles / second (in the moving direction), the spatial frequency was 0.5 cycles / degree and the duration was 15 s. Before the pattern was set in motion the participants were asked to look in the center of the screen and keep looking there as long as the pattern was moving. The experiment also included fixation, smooth pursuit and saccade tasks, which were not included in this work. The OKN datasets were used to compute the calibration plane distortion and the waveform robustness, described in "Calibration plane distortion & waveform robustness". Illustration of the black and white striped pattern used to elicit nystagmus eye movement in healthy participants. The participant was asked to fixate in the center of the stripped screen when the pattern was in motion. The motion of the stimulus in the configuration illustrated above was horizontal. If the pattern is rotated 90° the stimuli moves vertically Two CDE subsets The CDE data datasets were divided into two subsets: one which contains only calibration targets with no offset, CDE - NO, and one which contains calibration targets with a random offset for each calibration target, CDE - O. The notations NO and O represent datasets with no introduced offsets and with introduced offsets, respectively. While the CDE - NO data correspond to data from participants without any visual impairment, the CDE - O data simulate potential fixation inaccuracies caused by the nystagmus oscillations for different angles during the calibration. The CDE - O dataset was created by repeating the calibration data selection process 50 times, each time assigning a horizontal random error (including 0°) to each calibration target. Each repetition was independent of previous repetitions. The rationale for calibration data selection at each calibration target, is that the PCRV segment with the least variance best represents a fixation. The calibration data selection method is described below: First, in order to avoid influence of the time it takes to change positions after a new calibration target has appeared, the first 500 ms of the recorded data for each calibration target are removed. Second, the 200 ms window with the smallest variance of the following PCRV data are computed. The total variance, \(s_{tot}^{2}\), is computed according to Eq. 22, where \({s_{x}^{2}}\) and \({s_{y}^{2}}\) are the horizontal and vertical variance respectively. $$ s_{tot}^{2} = {s_{x}^{2}} + {s_{y}^{2}} $$ Finally, the horizontal and vertical calibration data position estimates are computed as the averages of the 200 ms window found in step 2. Comparing calibration methods In this work, three different measures are used to compare the characteristics of the different mapping functions. These are accuracy, α, calibration plane distortion, μd, and waveform robustness, ξ. Accuracy is tested on a limited number of validation targets, which in this work is equal to four targets per participant. The calibration plane distortion is the distance between two PoR estimations from the same MF. Finally, the waveform robustness is computed as the difference between two PoR estimations after adjusting for the linear properties translation, rotation and scaling between the two PoR estimations. The accuracy for validation target point k, αk, is computed according to Eq. 23 where xPoR(k) and yPoR(k) are the mapping function estimates of the horizontal and vertical validation target positions, respectively, and xs(k) and ys(k) are their corresponding known validation target positions. The accuracy computation in Eq. 23 results in one single value for each validation target. A small accuracy value means good performance, while a large value means poor performance. $$ \alpha_{k} = \sqrt{({x}_{PoR}(k) - x_{s}(k))^{2} + ({y}_{PoR}(k) - y_{s}(k))^{2}} $$ The accuracy is presented in the following way. For each mapping function, the average accuracy of each eye of all validation data for one dataset is computed. This means for example that the CDE O dataset contain: 7 participants × 4 validation targets × 50 iterations = 1400 accuracy samples. The accuracy is calculated separately for all three datasets. In order to evaluate the performance of the outlier correction algorithm (see "The outlier correction algorithm"), the accuracy results for the NDE dataset without the outlier correction algorithm are also calculated. Calibration plane distortion & waveform robustness The calibration plane distortion computations were implemented in the following way. If \(\boldsymbol {P}_{PoR1} = \left [\begin {array}{ll} \boldsymbol {v}_{x} & \boldsymbol {v}_{y} \end {array}\right ]^{T}\) and \(\boldsymbol {P}_{PoR2} =\left [\begin {array}{ll} \boldsymbol {w}_{x} & \boldsymbol {w}_{y} \end {array}\right ]^{T}\) are two matrices of dimension L × 2 containing gaze estimations, the calibration plane distortion, μ, is defined as: $$ \mu(\boldsymbol{P}_{PoR1}, \boldsymbol{P}_{PoR2}) = \frac{1}{L}\sum\limits_{l = 1}^{L}\sqrt{{\delta^{2}_{x}}(i) + {\delta^{2}_{y}}(i)}. $$ $$ \delta_{x}(i) = v_{x}(i) - w_{x}(i); v_{x} (i) \in \boldsymbol{v}_{x}, w_{x}(i) \in \boldsymbol{w}_{x}, $$ $$ \delta_{y}(i) = v_{y}(i) - w_{y}(i); v_{y}(i) \in \boldsymbol{v}_{y}, w_{y}(i) \in \boldsymbol{w}_{y}. $$ The calibration plane distortion is used to compute how close two PoR estimations are in absolute terms, i.e., the distance on the stimuli screen. Even though it includes changes in the waveform, it is likely that translation effects are a dominant part of the calibration plane distortion value. In order to study the effects on the waveform itself, the waveform robustness measure was computed using the Procrustes Distance, DP. It is defined as: $$ D_{P}(\boldsymbol{P}_{PoR1}, \boldsymbol{P}_{PoR2}) = 1 - \left( \sum\limits_{l = 1}^{k}\sigma_{l}\right)^{2} $$ where S = diag(σ1,…, σk), is computed according to Eq. 15 and DP ∈ [0,1]. The PPoR1 and PPoR2 matrices correspond to the D and T matrices described in "Procrustes calibration". If Pf{k}, NO is a gaze estimation from mapping function f{k} from the CDE - NO dataset and Pf{k}, O is a gaze estimation from mapping function f{k} from the CDE - O dataset, where \(f = \{\mathcal {A}_{1}, \mathcal {B}, \mathcal {G}, \mathcal {A}_{4}, \mathcal {P} \}\) and k ∈ [0,4], the calibration plane distortion, μk, and the waveform robustness, ξk, for mapping function k are defined in Eqs. 28 and 29 respectively. $$ \mu_{k} = \mu(\boldsymbol{P}_{f\{k\}, NO}, \boldsymbol{P}_{f\{k\}, O}) $$ $$ \xi_{k} = D_{P}(\boldsymbol{P}_{f\{k\}, NO}, \boldsymbol{P}_{f\{k\}, O}). $$ In order to reduce the influence of blinks and other artefacts in the OKN datasets, the blink removal algorithm used for the calibration data was applied to the OKN data before the computation of both the calibration plane distortion and the waveform robustness. The method is described in full detail in Dunn (2014). The results for calibration plane distortion and waveform robustness are presented as empirical cumulative distribution functions (CDF s), as well as the area under each CDF curve, ACDF. The area computations for the calibration plane distortion were bounded to 1° as this is considered a good calibration accuracy (Hansen and Ji, 2010). The area computation for the waveform robustness was bounded to 0.2 as the results from "Waveform robustness and accuracy examples" showed that Prob(DP > 0.2) ≈ 0.01 for the \(\mathcal {G}\) MF. The ACDF was adjusted such that ACDF ∈ [0,1] by dividing the computed area with the maximum CDF-value for the area computation. Using this definition of the waveform robustness, the ACDF for the Procrustes calibration method will be 1.0 be definition. The accuracies in all datasets are presented in Table 3. When comparing the accuracies for the NDE dataset with and without OA, it can be seen that the OA improves the accuracy at least for one of the eyes for all mapping functions. The most prominent improvements are seen for the mapping functions with a higher degree of freedom, i.e., \(\mathcal {G}\) and \(\mathcal {A}_{4}\). As expected, the \(\mathcal {G}\) MF achieved the best accuracy for the CDE - NO dataset. The average and standard deviation of accuracy for all datasets The format is mean ± standard deviation. The accuracies are divided and color-coded into three categories; Open image in new window : 0°− 0.5°, Open image in new window : 0.51°− 1.5° and Open image in new window : \(1.51^{\circ } - \inf ^{\circ }\). The (OA) indicates that the outlier correction algorithm has been used during the calibration For the NDE data and CDE - O data (both with OA) where calibration data fixation inaccuracies are present, the accuracies for the \(\mathcal {A}_{1}\), \(\mathcal {B}\), \(\mathcal {G}\) and \(\mathcal {P}\) mapping functions are approximately the same while the \(\mathcal {A}_{4}\) yields a considerably worse accuracy. The fact that the accuracies are worse for the NDE database than for the CDE - O database indicates that the true Nystagmus calibration errors are more severe than the simulated ones. If good accuracies are defined as being smaller than or equal to 0.5°, it is difficult to achieve good accuracy with inaccuracies in the calibration data. Calibration plane distortion The calibration plane distortion CDFs are presented in Fig. 7 and the ACDF results are listed in Table 4. The differences between the results for the vertical and horizontal OKN data within each MF are small. The performance of the \(\mathcal {A}_{1}\), \(\mathcal {B}\) and \(\mathcal {P}\) MFs are quite similar. The results for the other two MFs are worse. This is confirmed by Fig. 7. The calibration plane distortion plotted for the five mapping functions as cumulative distribution functions. The calibration plane distortion of the \(\mathcal {A}_{1}\), \(\mathcal {B}\) and the \(\mathcal {P}\) are almost the same. The \(\mathcal {G}\) and \(\mathcal {A}_{4}\) polynomials performs worse compared to the three aforementioned MFs ACDF Results Dataset calibration plane distortion (μ) \(\mathcal {A}_{1}\) \(\mathcal {B}\) \(\mathcal {G}\) \(\mathcal {P}\) Waveform robustness (ξ) The \(\mathcal {A}_{1}\) MF generates the best calibration plane distortion scores whereas the \(\mathcal {P}\) MF generates a perfect waveform robustness score (1.0). The difference in μ-value for the \(\mathcal {A}_{1}\), \(\mathcal {B}\) and \(\mathcal {P}\) MF are quite small. The analysis was conducted on OKN data Waveform robustness The waveform robustness CDFs are presented in Fig. 8 and the corresponding ACDF results are presented in Table 4. The results in Fig. 8 show that the Procrustes calibration method performs the best and the \(\mathcal {A}_{4}\) performs the worst for both the vertical and the horizontal OKN tasks. This is quantified in Table 4. The waveform robustness seems to be linked to the non-linearity of the MF; a higher degree of non-linearity causes worse waveform robustness performance and vice versa. Waveform Robustness CDF. The vertical and horizontal OKN data CDF:s for waveform robustness. The \(\mathcal {P}\) generates the best results and the \(\mathcal {A}_{4}\) generates the worst results Waveform robustness and accuracy examples A few examples illustrating the relationship between accuracy and waveform robustness for the \(\mathcal {G}\) MF are presented in Figs. 9 and 10. As can be seen in Fig. 9, it is possible for an MF to produce small waveform robustness values, DP = 0.05, with a relatively large accuracy value, 2.12°. On the other hand, Fig. 10 illustrates that a Open image in new window accuracy does not guarantee a small waveform robustness value. A DP value larger than 0.2 is high, since only 1 % of the waveforms generates a higher value in the CDE - O dataset. All waveforms estimations were made using the \(\mathcal {G}\) MF. Small DP. Examples of waveforms where the CDE - O estimation is similar to the CDE - NO estimation. The probabilities of the DP-values 0.06 and 0.05 are Prob(DP > 0.06) ≈ 0.62 and Prob(DP > 0.05) ≈ 0.73 for CDE - O dataset, respectively. As illustrated by the plots, it is possible to achieve good waveform robustness even if the accuracy is Open image in new window . The \(\mathcal {G}\) MF was used for all estimations. The analysis was conducted on OKN data Large DP. Examples of data generating Open image in new window accuracy 10a and Open image in new window 10b accuracy, but with poor waveform robustness values. The probabilities of the DP-values of 0.2 and 0.24 are Prob(DP > 0.2) = 0.01 and Prob(DP > 0.24) < 0.01 for CDE - Offset dataset, respectively. The \(\mathcal {G}\) MF was used for all estimations. The analysis was conducted on OKN data In this paper, we investigated the suitability of commonly used calibration mapping functions for data from people with nystagmus and proposed a new approach for calibration of these participants. The new method utilises an outlier correction algorithm based on the experiment geometry and calibrates the eye tracker using Procrustes analysis. Our method was compared to different calibration MFs previously used in nystagmus research. Accuracy and Procrustes distance were used to study the properties of the various MFs. Procrustes distance was used to study waveform robustness, i.e., how well waveform PoR data can be repeated within the same participants despite fixation inaccuracies during the calibration, and calibration plane distortion, i.e., how close, in absolute terms, data with simulated fixation inaccuracies were to data without simulated fixation inaccuracies. Data from people with nystagmus (NDE), visually healthy participants (CDE - NO) and participants with simulated fixation inaccuracies (CDE - Offset) were included in the study. The accuracy data show that there is little difference between the \(\mathcal {A}_{1}\), \(\mathcal {B}\), \(\mathcal {G}\) and \(\mathcal {P}\) MFs for the NDE and CDE - O when using the outlier algorithm. However, when studying the calibration plane distortion presented in Fig. 7 and Table 4 it becomes apparent that the \(\mathcal {G}\) polynomial performs worse compared to the \(\mathcal {A}_{1}\), \(\mathcal {B}\) and \(\mathcal {P}\) MFs. This observation is likely explained by poor performance on interpolated data (the OKN dataset) by the \(\mathcal {G}\) polynomial. The calibration plane distortion thought as an accuracy measure for interpolated data, using the CDE - NO as reference. Finally, the results from the waveform robustness in Fig. 8 show that the \(\mathcal {P}\) MF has the best performance Since the Procrustes calibration method is based on linear operations only, the waveform robustness is 1.0 by default. The performances of the other MFs are ordered by their non-linearity; the more non-linear, the worse performance. The overall results show that it is not beneficial to use non-linear mapping functions when working with difficult to calibrate participants. Therefore, Procrustes analysis is the best choice when repeatable calibrations are desirable. The outlier correction algorithm improved the validation accuracies in all cases. This suggests that there is a potential value in modelling the experiment geometry. Even though our results show that the accuracy alone is not a reliable measure for evaluation of an MF it is still desirable to improve the accuracy as long as it does not affect other properties, such as the waveform. It should be noted that if the distribution of the calibration targets is different from the one presented in this paper, the algorithm needs to be adapted for the specific target constellation. One could try to find the geometric relationship between data and targets for calibration target distributions as well, but that would likely demand a more in-depth analysis of the geometry of the experimental setup. The threshold for detecting an outlier, described in Stage I of the outlier correction algorithm in "The outlier correction algorithm", is an important parameter for the correction performance. This parameter reflects the maximum deviation that is accepted from the theoretical horizontal distribution of the calibration data. As can be seen in Fig. 4, the foveation position varies spontaneously for people with nystagmus. If the threshold value is set too low, there is a risk to affect the structure of the calibration data. On the other hand, if the threshold is set too high, there is a risk to not detect outliers in the data. The reason why accuracy is not considered as a good indicator of calibration performance for people with nystagmus are the following: 1) It is difficult to know if the validation data were recorded when the participant looked at the corresponding validation target. The accuracy analysis does not make sense if the participant did not look at the presented target, since the entire point of the validation is to test how well the mapping function transforms PCRV data to some known position. Since gaze estimation is dependent on the calibration, it is not possible to know if poor validation results originate from the calibration or the validation. 2) Data distortion effects, as shown in Fig. 10a, may occur even if the accuracy is considered to be Open image in new window . This is a problem because one will think that the calibration went well, when in reality gaze data do not correspond to the actual eye movements generated by the participant. However, accuracy is a good measure in the sense that it is a unit (degree) that can be compared between recordings and systems. The distance measure was included to complement the accuracy and it was used to study how the waveform is affected by the calibration. A problem with the calibration plane distortion and waveform robustness measures is that the value may be difficult to interpret. In this paper, we have computed them on the same PCRV dataset for each mapping function, which makes it possible to compare the distance values between the MFs. The results can only be used to find that there are differences in the waveform, not the nature of these differences. For the nystagmus case, more specific differences such as foveation duration, amplitude, frequency and the nystagmus waveform, are of interest but are not possible to find using DP. The CDE - O used in this work is likely not representative of fixation inaccuracies caused by nystagmus, which the results also indicate; the accuracy of CDE - NO is better compared to that of the NDE. The idea of making random errors of fixed magnitude does have its limitations and a continuous distribution may possibly be a more realistic representation of the fixation errors for some participants. Signal (1) in Fig. 4 shows that it is possible for the position after the fast phase to vary as much as 4° between cycles. The fixation errors introduced in the CDE - O database are therefore considered reasonable. The creation of the CDE - NO and CDE - O databases serves a useful purpose in the sense that we have created two identical PCRV datasets, but with different estimates of the mapping functions. This allowed us to study differences between the tested calibration mapping functions. It is not possible to turn off the nystagmus oscillations for the affected patients, causing this analysis to be impossible to carry out for nystagmus data, since there is no reference waveform to compare the estimations with. In this work, we tested the EyeLink 1000 Plus system, which is frequently cited in nystagmus research. The applicability of the proposed method for other eye trackers has not been studied. Finally, the calibration data selection has not been central to the analysis in this paper. It is reasonable to assume that a poor calibration data selection method does have a negative impact on the PoR results, especially considering the results presented in this paper. The adjustments to Dunn's method (Dunn, 2014) may have influenced the results in this paper. But based on the data we recorded, the method adjustments are considered reasonable. An updated version of the method has recently been developed (Dunn et al., 2018). This method may further improve the accuracy of the algorithm. It should also be noted that the calibration data selection implemented in this work is designed for nystagmus with foveation periods or at least waveforms with a distinct fast phase. As can be seen in Fig. 4, there were no pendular waveforms present in this dataset. For pendular nystagmus waveforms, it is, however, still possible to use the method since the outlier correction algorithm estimates missing data. It is, however, necessary to have at least three recorded data points, one in each row and one in each column, in order for the algorithm to work. The Procrustes analysis calibration method was shown to be the best when working with data from participants who have a decreased ability to fixate their gaze during the calibration. The principal difference between the Procrustes calibration method and the other investigated methods was the ability to generate repeatable waveform estimations regardless of the calibration recording condition. The choice of calibration mapping function may have a significant impact on the resulting eye movement estimations, which in turn may decrease the reliability of subsequent data analysis. This work has been funded by the Swedish Research Council [grant number VR 2015-05442]. We gratefully acknowledge the Lund University Humanities Laboratory. We would also like to thank all the participants. Abel, L.A., Wang, Z.I., & Dell'Osso, L.F. (2008). Wavelet analysis in infantile nystagmus syndrome: limitations and abilities. Investigative Ophthalmology & Visual Science, 49(8), 3413–3423.CrossRefGoogle Scholar Barot, N., McLean, R.J., Gottlob, I., & Proudlock, F.A. (2013). Reading performance in infantile nystagmus. Ophthalmology, 120(6), 1232–1238.CrossRefPubMedGoogle Scholar Behrens, F., Mackeben, M., & Schröder-preikschat, W. (2010). An improved algorithm for automatic detection of saccades in eye movement data and for calculating saccade parameters. Behavior Research Methods, 42 (3), 701–708.CrossRefPubMedGoogle Scholar Blignaut, P., & Wium, D. (2013). The effect of mapping function on the accuracy of a video-based eye tracker. In Proceedings of the 2013 conference on eye tracking South Africa (pp. 39–46): ACM.Google Scholar Cerrolaza, J.J., Villanueva, A., & Cabeza, R. (2008). Taxonomic study of polynomial regressions applied to the calibration of video-oculographic systems. In Proceedings of the 2008 symposium on eye tracking research & applications (pp. 259–266): ACM.Google Scholar Dell'Osso, L.F. (2005). Recording and calibrating the eye movements of nystagmus subjects. omlab report 011105, 1–4.Google Scholar Dell'Osso, L.F., & Daroff, R.B. (1975). Congenital nystagmus waveforms and foveation strategy. Documenta Ophthalmologica, 39(1), 155–182.CrossRefPubMedGoogle Scholar Dell'Osso, L.F., Hertle, R.W., Leigh, R.J., Jacobs, J.B., King, S., & Yaniglos, S. (2011). Effects of topical brinzolamide on infantile nystagmus syndrome waveforms: Eyedrops for nystagmus. Journal of Neuro-Ophthalmology, 31(3), 228–233.PubMedGoogle Scholar Dunn, M. (2014). Quantifying perception and oculomotor instability in infantile nystagmus. PhD thesis. Cardiff University.Google Scholar Dunn, M.J., Harris, C.M., Ennis, F.A., Margrain, T.H., Woodhouse, J.M., McIlreavy, L., & Erichsen, J.T. (2018). An automated segmentation approach to calibrating infantile nystagmus waveforms. In press.Google Scholar Gower, J.C. (1975). Generalized procrustes analysis. Psychometrika, 40(1), 33–51.CrossRefGoogle Scholar Hansen, D.W., & Ji, Q. (2010). In the eye of the beholder: A survey of models for eyes and gaze. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(3), 478–500.CrossRefPubMedGoogle Scholar Hertle, R.W. (2010). Nystagmus in infancy and childhood: characteristics and evidence for treatment. American Orthoptic Journal, 60(1), 48–58.CrossRefPubMedGoogle Scholar Hertle, R.W., Yang, D., Adams, K., & Caterino, R. (2011). Surgery for the treatment of vertical head posturing associated with infantile nystagmus syndrome: Results in 24 patients. Clinical & Experimental Ophthalmology, 39(1), 37–46.Google Scholar Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., & Van de Weijer, J. (2011) Eye tracking: A comprehensive guide to methods and measures. Oxford: OUP Oxford.Google Scholar Hussain, N. (2016). Diagnosis, assessment and management of nystagmus in childhood. Paediatrics and Child Health, 26(1), 31–36.CrossRefGoogle Scholar Kumar, A., Shetty, S., Vijayalakshmi, P., & Hertle, R.W. (2011). Improvement in visual acuity following surgery for correction of head posture in infantile nystagmus syndrome. Journal of Pediatric Ophthalmology and Strabismus, 48(6), 341–346.CrossRefPubMedGoogle Scholar McLean, R., Proudlock, F., Thomas, S., Degg, C., & Gottlob, I. (2007). Congenital nystagmus: Randomized, controlled, double-masked trial of memantine/gabapentin. Annals of Neurology, 61(2), 130–138.CrossRefPubMedGoogle Scholar Naegele, J.R., & Held, R. (1982). The postnatal development of monocular optokinetic nystagmus in infants. Vision Research, 22(3), 341–346.CrossRefPubMedGoogle Scholar Peirce, J.W. (2007). Psychopy psychophysics software in python. Journal of Neuroscience Methods, 162(1-2), 8–13.CrossRefPubMedPubMedCentralGoogle Scholar Sheena, D., & Borah, B. (1981). Compensation for second-order effects to improve eye position measurements. Eye movements: Cognition and visual perception, pp. 257?-268.Google Scholar SR-Research (2010). EyeLink 1000 User Manual.Google Scholar Stampe, D.M. (1993). Reliable calibration methods for video-based pupil-tracking systems. Heuristic Behavior Research Methods, Instruments, & Computers, 25(2), 137–142.CrossRefGoogle Scholar Tai, Z., Hertle, R.W., Bilonick, R.A., & Yang, D. (2010). A new algorithm for automated nystagmus acuity function analysis. British Journal of Ophthalmology, pp. bjo–2010.Google Scholar Taibbi, G., Wang, Z.I., & Dell'Osso, L.F. (2008). Infantile nystagmus syndrome: broadening the high-foveation-quality field with contact lenses. Clinical Ophthalmology, 2(3), 585–589.PubMedGoogle Scholar Theodorou, M., & Clement, R. (2016). Classification of infantile nystagmus waveforms. Vision Research, 123, 20–25.CrossRefPubMedGoogle Scholar Thomas, S., Proudlock, F.A., Sarvananthan, N., Roberts, E.O., Awan, M., McLean, R., ..., et al. (2008). Phenotypical characteristics of idiopathic infantile nystagmus with and without mutations in frmd7. Brain: A Journal of Neurology, 131(5), 1259–1267.CrossRefGoogle Scholar OpenAccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 1.Department of Biomedical EngineeringLund UniversityLundSweden 2.Humanities LaboratoryLund UniversityLundSweden 3.Department of OphtalmologyLund UniversityLundSweden Rosengren, W., Nyström, M., Hammar, B. et al. Behav Res (2019). https://doi.org/10.3758/s13428-019-01199-0 The Psychonomic Society
CommonCrawl
Characterization for the existence of bounded solutions to elliptic equations Extremal functions for an embedding from some anisotropic space, involving the "one Laplacian" Françoise Demengel , and Thomas Dumas UMR 8088, University of Cergy Pontoise, 2 avenue Adolphe Chauvain, Cergy, France * Corresponding author: Françoise Demengel Received April 2018 Revised July 2018 Published November 2018 In this paper, we prove the existence of extremal functions for the best constant of embedding from anisotropic space, allowing some of the Sobolev exponents to be equal to $1$. We prove also that the extremal functions satisfy a partial differential equation involving the $1$ Laplacian. Keywords: $\vec p$ Laplacian, best constants, concentration compactness, $1$Laplacian, anisotropic spaces. Mathematics Subject Classification: Primary: 35L30; Secondary: 35J75, 35J60. Citation: Françoise Demengel, Thomas Dumas. Extremal functions for an embedding from some anisotropic space, involving the "one Laplacian". Discrete & Continuous Dynamical Systems - A, 2019, 39 (2) : 1135-1155. doi: 10.3934/dcds.2019048 A. Alvino, V. Ferone, G. Trombetti and P.-L. Lions, Convex symmetrization and applications, Ann. Inst. H. Poincaré, Anal. Non Linéaire, 14 (1997), 275-293. doi: 10.1016/S0294-1449(97)80147-3. Google Scholar G. Anzellotti, Pairings between measures and bounded functions and compensated compactness, Ann. Mat. Pura Appl., 135 (1983), 293-318. doi: 10.1007/BF01781073. Google Scholar T. Aubin, Problèmes isopérimétriques et espaces de Sobolev, J. Differential Geometry, 11 (1976), 573-598. Google Scholar L. Boccardo, P. Marcellini and C. Sbordone, $L^∞$ -regularity for variational problems with sharp nonstandard growth conditions, Boll. Un. Mat. Ital. A, 4 (1990), 219-225. Google Scholar H. Brezis and E. Lieb, A relation between pointwise convergence of functions and convergence of functionals, Proc. Amer. Math. Soc., 88 (1983), 486-490. doi: 10.2307/2044999. Google Scholar D. Cordero-Erausquin, B. Nazaret and C. Villani, A mass-transportation approach to sharp Sobolev and Gagliardo-Nirenberg inequalities, Advances in Mathematics, 182 (2004), 307-332. doi: 10.1016/S0001-8708(03)00080-X. Google Scholar G. Cupini, P. Marcellini and E. Mascolo, Regularity under sharp anisotropic general growth conditions, Discrete and Continuous Dynamical Systems, 11 (2009), 67-86. doi: 10.3934/dcdsb.2009.11.67. Google Scholar F. Demengel, On some nonlinear partial differential equations involving the "1"-Laplacian and critical Sobolev exponent, ESAIM: Control, Optimisation and Calculus of Variations, 4 (1999), 667-686. doi: 10.1051/cocv:1999126. Google Scholar F. Demengel, Some Existence's results for non coercive "$1$-Laplacian" operator, Asymptotic Analysis, 43 (2005), 287-322. Google Scholar F. Demengel, Functions locally $1$-harmonic, Applicable Analysis, 83 (2004), 865-896. doi: 10.1080/00036810310001621369. Google Scholar F. Demengel and R. Temam, Functions of a measure and its applications, Indiana Math. Journal, 33 (1984), 673-709. doi: 10.1512/iumj.1984.33.33036. Google Scholar J. Dieudonné, Eléments D'analyse, 2, Gauthiers Villars 1968. Google Scholar T. Dumas, Existence de Solutions Pour des Équations Apparentées au 1 Laplacien Anisotrope, Thèse d'Université, Université de Cergy, 2018.Google Scholar L. Esposito, F. Leonetti and G. Mingione, Higher integrability for minimizers of integral functionals with $(p, q)$ growth, J. Differential Equations, 157 (1999), 414-438. doi: 10.1006/jdeq.1998.3614. Google Scholar L. Esposito, F. Leonetti and G. Mingione, Sharp regularity for functionals with $(p, q)$ growth, J. Differential Equations, 204 (2004), 5-55. doi: 10.1016/j.jde.2003.11.007. Google Scholar I. Fragala, F. Gazzola and B. Kawohl, Existence and nonexistence results for anisotropic quasilinear elliptic equation, Ann. I. H. Poincaré Anal., 21 (2004), 715-734. doi: 10.1016/j.anihpc.2003.12.001. Google Scholar N. Fusco and C. Sbordone, Local boundedness of minimizers in a limit case, Manuscripta Math., 69 (1990), 19-25. doi: 10.1007/BF02567909. Google Scholar M. Giaquinta, Growth conditions and regularity, a counterexample, Manuscripta Math., 59 (1987), 245-248. doi: 10.1007/BF01158049. Google Scholar E. Giusti, Minimal Surfaces and Functions of Bounded Variation, Birkhauser, 1984. doi: 10.1007/978-1-4684-9486-0. Google Scholar A. El Hamidi and J. M. Rakotoson, Compactness and quasilinear problems with critical exponents, Differential Integral Equations, 18 (2005), 1201-1220. Google Scholar A. El Hamidi and J. M. Rakotoson, Extremal functions for the anisotropic Sobolev inequalities, Ann. I.H. Poincaré, Analyse non Linéaire, 24 (2007), 741-756. doi: 10.1016/j.anihpc.2006.06.003. Google Scholar S. N. Kruzhkov and I. M. Kolodii, On the theory of embedding of anisotropic Sobolev spaces, Uspekhi Mat. Nauk, 38 (1983), 207-208. Google Scholar S. N. Kruzhkov and A. G. Korolev, On the imbedding theory of anisotropic function spaces, (Russian), Dokl. Akad. Nauk SSSR, 285 (1985), 1054-1057. Google Scholar P. L. Lions, The concentration-compactness principle in the calculus of variations. The locally compact case. Ⅰ., and Ⅱ, Ann. Inst. H. Poincar Anal. Non Lin ire 1, 1 (1985), 223-283. Google Scholar P. L. Lions, The concentration-compactness principle in the calculus of variations. The limit case, part 1, Rev. Mat. Iberoamericana 1, 1 (1985), 145-201. doi: 10.4171/RMI/6. Google Scholar P. L. Lions, The concentration-compactness principle in the calculus of variations. The limit case, part 2, Rev. Mat. Iberoamericana 1, 1 (1985), 45-121. doi: 10.4171/RMI/12. Google Scholar P. Marcellini, Regularity of minimizers of integrals in the calculus of variations with non standard growth conditions, Arch. Rational Mech. Anal., 105 (1989), 267-284. doi: 10.1007/BF00251503. Google Scholar P. Marcellini, Regularity and existence of solutions of elliptic equations with $p- q$ growth conditions, J. Differential Equations, 90 (1991), 1-30. doi: 10.1016/0022-0396(91)90158-6. Google Scholar A. Mercaldo, J. D. Rossi, S. Segura de León and C. Trombetti, Anisotropic p, q-Laplacian equations when p goes to 1, Nonlinear Analysis, 73 (2010), 3546-3560. doi: 10.1016/j.na.2010.07.030. Google Scholar S. M. Nikolskii, On imbedding, continuation and approximation theorems for differentiable functions of several variables, Uspehi Mat. Nauk., 6 (1961), 63-114. Google Scholar G. Strang and R. Temam, Duality and relaxation in the variational problems of plasticity, J. Mécanique, 19 (1980), 493-527. Google Scholar G. Talenti, Best constant in Sobolev inequality, Ann. Mat. Pura Appl., 110 (1976), 353-372. doi: 10.1007/BF02418013. Google Scholar R. Temam, Mathematical Problems in Plasticity, Gauthiers Villars, 1983. Google Scholar M. Troisi, Teoremi di inclusione per spazi di Sobolev non isotropi, Ricerche Mat., 18 (1969), 3-24. Google Scholar J. Vetois, Decay estimates and a vanishing phenomenon for the solutions of critical anisotropic equations, Adv. Math., 284 (2015), 122-158. doi: 10.1016/j.aim.2015.04.029. Google Scholar Pavel Jirásek. On Compactness Conditions for the $p$-Laplacian. Communications on Pure & Applied Analysis, 2016, 15 (3) : 715-726. doi: 10.3934/cpaa.2016.15.715 Carlo Mercuri, Michel Willem. A global compactness result for the p-Laplacian involving critical nonlinearities. Discrete & Continuous Dynamical Systems - A, 2010, 28 (2) : 469-493. doi: 10.3934/dcds.2010.28.469 Vincenzo Ambrosio, Teresa Isernia. Multiplicity and concentration results for some nonlinear Schrödinger equations with the fractional p-Laplacian. Discrete & Continuous Dynamical Systems - A, 2018, 38 (11) : 5835-5881. doi: 10.3934/dcds.2018254 Dimitri Mugnai. Bounce on a p-Laplacian. Communications on Pure & Applied Analysis, 2003, 2 (3) : 371-379. doi: 10.3934/cpaa.2003.2.371 Anna Mercaldo, Julio D. Rossi, Sergio Segura de León, Cristina Trombetti. Behaviour of $p$--Laplacian problems with Neumann boundary conditions when $p$ goes to 1. Communications on Pure & Applied Analysis, 2013, 12 (1) : 253-267. doi: 10.3934/cpaa.2013.12.253 Guowei Dai, Ruyun Ma. Unilateral global bifurcation for $p$-Laplacian with non-$p-$1-linearization nonlinearity. Discrete & Continuous Dynamical Systems - A, 2015, 35 (1) : 99-116. doi: 10.3934/dcds.2015.35.99 Bernd Kawohl, Jiří Horák. On the geometry of the p-Laplacian operator. Discrete & Continuous Dynamical Systems - S, 2017, 10 (4) : 799-813. doi: 10.3934/dcdss.2017040 Claudianor O. Alves, Vincenzo Ambrosio, Teresa Isernia. Existence, multiplicity and concentration for a class of fractional $ p \& q $ Laplacian problems in $ \mathbb{R} ^{N} $. Communications on Pure & Applied Analysis, 2019, 18 (4) : 2009-2045. doi: 10.3934/cpaa.2019091 Ezequiel R. Barbosa, Marcos Montenegro. On the geometric dependence of Riemannian Sobolev best constants. Communications on Pure & Applied Analysis, 2009, 8 (6) : 1759-1777. doi: 10.3934/cpaa.2009.8.1759 Yansheng Zhong, Yongqing Li. On a p-Laplacian eigenvalue problem with supercritical exponent. Communications on Pure & Applied Analysis, 2019, 18 (1) : 227-236. doi: 10.3934/cpaa.2019012 Xianling Fan, Yuanzhang Zhao, Guifang Huang. Existence of solutions for the $p-$Laplacian with crossing nonlinearity. Discrete & Continuous Dynamical Systems - A, 2002, 8 (4) : 1019-1024. doi: 10.3934/dcds.2002.8.1019 Leandro M. Del Pezzo, Julio D. Rossi. Eigenvalues for a nonlocal pseudo $p-$Laplacian. Discrete & Continuous Dynamical Systems - A, 2016, 36 (12) : 6737-6765. doi: 10.3934/dcds.2016093 T. F. Ma, M. L. Pelicer. Attractors for weakly damped beam equations with $p$-Laplacian. Conference Publications, 2013, 2013 (special) : 525-534. doi: 10.3934/proc.2013.2013.525 Genni Fragnelli, Dimitri Mugnai, Nikolaos S. Papageorgiou. Robin problems for the p-Laplacian with gradient dependence. Discrete & Continuous Dynamical Systems - S, 2019, 12 (2) : 287-295. doi: 10.3934/dcdss.2019020 Kerstin Does. An evolution equation involving the normalized $P$-Laplacian. Communications on Pure & Applied Analysis, 2011, 10 (1) : 361-396. doi: 10.3934/cpaa.2011.10.361 Francesca Colasuonno, Benedetta Noris. A p-Laplacian supercritical Neumann problem. Discrete & Continuous Dynamical Systems - A, 2017, 37 (6) : 3025-3057. doi: 10.3934/dcds.2017130 Lorenzo Brasco, Enea Parini, Marco Squassina. Stability of variational eigenvalues for the fractional $p-$Laplacian. Discrete & Continuous Dynamical Systems - A, 2016, 36 (4) : 1813-1845. doi: 10.3934/dcds.2016.36.1813 Wenting Cong, Jian-Guo Liu. A degenerate $p$-Laplacian Keller-Segel model. Kinetic & Related Models, 2016, 9 (4) : 687-714. doi: 10.3934/krm.2016012 Nikolaos S. Papageorgiou, George Smyrlis. Positive solutions for parametric $p$-Laplacian equations. Communications on Pure & Applied Analysis, 2016, 15 (5) : 1545-1570. doi: 10.3934/cpaa.2016002 Friedemann Brock, Leonelo Iturriaga, Justino Sánchez, Pedro Ubilla. Existence of positive solutions for $p$--Laplacian problems with weights. Communications on Pure & Applied Analysis, 2006, 5 (4) : 941-952. doi: 10.3934/cpaa.2006.5.941 Françoise Demengel Thomas Dumas
CommonCrawl
Traffic flow estimation with data from a video surveillance camera Aleksandr Fedorov ORCID: orcid.org/0000-0002-8829-34471, Kseniia Nikolskaia ORCID: orcid.org/0000-0001-8335-20841, Sergey Ivanov ORCID: orcid.org/0000-0002-1008-62021, Vladimir Shepelev1 & Alexey Minbaleev2 This study addresses the problem of traffic flow estimation based on the data from a video surveillance camera. Target problem here is formulated as counting and classifying vehicles by their driving direction. This subject area is in early development, and the focus of this work is only one of the busiest crossroads in city Chelyabinsk, Russia. To solve the posed problem, we employed the state-of-the-art Faster R-CNN two-stage detector together with SORT tracker. A simple regions-based heuristic algorithm was used to classify vehicles movement direction. The baseline performance of the Faster R-CNN was enhanced by several modifications: focal loss, adaptive feature pooling, additional mask branch, and anchors optimization. To train and evaluate detector, we gathered 982 video frames with more than 60,000 objects presented in various conditions. The experimental results show that the proposed system can count vehicles and classify their driving direction during weekday rush hours with mean absolute percentage error that is less than 10%. The dataset presented here might be further used by other researches as a challenging test or additional training data. Urbanization and increased building density of cities are essential features of modern society. Not only does such a way of life bring economic benefits, but it also poses a new set of problems for city authorities. One of these problems is efficient traffic management and analysis. High population density leads to the tremendous number of personal cars, an increased number of freight vehicles for transportation of commodities and goods, tight pedestrian traffic. Transportation tasks can no longer be addressed by sub-optimal heuristics, based on the small amount of the manually gathered statistics. To make efficient decisions, forecast and assess their consequences, authorities require an automated system for analyzing traffic flow throughout the city. Nowadays, many cities have low-cost video surveillance systems, also known as closed-circuit television (CCTV). They exhibit rapid growth nowadays and usually include heterogeneous cameras with various resolution, mounting points, and frame rates [43]. CCTV works 24 h a day, 7 days a week and generates a massive amount of information, called Big Data. Among other applications, this data can serve as a foundation for the automated traffic surveillance system. There are several well-known problems when working with Big Data. For instance, to build a traffic surveillance system, one has to develop efficient algorithms for moving, storing, intelligent processing and analyzing data from surveillance cameras. In this work, we focus on the last two aspects. Despite obvious benefits, there is a limited number of researches that aim to analyze real-world data from the CCTV systems. Some of the representative works are the studies [42] and [24], which utilize low-resolution heterogeneous CCTV data and deep neural networks to count vehicles on the road and estimate traffic density. The examples of using conventional computer vision techniques are the systems developed in works [25, 34]. They address the problems of freight traffic monitoring [34] and illegal traffic behaviour recognition [25], respectively. In comparison, great efforts were made to improve results in more specific tasks. For vehicle detection, the majority of modern works focuses on adapting and improving state-of-the-art object detection frameworks such as Faster R-CNN [28], YOLO [26], and SSD [21]. This includes architectural novelties [10, 12, 15, 32, 36, 44], exploiting temporal information to perform joint detection and tracking [14, 16, 22], improving detection speed [1]. Another example of the well-studied task in intelligent transportation is fine-grained vehicle classification [31, 33]. In this work, we aim to address a gap between recent researches and them real-world applications. As a case study, we focused on one of the busiest crossroads in Chelyabinsk city, Russia. We started from a widely adopted Faster R-CNN model and sequentially improved baseline performance via additional mask branch [9], anchors shape optimization, focal loss [37] and adaptive feature pooling [20]. The proposed model, together with simple and efficient SORT tracker [2], forms a backbone of the modern traffic analysis solution. Our system is capable of counting and analyzing vehicles movement direction with a maximum relative error that is lower than 10%. The results of this research will be used by city authorities to improve the overall throughput of the crossroad. The contribution of the paper can be summarized as follows: We have constructed a new challenging dataset, which includes 982 highly crowded frames with more than 60,000 instances in total. We have proposed and implemented a novel traffic flow estimation system, which is based on the recent advances in vehicle detection and tracking tasks. We thoroughly evaluated our system and provided empirical evidence that the proposed solution has sufficient precision and might be further used as a backbone for other high-level models. The paper has the following structure. A review of relevant studies is presented in "Related works" section. "Methodology" section describes our approach to the traffic flow estimation problem, discusses design choices, and collected dataset. Evaluation protocol and experiment implementation details are presented in the "Experiments" section. In "Results and discussions" section, we provide analysis and summary of the obtained results. Finally, "Conclusions" section concludes the paper and outlines directions for further research. There are a very limited number of researches in the area of analyzing data from video surveillance systems. The most representative works are briefly summarized in the introduction. However, the targets and goals of these studies are different from the scope of this paper. For the lack of alternatives, here we provide an overview of the recent advances in the research topics on which our work is based on, i.e. objects detection and multi-object tracking (MOT) problems. When possible, we additionally highlight advances in specialized works for vehicle detection and tracking. Objects detection As of today, most of the object detectors are based on the convolutional neural networks (CNN) and can be roughly divided into two classes: single-stage detectors and two-stage detectors. The single-stage detectors are generally fast and predict objects bounding boxes together with classes within a single network pass. Classical examples of the single-stage detectors are YOLO [26] and SSD [21]. These architectures work particularly well in cases when target objects occupy a considerable amount of image. An example of such data is the popular UA-DETRAC vehicle detection dataset [39]. Based on this data, Dmitriy Anisimov and Tatiana Khanova [1] have shown that a thoroughly constructed SSD-like detector can run faster than 40 frames per second on the modern CPU while maintaining favourable precision. Another example of good speed-precision trade-off is YOLO v2 architecture [30], which was specialized for vehicle detection via anchors clustering, additional loss normalization, and multi-layer feature fusion strategy. The most representative example of two-stage detectors is the R-CNN family of detectors [7,8,9, 28] that currently occupy leading places in the COCO [19] and Cityscapes [5] benchmark datasets. In comparison to single-stage detectors, two-stage detectors first predict regions and then refine and classify each of them during the second stage. Early R-CNN [8] work utilized a straightforward approach: regions were generated via a selective search algorithm and then fed to the classification CNN. The overall speed of the R-CNN is low due to the selective search compute time and requirement to run heavy classifier per each region. To overcame this limitation, Fast R-CNN was proposed [7]. Instead of running CNN per each region, Ross Girshick fed the whole image to the CNN and pooled regions of interest (ROI) from the last feature map. Replacing selective search in the Faster R-CNN [28] with tiny CNN, called region proposal network, further boosted precision and speed of the detector. A thorough review of the main advantages and drawbacks of single and two-stage detectors is presented in the work [11]. Many vehicle detection works adopted variants of the Faster R-CNN architecture. Li [16] proposed to process multiple adjacent frames to better handle blurring and short-term occlusions. Wang et al. [37] studied the application of the focal loss [18] for vehicle surveillance. They showed that being a relatively simple technique, focal loss provides substantial performance improvements. Hu et al. [10] focused on the improving scale robustness of the Faster R-CNN and suggested context-aware RoI (CARoI) pooling that utilizes deconvolution with bilinear kernels to represent features for small objects accurately. CARoI pooling works on top of the multiple layers and fuses high and low-level semantic information to improve performance. Multi-object tracking The progress in the precision of the object detectors mentioned above made tracking-by-detection paradigm the leading one in a multi-object tracking (MOT) task. In this paradigm, tracking is formulated as a data association (DA) problem where the aim is to merge fuzzy detections across multiple frames into long tracklets. Classical tracking-by-detection methods rely solely on motion clues from the detector and solve the DA problem via optimization techniques. Well known examples are Multiple Hypothesis Tracking (MHT) [13] and Joint Probalistic Data Association Filter (JPDAF) [29]. These methods tackle the association problem on a frame-by-frame basis, yet their combinatorial complexity is exponential in the number of tracked objects, which makes them unsuitable for real-time tracking. On the contrary, recent SORT tracker [2] showed that a simple Hungarian algorithm with Kalman filtering for movement forecasting could achieve real-time processing speed while maintaining favourable performance. Most of the recent improvements in the MOT task involve fusing motion features with appearance one to better distinguish highly occluded objects and reidentify lost instances. The appearance clues usually come from convolutional neural networks [4, 40]. However, Tang et al. [35] showed that hand-crafted features, like the histogram of oriented gradients and colour histograms, might also be used if no training data is provided. From the practical point of view, computing visual features for each tracked object leads to a highly increased computational burden, especially if the number of objects is high. Together with detector processing time, cumulative performance usually is far from being near real-time. It is worthwhile to mention that simultaneous vehicle detection and tracking is an area of active research [14, 22]. The coupling of these tasks might solve the performance problem mentioned above. Detectors already incorporate appearance features, detection precision will also benefit from the temporal image context. This paper aims to develop a system for traffic flow estimation, i.e. for counting and classifying vehicles by their movement directions. To achieve that goal, we divide the problem into three sub-tasks: vehicle detection, vehicle tracking, and vehicle direction estimation. It naturally leads to a modular and easy to test architecture composed of the detection and tracking modules, respectively. In the following sections, we describe each module in detail alongside data collected for training and evaluation. There are more than 40 surveillance cameras in Chelyabinsk, the majority of each is mounted high above the main crossroads to provide a general overview of the traffic situation. In comparison to the low-end CCTV [43] systems, these cameras provide stable 25 frames per second, maintaining 1920 × 1080 resolution. However, the video stream is not perfect due to compression artefacts, blurring, and hardware faults. As shown in Fig. 1, images from these cameras are different from data in the public datasets, like KITTI [6] or UA-DETRAC [39]. The difference comes from many aspects: viewing angle, number of instances on a single frame, occlusions level, high variations in scale. Our data is more challenging. Even a trained human eye is not capable of labelling every instance on the image from the first time. Example of an image from our dataset. Blue, yellow, green, red, violet, pink and black polygons stands for the "car", "tram", "van", "bus", "trolleybus", "truck" and "unlabeled" objects respectively For this work, we focused on a single camera that monitors one of the most problematic crossroads in Chelyabinsk city. It is a deliberate choice to maximize detection precision and obtain a minimum viable product. We annotated with polygons more than 60,000 instances across 982 frames from the selected camera. All annotation relevant tasks were performed in the COCO Annotator tool [3]. In comparison to the existent vehicle detection datasets [6, 39], we did not annotate video sequences, as it would be extremely time-consuming for such crowded scenes. Instead, we focused on covering a wide range of traffic situations, time of day, weather conditions. We also annotated every vehicle that can be distinguished with high confidence, especially in crowded traffic jams, as an approximate number of vehicles in a jam might be useful for real-world tasks in the future. The overall statistic about the gathered data is summarized in the Table 1. Despite the specificity of the proposed dataset, our data might be used in other studies as a supplementary data source or challenging test set. Table 1 Data distribution in the collected dataset To thoroughly validate the proposed system in the traffic flow estimation problem we collected statistics about passing vehicles from 8:00 to 12:00 and from 17:00 to 19:00 for four of the most popular movement directions. We have deliberately focused on morning and evening hours during the weekday when traffic flow is very dense, and tracking vehicles is a hard task even for humans. Detection module The foundation of our detection module is a widely adopted two-stage Faster R-CNN [28] detector. This choice was made for two reasons. First, two-stage detectors are current state-of-the-art in many detection benchmarks [5, 19, 39] and provide a solid start point in any detection task. Second, our future research may include complex multi-task learning. Extending Faster R-CNN architecture for multiple tasks is as simple as adding a new prediction branch. We further improved Faster R-CNN baseline performance with a feature pyramid network (FPN) backbone [17], an additional mask branch [9], anchors shape optimization, focal loss [37] and adaptive feature pooling [20]. Mask branch As was shown in the Mask R-CNN work [9], additional regression of the per-instance masks leads to better precision in the relevant bounding box regression task. Consequently, the first optimization that we applied was a complementary mask branch. This branch runs in parallel with the existing Faster R-CNN branches and aims to regress a binary mask for each region of interest. For simplicity, we followed [9] and approximated precise pixel-wise instance masks via coarse polygon masks from the collected dataset. Anchors shape optimization The next improvement comes from the simple observation that Faster R-CNN anchors parameterization was explicitly designed for the COCO dataset [9], but our target data is different. There is an effective way [27] to choose anchors by K-Means clustering with a distance measure that is based on the intersection over union (IOU). However, this procedure yields a heterogeneous set of anchors, which is not suitable for Faster R-CNN architecture, specifically for the region proposal network (RPN). RPN regresses offsets for anchors which are conventionally defined as a Cartesian product of scale and aspect ratio sets. Each anchor is assigned to the specific FPN level, based on its area. RPN is shared across multiple feature pyramid levels and therefore, for each level RPN predicts offsets for anchors that have the same scale but different aspect ratios. We have found that in the case of using transfer learning any naive strategy of incorporating heterogeneous anchors into the described framework leads to unstable training and bad performance. For our setup, it holds even if separating region proposal network for each pyramid level. To adjust anchors to our data and keep benefits of the pre-trained weights, we employed a grid search. The objective of the search was to find a set of aspects and scales that maximizes IOU between objects in the training set and anchors that are matched to them during training. We followed initial Faster R-CNN design and limited the scope of the search for anchor scales exclusively to sequences of powers of two. Focal loss To address extreme foreground-background class imbalance encountered during the RPN training we used novel focal loss, originally proposed for single-stage detectors [18]. The focal loss (FL) serves as a drop-in replacement for conventional cross entropy and can be defined as: $$\begin{aligned} p_t\,= &\, {} {\left\{ \begin{array}{ll} p &{}\text {if y = 1}\\ 1 - p &{}\text {otherwise,}\end{array}\right. } \\ FL(p_t)= &\, {} -\alpha _t(1-p_t)^\gamma * log(p_t) \end{aligned}$$ Here \(y \in {\pm 1}\) specifies the target class and \(p \in [0, 1]\) is the estimated probability for the class with label \(y = 1\), also known as objectness. The parameter \(\gamma\) is used to down-weight easy negatives, \(\alpha _t \in [0, 1]\) is additional balancing factor and defined as \(\alpha\) for class 1 and \(1 - \alpha\) for class − 1. It was shown in [18] that such formulation naturally emphasizes hard samples and prevents a vast number of easy negatives from dominating the loss. In this paper, we used \(\gamma = 2\) and \(\alpha = 0.25\), because this configuration demonstrated superior results in works [18, 37]. Adaptive feature pooling At last, we further pushed the performance of the model by pooling and fusing ROI features from each FPN level. It allows each proposal to access both rich context information from deep pyramid levels and low-level features with high localization accuracy from the early levels. Here we did not follow CARoI pooling procedure [10], because concatenating already deep features and its further processing is too time-consuming. Instead, we have adopted adaptive feature pooling described in the reference [20] and directly fuse features from multiple levels via the max function. In comparison to the work [20], we did not include any additional layers before max fusion to keep inference performance near real time. Our final detection model is schematically shown on the Fig. 2. Conceptual illustration of our detection network: a backbone with feature pyramid network [17]; b adaptive feature pooling [20]; c box branch; d mask branch. Note, that RPN and channel dimension of the feature maps are omitted for brevity Tracking module The tracking module of the proposed system answers two questions: Where did each vehicle come from, and where did it go? To answer these questions, we first track each detected vehicle across multiple frames and then classify its movement direction. For vehicle tracking, we adopted SORT tracker [2] as it has a good speed-precision trade-off. We did not consider more recent works that fuse motion and appearance clues. Even providing a better performance, computing visual features for our crowded scenes is computationally expensive. SORT provides the required functionality to track objects across multiple frames. On the next step, given the complete or incomplete trajectory of the car, we need to classify its general movement direction. To do so, one might train a shallow classifier similar to the work [38]. However, we found that even a straightforward, heuristic-based approach works sufficiently well for us. Our method includes several steps. First, we divide the crossroad into several non-overlapping regions. Each region defines either an adjacent road or the center of the crossing. Assuming that we already have tracking results for the whole video sequence, we approximate the vehicle position in each frame by the center of the bottom edge of the bounding box. Now, given a complete trajectory of each vehicle in the image coordinates, one can assign each path point to one of the crossroad sections (or void if nothing). It transforms the movement trajectories into a sequence of the visited regions. Despite possible deviations, this sequence should generally be dividable into three major parts: source road, crossing, and target road. A couple of source and target roads uniquely describes the vehicle movement direction. We also need a way to deal with unfinished trajectories. It is a bit trickier, as they are usually the result of the detector fault or object disappearance due to the long-term occlusions. To address this problem, we merge all unfinished trajectories, i.e. one that ends or starts on the crossing center, within a short temporal and spatial window. Thus, we first merge all incomplete trajectories and discard sequences that do not satisfy the assumption above. Then we split resulting sequences into two parts: before and after the crossroad. The most common source and target regions form a movement direction label. All trajectories that are either not logically justified or include void as a source or target region are discarded. The developed system was evaluated in two ways. First, we trained and tested the detection network alongside all proposed improvements on our new dataset. Second, we assessed the overall performance in the traffic flow estimation problem. To do this, we used the system to process surveillance data at peak hours. The results of the number of passing vehicles for the 4 busy driving directions were compared to the ground true statistics gathered by people. All experiments was run on Ubuntu 18.04 with Nvidia RTX 2080 Ti GPU, 12 CPU cores (Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz) and 32 GB RAM. We built our solution on top the of the official Facebook Detectron realization [23]. Data processing details We trained and validated the detection model solely on the collected dataset. Specifically, we used 786 images for training and 196 for validation, e.g. 80/20 split. Training and validation images were preprocessed to limit the influence of the irrelevant regions. We masked each image in a way to leave only the roadway visible, i.e. parking areas that are adjacent to the road, uncategorized objects and distant sections of the road were cropped. Due to the limited GPU memory, we also rescaled each image to the 1280 × 512 resolution. To improve robustness we augmented training frames by flipping, contrast and brightness adjustment, simulated occlusion and camera-specific artefacts. Occlusions were generated by either simple masking part of the sufficiently large vehicles (i.e. objects that occupy at least 40 pixels) or its extreme blurring. To reproduce camera faults, we blurred tall random horizontal sections near the image center. Detection network details We used ResNet-50 model [43] as a backbone together with the default FPN configuration from the Detection [23], as it provides a good trade-off between speed and detection quality. Stochastic gradient descent with momentum was used to train the network. The initial learning rate was set to \(1e-3\) and reduced by a factor of 10 after 20k and 27k iterations. In total, training was terminated after 30k iterations. We fixed a batch size equal to 4 for all experiments. To speed up convergence, we used weights pre-trained on the MS COCO dataset [19]. Before training the whole network, we finetuned last layers of the RPN and classification/mask branches for 1k iterations with learning rate linearly decaying from \(25e-3\) to \(1e-3\). We refer readers to the source code for more specific implementation details. Evaluation protocol To thoroughly validate performance of the detection model and impact of each proposed improvement, we report mean average precision (AP), mean average precision at IOU 0.5 (\(AP_{0.5}\)) and mean average precision at IOU 0.75 (\(AP_{0.75}\)) as they are used in several major detection benchmarks [5, 19, 39]. For completeness, besides AP for bounding boxes (\(AP^{bb}\)) we also report average precision for instance segmentation (\(AP^{segm}\)). For the task of the traffic flow estimation, we report mean absolute error and mean absolute percentage error. We used statistics gathered by humans as a ground true data. Results and discussions Detection performance The Table 2 shows the results of the baseline Faster R-CNN model alongside the influence of each proposed improvement. It is clear that all used techniques provide a solid advance relative to the baseline. However, obtained results are worse by approximately 10 to 20 AP points than results of the similar top-ranked models on a popular UA-DETRAC and KITTI datasets [6, 39]. It supports the conclusions that our target data is more challenging than well-studied counterparts. Table 2 Influence of the proposed architectural decisions on the detection network efficiency We expected more gain from the anchors optimization as Fig. 3 clearly states that anchors chosen by grid search procedure provide better coverage of the ground true boxes. The reasons might lie in the fact that we used pre-trained weights, i.e. our network is initially biased towards objects whose shape is different from the target one. Distribution of the IOU between ground true bounding boxes and matched anchors: a before anchors optimization, b after optimization Our network runs at 15 frames per second and can process video sequences in real-time by operating on each second frame. Figure 4 shows some visual results of the model on the validation dataset. One might further improve the quality by replacing ResNet-50 backbone with massive ResNeXt-101 model [41] or by constructing a heavier feature pyramid network. However, it would make the network completely inappropriate for near real-time video processing. Results of the proposed detection network. Driving directions are shown in the upper right corner. Better viewed with zoom Evaluation results for the traffic flow estimation task are presented in Fig. 5. Table 3 shows mean absolute and mean absolute percentage error compared to the human results. Given the overall performance, we can conclude that the developed system, although showing promising results, is still behind the human level. However, our solution already has applied value. For instance, it can and will be used to optimize traffic light timings or detect anomalies in crossroad throughput. Bar plots for vehicles that were counted by humans and automatically by our system. Each bin includes statistic for a single hour, four separate driving directions are reported Table 3 Mean absolute error and mean absolute percentage error per hour for the traffic flow estimation task Additional investigation of the typical errors suggests that most of them are the result of strong and long-lasting occlusions between vehicles in a dense traffic stream. For example, during the movement, trams partially overlay from one to two lanes for the west-east movement direction. Many cars overlap while turning, waiting in the center of the intersection for a free window. This problem can be addressed by improving the tracking module with special techniques for instances re-identification based on appearance clues. However, as mentioned earlier, existing approaches have a high computational burden and are not applicable to real systems. Designing efficient algorithms for vehicle re-identification remains an open question. In this work, we focused on the problem of traffic flow estimation with data from a video surveillance camera. As a case study, we considered one of the busiest crossroads in Chelyabinsk, Russia. Being practically important, this subject area is still in early development. Up to date, there are just a few works that aim to automatically process surveillance video data for traffic analysis. To address the posed problem, we used SORT tracker and enhanced state-of-the-art Faster R-CNN detector. The proposed system can count and classify vehicles by driving directions with a mean percentage error that is less than 10%. Achieved performance indicates that there is a room for improvement, yet it is already enough to solve some real-world problems. Our solution will be used further by city authorities to optimize crossroad throughput. The horizontal scaling would make possible even more fruitful optimization of the traffic flows throughout the city. To the best of our knowledge, there are no publicly available datasets or benchmarks that cover conditions similar to ours. To train and evaluate the detector, we gathered 982 video frames with more than 60,000 labelled vehicles in total. Collected data covers only a single crossroad and is not applicable as a general purpose benchmark. However, other studies might use it as an additional training data or challenging test set. Our data is diverse in terms of the presented weather conditions, traffic situation, vehicle shapes, scales, and classes. Despite achieved results, there are still many problems to face. Here we did not cover problems of vehicle re-identification and fine-grained classification, possible optimizations of convolutional neural networks for efficient and cheap inference. Separately, there are tasks related to the efficient surveillance data movement and storage. We consider these problems as possible directions for future researches. All code is available on-line, https://github.com/alnfedorov/traffic-analysis. Data access will be provided upon a request. CCTV: closed-circuit television MOT: regions of interest CARoI: context-aware RoI data association MHT: Multiple Hypothesis Tracking JPDAF: Joint Probalistic Data Association Filter FPN: feature pyramid network IOU: intersection over union RPN: region proposal network FL: mean average precision Anisimov D, Khanova T. Towards lightweight convolutional neural networks for object detection. In: 2017 14th IEEE international conference on advanced video and signal based surveillance (AVSS). 2017, p. 1–8. https://doi.org/10.1109/AVSS.2017.8078500, http://ieeexplore.ieee.org/document/8078500/, arXiv:1707.01395. Bewley A, Ge Z, Ott L, Ramos F, Upcroft B. Simple online and realtime tracking. In: 2016 IEEE international conference on image processing (ICIP). vol. 2016-Aug, p. 3464–8. IEEE. 2016. https://doi.org/10.1109/ICIP.2016.7533003, http://ieeexplore.ieee.org/document/7533003/. Brooks J. COCO annotator. 2019, https://github.com/jsbroks/coco-annotator/. Chu P, Ling H. FAMNet: Joint learning of feature, affinity and multi-dimensional assignment for online multiple object tracking. 2019. arXiv:1811.07258 arXiv:1904.04989. Cordts M, Omran M, Ramos S, Rehfeld T, Enzweiler M, Benenson R, Franke U, Roth S, Schiele B. The cityscapes dataset for semantic urban scene understanding. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR). 2016, p. 3213–23. IEEE. https://doi.org/10.1109/CVPR.2016.350, arXiv:1604.01685, http://ieeexplore.ieee.org/document/7780719/. Geiger A, Lenz P, Stiller C, Urtasun R. Vision meets robotics: the KITTI dataset. Int J Rob Res. 2013;32(11):1231–7. https://doi.org/10.1177/0278364913491297. Girshick R. Fast R-CNN. In: Proceedings of the IEEE international conference on computer vision 2015 Inter, 2015, p. 1440–8. https://doi.org/10.1109/ICCV.2015.169. Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accu- rate object detection and semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2012, p. 580–7, http://arxiv.org/pdf/1311.2524v3.pdf. He K, Gkioxari G, Dollar P, Girshick R. Mask R-CNN. In: 2017 IEEE international conference on computer vision (ICCV). vol. 2017 Oct, 2017, p. 2980–8. IEEE. https://doi.org/10.1109/ICCV.2017.322, http://ieeexplore.ieee.org/document/8237584/. Hu X, Xu X, Xiao Y, Chen H, He S, Qin J, Heng PA. SINet: A scale-insensitive convolutional neural network for fast vehicle detection. In: IEEE transactions on intelligent transportation systems. vol. 20, no. 3. 2019, p. 1010–9. https://doi.org/10.1109/TITS.2018.2838132, https://ieeexplore.ieee.org/document/8478157/. Huang J, Rathod V, Sun C, Zhu M, Korattikara A, Fathi A, Fischer I, Wojna Z, Song Y, Guadarrama S, Murphy K. Speed/accuracy trade-offs for modern convolutional object detectors. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR). 2017, p. 3296–7. IEEE. https://doi.org/10.1109/CVPR.2017.351, arXiv:1611.10012 http://ieeexplore.ieee.org/document/8099834/. Jung H, Choi MK, Jung J, Lee JH, Kwon S, Jung WY. ResNet-based vehicle classification and localization in traffic surveillance systems. In: 2017 IEEE conference on computer vision and pattern recognition workshops (CVPRW). 2017, p. 934–40. IEEE. https://doi.org/10.1109/CVPRW.2017.129, http://ieeexplore.ieee.org/document/8014863/. Kim C, Li F, Ciptadi A, Rehg JM. Multiple hypothesis tracking revisited. In: 2015 IEEE international conference on computer vision (ICCV). vol. 22. 2015, p. 4696–704. IEEE. https://doi.org/10.1109/ICCV.2015.533, http://ieeexplore.ieee.org/document/7410890/. Li C, Dobler G, Feng X, Wang Y. TrackNet: simultaneous object detection and tracking and its application in traffic video analysis. 2019, p. 1–10, arXiv:1902.01466. Li S, Lin J, Li G, Bai T, Wang H, Pang Y. Vehicle type detection based on deep learning in traffic scene. Procedia Comput Sci. 2018;131:564–72. https://doi.org/10.1016/j.procs.2018.04.281. Li S. 3D-DETNet: a single stage video-based vehicle detector. 2018, arXiv:1801.01769. Lin TY, Dollar P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR). vol. 2017 Jan, 2017, p. 936–44. IEEE. https://doi.org/10.1109/CVPR.2017.106, http://ieeexplore.ieee.org/document/8099589/. Lin TY, Goyal P, Girshick R, He K, Dollar P. Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision. 2017 Oct, p. 2999–3007. https://doi.org/10.1109/ICCV.2017.324. Lin TY, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P, Zitnick CL. Microsoft COCO: common objects in context. In: Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics). vol. 8693 LNCS, 2014, p. 740–55. https://doi.org/10.1007/978-3-319-10602-1_48. Liu S, Qi L, Qin H, Shi J, Jia J. Path aggregation network for instance segmentation. In: 2018 IEEE/CVF conference on computer vision and pattern recognition. 2018, p. 8759–68. IEEE. https://doi.org/10.1109/CVPR.2018.00913, https://ieeexplore.ieee.org/document/8579011/. Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, Berg AC. SSD: single shot multibox detector. In: Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics) 9905 LNCS. 2016, p. 21–37. https://doi.org/10.1007/978-3-319-46448-0_2. Luo W, Yang B, Urtasun R. Fast and furious: real time end-to-end 3D detection, tracking and motion forecasting with a single convolutional net. In: 2018 IEEE/CVF conference on computer vision and pattern recognition. 2018, p. 3569–77. IEEE. https://doi.org/10.1109/CVPR.2018.00376, https://www.scopus.com/inward/record.uri?eid=2-s2.0-84956766303&partnerID=40&md5=123d54008fbf65570432b00879599f56 https://ieeexplore.ieee.org/document/8578474/. Massa F, Girshick R. maskrcnn-benchmark: Fast, modular reference implementation of instance segmentation and object detection algorithms in PyTorch; 2018. https://github.com/facebookresearch/maskrcnn-benchmark. Peppa MV, Bell D, Komar T, Xiao W. Urban traffic flow analysis based on deep learning car detection from cctv image series. Int Arch Photogramm Remote Sens Spat Inf Sci. 2018;42(4):565–72. https://doi.org/10.5194/isprs-archives-XLII-4-499-2018. Rathore MM, Son H, Ahmad A, Paul A. Real-time video processing for traffic control in smart city using Hadoop ecosystem with GPUs. Soft Comput. 2018;22(5):1533–44. https://doi.org/10.1007/s00500-017-2942-7. Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: unified, real-time object detection. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR). 2016, p. 779–88. IEEE. https://doi.org/10.1109/CVPR.2016.91, arXiv:1506.02640 http://ieeexplore.ieee.org/document/7780460/. Redmon J, Farhadi A. YOLO9000: Better, faster, stronger. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR). 2017, p. 6517–25. IEEE. https://doi.org/10.1109/CVPR.2017.690, arXiv:1612.08242 http://ieeexplore.ieee.org/document/8100173/. Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell. 2017;39(6):1137–49. https://doi.org/10.1109/TPAMI.2016.2577031. Rezatofighi SH, Milan A, Zhang Z, Shi Q, Dick A, Reid I. Joint probabilistic data association revisited. In: 2015 IEEE international conference on computer vision (ICCV). 2015, p. 3047–55. No. December, IEEE. https://doi.org/10.1109/ICCV.2015.349, http://ieeexplore.ieee.org/document/7410706/. Sang J, Wu Z, Guo P, Hu H, Xiang H, Zhang Q, Cai B. An improved YOLOv2 for vehicle detection. Sensors. 2018;18(12):4272. https://doi.org/10.3390/s18124272. Sochor J, Špaňhel J, Herout A. BoxCars: improving vehicle fine-grained recognition using 3D bounding boxes in traffic surveillance. 2017, p. 1–12. arXiv:1703.00686. Sommer L, Acatay O, Schumann A, Beyerer J. Ensemble of two-stage regression based detectors for accurate vehicle detection in traffic surveillance data. 2019, p. 1–6. https://doi.org/10.1109/avss.2018.8639149. Spanhel J, Sochor J, Makarov A. Vehicle fine-grained recognition based on convolutional neural networks for real-world applications. In: 2018 14th symposium on neural networks and applications (NEUREL). 2018, p. 1–5. IEEE. https://doi.org/10.1109/NEUREL.2018.8587012. Sun X, Ding J, Dalla Chiara G, Cheah L, Cheung NM. A generic framework for monitoring local freight traffic movements using computer vision-based techniques. In: 5th IEEE international conference on models and technologies for intelligent transportation systems, MT-ITS 2017—proceedings. 2017; p. 63–8. https://doi.org/10.1109/MTITS.2017.8005592. Tang Z, Wang G, Xiao H, Zheng A, Hwang JN. Single-camera and inter-camera vehicle tracking and 3D speed estimation based on fusion of visual and semantic features. In: 2018 IEEE/CVF conference on computer vision and pattern recognition workshops (CVPRW). vol. 2018 June, 2018, p. 1080–7. IEEE. https://doi.org/10.1109/CVPRW.2018.00022, https://ieeexplore.ieee.org/document/8575384/. Wang L, Lu Y, Wang H, Zheng Y, Ye H, Xue X. Evolving boxes for fast vehicle detection. In: 2017 IEEE international conference on multimedia and Expo (ICME). 2017, p. 1135–40. IEEE. https://doi.org/10.1109/ICME.2017.8019461. Wang X, Cheng P, Liu X, Uzochukwu B. Focal loss dense detector for vehicle surveillance. In: 2018 international conference on intelligent systems and computer vision (ISCV). vol. 2018 May, 2018, p. 1–5. IEEE. https://doi.org/10.1109/ISACV.2018.8354064. Wei Y, Song N, Ke L, Chang Mc, Lyu S. Street object detection/tracking for AI city traffic analysis. In: 2017 IEEE smartworld, ubiquitous intelligence & computing, advanced & trusted computed, scalable computing & communications, cloud & big data computing, internet of people and smart city innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). 2017, p. 1–5. No. i. IEEE. https://doi.org/10.1109/UIC-ATC.2017.8397669. Wen L, Du D, Cai Z, Lei Z, Chang MC, Qi H, Lim J, Yang MH, Lyu S. UA-DETRAC: a new benchmark and protocol for multi-object detection and tracking. 2015. arXiv:1511.04136. Wojke N, Bewley A, Paulus D. Simple online and realtime tracking with a deep association metric. In: 2017 IEEE international conference on image processing (ICIP). vol. 2017 Sept, 2017, p. 3645–9. IEEE. https://doi.org/10.1109/ICIP.2017.8296962. Xie S, Girshick R, Dollár P, Tu Z, He K. Aggregated residual transformations for deep neural networks. In: Proceedings—30th IEEE conference on computer vision and pattern recognition, CVPR 2017. 2017-Jan, 2016, p. 5987–95. https://doi.org/10.1109/CVPR.2017.634, arXiv:1611.05431. Zhang S, Wu G, Costeira JP, Moura JM. FCN-rLSTM: Deep spatio-temporal neural networks for vehicle counting in city cameras. In: Proceedings of the IEEE international conference on computer vision 2017-Oct, 2017, p. 3687–96. https://doi.org/10.1109/ICCV.2017.396. Zhang S, Wu G, Costeira JP, Moura JM. Table of contents. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR). vol. 2017-Jan, 2017, p. v–liii. IEEE. https://doi.org/10.1109/CVPR.2017.4, http://ieeexplore.ieee.org/document/8099487/. Zhu F, Lu Y, Ying N, Giakos G. Fast vehicle detection based on evolving convolutional neural network. In: 2017 IEEE international conference on imaging systems and techniques (IST). vol. 2018-Jan, 2017, p. 1–4. IEEE. https://doi.org/10.1109/IST.2017.8261505. The work was supported by Act 211 Government of the Russian Federation, contract No. 02.A03.21.0011. South Ural State University, 454091, Chelyabinsk, Russia Aleksandr Fedorov , Kseniia Nikolskaia , Sergey Ivanov & Vladimir Shepelev The Institute of State and Law of The Russian Academy of Sciences, Moscow, 119019, Russia Alexey Minbaleev Search for Aleksandr Fedorov in: Search for Kseniia Nikolskaia in: Search for Sergey Ivanov in: Search for Vladimir Shepelev in: Search for Alexey Minbaleev in: SI supervised data collection and processed the dataset. AF and KN implemented the proposed system, performed the experiments and analyzed test results. VS and AM formulated the problem, suggested initial ideas and influenced the shape of the research. AF wrote an initial draft of the manuscript. All authors suggested related works, discussed the structure of the paper and results. All authors read and approved the final manuscript. Correspondence to Aleksandr Fedorov. Fedorov, A., Nikolskaia, K., Ivanov, S. et al. Traffic flow estimation with data from a video surveillance camera. J Big Data 6, 73 (2019) doi:10.1186/s40537-019-0234-z DOI: https://doi.org/10.1186/s40537-019-0234-z Traffic flow estimation Surveillance data
CommonCrawl
How exactly are the different motions of only one kind of fundamental string assumed to give rise to the spectrum of elementary particles we observe? In string theory, it is assumed that all particles can be described as quanta corresponding to the excitations of only one kind of fundamental string. How can in principle the different motion patterns of one kind of string give rise to the whole particle zoo in the standard model? How are the different properrties, that characterize an elementary particle such as their encoded in the allowed motion pattterns of the string? (I know that the vacuum corresponding to our standard model can not be uniquely identified at present, but I am interested in the general concepts and ideas that should in principle give rise to the different characteristics of the particles we observe.) This is just an attempt to extract and reask the useful part of this now a bit too overloaded original question. particle-physics string-theory standard-model DilatonDilaton 9,15722 gold badges4848 silver badges119119 bronze badges A string is a "particle with a complicated internal structure". To see the rough emergence of particle species, you may start with a hydrogen-string analogy. The hydrogen atom is a bound state of a proton and an electron. It may be in various energy eigenstates described by the quantum numbers $(n,l,m)$. They have a different angular momentum and its third polarization and different energies that mostly depend on $n$. It's similar for a string. A string may be found in various states. The exact "spectrum" i.e. composition of these states depends on the background where the string propagates and the type of string theory (more precisely, the type of the string theory vacuum). But for the rough picture, consider string theory in the flat space, e.g. in the 26-dimensional spacetime. Take an open string. Its positions $X^\mu(\sigma)$ may be Fourier decomposed and each of the Fourier modes, labeled by a positive integer $n$, produces coordinates of a 24-dimensional harmonic oscillator. So an open string is equivalent to a $24\infty$-dimensional harmonic oscillator (yes, it's twenty-four times infinity). Each of the directions in this oscillator contributes $Nn/ \alpha'$ to the squared mass $m^2$ of the resulting particle where $N$ is the total excitation level of the harmonic oscillators that arise from the $n$-th Fourier mode. At any rate, the possible values of the squared mass $m^2$ of the particle are some integer multiples of $1/\alpha'$. This dimensionful parameter $1/\alpha'$ is also called $1/l_{string}^2=m_{string}^2$. The ground state of the string, $|0\rangle$ of the harmonic oscillator, is a tachyonic particle with $m^2=-1/\alpha'$ in the case of bosonic strings. These tachyons are filtered away in the superstring. The first excited state of an open string is $\alpha^\mu_{-1}|0\rangle$ which carries one spacetime Lorentz vector index so all these states behave as a vector with $m^2=0$. They give you a gauge boson. And then there are massive modes with $m^2\gt 0$. Closed strings of similar masses have twice larger number of indices, so for example, the massless closed string states inevitably produce a graviton. So different masses of the resulting particles arise from different values of $Nn$ – and the very fact that the values may be different for different excitations is analogous to the same feature of the hydrogen atom or any other composite particle in the world. In string theory, however, one may also produce states with different values of the angular momentum – also somewhat analogous to the hydrogen atom which is a sufficient model – or different values of the electric charge and other charges. For example, in some Kaluza-Klein-like vacua, the number of excitations of $X^5_{n}$, the Fourier modes of the (circular) fifth dimension $X^5$, will be interpreted as the electric charge and it will behave as the electric charge in all physical situations, too. There are other ways how $U(1)$ electric-like charges and other charges arise in string theory. See e.g. this popular review http://motls.blogspot.com/2012/08/why-stringy-enhanced-symmetries-are.html?m=1 of ways how Yang-Mills gauge groups and charges may emerge from different formulations and vacua of string theory. If even this review is too technical, you will have to be satisfied with the popular Brian-Greene-like description stating that particles of different mass, spin, or charges emerge from strings vibrating in different ways. I am sort of puzzled about your question – and afraid that my answer will be either too simple or too off-topic given your real question – because you must have heard and read these basic insights about string theory about hundreds of times already. Luboš MotlLuboš Motl $\begingroup$ Hi Lumo, thanks for this nice answer to this originally not my question :-). Please dont be disappointed or annoyed about it, the puzzle has a simple solution. My question here is just an attempt to reask the useful enough for this site content if this question from another user called Neo. In the corresponding metadiscussion, I asked the mods to associate this question with Neo's account since it is not my own. $\endgroup$ – Dilaton $\begingroup$ Many people here on physics SE were interested in the answer to this only reasonable question buried in the unsuccessful post. They asked for a popular anwer and they should perfectly like what you explain here. Cheers $\endgroup$ Not the answer you're looking for? Browse other questions tagged particle-physics string-theory standard-model or ask your own question. Isn't a single Quantum one single string? How do comparatively larger particles arise from vibrations of infinitely smaller strings? Are There Strings that aren't Chew-ish? What really is the future about the Standard Model More questions on string theory and the standard model What is the physical interpretation of the S-matrix in QFT? What is the mathematically simplest particle field? Is there an infinite variety of particles in string theory? Which particles can be described as an excitation of the electromagnetic field?
CommonCrawl
People & Contacts Thematic & Focus Programs General Scientific Activity Commercial & Industrial Programs Centre for Mathematical Medicine FieldsLive Proposals & Applications Honours, Prizes & Fellowships Resources and Facilities SCIENTIFIC PROGRAMS AND ACTIVITIES Workshop on Arithmetic and Geometry of K3 surfaces and Calabi-Yau threefolds to be held at the University of Toronto and the Fields Institute Charles F. Doran, University of Alberta, and PIMS Shigeyuki Kondõ, Nagoya University Radu Laza, Stony Brook University James D. Lewis, University of Alberta Matthias Schütt, Leibniz University Hannover Noriko Yui, Queen's University (corresponding organizer) This workshop is supported by: The Fields Institute NSF (1100007 plus individual grants) JSPS Grant-in-Aid (S), No. 22224001 DFG GRK 1463 Pedagogical Lectures: C. Doran (University of Alberta, Canada) Periods, Picard-Fuchs Equations, and Calabi-Yau Moduli We introduce and explore the transcendental theory of Calabi-Yau manifolds and its interplay with explicit algebraic moduli. The focus in each lecture will be on Calabi-Yau manifolds of sequentially higher dimension (elliptic curves, K3 surfaces, and Calabi-Yau threefolds). Special attention will be given to hypersurfaces and complete intersections in toric varieties. S. Kondo (Nagoya University, Japan) K3 and Enriques surfaces In this introductory lecture, I shall give a survey on moduli and automorphisms of K3, Enriques surfaces. A related lattice theory and the theory of automorphic forms will be included. Lecture Notes 1 R. Laza (Stony Brook University, USA) Degenerations of K3 surfaces and Calabi-Yau threefolds In these lectures we will review the degenerations of K3 surfaces and Calabi-Yau threefolds from a geometric and Hodge theoretic point of view. In the first lecture we will focus on K3 surfaces, and we will review the period and its compactifications. In the second lecture, we will discuss the behavior of the period map near the boundary and the connection to mirror symmetry. J. Lewis (University of Alberta, Canada) Lectures in Transcendental Algebraic Geometry: Algebraic Cycles with a Special Emphasis on Calabi-Yau Varieties These lectures serve as an introduction to algebraic cycle groups and their regulators for projective algebraic manifolds. More precisely, after presenting a general overview, we begin with some rudimentary aspects of Hodge theory and algebraic cycles. We then introduce Deligne cohomology, as well as generalized cycles that are connected to higher $K$-theory, and associated regulators. Finally, we specialize to the Calabi-Yau situation, and explain some recent developments in the field. M. Schuett (University of Hannover, Germany) Arithmetic of K3 surfaces We will review various aspects of the arithmetic of K3 surfaces. Topics will include rational points, Picard number and Tate conjecture, zeta functions and modularity. N. Yui (Queen's University, Canada) Modularities of Calabi--Yau varieties: 2011 and beyond This paper presents the current status on modularities of Calabi--Yau varieties since the last update in 2001. We will focus on Calabi-Yau varieties of dimension at most three. Here modularities refer to at least two different types: arithmetic modularity, and geometric modularity. These will include: (1) the modularity of Galois representations of Calabi--Yau varieties (or motives) defined over $\QQ$ or number fields, (2) the modularity of solutions of Picard-Fuchs differential equations of families of Calabi-Yau varieties, and the modularity of mirror maps (mirror moonshine), (3) the modularity of generating functions of various invariants counting some quantities on Calabi-Yau varieties, and (4) the modularity of moduli for families of Calabi-Yau varieties. The topic (4) is commonly known as the geometric modularity. In this series of talks, I will concentrate on arithmetic modularity, namely, on the topic (1), and possibly on the topics (2) and (3) if time permits. Invited Speaker Abstracts: M. Artebani (Universidad de Concepcion, Chile) Examples of Mori dream Calabi-Yau threefolds Let $Z$ be a Mori dream space, i.e. a normal projective variety having finitely generated Cox ring $R(Z)$, and let $X$ be a hypersurface of $Z$. In a joint work with A. Laface we provided a necessary and sufficient condition for the Cox ring $R(X)$ to be isomorphic to $R(Z)/(f)$, where $f$ is a defining section for $X$. In this talk, after presenting this result, two applications to Calabi-Yau 3-folds will be given. Firstly, we will show that there are five families of Calabi-Yau hypersurfaces insmooth toric Fano fourfolds whose Cox ring is a polynomial ring with one relation. As a second application, we will compute the Cox ring of the generic quintic 3-fold containing a plane. X. Chen (University of Alberta, Canada) Rational self-maps of K3 surfaces and Calabi-Yau manifolds It is conjectured that a very general K3 surface does not have any nontrivial dominant rational self-maps. I'll give a proof for this conjecture and also show the same holds for a very general Calabi-Yau complete intersection in projective spaces of higher dimensions by induction. A. Clingher (Washington University in St. Louis, USA) On K3 Surfaces of High Picard Rank I will report on a classification of a certain class of K3 surfaces of Picard rank 16 or higher. In terms of periods, the moduli space of these objects is a quotient of a four-dimensional bounded symmetric domain of type IV. Explicit normal forms will be presented, as well as a discussion of modular forms associated with this family. S. Cynk (Jagiellonian University, Poland) Arithmetically significant Calabi-Yau threefolds From the point of view of their arithmetic the most interesting Calabi-Yau threefolds are those with small Hodge number $h^{1,2}$, especially the rigid ones. I will discuss the most important constructions of such Calabi-Yau threefolds, f.i. the Kummer construction, fiber product of rational elliptic surfaces with section and their refinements. I. Dolgachev (University of Michigan, USA) Quartic surfaces and Cremona transformations I will discuss the following question: when a birational automorphism of a quartic surface is a restriction of a Cremona transformation of the ambient space. N. Elkies (Harvard University, USA) Even lattices and elliptic fibrations of K3 surfaces I, II Abstract: Given a K3 surface $X$, any elliptic fibration with zero-section has an essential lattice $L$ (orthogonal complement of a hyperbolic plane) whose genus depends only on the Neron-Severi lattice $NS(X)$. The Kneser-Nishiyama gluing method and related techniques often makes it feasible to list all possible $L$, or all $L$ satisfying some additional condition such as nontrivial torsion or large Mordell-Weil rank, and to give explicit equations when one equation for $X$ is known. We illustrate with several examples: (a) Of the 13 elliptic fibrations of Euler's surface $E_a: xyz(x+y+z)=a$, nine can be defined over $Q(a)$, all with Mordell-Weil rank zero. This may both explain why Euler found it unusually hard to find families of solutions in $Q(a)$ and suggest how he did eventually find one such family. Over an algebraically closed field, the $E_a$ all become isomorphic with the "singular" K3 surface (Picard number $20$, maximal in characteristic zero) with $disc(NS(X)) = -4$. (b) If $NS(X)$ has rank $20$ and consists entirely of classes defined over $Q$, then $|disc(NS(X))|$ is at most $163$. We use this to show that no elliptic fibration can have attain the maximum of $18$ for the Mordell-Weil rank of an elliptic K3 surface over $C(t)$; this together with an explicit rank $17$ surface over $Q(t)$ (with $\rho=19$) answers a question of Shioda (1994). (c) Certain families of K3 surfaces with Picard number $19$ are parametrized by Shimura modular curves; this makes it possible to give explicit equations and CM coordinates on these curves that were previously inaccessible, and to find the genus $2$ curves with quaternionic multiplication that the Shimura curves parametrize. R. Girivaru (University of Missouri--St. Louis, USA) Extension theorems for subvarieties and bundles Given a subvariety (respectively a vector bundle) on a smooth hyperplane section of a smooth projective variety, it is of interest to know when it is the restriction of a subvariety (resp a bundle) on the ambient variety. I will present some results on this theme. J. W. Hoffman (Louisiana State University, USA) Picard groups of Siegel modular threefolds and theta lifting This is a joint work with Hongyu He. A Siegel modular threefold is a quotient of the Siegel upper half space of genus 2 by a subgroup of finite index in Sp(4, Z). These spaces are moduli spaces for abelian varieties with additional structure, and are examples of Shimura varieties. We discuss the structure of the Picard groups of these; they are groups generated by algebraic cycles of codimension one. We show that these Picard groups are generated by special cycles in the sense of Kudla-Millson. These special cycles are identified with the classically defined Humbert surfaces. The key points are: (1) the theory of special cycles relating geometric cycles to automorphic forms coming from theta-lifting; (2) Weissauer's theorems describing the Picard groups via automorphic forms; (3) results of Howe about the oscillator representation. K. Hulek (University of Hannover, Germany) Abelian varieties with a singular odd $2$-torsion point on the theta divisor We study the (closure of the) locus of intermediate Jacobians of cubic threefolds in the perfect cone compactification of the moduli space of principally polarized abelian fivefolds for which we obtain an expression in the tautological Chow ring. As a generalization we consider the locus of principally polarized abelian varieties with a singular odd $2$-torsion point on the theta divisor and their degenerations. This is joint work with S. Grushevsky. M. Kerr (Washington University in St. Louis, USA) Higher Chow cycles on families of K3 surfaces This talk is a tale of two cycles, both supported on singular fibers of families of elliptically fibered K3's. The first lives on a cover of the $H+E8+E8$-polarized family of Clingher and Doran, and we discuss a direct evaluation of the real regulator (part of joint work with Chen, Doran, and Lewis). The resulting function is related to a kind of "Maass cusp formwith pole". For the second cycle, we explain how to use a bit of Tauberian theory to compute the transcendental regulator. J. Keum (KIAS, Korea) Finite groups acting on K3 surfaces in positive characteristic A remarkable work of S. Mukai [1988] gives a classification of finite groups which can act on a complex K3 surface leaving invariant its holomorphic 2-form (symplectic automorphism groups). Any such group turns out to be isomorphic to a subgroup of the Mathieu group $M_{23}$ which has at least 5 orbits in its natural action on the set of 24 elements. A list of maximal subgroups with this property consists of 11 groups, each of these can be realized on an explicitly given K3 surface. Different proofs of Mukai's result were given by S. Kond\={o} [1998] and G. Xiao [1996].None of the 3 proofs extends to the case of K3 surfaces over algebraically closed fields of positive characteristic $p$.In this talk I will outline a recent joint work with I. Dolgachev on extending Mukai's result to the positive characteristic case.In positive characteristic case we first have to handle wild automorphisms, the ones whose orders are divisible by the characteristic $p$.It turns out that no wild automorphism of a K3 surface exists in characteristic $p > 11$. Then a classification of finite groups will be given which may act symplectically on a K3 surface in positive characteristic. R. Kloosterman (Humboldt Universitaet zu Berlin, Germany) Mordell-Weil ranks, highest degree syzygies and Alexander polynomials We discuss an approach to calculate the Mordell-Weil rank for elliptic threefold. We apply this method to a class of elliptic threefolds with constant $j$-invariant 0. It turns out that in this particular case there is a strong connection between 1. the number of highest degree syzygies of the ideal of a certain subscheme of the singular locus of the discriminant curve, 2. the Mordell-Weil rank of the fibration, 3. the exponent of $(t^2-t+1)$ in the Alexander polynomial of the discriminant curve. We used the connection between 1 and 2 to find a nontrivial upper bound for the Mordell-Weil rank. As an application we use the connection between 1 and 2 to describe all degree 18 plane curves, with only nodes and cusps as singularities, such that its deformation space has larger dimension than expected. (In this case the associated elliptic threefold is a degeneration of a Calabi-Yau elliptic threefold.) We then show that one can recover the Alexander polynomial of any even degree $d$ plane curve $C=Z(f(z_0,z_1,z_2))$ by studying the threefold $W\subset \mathbb{P}(d/2,1,1,1)$ given by $y^2+x^d+f=0$. It turns out that in the case that $C$ has only ADE singularities the Alexander polynomial of $C$ determines the group of Weil Divisors on $W$ modulo $\mathbb{Q}$-Cartier divisors on $W$. One can use this to find a series of subschemes $J_i$ of the singular locus of $C$, such that the number of highest degree syzygies of $J_i$ has a geometric interpretation. We end by giving some higher dimensional examples. S. Kudla (University of Toronto) Modular generating functions for arithmetic cycles: a survey In this talk I will give a survey of some recent results on the relations between the Fourier coefficients of modular forms and the classes of certain cycles in arithmetic Chow groupsShimura varieties. When the generating series for such cycle classes are modular forms, they may be viewed as an exotic type of theta function. The behavior of such forms under natural geometric operations,such as pullback to subvarieties, is of particular interest. I will describe several examples and discuss some open problems. A. Kumar (MIT, USA) Elliptic fibrations on Kummer surfaces I will describe computations regarding elliptic fibrations on Kummer surfaces, and some applications, such as explicit algebraic families of K3 surfaces with Shioda-Inose structure. C. Liedtke (Stanford University, USA) Rational Curves on K3 Surfaces We show that projective K3 surfaces with odd Picard rank contain infinitely many rational curves. Our proof extends the Bogomolov-Hassett-Tschinkel approach, i.e., uses moduli spaces of stable maps and reduction to positive characteristic. This is joint work with Jun Li. H. Movasati (IMPA, Brazil) Eisenstein type series for mirror quintic Calabi-Yau varieties In this talk we introduce an ordinary differential equation associated to the one parameter family of Calabi-Yau varieties which is mirror dual to the universal family of smooth quintic three folds. It is satisfied by seven functions written in the $q$-expansion form and the Yukawa coupling turns out to be rational in these functions. We prove that these functions are algebraically independent over the field of complex numbers, and hence, the algebra generated by such functions can be interpreted as the theory of quasi-modular forms attached to the one parameter family of Calabi-Yau varieties.Our result is a reformulation and realization of a problem of Griffiths around seventies on the existence of automorphic functions for the moduli of polarized Hodge structures. It is a generalization of the Ramanujan differential equation satisfied by three Eisenstein series. S. Mukai (RIMS, Japan) Enriques surfaces and root systems There are many interesting families of Enriques surfaces which are characterized by the presence of (negative definite) root sublattices ADE's in their twisted Picard lattices. In this talk I will discuss two such families (a) Enriques surfaces with many M-semi-symplectic automorphisms and (d) Enriques surfaces of Lieberman type related with the joint work with H. Ohashi, and another kind of family of (e) Enriques surfaces of type $E_7$. V. Nikulin (University of Liverpool, UK, and Steklov Mathematical Institute, Moscow, Russia) Elliptic fibrations on K3 surfaces We discuss, how many elliptic fibrations and elliptic fibrations with infinite automorphism groups an algebraic K3 surface over an algebraically closed field can have. As examples of applications of the same ideas, we also consider K3 surfaces with exotic structures: with finite number of Enriques involutions, and with naturally arithmetic automorphism groups. See details in arXiv:1010.3904. K. O'Grady (Sapienza Universita' di Roma) Moduli and periods of double EPW-sextics We analyze the GIT-quotient of the parameter space for (double covers of) EPW-sextics i.e. the symplectic grassmannian of lagrangian subspaces of the third wedge-product of a $6$-dimensional complex vector-space (equipped with the symplectic form defined by wedge product on $3$-vectors) modulo the natural action of $PGL(6)$. Our goal is to analyze the period map for the GIT-quotient, thus we aim to establish a dictionary between (semi)stability conditions and properties of the periods. We are inspired by the works of C.Voisin and R.Laza on cubic 4-folds. K. Oguiso (Osaka University, Japan) Group of automorphisms of Wehler type on Calabi-Yau manifolds and compact hyperkaehler manifolds Wehler pointed out, without proof, that a K3 surface defined by polynomial of multi-degree $(2,2,2)$ in the product of three projective lines admits a biholomorphic group action of the free product of three cyclic groups of order two. I would like to first explain one proof of his result and in which aspects his example is interesting. Then I would like to give a "fake" generalization for Calabi-Yau manifolds and explain why it is fake. Finally I would like to give a right generalization for Calabi-Yau manifolds of any even dimensions and compact hyperk\"ahler manifolds of any degree. H. Ohashi (Nagoya University, Japan) On automorphisms of Enriques surfaces We will discuss a possible extension to Enriques surfaces of an outstanding result of Mukai about the automorphism groups of K3 surfaces. We define the notion of Mathieu-semi-symplectic actions on Enriques surfaces and classify them. The maximal groups will be characterized in terms of the small Mathieu group $M_{12}$. This is a joint work with S. Mukai. G. Pearlstein (Michigan State University, USA) Jumps in the Archimedean Height We answer a question of Richard Hain regarding the asymptotic behavior of the archimedean heights and explain its connection to the Hodge conjecture via the work of Griffiths and Green. J.-C. Rohde (Universitaet Hamburg, Germany) Shimura varieties and Calabi-Yau manifolds versus Mirror Symmetry There are examples of Calabi-Yau $3$-manifolds $X$, which cannot be a fiber of a maximal family of Calabi-Yau $3$-manifolds with maximally unipotent monodromy. This contradicts the assumptions of the mirror symmetry conjecture. All known examples of this kind can be constructed by quotients of products of K3 surfaces $S$ and elliptic curves by an automorphism of order 3 or 4. Moreover the associated period domain of a maximal family with a fiber isomorphic to $X$ is a complex ball containing a dense set of complex multiplication points. In some examples the K3 surfaces S used for the construction of $X$ can also be used to construct pairs of subfamilies of pairs of mirror families with dense sets of complex multiplication fibers. A. Sarti (University of Poitiers, France) The BHCR-mirror symmetry for K3 surfaces The aim of this talk is to apply the construction of mirror pairs of Berglund and H\"ubsch to K3 surfaces with non symplectic involution and to investigate a recent result of Chiodo and Ruan. They apply the construction to pairs $(X,G)$ where $X$ is a Calabi Yau manifold of dimension at least three, given as the zero set of a non degenerate potential in some weighted projective space, and $G$ is a finite group acting on the manifold.\\For this reason we call the symmetry the {\it BHCR-mirror symmetry. In the talk I will show that this symmetry coincides with the mirror symmetry for lattice polarized K3 surfaces described by Dolgachev.\\This is a joint work with Michela Artebani and Samuel Boissi\`ere. C. Schnell (IPMU, Japan) Derived equivalences and the fundamental group I will describe an example (constructed by Gross and Popescu) of a simply connected Calabi-Yau threefold $X$, with a free action by the group $G = Z/5Z x Z/5Z$, for which $X$ and $X/G$ are derived equivalent. This shows that being simply connected is not a derived invariant. C. Schoen (Duke University, USA) Desingularized fiber products of elliptic surfaces The varieties of the title are sufficiently complex to exhibit many of the phenomena which arise when one studies smooth projective threefolds, but are often significantly simpler to work with than general threefolds because of the well understood elliptic surfaces from which they are built. So far these varieties have contributed to our understanding of algebraic cycles, modularity of Galois representations, phenomena peculiar to postive characteristic, superstring theory, Brauer groups, Calabi-Yau threefolds, and families of Kummer surfaces. Many open problems remain. S. Schroeer (University of Duesseldorf, Germany) Enriques manifolds Enriques manifolds are complex spaces whose universal coverings are hyperkahler manifolds. We give several examples, construct period domains, and establish a local Torelli theorem. The theory applies to various situations related to punctual Hilbert schemes, moduli spaces of stable sheaves, and Mukai flops. This is a joint work of K. Oguiso. A. Thompson (Oxford University, UK) Degenerations of K3 surfaces of degree two We consider semistable degenerations of K3 surfaces of degree two, with the aim of explicitly studying the geometric behaviour at the boundary of the moduli space of such surfaces. We begin by showing that results of the minimal model program may be used to bring these degenerations into a uniquely determined normal form: the relative log canonical model. We then proceed to describe a result that explicitly classifies the central fibres that may appear in this relative log canonical model, as complete intersections in certain weighted projective spaces. D. van Straten (Universitaet Mainz, Germany) CY-period expansions The local power series expansion of period-functions have strong integrality properties. Such expansions can be used effectively to find Picard–Fuchs equations in situations, where the traditional "Dwork–Griffiths–Method" is not available or cumbersome to use. We give examples how to use "conifold expansions" to obtain the Picard–Fuchs equations for some one-parameter families of Calabi–Yau 3-folds. (Work in progress, joint with S. Cynk). U. Whitcher (Harvey Mudd College, USA) Picard-Fuchs equations for lattice-polarized K3 surfaces The moduli spaces of K3 surfaces polarized by the lattices $H\oplus E_8\oplus E_8$ and $H\oplus E_8 \oplus E_7$ are related to moduli spaces of polarized abelian surfaces. We use Picard-Fuchs equations for the lattice-polarized K3 surfaces to explore this correspondence and characterize subloci of the moduli spaces of particular interest. K.-I. Yoshikawa (Kyoto University, Japan) On the value of Borcherds $\Phi$-function It is well known that the Petersson norm of Jacobi Delta-function is expressed as the product of the discriminant of cubic curve and the $L_2$ norm of appropriately normalized $1$-form on the curve. We give a generalization this fact to Enriques surfaces and Borcherds $\Phi$-function. J.-D. Yu (National Taiwan University, Taiwan) On Dwork congruences The Dwork congruences refer to a system of congruences among the coefficients of periods of certain Calabi-Yau pencils. They are used to derive the unit root formula for the zeta functions of the reductions of the fibers. Examples include certain hypergeometric series proved by Dwork himself via ad hoc methods. Here we give a geometric interpretation of these congruences. Y. Zarhin (Pennsylvania State University, USA) Hodge groups We discuss computations of Hodge groups of certain superelliptic jacobians (based on joint papers with Jiangwei Xue). Contributed Speaker Abstracts: M.J. Bertin (Université Paris, 6) Elliptic fibrations on the modular surface associated to $\Gamma_1(8)$ This is a joint work with Odile Lecacheux. Using Nishiyama's method, we determine all the elliptic fibrations with section on the elliptic surface $$X+\frac {1}{X}+Y+\frac {1}{Y}+Z+\frac {1}{Z}=2.$$ This $K3$-surface, of discriminant $-8$, is explained to be the modular surface associated to the modular group $\Gamma_1(8)$.We illustrate the method with examples and show how to get, for a given fibration, the rank and torsion of the Mordell-Weil group.Moreover, from a Weierstrass equation of an elliptic fibration, we explain one of the various ways to obtain a Weierstrass equation of another fibration. Yasuhiro Goto (Hokkaido University of Education Hakodate) On K3 surfaces with involution K3 surfaces with involution are classified by Nikulin's invariants. We calculate these invariants for K3 surfaces defined in weighted projective $3$-spaces by Delsarte-type equations. L. H. Halle (University of Oslo, Norway) Motivic zeta functions for degenerations of Calabi-Yau varieties I will discuss a global version of Denef and Loeser's motivic zeta functions. More precisely, to any Calabi-Yau variety $X$ defined over a discretely valued field $K$, I will define a formal power series $Z_X(T)$ with coefficients in a certain localized Grothendieck ring of varieties over the residue field $k$ of $K$. The series $Z_X(T)$ has properties analogous to Denef and Loeser's zeta function, in particular one can formulate a global version of the motivic monodromy conjecture. I will present a few cases where this conjecture has been proved. This is joint work with Johannes Nicaise. S. Sijsling (IMPA, Brazil) Calculating arithmetic Picard-Fuchs equations We consider second-order Picard-Fuchs equations that are obtained by uniformizing certain genus 1 Shimura curves. These equations are distinguished by having a particularly beautiful monodromy group, generated by two elements and commensurable with the group of units of a quaternion order. They describe the periods of certain families of fake elliptic curves that are as yet hard to write down.We explore the methods for determining these equations explicitly, and discuss the open questions that remain.
CommonCrawl
Kerala Gazette Gazettes of India Bharat Ek Khoj directorate of printing, government of india Korean, english WBAL (NBC) 11 News at 11 : WBAL : November 2, 2009 11:00pm-11:35pm EST Jun 8, 2011 06/11 News News/Business. Daniels, Banister. New. (CC) Topics: maryland, baltimore, brennan, geico, wilson, baltimore city, browns, the black & decker,... 11 News at 5 : WBAL : November 3, 2009 5:00pm-6:00pm EST News News/Business. Hamilton, Stovall. New. (CC) Topics: maryland, baltimore, virginia, annapolis, disabilities, washington, fresh express, chrysler,... 11 News at 5AM : WBAL : November 3, 2009 5:00am-5:30am EST News News/Business. Stovall, Basara. New. (CC) Topics: yankees, washington, virginia, maryland, new york, westbound eastern, obama, baltimore, wbal,... Topics: maryland, virginia, annapolis, baltimore, ravens, voice, utley, schroyer, obama, olay, kfc, tremaine 11 News at Noon : WBAL : November 3, 2009 12:00pm-12:30pm EST News News/Business. Basara. New. (CC) Topics: maryland, virginia, leno, philadelphia, sandra shaw, obama, baltimore 11 News Sunrise : WBAL : November 3, 2009 5:30am-6:00am EST Topics: cdc, maryland, baltimore, cecil 11 News Today : WBAL : November 3, 2009 6:00am-7:00am EST Topics: maryland, baltimore, dixon, obama, wbal, baltimore city, geico, mchenry, jennifer franciotti, new... 75As NMR study of single crystals of the heavily overdoped pnictide superconductors Ba{1-x}KxFe2As2 (x = 0.7 and 1) by S. W. Zhang; L. Ma; Y. D. Hou; J. S. Zhang; T. L. Xia; G. F. Chen; J. P. Hu; G. M. Luke; W. Yu We performed $^{75}$As NMR studies on two overdoped high-quality Ba$_{1-x}$K$_{x}$Fe$_2$As$_2$ (x=0.7 and 1.0) single crystals. In the normal states, we found a dramatic increase of the spin-lattice relaxation ($1/^{75}T_1$) from the x=0.7 to the x=1.0 samples. In KFe$_2$As$_2$, the ratio of $1/^{75}T_1TK_n^2$, where $^{75}K_n$ is the Knight shift, increases as temperature drops. These results indicate the existence of a new type of spin fluctuations in KFe$_2$As$_2$ which is accustomed to... Source: http://arxiv.org/abs/0908.4369v2 WBEZ 91.5: Eight Forty-Eight 848 - 11/03/2009 by WBEZ Topics: WBEZ, 848, Eight Forty-Eight, Richard Steele WUSA (CBS) 9News Now at 11pm : WUSA : November 2, 2009 11:00pm-11:35pm EST News News/Business. New. (CC) Topics: virginia, utley, at&t, phillies, new york, obama, fred davis, audrey barnes, redskins, yankees,... 9News Now at 5am : WUSA : November 3, 2009 5:00am-6:00am EST News News/Business. New. Topics: virginia, angie, maryland, howard bernstein, southern maryland, washington, michael jackson,... 9News Now at 5pm : WUSA : November 3, 2009 5:00pm-6:00pm EST Topics: virginia, mcdonnell, maryland, bethesda, bob mcdonnell, bruce springsteen, washington, obama,... Topics: virginia, maryland, annapolis, rockville, angie, new york, eastbound, gaithersburg, alex trevino,... Topics: virginia, maryland, mcdonnell, washington, cleveland, barbara bullock, tatyana liensky, bob... Topics: virginia, maryland, afghanistan, taliban, georges, maine 9News Now at Noon : WUSA : November 3, 2009 12:00pm-12:30pm EST Topics: virginia, new york, woodbridge, gaithersburg, bedford, maine, annapolis, maryland 9News Now Tonight : WUSA : November 2, 2009 7:00pm-7:30pm EST News/Business. Topics: virginia, alexandria, manassas, gaithersburg, maryland, fredericksburg CD-ROM Software Library [PSP] Star Wars - Battlefront - Elite Squadron (USA) (En,Fr,Es) by Rebellion Developments Ltd. Previously in the Star Wars saga, an unknown story developed between two brothers, codenamed X1 and X2. These two brothers, cloned from a Jedi, grew up to become an important part of the Clone Wars, trainers and leaders of the army that would lead the Republic against those that tempted to destroy order within the Galaxy. Unfortunately for both, their worlds would come crushing down as the Emperor rose to power, putting into place Order 66 and dismantling the Jedi Order in one fowl swoop. X2,... Topics: PlayStation Portable, Star Wars, Battlefront, Elite Squadron, Redump A Distributed MAC Protocol for Cooperation in Random Access Networks by Georg Böcherer; Alexandre de Baynast WLAN is one of the most successful applications of wireless communications in daily life because of low cost and ease of deployment. The enabling technique for this success is the use of random access schemes for the wireless channel. Random access requires minimal coordination between the nodes, which considerably reduces the cost of the infrastructure. Recently, cooperative communication in wireless networks has been of increasing interest because it promises higher rates and reliability. An... A Ferguson-Klass-LePage series representation of multistable multifractional processes and related processes by Ronan Le Guével; Jacques Lévy-Véhel The study of non-stationary processes whose local form has controlled properties is a fruitful and important area of research, both in theory and applications. We present here a construction of multifractional multistable processes, based on the Ferguson-Klass-LePage series representation of stable processes. We consider various particular cases of interest, including multistable L\'evy motion, multistable reverse Ornstein-Uhlenbeck process, log-fractional multistable motion and linear... A Gradient Descent Algorithm on the Grassman Manifold for Matrix Completion by Raghunandan H. Keshavan; Sewoong Oh We consider the problem of reconstructing a low-rank matrix from a small subset of its entries. In this paper, we describe the implementation of an efficient algorithm called OptSpace, based on singular value decomposition followed by local manifold optimization, for solving the low-rank matrix completion problem. It has been shown that if the number of revealed entries is large enough, the output of singular value decomposition gives a good estimate for the original matrix, so that local... A Holographic model for Non-Relativistic Superconductor by Shi Pu; Sang-Jin Sin; Yang Zhou We build a holographic description of non-relativistic system for superconductivity in strongly interacting condensed matter via gauge/gravity duality. We focus on the phase transition and give an example to show that a simple gravitational theory can provide a non-relativistic holographical dual description of a superconductor. There is also a critical temperature like the relativistic case, below which a charged condensation field appears by a second order phase transition and the (DC)... ABC 7 News at 1100 : WJLA : November 2, 2009 11:00pm-12:00am EST News News/Business. Harris, Bunyan, Hill, Brant. New. (CC) Topics: bob mcdonnell, virginia, jim zorn, ken cuccinelli, upstate new york, new york, steve shannon,... ABC 7 News at 500 : WJLA : November 3, 2009 5:00pm-6:00pm EST News News/Business. Harris, Simpson, Hill, Brant. New. (CC) Topics: arlington, maryland, washington, virginia, diane sawyer, chris brown, disabilities, alexandria,... News News/Business. Peterson, Bunyan, Hill, Brant. New. (CC) Topics: dan snyder, joe biden, toyota, maryland, virginia, phillies, rob mcdonnell, kfc ABC 7 News at Noon : WJLA : November 3, 2009 12:00pm-12:30pm EST News News/Business. McKelway, Starling, van de Graaff. New. (CC) Topics: virginia, maryland, new york, philadelphia, obama, washington, simon mcgregor, charlie dullea,... ABC News Good Morning America : WJLA : November 3, 2009 7:00am-9:00am EST News/Business. Carrie Underwood. (2009) Carrie Underwood performs; author Joel Osteen ('It's Your Time'); author Lisa Niemi ('The Time of My Life'). New. (HD) (CC) Topics: virginia, carrie underwood, toyota, obama, patrick, sarah palin, new york, china, lisa niemi,... ABC News Good Morning America : WMAR : November 3, 2009 7:00am-9:00am EST News/Business. Carrie Underwood. (2009) Carrie Underwood performs; author Joel Osteen ('It's Your Time'); author Lisa Niemi ('The Time of My Life'). New. (CC) Topics: carrie underwood, toyota, patrick, sarah palin, china, lisa niemi, patrick swayze, mikulski,... ABC World News Now : WJLA : November 3, 2009 3:00am-4:30am EST News/Business. Global news. New. (CC) Topics: new york, afghanistan, washington, hamid karzai, vinita, medicare, iran, maine, virginia, phoenix,... ABC World News Now : WMAR : November 3, 2009 3:05am-4:30am EST Topics: new york, washington, afghanistan, medicare, iran, hamid karzai, uso, maine, iran, phoenix, tehran,... ABC's World News With Charles Gibson : WJLA : November 3, 2009 6:30pm-7:00pm EST ABC News News/Business. Charles Gibson. The latest world and national news. New. (HD) (CC) Topics: bpa, toyota, charlie, atlanta, houston, virginia, warren buffett, obama, barbara pinto, abc news,... ABC's World News With Charles Gibson : WMAR : November 3, 2009 6:30pm-7:00pm EST ABC News News/Business. Charles Gibson. The latest world and national news. New. (CC) Topics: bpa, toyota, charlie, atlanta, houston, virginia, warren buffett, new york, obama, nausea, barbara... ABC2 News at 530PM : WMAR : November 3, 2009 5:30pm-6:00pm EST Topics: california, maryland, baltimore, aberdeen, kelly swoope, perry hall, baltimore city, cardin, cheryl... ABC2 News at 5PM : WMAR : November 3, 2009 5:00pm-5:30pm EST Topics: baltimore, maryland, terry owens, miriam frankl, anthony sowell, virginia, arizona, towson,... Topics: baltimore, maryland, miriam frankl, baltimore city, sherrie johnson, brian kuebler, miriam, effie... ABC2 News Good Morning Maryland 5AM : WMAR : November 3, 2009 5:00am-5:30am EST Topics: baltimore, virginia Topics: baltimore, towson, maryland, virginia, cardin, mikulski ABC2 News Good Morning Maryland 9AM : WMAR : November 3, 2009 9:00am-10:00am EST Topics: baltimore, disabilities, magan, maryland, stanley, hollywood, mel gibson, harford county,... ABC2 News The Latest at 11 : WMAR : November 2, 2009 11:00pm-11:35pm EST Topics: mikulski, maryland, baltimore, cardin, terry owens, towson, stanley, cdc, olay Absence of the discontinuous transition in the one-dimensional triplet creation model by Su-Chan Park Although Hinrichsen in his unpublished work theoretically rebutted the possibility of the discontinuous transition in one-dimensional nonequilibrium systems unless there are additional conservation laws, long-range interactions, macroscopic currents, or special boundary conditions, we have recently observed the resurrection of the claim that the triplet creation model (TC) introduced by Dickman and Tom\'e [Phys. Rev. E {\bf 44}, 4833 (1991)] would show the discontinuous transition. By extensive... Address to a Joint Session by Chancellor Angela Merkel of Germany by The Capitol Congressional Hearing U.S. House of Representatives The Capitol Title: Address to a Joint Session by Chancellor Angela Merkel of Germany Date: 2009-11-03 Room: The Capitol Topic: The Capitol Edina Community Channel 16 Agenda Edina - Episode 16 - November 2009 by Edina Community Channel 16 Edina Channel 16 City of Edina, Minnesota August, 2008 A monthly look at the actions and events of the Edina, Minnesota City Council and it's boards and commissions. This episode features: Topics: Minnesota, Edina, Edina Community Channel 16, Government Access Channel, Community Media, PEG,... Alaska News Nightly Alaska News Nightly - 11/03/2009 by Alaska Public Media Topics: Alaska Public Media, Alaska News Nightly Allen City TV Allen Community Ice Rink - Opens November 9, 2009 by Allen City TV http://alleneventcenter.com/index.php?option=com_content&view=article&id=28&Itemid=8 * Full-sized state of the art ice surface 85 x 200' * General Seating for spectators * Locker rooms, equipment shop, meeting rooms, off-ice trainig rooms, skate rental, broom ball * Open skating sessions, drop in hockey sessions, freestyle sessions, summer camps, skating shows, skating competitions and hockey tournaments * Learn to Skate program, recreational and competitive hockey and figure... Topics: Texas, Allen, Allen City TV, ACTV, Government Access TV, Community Media, PEG, Youtube, city, of,... New Zealand Censorship Database Almost Illegal #1 by Office of Film and Literature Classification This is a New Zealand classification of Almost Illegal #1 . Title Almost Illegal #1 Medium DVD Publication number 0901234.000 Registration date 2009-11-03 Classification body Office of Film and Literature Classification (1994–) Submission channel Section 13(1)(a) - Comptroller of Customs Running time At submission: 52:37 At classification: 52:37 Language English Country United States Associated documents 93420_35_4_70278.pdf (Register Page) Related publications 0901230.000 0901231.000... Topics: New Zealand, censorship America This Morning : WJLA : November 3, 2009 4:30am-5:00am EST News/Business. Breaking news. New. (CC) Topics: washington, obama, philadelphia, yankees, virginia, new york, phillies, karzai, boniva, detroit,... America This Morning : WMAR : November 3, 2009 4:30am-5:00am EST Topics: washington, philadelphia, yankees, obama, phillies, virginia, karzai, new york, dove, detroit,... America's Newsroom : FOXNEWS : November 3, 2009 9:00am-11:00am EST News/Business. News coverage and discussion. New. Topics: virginia, new jersey, chris christie, sarah palin, washington, jon corzine, bob mcdonnell, new... An analytic model of the stereodynamics of rotationally inelastic molecular collisions by Mikhail Lemeshko; Bretislav Friedrich We develop an analytic model of vector correlations in rotationally inelastic atom-diatom collisions and test it against the much examined Ar--NO ($\mathbf X^2 \Pi$) system. Based on the Fraunhofer scattering of matter waves, the model furnishes complex scattering amplitudes needed to evaluate the polarization moments characterizing the quantum stereodynamics. The analytic polarization moments are found to be in an excellent agreement with experimental results and with close-coupling... Andhra Pradesh, 2009-11-03, No. 162 by Commissioner of Printing, Government of Andhra Pradesh The Gazette of Andhra Pradesh Date: 2009-11-3 Department: Others Gazette Number: 162 Gazette Type: EXTRAORDINARY Issued By: COMMISSIONER OF PRINTING Part Number: PART II Subject: PRIVATE ADVERTISEMENTS CHANGE OF NAMES Topic: Andhra Gazette
CommonCrawl
Switch to: References Citations of: Dispositionalism and the Modal Operators Philosophy and Phenomenological Research 91 (2):411-424 (2015) Add citations You must login to add citations. Order: Most recent First author Export: Choose a format.. Formatted textPlain textBibTeXZoteroEndNoteReference Manager The Fundamentality of Fundamental Powers.Joaquim Giannotti - 2021 - Acta Analytica 36 (4):589-613.details Dispositional essentialism is the view that all or many fundamental properties are essentially dispositional, or powers. The literature on the dispositional essence of powers is abundant. In contrast, the question of how to understand the fundamentality of fundamental powers has received scarce interest. Therefore, the fundamentality of powers stands in need of clarification. There are four main conceptions of the fundamental, namely as that which is metaphysically independent; or belonging to a minimally complete basis; or perfectly natural; or metaphysically primitive. (...) Here, I present and discuss each of these approaches from the viewpoint of dispositional essentialism. I show that is incompatible with the metaphysics of powers and – have more drawbacks than merits. Therefore, my conclusion is that the dispositional essentialist should seek an alternative. Although I offer no positive account, I pave the way to more fruitful views by identifying the shortcoming of these unpromising options. (shrink) On How (Not) to Define Modality in Terms of Essence.Robert Michels - 2019 - Philosophical Studies 176 (4):1015-1033.details In his influential article 'Essence and Modality', Fine proposes a definition of necessity in terms of the primitive essentialist notion 'true in virtue of the nature of'. Fine's proposal is suggestive, but it admits of different interpretations, leaving it unsettled what the precise formulation of an Essentialist definition of necessity should be. In this paper, four different versions of the definition are discussed: a singular, a plural reading, and an existential variant of Fine's original suggestion and an alternative version proposed (...) by Correia which is not based on Fine's primitive essentialist notion. The first main point of the paper is that the singular reading is untenable. The second that given plausible background assumptions, the remaining three definitions are extensionally equivalent. The third is that, this equivalence notwithstanding, Essentialists should adopt Correia's version of the definition, since both the existential variant, which has de facto been adopted as the standard version of the definition in the literature, and the plural reading suffer from problems connected to Fine's primitive essentialist notion. (shrink) Modal Semantics Without Worlds.Craig Warmke - 2016 - Philosophy Compass 11 (11):702-715.details Over the last half century, possible worlds have bled into almost every area of philosophy. In the metaphysics of modality, for example, philosophers have used possible worlds almost exclusively to illuminate discourse about metaphysical necessity and possibility. But recently, some have grown dissatisfied with possible worlds. Why are horses necessarily mammals? Because the property of being a horse bears a special relationship to the property of being a mammal, they say. Not because every horse is a mammal in every possible (...) world. Some have recently begun to use properties to develop rivals to possible worlds semantics which may someday compare in formal power and capture the different systems of modal logic. In this paper, I do two things. I first offer a quick primer on possible worlds semantics. Then I discuss three rivals and the work they have left to do. (shrink) The Gap Problem Made Easy?Tien-Chun Lo - 2020 - Analysis 80 (3):486-492.details Byerly recently developed a new solution to the gap problem for cosmological arguments. His idea is that the best explanation for why the necessary being is found to have necessary existence is that it is a perfect being. I raise an objection to Byerly's solution on the ground that there is some rival naturalistic explanation that is as good as Byerly s theistic explanation. Can Hardcore Actualism Validate S5?Samuel Kimpton-Nye - 2019 - Philosophy and Phenomenological Research (2):342-358.details Hardcore actualism (HA) grounds all modal truths in the concrete constituents of the actual world (see, e.g., Borghini and Williams (2008), Jacobs (2010), Vetter (2015)). I bolster HA, and elucidate the very nature of possibility (and necessity) according to HA, by considering if it can validate S5 modal logic. Interestingly, different considerations pull in different directions on this issue. To resolve the tension, we are forced to think hard about the nature of the hardcore actualist's modal reality and how radically (...) this departs from possible worlds orthodoxy. Once we achieve this departure, the prospects of a hardcore actualist validation of S5 look considerably brighter. This paper thus strengthens hardcore actualism by arguing that it can indeed validate S5–arguably the most popular logic of metaphysical modality–and, in the process, it elucidates the very nature of modality according to this revisionary, but very attractive, modal metaphysics. (shrink) The Nature of Necessity.Brian Leftow - 2017 - Res Philosophica 94 (3):359-383.details I give an account of the nature of absolute or metaphysical necessity. Absolute-necessarily P, I suggest, just if it is always the case that P and there never is or was a power with a chance to bring it about, bring about a power to bring it about, etc., that not P. I display both advantages and a cost of this sort of definition. New Powers for Dispositionalism.Giacomo Giannini - 2020 - Synthese (ST: New Foundations for Disposit):1-30.details Establishing Dispositionalism as a viable theory of modality requires the successful fulfilment of two tasks: showing that all modal truths can be derived from truths about actual powers, and offering a suitable metaphysics of powers. These two tasks are intertwined: difficulties in one can affect the chances of success in the other. In this paper, I generalise an objection to Dispositionalism by Jessica Leech and argue that the theory in its present form is ill-suited to account for de re truths (...) about merely possible entities. I argue that such difficulty is rooted in a problem in the metaphysics of powers. In particular, I contend that the well-known tension between two key principle of powers ontology, namely Directedness and Independence has received an unsatisfactory solution so far, and that it is this unsatisfactory solution concerning the status of "unmanifested manifestations" that makes it hard for Dispositionalism to account for mere possibilia. I develop a novel account of the status of unmanifested manifestations and an overall metaphysics of powers which allows to better respond to Leech's objection and handle mere possibilia. The central idea of the proposal is that unmanifested manifestations are akin to mere logical existents, and are best characterised as non-essentially non-located entities. (shrink) Essence, Potentiality, and Modality.Barbara Vetter - 2021 - Mind 130 (519):833-861.details According to essentialism, metaphysical modality is founded in the essences of things, where the essence of a thing is roughly akin to its real definition. According to potentialism, metaphysical modality is founded in the potentialities of things, where a potentiality is roughly the generalized notion of a disposition. Essentialism and potentialism have much in common, but little has been written about their relation to each other. The aim of this paper is to understand better the relations between essence and potentiality, (...) on the one hand, and between essentialism and potentialism, on the other. It is argued, first, that essence and potentiality are not duals but interestingly linked by a weaker relation dubbed 'semi-duality'; second, that given this weaker relation, essentialism and potentialism are not natural allies but rather natural competitors; and third, that the semi-duality of essence and potentiality allows the potentialist to respond to an important explanatory challenge by using essentialist resources without thereby committing to essentialism. (shrink) Counterpossibles.Alexander W. Kocurek - 2021 - Philosophy Compass 16 (11):e12787.details Making Sense of Powerful Qualities.Ashley Coates - 2020 - Synthese 198 (9):8347-8363.details According to the powerful qualities view, properties are both powerful and qualitative. Indeed, on this view the powerfulness of a property is identical to its qualitativity. Proponents claim that this view provides an attractive alternative to both the view that properties are pure powers and the view that they are pure qualities. It remains unclear, however, whether the claimed identity between powerfulness and qualitativity can be made coherent in a way that allows the powerful qualities view to constitute this sort (...) of alternative. I argue here that this can be done, given a particular conception of both the qualitativity and powerfulness of properties. On this conception, a property is qualitative just in the sense that its essence is fixed independently of any distinct properties, and it is powerful just if its essence grounds its dispositional role. (shrink) The World Is a Necessary Being.Chad Vance - 2020 - Philosophia 48 (1):377-390.details A standard conception of metaphysical modality accepts that Some de re modal claims are true, These should be understood in terms of a possible worlds semantics, and There is trans-world identity. For instance, it seems true that Humphrey could have won the election. In possible worlds speak, we say that there exists a possible world where Humphrey wins the election. Furthermore, had that possibility been actualized instead of this one, Humphrey—our Humphrey, the very same man—would still have existed. Here, I (...) argue that this way of understanding de re modal claims, in conjunction with certain other plausible assumptions, entails that The World is a necessary being. (shrink) Modal Dispositionalism and the (T) Axiom.Matthew James Collier - 2021 - Philosophia 49 (3):977-988.details Yates has recently argued that modal dispositionalism invalidates the axiom. Both Yates and Allen have advanced responses to the objection: Yates's response proposes installing truth into the possibility biconditional, and Allen's response requires that all properties be construed as being essentially dispositional. I argue that supporters of Borghini and Williams's modal dispositionalist theory cannot accept these responses, given critical tenets of their theory. But, since these responses to the objection are the most plausible in the literature, I conclude that the (...) threat that Borghini and Williams's modal dispositionalist theory invalidates the axiom still looms large. (shrink) A Plenitude of Powers.Barbara Vetter - 2018 - Synthese 198 (Suppl 6):1365-1385.details Dispositionalism about modality is the view that metaphysical modality is a matter of the dispositions possessed by actual objects. In a recent paper, David Yates has raised an important worry about the formal adequacy of dispositionalism. This paper responds to Yates's worry by developing a reply that Yates discusses briefly but dismisses as ad hoc: an appeal to a 'plenitude of powers' including such powers as the necessarily always manifested power for 2+2\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} (...) \begin{document}$$2+2$$\end{document} to be 4. I argue that the reply is not ad hoc at all, by defending the metaphysics of dispositions that should underly it. I then argue, first, that a proper understanding of dispositions' degrees provides us with an argument for such necessarily always manifested dispositions; second, that all the natural attempts to block that argument can be resisted without being ad hoc; and third, that pragmatic considerations explain our intuitive resistance to the ascription of necessarily always manifested dispositions. Dispositionalism can be formally adequate after all. (shrink) New Frontiers in Ground, Essence, and Modality: Introduction.Donnchadh O'conaill & Tuomas E. Tahko - 2021 - Synthese 198 (Suppl 6):1219-1230.details Ground, essence, and modality seem to have something to do with each other. Can we provide unified foundations for ground and essence, or should we treat each as primitives? Can modality be grounded in essence, or should essence be expressed in terms of modality? Does grounding entail necessitation? Are the notions of ground and essence univocal? This volume focuses on the links—or lack thereof—between these three notions, as well as the foundations of ground, essence, and modality more generally, bringing together (...) work on the metaphysics, epistemology, and logic of these three notions by some of the leading figures in the field as well as emerging young scholars. -/- The invited contributors to this volume presented their work at a conference on Ground, Essence, and Modality at the University of Helsinki in June 2016, funded by the Academy of Finland Project The Epistemology of Metaphysics: From Rationalism to Nominalism. This conference is just one of many recent high-profile events and publications on these themes (e.g., the edited volumes Correia and Schnieder 2012a; Sirkel and Tahko 2014; Jago 2016). -/- After providing a brief historical summary of the (re)emergence of modality, essence and ground as central notions in metaphysics (Sect. 1), we shall outline some of the main themes in recent work on these notions and on the links between them (Sect. 2). In Sect. 3 we briefly introduce the papers in this volume. (shrink) Counterpossibles (Not Only) for Dispositionalists.Barbara Vetter - 2016 - Philosophical Studies 173 (10):2681-2700.details Dispositionalists try to provide an account of modality—possibility, necessity, and the counterfactual conditional—in terms of dispositions. But there may be a tension between dispositionalist accounts of possibility on the one hand, and of counterfactuals on the other. Dispositionalists about possibility must hold that there are no impossible dispositions, i.e., dispositions with metaphysically impossible stimulus and/or manifestation conditions; dispositionalist accounts of counterfactuals, if they allow for non-vacuous counterpossibles, require that there are such impossible dispositions. I argue, first, that there are in (...) fact no impossible dispositions; and second, that the dispositionalist can nevertheless acknowledge the non-vacuity of some counterpossibles. The strategy in the second part is one of 'divide and conquer' that is not confined to the dispositionalist: it consists in arguing that counterpossibles, when non-vacuous, are read epistemically and are therefore outside the purview of a dispositional account. (shrink) Natural Necessity: An Introductory Guide for Ontologists.Fumiaki Toyoshima - forthcoming - Applied Ontology:1-29.details From Possibility to Properties? Or From Properties to Possibility?Sophie R. Allen - 2017 - Philosophy 92 (1):21-49.details This paper contrasts two metaphysical accounts of modality and properties: Modal Realism which treats possible entities as primitive; and Strong Dispositionalism in which metaphysical possibility and necessity are determined by actually existing dispositions or powers. I argue that Strong Dispositionalism loses its initial advantages of simplicity and parsimony over Modal Realism as it is extended and amended to account for metaphysical rather than just causal necessity. Furthermore, to avoid objections to its material and formal adequacy, Strong Dispositionalism requires a richer (...) fundamental ontology which it cannot explicate without appealing either to possible worlds or to an account of counterfactual truth conditions, both of which Strong Dispositionalism was intended to replace. (shrink)
CommonCrawl
Design and aerodynamic investigation of dynamic architecture Hassam Nasarullah Chaudhry ORCID: orcid.org/0000-0002-4657-16051, John Kaiser Calautit2 & Ben Richard Hughes2 Innovative Infrastructure Solutions volume 1, Article number: 7 (2016) Cite this article The effect of the spacing between adjacent building floors on the wind distribution and turbulence intensity was analysed using computational fluid dynamics in this study. Five computational models were created with floor spacing ranging from 0.8 m (benchmark) to 1.6 m. The three-dimensional Reynolds-Averaged Navier–Stokes equations along with the momentum and continuity equations were solved using the FLUENT code for obtaining the velocity and pressure field. Simulating a reference wind speed of 5.5 m/s, the findings from the study quantified that at a floor spacing of 1.6 m, the overall wind speed augmentation was 39 % which was much higher than the benchmark model (floor spacing = 0.8 m) indicating an amplification in wind speed of approximately 27 %. In addition, the results indicated a gradual reduction in turbulence kinetic energy by up to 53 % when the floor spacing was increased from 0.8 to 1.6 m. Although the concept was to integrate wind turbines into the building fabric, this study is limited to the assessment of the airflow inside the spaces of building floors which can be potentially harnessed by a vertical axis wind turbine. The findings of this work have indicated that there is a potential for integration which will lead on to future research in this area. Building energy framework is part of a multifaceted system that includes transport and urban planning and has major social consequences as well as climate change impacts. The energy mix is also important in determining carbon dioxide emissions. Buildings worldwide account for a surprisingly high 40 % of global energy consumption, and the resulting carbon footprint, significantly exceeding those of all transportation combined [1–3]. Energy consumption of the United Arab Emirates (UAE) is driven largely by residential use, with a major proportion of the electricity being consumed by the building sector. With the ever-increasing economy of the UAE, the dependency on usage of non-renewable resources of energy is cumulating in direct proportion. Nevertheless, the non-renewable potential of these resources is certainly a leading world-wide issue, and one which has gained substantial international interest over the past decades. Large and attractive opportunities exist to reduce building's energy use at lower costs and higher returns than other sectors. At the same time, substantial investments will be required to achieve this target. These will require the combination of social, economic and environmental actions, including building energy codes, investment subsidies, labelling and reporting mechanisms, increased and trained workforce capacity, and evolving energy-efficient designs and technologies. There is a shortage of research on exploiting the potential building integrated wind turbines in achieving a sustainable built environment. Apart from using the height of high-rise buildings to increase energy capture, strategically optimising building geometry and its aerodynamic features itself can enhance wind turbine performance by inducing a higher volume of air than the macro-climate. One such proposal is to integrate wind turbines between floors of high rise structures as conceptually described by the rotating tower sustainable building design model. The arrangement uses the power generated to supply each residential apartment and also to rotate the floors, thus creating a dynamic structure [4]. This study numerically investigates the aerodynamic design of the rotating tower or dynamic architecture proposed for Dubai, UAE [4]. In relation to the existing literature [5–8], the CFD modeling approach is used in this paper out in order to determine the influence of floor spacing on the efficiency of building-integrated wind turbines with the spacing-height ratios varying between 25 and 50 %. Individual parameters including air velocity distribution, pressure profiles and turbulence kinetic energy is analysed to illustrate the variations in relation to increasing floor altitude. Previous related work Work related to the implementation of renewable energy as power sources in buildings, in particular wind energy is vast. Following is a concise assessment of prior work related to this investigation. In our previous works [5, 6], we have developed a numerical methodology for simulating the performance of a horizontal axis wind turbine integrated into a high-rise structure. Chaudhry et al. [5] conducted a numerical study using Computational Fluid Dynamics (CFD) on determining the influence of structural morphology on the performance of wind turbines using Bahrain World Trade Centre as the case-study. The study used square, triangular and circular cross-sections of the building in addition to the benchmark model. The findings determined that the optimum cross-sectional configuration for installing wind turbines in high-rise buildings was the circular orientation as the average wind speed at the wind turbines was accelerated by 0.3 m/s resulting in an overall augmentation of 5 %. The results from this study therefore highlighted that circular building morphology is the most viable building orientation, particularly suited to regions with a dominant prevailing wind direction. Further work by Chaudhry et al. [6] quantified an estimate power generation of 6.4 kW and a capacity factor of 2.9 % for the benchmark model at a reference wind speed of 6 m/s. In addition, the investigation determined that the wind direction of 180° yielded most favorable results with a maximum power production capacity of 29.3 kW indicating a high capacity factor of 13 % Chong et al. [7] conducted a study on the design of a wind turbine generator for the purpose of energy consumption in buildings. Two vertical axis wind turbines (VAWTs) in cross-wind orientation which were integrated with an enclosure were installed above a cooling tower in order to harness the discharged wind for electricity generation. The tests were conducted with and without the VAWTs on the performance of the system. From the laboratory testing, it was observed that installing VAWT sat a correct position above the exhaust air system did not indicate any significant negative impacts on the performance of the cooling tower model. The performance of the cooling tower model was improved by the increment of intake air speed and the reduction off an motor power consumption. The rotational speed of the VAWTs were very high (>400 rpm) while the tip speed ratios were in the range of 1.28–1.29, making it preferable for electricity generation. Sharpe and Proven [9] carried out detailed work on the concept and early development of a building integrated wind turbine in Scotland. The Crossflex proposal used in this investigation is based on the Darrieus turbines approach, consisting of two or more flexible aero foil blades attached to both the top and bottom of a vertical rotating shaft. Although the development of this technology is still in its early stage, the study has outlined the remaining tasks of its work. The subsequent stages will involve supplementary computer modeling using CFD analysis to model airflow over the cowling to establish the nature of augmentation and flow stability that may occur, and to optimize the design in respect of these. This stage will also investigate materials that may be used for the cowling, in terms of both functional and aesthetic performance and also possible behavior of the frame as a mechanism for inducing passive twist in the blades. It is anticipate that these designs will be evaluated against common building types to test methods of fixing, loads distribution and integration under a variety of built form conditions. Wang et al. [10] developed a methodology for aerodynamic study on a small domestic wind turbine with scoop to improve energy capture under low ambient wind speeds. The work used advanced CFD modelling incorporating the RNG k-e turbulence model. The findings of this work depicted that having a scoop acts as a diffuser and amplifies the airflow speed by a factor of 1.5 times equivalent to an increase in power output of 2.2 times with the same swept area. In addition, the work concluded that CFD can be used as a robust modelling technique for designing such wind turbine rotor blade systems to function within the proposed wind scoop. Computational domain The rotating tower is one of the green projects scheduled to be built in Dubai. Once complete, it will stand at 420 m and will comprise of 78 floors [4, 11], each with the freedom to rotate about its own axis. Detailed information about the dimensions of the structure is scarce. One important aspect is the spacing between individual floors which is 20 % of the floor height itself. This spacing is to accommodate for the wind turbine blades integrated inside the building structure. The height of each floor is 5.2 m and hence, the spacing is approximately 0.8 m. The three-dimensional computer-aided design was used for the CFD analysis; with dimensions shown in Fig. 1. Section—front view (1.6 m distance between each floor) The computational domain comprised of the building geometry, which was designed according to the actual specifications of the high-rise tower and the specific wind turbines. The enclosure was designed to allow for simulating the macro-climate wind profiles over the building as specified in Fig. 2. In order to ensure that the walls of the computational domain will have no or minimal adverse effect on the airflow near the building, the domain was sized appropriately (2.6 H for the side wall and 2.1 H for the top wall). The distancing between the building and inlet or outlet is set based on the AIJ guidelines for simulating urban environments [16]. Modelled computational domain ANSYS FLUENT 14 commercial software was used to perform The Reynolds-Averaged Navier–Stokes (RANS) computations based on a control volume approach for solving flow equations. The Semi-Implicit Method for Pressure-Linked Equations (SIMPLE) algorithm was used for velocity–pressure coupling. All of the transport equations were discretized using second order upwind scheme. The standard k-e transport model which is frequently used for studying airflows around buildings was used to define the turbulence kinetic energy and flow dissipation rate within the model [12, 13]. The governing equations are the continuity, momentum and energy which are detailed in Eqs. 1–3. $$\frac{\partial }{\partial t}\left( {\alpha_{q} \rho_{q} } \right) + \nabla \cdot \left( {\alpha_{q} \rho_{q} \vec{v}_{a} } \right) = \sum\nolimits_{n = 1}^{n} {\left( {\dot{m}_{pq} - \dot{m}_{qp} } \right)} + S_{q}$$ $$\frac{\partial }{\partial t}\left( {\alpha_{q} \rho_{q} \vec{v}_{q} } \right) + \nabla \cdot \left( {\alpha_{q} \rho_{q} \vec{v}_{q} \vec{v}_{q} } \right) = - \alpha_{q} \nabla \rho + \nabla \cdot \bar{\bar{\tau }}_{q} + \alpha_{q} \rho_{q} \vec{g} + \sum\nolimits_{p = 1}^{n} {\left( {\vec{R}_{pq} + \dot{m}_{pq} \vec{v}_{pq} - \dot{m}_{qp} \vec{v}_{p} } \right)} + \left( {\vec{F}_{q} + \vec{F}_{lift,q} + \vec{F}_{v,q} } \right)$$ $$\frac{\partial }{\partial t}\left( {\alpha_{q} \rho_{q} h_{q} } \right) + \nabla \cdot \left( {\alpha_{q} \rho_{q} \vec{u}_{q} h_{q} } \right) = \alpha_{q} \frac{{\partial \rho_{q} }}{\partial t} + \bar{\bar{\tau }}_{q} :\nabla \vec{u}_{q} - \nabla \cdot \vec{q}_{q} + S_{q} + \sum\nolimits_{p = 1}^{n} {\left( {\vec{R}_{pq} + \dot{m}_{pq} \vec{v}_{pq} - \dot{m}_{qp} \vec{v}_{p} } \right)}$$ where; \(\vec{v}_{q}\) represents the velocity of phase q and \({\dot{\text{m}}}_{\text{pq}}\) and \(\dot{m}_{qp}\) characterises the mass transfer from the pth to qth phase and vice versa. \(\bar{\bar{\tau }}_{q}\) represents the qth phase stress–strain tensor. h q represents the specific enthalpy of the qth phase and \(\vec{q}_{q}\) represents the heat flux. Q pq represents the intensity of heat exchange between the pth and qth phases and h pq is the interface enthalpy. S q represents the source term. The use of the standard k-e transport model on building configurations has been found in previous works [11, 14]. The turbulence kinetic energy, k, and its rate of dissipation, e, are obtained from the following transport equations formulated in Eqs. 4 and 5. $$\frac{\partial }{\partial t}\left( {\rho k} \right) + \frac{\partial }{{\partial x_{i} }}\left( {\rho ku_{i} } \right) = \frac{\partial }{{\partial x_{j} }}\left[ {\left( {\mu + \frac{{\mu_{t} }}{{\sigma_{k} }}} \right)\frac{\partial k}{{\partial x_{j} }}} \right] + G_{k} + G_{b} - \rho \in - Y_{M} + S_{k}$$ $$\frac{\partial }{\partial t}\left( {\rho e} \right) + \frac{\partial }{{\partial x_{i} }}\left( {\rho eu_{i} } \right) = \frac{\partial }{{\partial x_{j} }}\left[ {\left( {\mu + \frac{{\mu_{t} }}{{\sigma_{e} }}} \right)\frac{\partial e}{{\partial x_{j} }}} \right] + C_{1e} \frac{e}{k}(G_{k} + C_{3e} G_{b} ) - C_{2 \in } \rho \frac{{e^{2} }}{k} + S_{e}$$ where G k represents the generation of turbulence kinetic energy due to the mean velocity gradients, G b represents the generation of turbulence kinetic energy due to buoyancy. Y M represents the contribution of fluctuating dilatation in compressible turbulence to the overall dissipation rate. C 1e , C 2e and C 3e are constants; σ k and σ e are the turbulent Prandtl numbers for k and e. The computational volumes were applied with non-uniform mesh due to the complicated shape of the geometry. The meshed model comprised of 2,523,459 nodes and 11,849,200 elements as displayed in Fig. 3. The minimum face angle was 5.67° while the maximum edge length and element volume ratios were 9.9 and 44.6. As observed in Fig. 3, the mesh around the building and spacing between the floors were refined to ensure that the flow field was accurately captured in the simulations. Mesh generation on the computational domain Grid verification In order to verify the accuracy of the numerical models, a grid independency test was carried out to determine the variation in results over increasing mesh sizes. Basic concepts associated with mesh refinement deals with the refinement and evaluation of elements where the posterior error indicator is larger than the preset criterion, while mesh enrichment considers running higher order polynomials till the solution is expected to improve with a fixed mesh [13]. Grid verification was carried out using mesh refinements (h-method) in order to optimise the distribution of mesh size h over a finite element. The area-weighted average value of the static pressure on the building façade was taken as the error indicator (Fig. 4), as the grid was refined from 1,427,500 to 11,849,200 elements. The discretisation error was found to reduce to below 1 % when the cells were increased 11,849,200 and hence the size was used in this investigation. The mesh used for the investigation was based on a grid verification analysis which was performed by conducting additional simulations of the same domain and boundary conditions but with various mesh sizes. Discretisation error at increasing number of elements Using the benchmark model (floor spacing = 0.8 m), a total of four additional computational models were created with floor spacing variation ranging from 0.8 to 1.6 m in order to determine its impact on the wind velocity and pressure distribution and thus evaluate the potential for integrating wind turbines within the space. Table 1 displays a section of the generated computational models for this investigation. Table 1 Generated computational models The applied boundary conditions comprised of a reference velocity (uref) of 5.5 m/s at a height of 420 m (zref) approaching directly perpendicular to the building façade [5, 6]. The geometry was modeled as a solid zone while the enclosure was modeled as a fluid zone for the analyses. The boundary conditions were kept identical throughout the numerical investigation for all analysed models and are displayed in Table 2. Table 2 Boundary conditions Using the computational domain, a representation of velocity boundary layer profile and turbulence intensity (I) at the windward side of the building is shown in Fig. 5 wherein the wind speed is taken from the direction of the sea. The thickness of boundary layer of the atmosphere varies with wind speed, turbulence level and the type of surface. The power law is an empirical equation expressed in Eq. 1. For neutral stability conditions, α is approximately 1/7, or 0.143, regarded as a reasonable but conservative estimate [15]. Representation of the boundary layer profile and turbulence intensity $${u} = {u}_{{{ref}}} \left( {\frac{{z}}{{{z}_{{{ref}}} }}} \right)^{{\alpha}}$$ where u is the speed at a particular point, uref is the reference speed, z is the height at a particular point and zref is the reference height. Figure 6 displays the static pressure gradient at a height of 250 m above ground level on the building model with a spacing of 1.6 m between floors. At an inlet wind speed of 5.5 m/s and wind angle of 0°, the maximum positive pressure of 29 Pa was created on the windward side of the tower. As expected, a negative pressure was created on the leeward side thus highlighting a maximum pressure differential of 102 Pa. Figure 6b shows a more detailed view of the static pressure distribution inside the floor spacing. Similarly, positive pressure was observed on the wind ward side of the internal circular wall and negative pressure on the sides where airflow splits and accelerates. Contour levels of static pressure at a height of 250 m on the a building fabric b plan view Air velocity distribution for the building model with spacing of 1.6 m between floors is shown in Fig. 7. The direction of the airflow in the diagram is from left to right. The airflow velocity was reduced up to 2.1 m/s as it approached the wind ward surface of the building. Some of the airflow streamed around the sides of the building with some entering the floor spacing. The airflow inside the spacing accelerates as it moved along the sides of the internal circular wall. Using the inlet speed of 5.5 m/s, the findings showed that the velocity was increased by almost 92 % to 10.6 m/s between the floors of the building. The wind turbine can be designed and positioned inside the spacing to exploit this wind acceleration. A large region of recirculation was observed on the leeward side of the building. Plan view of air velocity at a height of 250 m Figure 8 displays the contour levels for the turbulence kinetic energy on the building geometry. The maximum turbulence was observed in the region adjacent between two conforming floors of the building, thus indicating a turbulent exposure of the proposed wind turbines with the incoming flow of air. The maximum turbulence intensity was recorded at 2.5 J/kg while a mean turbulence intensity of 1.1 J/kg was obtained on the windward façade of the geometry. Plan view of air turbulence kinetic energy at a height of 250 m Figure 9 displays the velocity distribution on the building at increasing altitude ranging from ground level to 450 m. In general, all analysed heights displayed a similar velocity profile with the wind shearing away towards the sides of the tower while creating a region of re-circulation at the leeward end owing to the adverse pressure gradient. However, due to the simulated atmospheric boundary layer flows, a gradual increase in air velocity were observed as the height was gradually increased highlighting a direct proportionality between the two parameters. This phenomenon must be taken into account during the design stage as surrounding structures can have an impact on the airflow distribution particularly, the wind turbines that will be located on the lower floors. Velocity distribution on the building structure at increasing altitude Table 3 displays the summary of area-weighted results for mean velocity, turbulence intensity and pressure variables for the rotating tower building model with floor spacing ranging from 0.8 to 1.6 m. A linear proportionality between floor spacing and air velocity was observed with the speed increasing by 9 % from 8.9 to 9.9 m/s at a floor spacing of 1.6 m respectively. The turbulence intensity was found to decrease from 52 to 34 % signifying the positive influence of varying floor spacing on turbulence characteristics. A detailed demonstration of the floor spacing on the induced wind velocity and turbulence is highlighted in Fig. 10. Relationship between induced wind velocity and turbulence kinetic energy at increasing floor spacing The graphical representation of the effect of floor spacing on the wind distribution is shown in Fig. 11. In order to quantify the findings, three individual points were created on the building geometry and the results of air velocity were recorded for each computational model. As observed, all the three points indicated an increase in air velocity as the floor spacing was increased from 0.8 to 1.6 m with a maximum speed of 9.1 m/s. At a floor spacing of 1.6 m, the overall wind speed augmentation was calculated at 39 % which was much higher than the benchmark model (floor spacing = 0.8 m) which indicated an amplification in wind speed of approximately 27 % from the inlet velocity of 5.5 m/s. The investigation depicted that the morphology of the building plays a key role in the overall character of air circulation over the structure and also the spacing between the floors. Effect of floor spacing on wind speed at different points The effect of floor spacing on the overall turbulence kinetic energy and pressure variables are quantified in Figs. 12 and 13. An inverse proportionality trend was obtained for parameters as the turbulence kinetic energy across point 1 was found to decrease from 0.36 to 0.17 J/kg as the floor spacing was increased from 0.8 to 1.6 m. The highest variation in turbulence kinetic energy was noted across point 3 which was located directly perpendicular to the incoming wind. On average, the turbulence kinetic energy decreased by 53 % across the points on the sides (Point 1 and 2) when the floor spacing was increased from 0.8 to 1.6 m, thus underlining a considerable improvement in air distribution. Effect of floor spacing on turbulence kinetic energy at different points Effect of floor spacing on static pressure at different points A comparison between the overall floor height of the building and the corresponding wind velocity is displayed in Fig. 14. As expected, the two quantities were found to be in direct proportion to each other. Using the inlet wind speed of 5.5 m/s, an increase of 19 % was obtained at a height of 100 m above ground level. The air velocity was further found to accelerate by 25 % at a height of 450 m. This phenomenon was predominantly because of the boundary layer which is formed due to the spatially integrated heat and moisture exchanges between the urban environment and its overlying air. Effect of building floor height on wind speed Increasing the energy efficiency of high-rise buildings by utilising renewable energy is a major factor in improving the sustainability of the built environment As depicted from the results, structural morphology plays a very important role on the efficiency of building-integrated wind turbines by regulating the induced air stream. For the rotating tower model, the aerodynamic façade assists in an unfluctuating distribution of wind across the building as noted from the velocity gradient represented by discrete point measurements taken in Fig. 11. The airstream was shown to accelerate by 12 % when the floor spacing was increased from 0.8 to 1.6 m. In addition, having an aerodynamic façade with a floor spacing of 1.6 m reduces the overall turbulence kinetic energy over the structure by up to 53 %. Dynamic architecture is a term devised for a new generation of off-grid high rise structures that rotate. In this paper, the influence of building morphology on wind distribution is presented by investigating the spacing between adjacent building floors of dynamic architecture. Five computational models were created with floor spacing ranging from 0.8 m (benchmark) to 1.6 m. The three-dimensional Reynolds-Averaged Navier–Stokes (RANS) equations along with the momentum and continuity equations were solved for obtaining the velocity and pressure field. Simulating a reference wind speed of 5.5 m/s, the findings from the study quantified that at a floor spacing of 1.6 m, the overall wind speed augmentation was 39 % which was much higher than the benchmark model (floor spacing = 0.8 m) indicating an amplification in wind speed of approximately 27 %. In addition, the study indicated a decrease in turbulence kinetic energy by 53 % when the floor spacing was increased from 0.8 to 1.6 m, thus underlining a considerable improvement in air distribution. The findings from this study depicted that the vertical morphology and aerodynamic features of a building plays a key role in the overall character of air distribution over the structural façade. The results from this study can be used for the design and integration of wind generation technologies into high rise buildings. Key World Energy Statistics, International Energy Agency, Head of Communication and Information Office, 75739 Paris Cedex 15, France Müller G, Mark F, Jentsch ES (2009) Vertical axis resistance type wind turbines for use in buildings. Renew Energy 34:1407–1412 Killa S, Smith RF (2008) Harnessing energy in tall buildings: Bahrain World Trade Center and Beyond, Council of Tall Buildings and Urban Habitat (CTBUH) 8th World Congress, Dubai, United Arab Emirates, March 3–5 Fisher DH (2008) Rotating tower Dubai, Rotating Tower Technology International Limited (UK) Chaudhry HN, Calautit JK, Hughes BR (2014) The influence of structural morphology on the efficiency of Building Integrated Wind Turbines (BIWT). Aims Energy 2:219–236 Chaudhry HN, Calautit JK, Hughes BR (2015) Computational analysis to factor wind into the design of an architectural environment. Model Simul Eng Article ID 234601 Chong WT, Yip SY, Fazlizan A, Poh SC, Hew WP, Tan EP, Lim TS (2013) Design of an exhaust air energy recovery wind turbine generator for energy conservation in commercial buildings. Renew Energy 67:252–256 Lu L, Sun K (2014) Wind power evaluation and utilization over a reference high-rise building in urban area. Energy Build 68:339–350 Sharpe T, Proven G (2010) Crossflex: concept and early development of a true building integrated wind turbine. Energy Build 42:2365–2375 Wang F, Bai L, Fletcher J, Whiteford J, Cullen D (2008) The methodology for aerodynamic study on a small domestic wind turbine with scoop. J Wind Eng Ind Aerodyn 96:1–24 Chaudhry HN, Hughes BR (2010) Computational analysis of dynamic architecture. IMechE – J. Power Energy IMechE Part A: Vol 225, Number 1/2010 Launder BE, Spalding DB (1972) Lectures in mathematical models of turbulence. Academic Press, London Chung TJ (2002) Computational fluid dynamics. Cambridge University Press; illustrated edition, ISBN-0521594162 Calautit JK, Hughes BR, Ghani SA (2013) Numerical investigation of the integration of heat transfer devices into wind towers. Chem Eng Trans 34:43–48 Kubik ML, Coker PJ, Hunt C. (2011) Using meteorological wind data to estimate turbine generation output: a sensitivity analysis. World Renewable Energy Congress; Linkoping, Sweden Tominaga Y, Mochida A, Yoshie R, Kataoka H, Nozu T, Yoshikawa M, Shirasawa T (2008) AIJ guidelines for practical applications of CFD to pedestrian wind environment around buildings. J Wind Eng Ind Aerodyn 96:1749–1761 The authors would like to acknowledge Heriot-Watt University and the University of Sheffield for the use of the research facilities. School of Energy, Geoscience, Infrastructure and Society, Heriot-Watt University, PO Box 294345, Dubai, UAE Hassam Nasarullah Chaudhry Department of Mechanical Engineering, University of Sheffield, Sheffield, S10 2TN, UK John Kaiser Calautit & Ben Richard Hughes John Kaiser Calautit Ben Richard Hughes Correspondence to Hassam Nasarullah Chaudhry. I hereby confirm, on behalf of the authors that none of the authors have any competing interests in the manuscript. Chaudhry, H.N., Calautit, J.K. & Hughes, B.R. Design and aerodynamic investigation of dynamic architecture. Innov. Infrastruct. Solut. 1, 7 (2016). https://doi.org/10.1007/s41062-016-0002-2
CommonCrawl
Between subject mixture models Within subject mixture models Objectives: learn how to implement between subject mixture models (BSMM) and within subject mixture models (WSMM). Projects: bsmm1_project, bsmm2_project, wsmm_project Several types of mixture models exist, they are useful in the context of mixed effects models. It may be necessary in some situations to introduce diversity into the structural models themselves: Between-subject model mixtures (BSMM) assume that there exists subpopulations of individuals. Different structural models describe the response of each subpopulation, and each subject belongs to one of these subpopulations. One can imagine for example different structural models for responders, nonresponders and partial responders to a given treatment. The easiest way to model a finite mixture model is to introduce a label sequence that takes its values in such that if subject i belongs to subpopulation m. is the probability for subject i to belong to subpopulation m. A BSMM assumes that the structural model is a mixture of M different structural models: $$f\left(t_{ij}; \psi_i, z_i \right) = \sum_{m=1}^M 1_{z_i = m} f_m\left( t_{ij}; \psi_i \right) $$ In other word, each subpopulation has its own structural model: is the structural model for subpopulation m. Within-subject model mixtures (WSMM) assume that there exist subpopulations (of cells, viruses, etc.) within each patient. In this case, different structural models can be used to describe the response of different subpopulations, but the proportion of each subpopulation depends on the patient. Then, it makes sense to consider that the mixture of models happens within each individual. Such within-subject model mixtures require additional vectors of individual parameters representing the proportions of the M models within each individual i: $$f\left( t_{ij}; \psi_i, z_i \right) = \sum_{m=1}^M \pi_{m,i} f_m\left( t_{ij}; \psi_i \right)$$ The proportions are now individual parameters in the model and the problem is transformed into a standard mixed effects model. These proportions are assumed to be positive and to sum to 1 for each patient. bsmm1_project (data = 'pdmixt1_data.txt', model = 'bsmm1_model.txt') We consider a very simple example here with two subpopulations of individuals who receive a given treatment. The outcome of interest is the measured effect of the treatment (a viral load for instance). The two populations are non responders and responders. We assume here that the status of the patient is known. Then, the data file contains an additional column GROUP. This column is duplicated because Monolix uses it i) as a regression variable (REGRESSOR): it is used in the model to distinguish responders and non responders, ii) as a categorical covariate (CATEGORICAL COVARIATE): it is used to stratify the diagnosis plots. We can then display the data and use the categorical covariate GROUP_CAT to split the plot into responders and non responders: We use different structural models for non responders and responders. The predicted effect for non responders is constant f(t) = A1 while the predicted effect for responders decreases exponentially f(t) = A2 exp(-kt). The model is implemented in the model file bsmm1_model.txt (note that the names of the regression variable in the data file and in the model script do not need to match): input = {A1, A2, k, g} g = {use=regressor} if g==1 f = A1 f = A2*exp(-k*max(t,0)) output = f The plot of individual fits exhibit the two different structural models: VPCs should then be splitted according to the GROUP_CAT as well as the prediction distribution for non responders and responders: The status of the patient is unknown in this project (which means that the column GROUP is not available anymore). Let p be the proportion of non responders in the population. Then, the structural model for a given subject is f1 with probability p and f2 with probability 1-p. The structural model is therefore a BSMM: input = {A1, A2, k, p} f1 = A1 f2 = A2*exp(-k*max(t,0)) f = bsmm(f1, p, f2, 1-p) Important: p is a population parameter of the model to estimate. There is no inter-patient variability on p: all the subjects have the same probability of being a non responder in this example. We use a logit-normal distribution for p in order to constrain it to be between 0 and 1, but without variability: p is estimated with the other population parameters: Then, the group to which a patient belongs is also estimated as the group of highest conditional probability: $$\begin{aligned}\hat{z}_i &= 1~~~~\textrm{if}~~~~ \mathbb{P}(z_i=1 | (y_{ij}), \hat{\psi}_i, \hat{\theta})> \mathbb{P}(z_i=2 | (y_{ij}),\hat{\psi}_i, \hat{\theta}),\\ &=0~~~~\textrm{otherwise}\end{aligned}$$ The estimated groups can be used as a stratifying variable to split some plots such as VPCs wsmm_project (data = 'pdmixt2_data.txt', model = 'wsmm_model.txt') It may be too simplistic to assume that each individual is represented by only one well-defined model from the mixture. We consider here that the mixture of models happens within each individual and use a WSMM: f = p*f1 + (1-p)*f2 f = wsmm(f1, p, f2, 1-p) Remark: Here, writing f = wsmm(f1, p, f2, 1-p) is equivalent to writing f = p*f1 + (1-p)*f2 Important: Here, p is an individual parameter: the subjects have different proportions of non responder cells. We use a probit-normal distribution for p in order to constrain it to be between 0 and 1, with variability: There is no latent covariate when using WSMM: mixtures are continuous mixtures. We therefore cannot split anymore the VPC and the prediction distribution anymore.
CommonCrawl
Computational Geosciences pp 1–18 | Cite as A lattice-Boltzmann study of permeability-porosity relationships and mineral precipitation patterns in fractured porous media Mehrdad Ahkami Andrea Parmigiani Paolo Roberto Di Palma Martin O. Saar Xiang-Zhao Kong First Online: 11 January 2020 Mineral precipitation can drastically alter a reservoir's ability to transmit mass and energy during various engineering/natural subsurface processes, such as geothermal energy extraction and geological carbon dioxide sequestration. However, it is still challenging to explain the relationships among permeability, porosity, and precipitation patterns in reservoirs, particularly in fracture-dominated reservoirs. Here, we investigate the pore-scale behavior of single-species mineral precipitation reactions in a fractured porous medium, using a phase field lattice-Boltzmann method. Parallel to the main flow direction, the medium is divided into two halves, one with a low-permeability matrix and one with a high-permeability matrix. Each matrix contains one flow-through and one dead-end fracture. A wide range of species diffusivity and reaction rates is explored to cover regimes from advection- to diffusion-dominated, and from transport- to reaction-limited. By employing the ratio of the Damköhler (Da) and the Peclet (Pe) number, four distinct precipitation patterns can be identified, namely (1) no precipitation (Da/Pe < 1), (2) near-inlet clogging (Da/Pe > 100), (3) fracture isolation (1 < Da/Pe < 100 and Pe > 1), and (4) diffusive precipitation (1 < Da/Pe < 100 and Pe < 0.1). Using moment analyses, we discuss in detail the development of the species (i.e., reactant) concentration and mineral precipitation fields for various species transport regimes. Finally, we establish a general relationship among mineral precipitation pattern, porosity, and permeability. Our study provides insights into the feedback loop of fluid flow, species transport, mineral precipitation, pore space geometry changes, and permeability in fractured porous media. Lattice-Boltzmann method Fractured porous media Mineral precipitation patterns Permeability-porosity relationships This work was supported by ETH Grant ETH-12 15-2. The Werner Siemens Foundation (Werner Siemens-Stiftung) is further thanked by Martin Saar for its support of the Geothermal Energy and Geofluids (GEG.ethz.ch) Group at ETH Zurich. We thank the two anonymous reviewers for their helpful comments and suggestions that improved this paper. Compliance with ethical standards The authors declare that they have no conflict of interest. Appendix A: fluid dynamics solver In a D2Q9 lattice-Boltzmann method (LBM), the particle velocity vector, vi, in Eq. 1 is defined as $$ \boldsymbol{v}_{i} = \frac{\Delta x}{\Delta t}\times\left\{ \begin{array}{lllll} &(0,0), & & i=0 \\ & \left( \cos{\frac{(i-1)\pi}{2}}, \sin{\frac{(i-1)\pi}{2}} \right), & & i=1-4 \\ & \left( \cos{\frac{(2i-1)\pi}{4}}, \sin{\frac{(2i-1)\pi}{4}} \right), & & i=5-8, \end{array} \right. $$ where i = 0 indicates the rest velocity, i = 1 − 4 indicates velocities pointing to North-South-East-West, respectively, and i = 5 − 8 indicates the diagonal velocities. The collision operator, \(\boldsymbol {\Omega }_{i}^{f}(x,t)\), is reduced here to a single relaxation time (SRT or BGK) operator that redistributes momentum and mass among particles [41, 43], $$ \boldsymbol{\Omega}_{i}^{f}(\boldsymbol{x},t)=\frac{1}{\tau_{f}}\Big({f_{i}^{0}}(\boldsymbol{x},t)-f_{i}(\boldsymbol{x},t)\Big), $$ where \({f_{i}^{0}}\) is the local equilibrium distribution, $$ {f_{i}^{0}}(\boldsymbol{x},t)={w_{i}^{f}}\rho(\boldsymbol{x},t)\left( 1 + \frac{\boldsymbol{u}\cdot \boldsymbol{v}_{i}}{{c_{s}^{2}}} + \frac{1}{2}\frac{(\boldsymbol{u}\cdot \boldsymbol{v}_{i})^{2}}{{c_{s}^{4}}} - \frac{1}{2}\frac{\boldsymbol{u}\cdot \boldsymbol{u}}{{c_{s}^{2}}}\right). $$ The lattice weights, \({w_{i}^{f}}\), for the D2Q9 scheme are $$ {w_{i}^{f}} = \left\{ \begin{array}{lllll} &4/9, & & i=0 \\ &1/9, & & i=1-4 \\ &1/36, & & i=5-8. \end{array} \right. $$ The fluid density, ρ(x, t), is calculated as $$ \rho(\boldsymbol{x},t)=\sum\limits_{i=0}^{8} f_{i}(\boldsymbol{x},t). $$ Accordingly, the fluid pressure is given as \(p=\rho {c_{s}^{2}}\). Following the scheme proposed by [16], the macroscopic fluid velocity, u, is calculated from the momentum of the probability distribution function, f(x, t), and the drag force, fdrag, $$ \boldsymbol{u}(\boldsymbol{x},t)= \frac{1}{\rho(\boldsymbol{x},t)}\left( \sum\limits_{i=0}^{8} f_{i}(\boldsymbol{x},t)\boldsymbol{v}_{i}+\frac{\Delta t}{2}\boldsymbol{f}_{drag}\right). $$ The Fdrag, i in Eq. 1 is calculated by $$ F_{drag,i}= {w_{i}^{f}}\left( 1-\frac{1}{2\tau_{f}}\right)\left( \frac{\boldsymbol{v}_{i}-\boldsymbol{u}}{{c_{s}^{2}}}+\frac{\boldsymbol{u}\cdot \boldsymbol{v}_{i}}{{c_{s}^{4}}}\boldsymbol{v}_{i}\right)\cdot \boldsymbol{f}_{drag}. $$ Appendix B: heterogeneous reaction transport solver In the D2Q5 scheme, the lattice direction ei in Eq. 4 is given by $$ \boldsymbol{e}_{i} = \frac{\Delta x}{\Delta t}\times\left\{ \begin{array}{lllll} &(0,0), & & i=0 \\ &\left( \cos{\frac{(i-1)\pi}{2}}, \sin{\frac{(i-1)\pi}{2}} \right), & & i=1-4 \end{array} \right. $$ where i = 0 indicates the rest velocity, and i = 1 − 4 indicates velocities pointing to North-South-East-West, respectively. The collision term, Ωg(x, t), is calculated using a so-called single relaxation time approach (SRT), $$ \boldsymbol{\Omega}_{i}^{g}(\boldsymbol{x},t)=\frac{1}{\tau_{g}}\Big({g_{i}^{0}}(\boldsymbol{x},t)-g_{i}(\boldsymbol{x},t)\Big), $$ where \({g_{i}^{0}}\) is the local equilibrium distribution, $$ {g_{i}^{0}}(\boldsymbol{x},t)={w_{i}^{g}} C(\boldsymbol{x},t)\left( 1~+~\frac{\boldsymbol{u}\cdot \boldsymbol{e}_{i}}{{c_{s}^{2}}}\right), $$ where \(C(\boldsymbol {x},t)={\sum }_{i=0}^{4} g_{i}(\boldsymbol {x},t)\) is the concentration of the transported species. Appendix C: reactive model coupling The coupling between flow and reactive transport in this study is illustrated by Fig. 11. Our procedure is slightly different from Huber et al. [19]. In the present study, the convergence of the velocity field is required to proceed to the transport and reaction step. Flow chart of the current lattice-Boltzmann reactive transport model Appendix D: verification of the current LBM code The current LBM code is validated with simulations of (i) flow and (ii) transport to their corresponding analytical solutions in a Hele-Shaw model with two parallel plates. Taking advantage of our drag force model (Eq. 1), the 3D Hele-Shaw model can be simulated using a 2D domain. Here, the width and length of the 2D domain are set to 600 lattices/nodes. Boundaries parallel to the flow/transport direction are set to no-slip boundaries. A constant pressure gradient is set from the inlet to the outlet boundaries (perpendicular to the main transport direction). For the flow validation, we calculate the equivalent permeability of the Hele-Shaw model with various openings (i.e., the aperture of the Hele-Shaw model), ranging from 4 to 30 lattices/nodes ([4,5,6,8,10,14,17,20,25,30]). For these apertures, the Re number is calculated as ([0.006, 0.011, 0.018, 0.043, 0.084, 0.227, 0.405, 0.657, 1.275, 2.189]), respectively. In these validations, the physical dimension of one lattice is 50 μ m. The simulated permeability is compared with the permeability calculated using the cubic law [60], $$ k=h^{2}/12 , $$ where k is the permeability and h is the aperture. The comparison of permeabilities, estimated by the LBM simulations and the cubic law is shown in Fig. 12. The results indicate that our current LBM code successfully reproduce fluid flow. Comparison of permeability values, calculated by the current LBM code and the cubic law [60] For the transport validation, we simulate a non-reactive solute transport (front propagation) in a 2D Hele-Shaw model of the same size as the flow validation but with an aperture of 6 lattices/nodes. Each lattice has a physical dimension of 50 μ m. This aperture is the same as the one in the mineral precipitation simulations. Similar to the flow validation, no-slip boundaries are set to the boundaries parallel to the main transport direction and a constant pressure gradient is set across the inlet and outlet boundaries (perpendicular to the main flow/transport direction). For the given pressure boundary, the Reynolds (Re) number is 0.018. Initial solute concentration in the domain is set to zero and a constant solute concentration of 1 is set at the inlet boundary. The Peclet (Pe) number for the solute transport is Pe = 0.1. We compare the solute transport (front propagation) from the LBM simulations, with the following analytical solution [28], $$ \begin{array}{lllll} \frac{\left( C(x,t)-C_{0}\right)}{\left( C_{inj}-C_{0}\right)} = \frac{1}{2}\times\left( \text{erfc}\left( \frac{x-vt}{\sqrt{4Dt}}\right)+\exp\left( \frac{xv}{D}\right)\text{erfc}\left( \frac{x+vt}{\sqrt{4Dt}}\right)\right) s , \end{array} $$ where C0 = C(x, 0) is the initial concentration, Cinj = C(0,t) is the inlet concentration, x is the coordinate along the center line of the Hele-Shaw domain in the flow/transport direction, v is the fluid velocity along the center line at steady-state conditions, D is the solute diffusion coefficient, erfc(y) is the complementary error function, $$ \text{erfc}\left( y\right)= \frac{2}{\sqrt{\pi}}{\int}^{\infty}_{y} \exp(-t^{2}) dt, $$ and exp is the natural exponential function. The comparison at different lattice times shows a close-to perfect agreement between our LBM simulation and the analytical solution (Fig. 13). Comparison of solute front propagation between the LBM simulations and the analytical solutions at different lattice times Ahkami, M., Roesgen, T., Saar, M.O., Kong, X.Z.: High-resolution temporo-ensemble PIV to resolve pore-scale flow in 3D-printed fractured porous media. Transport in Porous Media. ISSN 15731634. http://link.springer.com/10.1007/s11242-018-1174-3 (2018) Souto, H.P.A., Moyne, C.: Dispersion in two-dimensional periodic porous media. Part ii. Dispersion tensor. Phys. Fluids 9(8), 2253–2263 (1997)CrossRefGoogle Scholar Battiato, I., Tartakovsky, D.M.: Applicability regimes for macroscopic models of reactive transport in porous media. J. Contam. Hydrol. 120, 18–26 (2011)CrossRefGoogle Scholar Battiato, I., Tartakovsky, D.M., Tartakovsky, A.M., Scheibe, T.D.: Hybrid models of reactive transport in porous and fractured media. Adv. Water Resour. 34(9), 1140–1150 (2011)CrossRefGoogle Scholar Beckingham, L.E.: Evaluation of macroscopic porosity-permeability relationships in heterogeneous mineral dissolution and precipitation scenarios. Water Resour. Res. 53(12), 10217–10230 (2017)CrossRefGoogle Scholar Brenner, H.: Dispersion resulting from flow through spatially periodic porous media. Philos. Trans. R. Soc. London. Series A Math. Phys. Sci. 297(1430), 81–133 (1980)CrossRefGoogle Scholar Carman, P.C.: Fluid flow through granular beds. Trans. Inst. Chem. Eng. 15, 150–166 (1937)Google Scholar Chen, L., Zhang, R., Min, T., Kang, Q., Tao, W.: Pore-scale study of effects of macroscopic pores and their distributions on reactive transport in hierarchical porous media. Chem. Eng. J. 349, 428–437 (2018). ISSN 13858947CrossRefGoogle Scholar Daccord, G., Lietard, O., Lenormand, R.: Chemical dissolution of a porous medium by a reactive fluid—ii. Convection vs reaction, behavior diagram. Chem. Eng. Sci. 48(1), 179–186 (1993)CrossRefGoogle Scholar Davis, M.A., Walsh, S.D.C., Saar, M.O.: Statistically reconstructing continuous isotropic and anisotropic two-phase media while preserving macroscopic material properties. Phys. Rev. E 83(2), 026706 (2011)CrossRefGoogle Scholar Flekkøy, E.G., Oxaal, U., Feder, J., Jøssang, T.: Hydrodynamic dispersion at stagnation points: simulations and experiments. Phys. Rev. E. 52(5), 4952 (1995)CrossRefGoogle Scholar Flukiger, F., Bernard, D.: A new numerical model for pore scale dissolution of calcite due to co2 saturated water flow in 3d realistic geometry: principles and first results. Chem. Geol. 265(1–2), 171–180 (2009)CrossRefGoogle Scholar Ghezzehei, T.A.: Linking sub-pore scale heterogeneity of biological and geochemical deposits with changes in permeability. Adv. Water Resour. 39, 1–6 (2012)CrossRefGoogle Scholar Grosfils, P., Boon, J.P.: Viscous fingering in miscible, immiscible and reactive fluids. Int. J. Modern Phys. B 17(01n02), 15–20 (2003)CrossRefGoogle Scholar Grosfils, P., Boon, J.P., Chin, J., Boek, E.S.: Structural and dynamical characterization of hele–shaw viscous fingering. Philos. Trans. R. Soc. London. Series A: Math. Phys. Eng. Sci. 362(1821), 1723–1734 (2004)CrossRefGoogle Scholar Guo, Z., Zheng, C., Shi, B.: Discrete lattice effects on the forcing term in the lattice Boltzmann method. Phys. Rev. E 65(4), 046308 (2002)CrossRefGoogle Scholar Hoefner, M.L., Fogler, H.S.: Pore evolution and channel formation during flow and reaction in porous media. AIChE J 34(1), 45–54 (1988)CrossRefGoogle Scholar Horn, F.J.M.: Calculation of dispersion coefficients by means of moments. AIChE J 17(3), 613–620 (1971)CrossRefGoogle Scholar Huber, C., Shafei, B., Parmigiani, A.: A new pore-scale model for linear and non-linear heterogeneous dissolution and precipitation. Geochim. Cosmochim. Acta 124, 109–130 (2014)CrossRefGoogle Scholar Kang, Q., Zhang, D., Chen, S., He, X.: Lattice Boltzmann simulation of chemical dissolution in porous media. Phys. Rev. E 65(3), 036318 (2002)CrossRefGoogle Scholar Kang, Q., Lichtner, P.C., Zhang, D.: Lattice Boltzmann pore-scale model for multicomponent reactive transport in porous media. J. Geophys. Res.: Solid Earth 111(5), 1–12 (2006). ISSN 21699356Google Scholar Kang, Q., Lichtner, P.C., Janecky, D.R.: Lattice boltzmann method for reacting flows in porous media. Adv. Appl. Math. Mech 2(5), 545–563 (2010)CrossRefGoogle Scholar Kang, Q., Chen, L., Valocchi, A.J., Viswanathan, H.S.: Pore-scale study of dissolution-induced changes in permeability and porosity of porous media. J. Hydrol. 517, 1049–1055 (2014)CrossRefGoogle Scholar Katz, G.E., Berkowitz, B., Guadagnini, A., Saaltink, M.W.: Experimental and modeling investigation of multicomponent reactive transport in porous media. J. Contam. Hydrol. 120, 27–44 (2011)CrossRefGoogle Scholar Kim, D., Peters, C.A., Lindquist, W.B.: Upscaling geochemical reaction rates accompanying acidic co2-saturated brine flow in sandstone aquifers. Water Resour. Res. 47, 1 (2011)CrossRefGoogle Scholar Kong, X.-Z., Saar, M.O.: Numerical study of the effects of permeability heterogeneity on density-driven convective mixing during co2 dissolution storage. Int. J. Greenhouse Gas Control 19, 160–173 (2013)CrossRefGoogle Scholar Laleian, A., Valocchi, A., Werth, C.: An incompressible, depth-averaged lattice Boltzmann method for liquid flow in microfluidic devices with variable aperture. Computation 3(4), 600–615 (2015)CrossRefGoogle Scholar Lapidus, L., Amundson, N.R.: Mathematics of adsorption in beds. vi. The effect of longitudinal diffusion in ion exchange and chromatographic columns. J. Phys. Chem. 56(8), 984–988 (1952)CrossRefGoogle Scholar Leverett, M.C.: Capillary behavoir in porous solids. Trans. AIME 142, 159–172 (1941)Google Scholar Li, L., Peters, C.A., Celia, M.A.: Upscaling geochemical reaction rates using pore-scale network modeling. Adv. Water Resour. 29(9), 1351–1370 (2006)CrossRefGoogle Scholar Liu, H.-H., Zhang, G., Yi, Z., Wang, Y.: A permeability-change relationship in the dryout zone for co2 injection into saline aquifers. Int. J. Greenhouse Gas Control 15, 42–47 (2013)CrossRefGoogle Scholar Luhmann, A.J., Kong, X.-Z., Tutolo, B.M., Garapati, N., Bagley, B.C., Saar, M.O., Seyfried, JrW E: Experimental dissolution of dolomite by co2-charged brine at 100 c and 150 bar: evolution of porosity, permeability, and reactive surface area. Chem. Geol. 380, 145–160 (2014)CrossRefGoogle Scholar Luquot, L., Rodriguez, O., Gouze, P.: Experimental characterization of porosity structure and transport property changes in limestone undergoing different dissolution regimes. Transp. Porous Media 101(3), 507–532 (2014)CrossRefGoogle Scholar Mehmani, Y., Sun, T., Balhoff, M.T., Eichhubl, P., Bryant, S.: Multiblock pore-scale modeling and upscaling of reactive transport: application to carbon sequestration. Transp. Porous Media 95(2), 305–326 (2012)CrossRefGoogle Scholar Molins, S., Trebotich, D., Steefel, C.I., Shen, C.: An investigation of the effect of pore scale flow on average geochemical reaction rates using direct numerical simulation. Water Resour. Res. 48, 3 (2012)CrossRefGoogle Scholar Niu, Q., Zhang, C.: Permeability prediction in rocks experiencing mineral precipitation and dissolution: a numerical study. Water Resources Research (2019)Google Scholar Nogues, J.P., Fitts, J.P., Celia, M.A., Peters, C.A.: Permeability evolution due to dissolution and precipitation of carbonates using reactive transport modeling in pore networks. Water Resour. Res. 49(9), 6006–6021 (2013)CrossRefGoogle Scholar Noiriel, C., Steefel, C.I., Yang, L., Bernard, D.: Effects of pore-scale precipitation on permeability and flow. Adv. Water Resour. 95, 125–137 (2016). ISSN 03091708CrossRefGoogle Scholar Parmigiani, A., Huber, C., Bachmann, O., Chopard, B.: Pore-scale mass and reactant transport in multiphase porous media flows. J. Fluid Mech. 686, 40–76 (2011)CrossRefGoogle Scholar Parmigiani, A, Di Palma, P.R., Leclaire, S., Habib, F., Kong, X.-Z.: Characterization of transport-enhanced phase separation in porous media using a lattice-Boltzmann method. Geofluids, https://doi.org/10.1155/2019/5176410 (2019)CrossRefGoogle Scholar Qian, Y., D'Humieres, D., Lallemand, P.: {L}attice {BGK} models for the {N}avier-{S}tokes equation. Europhys. Lett. 17, 479–484 (1992)CrossRefGoogle Scholar Saar, M.O., Manga, M.: In vesicular basalts. Geophys. Res. Lett. 26(1), 111–114 (1999)CrossRefGoogle Scholar Shih, H.C., Huang, C.L.: Image analysis and interpretation for semantics categorization in baseball video. Proceedings ITCC 2003, International Conference on Information Technology: Computers and Communications 94(3), 379–383 (2003). ISSN 0031-899X. https://doi.org/10.1109/ITCC.2003.1197559 Google Scholar Singh, M., Mohanty, K.K.: Permeability of spatially correlated porous media. Chem. Eng. Sci. 55(22), 5393–5403 (2000)CrossRefGoogle Scholar Steefel, C.I., Lasaga, A.C.: Evolution of dissolution patterns: permeability change due to coupled flow and reaction. Chem. Model. Aqueous Syst. II(416), 212–225 (1990)CrossRefGoogle Scholar Steefel, C.I., Lasaga, A.C.: A coupled model for transport of multiple chemical species and kinetic precipitation/dissolution reactions with application to reactive flow in single phase hydrothermal systems. Am. J. Sci. 294(5), 529–592 (1994)CrossRefGoogle Scholar Steefel, C.I., DePaolo, D.J., Lichtner, P.C.: Reactive transport modeling: an essential tool and a new research approach for the earth sciences. Earth Planet. Sci. Lett. 240(3–4), 539–558 (2005)CrossRefGoogle Scholar Stewart, M.L., Ward, A.L., Rector, D.R.: A study of pore geometry effects on anisotropy in hydraulic permeability using the lattice-Boltzmann method. Adv. Water Resour. 29(9), 1328–1340 (2006)CrossRefGoogle Scholar Succi, S.: The lattice Boltzmann equation: for fluid dynamics and beyond. Oxford University Press (2001)Google Scholar Szymczak, P., Ladd, A.J.C.: Wormhole formation in dissolving fractures. J. Geophys. Res.: Solid Earth 114, B6 (2009)CrossRefGoogle Scholar Tartakovsky, A.M., Meakin, P., Scheibe, T.D., Wood, B.D.: A smoothed particle hydrodynamics model for reactive transport and mineral precipitation in porous and fractured porous media. Water Resour. Res., 43(5). ISSN 00431397. (2007)Google Scholar Tartakovsky, A.M., Redden, G., Lichtner, P.C., Scheibe, T.D., Meakin, P.: Mixing-induced precipitation: experimental study and multiscale numerical analysis. Water Resour. Res. 44, 6 (2008)CrossRefGoogle Scholar Tartakovsky, A.M., Scheibe, T.D., Meakin, P.: Pore-scale model for reactive transport and biomass growth. J. Porous Media 12, 5 (2009)CrossRefGoogle Scholar Varloteaux, C., Vu, M.T., Békri, S., Adler, P.M.: Reactive transport in porous media: pore-network model approach compared to pore-scale model. Phys. Rev. E 87(2), 023010 (2013)CrossRefGoogle Scholar Venturoli, M., Boek, E.S.: Two-dimensional lattice-Boltzmann simulations of single phase flow in a pseudo two-dimensional micromodel. Physica A: Stat. Mech. Appl. 362(1), 23–29 (2006).CrossRefGoogle Scholar Walsh, S.D.C., Saar, M.O.: Interpolated lattice boltzmann boundary conditions for surface reaction kinetics. Phys. Rev. E 82(6), 066703 (2010)CrossRefGoogle Scholar Walsh, S.D.C., Saar, M.O.: Macroscale lattice-Boltzmann methods for low peclet number solute and heat transport in heterogeneous porous media. Water Resour. Res., 46(7) (2010)Google Scholar Walsh, S.D.C., Burwinkle, H., Saar, M.O.: A new partial-bounceback lattice-Boltzmann method for fluid flow through heterogeneous media. Comput. Geosci. 35(6), 1186–1193 (2009)CrossRefGoogle Scholar Yoon, H., Valocchi, A.J., Werth, C.J., Dewers, T.: Pore-scale simulation of mixing-induced calcium carbonate precipitation and dissolution in a microfluidic pore network. Water Resour. Res., 48(2). ISSN 00431397. https://doi.org/10.1029/2011WR011192 (2012) Zimmerman, R.W., Bodvarsson, G.S.: Hydraulic conductivity of rock fractures. Transp. Porous Media 23(1), 1–30 (1996)CrossRefGoogle Scholar © Springer Nature Switzerland AG 2020 1.Geothermal Energy and Geofluids Group, Institute of GeophysicsETH ZurichZurichSwitzerland 2.FlowKit-NumecaLausanneSwitzerland 3.National Research Council of ItalyWater Research InstituteMontelibrettiItaly 4.Department of Earth and Environmental SciencesUniversity of MinnesotaMinneapolisUSA Ahkami, M., Parmigiani, A., Di Palma, P. et al. Comput Geosci (2020). https://doi.org/10.1007/s10596-019-09926-4 Received 15 July 2019 Accepted 11 December 2019 First Online 11 January 2020 DOI https://doi.org/10.1007/s10596-019-09926-4 Publisher Name Springer International Publishing
CommonCrawl
Solvability and asymptotic analysis of a generalization of the Caginalp phase field system CPAA Home Global well-posedness and scattering for Skyrme wave maps September 2012, 11(5): 1935-1957. doi: 10.3934/cpaa.2012.11.1935 Symmetries and blow-up phenomena for a Dirichlet problem with a large parameter Pierpaolo Esposito 1, and Maristella Petralla 2, Dipartimento di Matematica, Universitá di Roma Tre, Largo San Leonardo Murialdo, 1, I-00146 Roma Dipartimento di Matematica, Università degli Studi "Roma Tre", Largo S. Leonardo Murialdo 1, Rome, 00146, Italy Received May 2011 Revised September 2011 Published March 2012 \noindent For the Dirichlet problem $-\Delta u+\lambda V(x) u=u^p$ in $\Omega \subset \mathbb R^N$, $N\geq 3$, in the regime $\lambda \to +\infty$ we aim to give a description of the blow-up mechanism. For solutions with symmetries an uniform bound on the ``invariant" Morse index provides a localization of the blow-up orbits in terms of c.p.'s of a suitable modified potential. The main difficulty here is related to the presence of fixed points for the underlying group action. Keywords: Morse index., Blow-up, symmetries, singular perturbation. Mathematics Subject Classification: Primary: 35J60, 35B25, 35B44; Secondary: 35J2. Citation: Pierpaolo Esposito, Maristella Petralla. Symmetries and blow-up phenomena for a Dirichlet problem with a large parameter. Communications on Pure & Applied Analysis, 2012, 11 (5) : 1935-1957. doi: 10.3934/cpaa.2012.11.1935 A. Ambrosetti, A. Malchiodi and W. M. Ni, Singularly perturbed elliptic equations with symmetry: existence of solutions concentrating on spheres. I, Comm. Math. Phys., 235 (2003), 427-466. doi: 10.1007/s00220-003-0811-y. Google Scholar A. Ambrosetti, A. Malchiodi and W. M. Ni, Singularly perturbed elliptic equations with symmetry: existence of solutions concentrating on spheres. II, Indiana Univ. Math. J., 53 (2004), 297-329. doi: 10.1512/iumj.2004.53.2400. Google Scholar M. Badiale and T. D'Aprile, Concentration around a sphere for a singularly perturbed Schrödinger equation, Nonlinear Anal., 49 (2002), 947-985. doi: 10.1016/S0362-546X(01)00717-9. Google Scholar V. Benci and T. D'Aprile, The semiclassical limit of the nonlinear Schrödinger equation in a radial potential, J. Differential Equations, 184 (2002), 109-138. doi: 10.1006/jdeq.2001.4138. Google Scholar D. Cao, E. N. Dancer, E. Noussair and S. Yan, On the existence and profile of multi-peaked solutions to singularly perturbed semilinear Dirichlet problems, Discrete Contin. Dyn. Syst., 2 (1996), 221-236. doi: 10.3934/dcds.1996.2.221. Google Scholar D. Cao and E. Noussair, Multi-peak solutions for a singularly perturbed semilinear elliptic problem, J. Differential Equations, 166 (2000), 266-289. doi: 10.1006/jdeq.2000.3795. Google Scholar E. N. Dancer, Some singularly perturbed problems on annuli and a counterexample to a problem of Gidas, Ni and Nirenberg, Bull. London Math. Soc., 29 (1997), 322-326. doi: 10.1112/S0024609396002391. Google Scholar E. N. Dancer and S. Yan, A singularly perturbed elliptic problem in bounded domains with nontrivial topology, Adv. Differential Equations, 4 (1999), 347-368. Google Scholar E. N. Dancer and S. Yan, Effect of the domain geometry on the existence of multipeak solution for an elliptic problem, Topol. Methods Nonlinear Anal., 14 (1999), 1-38. Google Scholar E. N. Dancer and S. Yan, A new type of concentration solutions for a singularly perturbed elliptic problem, Trans. Amer. Math. Soc., 359 (2007), 1765-1790. doi: 10.1090/S0002-9947-06-04386-8. Google Scholar E. N. Dancer and J. Wei, On the effect of domain topology in a singular perturbation problem, Topol. Methods Nonlinear Anal., 11 (1998), 227-248. Google Scholar T. D'Aprile, On a class of solutions with non-vanishing angular momentum for nonlinear Schrödinger equations, Differential Integral Equations, 16 (2003), 349-384. Google Scholar M. del Pino, P. Felmer and J. Wei, On the role of distance function in some singular perturbation problems, Comm. Partial Differential Equations, 25 (2000), 155-177. doi: 10.1080/03605300008821511. Google Scholar O. Druet, F. Robert and J. Wei, The Lin-Ni's problem for mean convex domains,, preprint, (). Google Scholar P. Esposito, G. Mancini, Sanjiban Santra and P. N. Srikanth, Asymptotic behavior of radial solutions for a semilinear elliptic problem on an annulus through Morse index, J. Differential Equations, 239 (2007), 1-15. doi: 10.1016/j.jde.2007.04.008. Google Scholar P. Esposito and M. Petralla, Pointwise blow-up phenomena for a Dirichlet problem, Comm. Partial Differential Equations, 36 (2011), 1654-1682. doi: 10.1080/03605302.2011.574304. Google Scholar A. Farina, On the classification of solutions of the Lane-Emden equation on unbounded domains of $\mathbb R^N$, J. Math. Pures Appl., 87 (2007), 537-561. doi: 10.1016/j.matpur.2007.03.001. Google Scholar D. Gilbarg and N. S. Trudinger, "Elliptic Partial Differential Equations of Second Order," 2nd edition, Springer-Verlag, Berlin, 1983. Google Scholar M. Grossi and A. Pistoia, On the effect of critical points of distance function in superlinear elliptic problems, Adv. Differential Equations, 5 (2000), 1397-1420. Google Scholar M. K. Kwong, Uniqueness of positive solutions of positive solutions of $\Delta u - u + u^p =0$ in $\mathbb R^N$, Arch. Rat. Mech. Anal., 105 (1989), 243-266. doi: 10.1007/BF00251502. Google Scholar Y. Y. Li and L. Nirenberg, The Dirichlet problem for singularly perturbed elliptic equations, Comm. Pure Appl. Math., 51 (1998), 1445-1490. Google Scholar F. Mahmoudi and A. Malchiodi, Concentration on minimal submanifolds for a singularly perturbed Neumann problem, Adv. Math., 209 (2007), 460-525. doi: 10.1016/j.aim.2006.05.014. Google Scholar A. Malchiodi, Concentration at curves for a singularly perturbed Neumann problem in three-dimensional domains, Geom. Funct. Anal., 15 (2005), 1162-1222. doi: 10.1007/s00039-005-0542-7. Google Scholar A. Malchiodi and M. Montenegro, Boundary concentration phenomena for a singularly perturbed elliptic problem, Comm. Pure Appl. Math., 55 (2002), 1507-1568. doi: 10.1002/cpa.10049. Google Scholar A. Malchiodi and M. Montenegro, Multidimensional boundary-layers for a singularly perturbed Neumann problem, Duke Math. J., 124 (2004), 105-143. doi: 10.1215/S0012-7094-04-12414-5. Google Scholar A. Malchiodi, W. M. Ni and J. Wei, Multiple clustered layer solutions for semilinear Neumann problems on a ball, Ann. Inst. H. Poincaré Anal. Non Linéaire, 22 (2005), 143-163. doi: 10.1016/j.anihpc.2004.05.003. Google Scholar R. Molle and D. Passaseo, Concentration phenomena for solutions of superlinear elliptic problems, Ann. Inst. H. Poincaré Anal. Non Linéaire, 23 (2006), 63-84. doi: 10.1016/j.anihpc.2005.02.002. Google Scholar W. M. Ni, Diffusion, cross-diffusion, and their spike-layer steady states, Notices Amer. Math. Soc., 45 (1998), 9-18. Google Scholar W. M. Ni and I. Takagi, On the shape of least-energy solution to a semilinear Neumann problem, Comm. Pure Appl. Math., 44 (1991), 819-851. doi: 10.1002/cpa.3160440705. Google Scholar W. M. Ni and I. Takagi, Locating the peaks of least-energy solutions to a semilinear Neumann problem, Duke Math. J., 70 (1993), 247-281. doi: 10.1215/S0012-7094-93-07004-4. Google Scholar W. M. Ni and J. Wei, On the location and profile of spike-layer solutions to singularly perturbed semilinear Dirichlet problems, Comm. Pure Math. Appl., 48 (1995), 731-768. doi: 10.1002/cpa.3160480704. Google Scholar Maristella Petralla, "Asymptotic Analysis for a Singularly Perturbed Dirichlet Problem," Ph.D thesis, University of Rome III, 2010. Google Scholar M. Petralla, Non existence of bounded Morse index solutions for a super-critical Dirichlet problem with a large parameter,, in preparation., (). Google Scholar A. Pistoia, The role of the distance function in some singular perturbation problem, Methods Appl. Anal., 8 (2001), 301-319. Google Scholar B. Ruf and P. N. Srikanth, Singularly perturbed elliptic equations with solutions concentrating on a $1-$dimensional orbit, J. Eur. Math. Soc., 12 (2010), 413-427. doi: 10.4171/JEMS/203. Google Scholar J. Wei, On the construction of single-peaked solutions of a singularly perturbed semilinear Dirichlet problem, J. Differential Equations, 129 (1996), 315-333. doi: 10.1006/jdeq.1996.0120. Google Scholar J. Wei, On the interior spike solutions for some singular perturbation problems, Proc. Royal Soc. of Edinburgh Sect. A, 128 (1998), 849-874. doi: 10.1017/S030821050002182X. Google Scholar J. Wei, On the effect of the domain geometry in singular perturbation problems, Differential Integral Equations, 13 (2000), 15-45. Google Scholar Futoshi Takahashi. Morse indices and the number of blow up points of blowing-up solutions for a Liouville equation with singular data. Conference Publications, 2013, 2013 (special) : 729-736. doi: 10.3934/proc.2013.2013.729 Zongming Guo, Zhongyuan Liu, Juncheng Wei, Feng Zhou. Bifurcations of some elliptic problems with a singular nonlinearity via Morse index. Communications on Pure & Applied Analysis, 2011, 10 (2) : 507-525. doi: 10.3934/cpaa.2011.10.507 Zhijun Zhang. Boundary blow-up for elliptic problems involving exponential nonlinearities with nonlinear gradient terms and singular weights. Communications on Pure & Applied Analysis, 2007, 6 (2) : 521-529. doi: 10.3934/cpaa.2007.6.521 Hua Chen, Nian Liu. Asymptotic stability and blow-up of solutions for semi-linear edge-degenerate parabolic equations with singular potentials. Discrete & Continuous Dynamical Systems, 2016, 36 (2) : 661-682. doi: 10.3934/dcds.2016.36.661 Xiumei Deng, Jun Zhou. Global existence and blow-up of solutions to a semilinear heat equation with singular potential and logarithmic nonlinearity. Communications on Pure & Applied Analysis, 2020, 19 (2) : 923-939. doi: 10.3934/cpaa.2020042 Philippe Souplet, Juan-Luis Vázquez. Stabilization towards a singular steady state with gradient blow-up for a diffusion-convection problem. Discrete & Continuous Dynamical Systems, 2006, 14 (1) : 221-234. doi: 10.3934/dcds.2006.14.221 Haitao Yang, Yibin Chang. On the blow-up boundary solutions of the Monge -Ampére equation with singular weights. Communications on Pure & Applied Analysis, 2012, 11 (2) : 697-708. doi: 10.3934/cpaa.2012.11.697 C. Y. Chan. Recent advances in quenching and blow-up of solutions. Conference Publications, 2001, 2001 (Special) : 88-95. doi: 10.3934/proc.2001.2001.88 Jorge A. Esquivel-Avila. Blow-up in damped abstract nonlinear equations. Electronic Research Archive, 2020, 28 (1) : 347-367. doi: 10.3934/era.2020020 Marina Chugunova, Chiu-Yen Kao, Sarun Seepun. On the Benilov-Vynnycky blow-up problem. Discrete & Continuous Dynamical Systems - B, 2015, 20 (5) : 1443-1460. doi: 10.3934/dcdsb.2015.20.1443 Alberto Bressan, Massimo Fonte. On the blow-up for a discrete Boltzmann equation in the plane. Discrete & Continuous Dynamical Systems, 2005, 13 (1) : 1-12. doi: 10.3934/dcds.2005.13.1 Marek Fila, Hiroshi Matano. Connecting equilibria by blow-up solutions. Discrete & Continuous Dynamical Systems, 2000, 6 (1) : 155-164. doi: 10.3934/dcds.2000.6.155 Victor A. Galaktionov, Juan-Luis Vázquez. The problem Of blow-up in nonlinear parabolic equations. Discrete & Continuous Dynamical Systems, 2002, 8 (2) : 399-433. doi: 10.3934/dcds.2002.8.399 W. Edward Olmstead, Colleen M. Kirk, Catherine A. Roberts. Blow-up in a subdiffusive medium with advection. Discrete & Continuous Dynamical Systems, 2010, 28 (4) : 1655-1667. doi: 10.3934/dcds.2010.28.1655 Yukihiro Seki. A remark on blow-up at space infinity. Conference Publications, 2009, 2009 (Special) : 691-696. doi: 10.3934/proc.2009.2009.691 Petri Juutinen. Convexity of solutions to boundary blow-up problems. Communications on Pure & Applied Analysis, 2013, 12 (5) : 2267-2275. doi: 10.3934/cpaa.2013.12.2267 Guangyu Xu, Jun Zhou. Global existence and blow-up of solutions to a singular Non-Newton polytropic filtration equation with critical and supercritical initial energy. Communications on Pure & Applied Analysis, 2018, 17 (5) : 1805-1820. doi: 10.3934/cpaa.2018086 José M. Arrieta, Raúl Ferreira, Arturo de Pablo, Julio D. Rossi. Stability of the blow-up time and the blow-up set under perturbations. Discrete & Continuous Dynamical Systems, 2010, 26 (1) : 43-61. doi: 10.3934/dcds.2010.26.43 Mohamed-Ali Hamza, Hatem Zaag. Blow-up results for semilinear wave equations in the superconformal case. Discrete & Continuous Dynamical Systems - B, 2013, 18 (9) : 2315-2329. doi: 10.3934/dcdsb.2013.18.2315 Huyuan Chen, Hichem Hajaiej, Ying Wang. Boundary blow-up solutions to fractional elliptic equations in a measure framework. Discrete & Continuous Dynamical Systems, 2016, 36 (4) : 1881-1903. doi: 10.3934/dcds.2016.36.1881 Pierpaolo Esposito Maristella Petralla
CommonCrawl
Table 4 lists the results of 27 tasks from 23 articles on the effects of d-AMP or MPH on working memory. The oldest and most commonly used type of working memory task in this literature is the Sternberg short-term memory scanning paradigm (Sternberg, 1966), in which subjects hold a set of items (typically letters or numbers) in working memory and are then presented with probe items, to which they must respond "yes" (in the set) or "no" (not in the set). The size of the set, and hence the working memory demand, is sometimes varied, and the set itself may be varied from trial to trial to maximize working memory demands or may remain fixed over a block of trials. Taken together, the studies that have used a version of this task to test the effects of MPH and d-AMP on working memory have found mixed and somewhat ambiguous results. No pattern is apparent concerning the specific version of the task or the specific drug. Four studies found no effect (Callaway, 1983; Kennedy, Odenheimer, Baltzley, Dunlap, & Wood, 1990; Mintzer & Griffiths, 2007; Tipper et al., 2005), three found faster responses with the drugs (Fitzpatrick, Klorman, Brumaghim, & Keefover, 1988; Ward et al., 1997; D. E. Wilson et al., 1971), and one found higher accuracy in some testing sessions at some dosages, but no main effect of drug (Makris et al., 2007). The meaningfulness of the increased speed of responding is uncertain, given that it could reflect speeding of general response processes rather than working memory–related processes. Aspects of the results of two studies suggest that the effects are likely due to processes other than working memory: D. E. Wilson et al. (1971) reported comparable speeding in a simple task without working memory demands, and Tipper et al. (2005) reported comparable speeding across set sizes. Either prescription or illegal, daily use of testosterone would not be cheap. On the other hand, if I am one of the people for whom testosterone works very well, it would be even more valuable than modafinil, in which case it is well worth even arduous experimenting. Since I am on the fence on whether it would help, this suggests the value of information is high. Up to 20% of Ivy League college students have already tried "smart drugs," so we can expect these pills to feature prominently in organizations (if they don't already). After all, the pressure to perform is unlikely to disappear the moment students graduate. And senior employees with demanding jobs might find these drugs even more useful than a 19-year-old college kid does. Indeed, a 2012 Royal Society report emphasized that these "enhancements," along with other technologies for self-enhancement, are likely to have far-reaching implications for the business world. The difference in standard deviations is not, from a theoretical perspective, all that strange a phenomenon: at the very beginning of this page, I covered some basic principles of nootropics and mentioned how many stimulants or supplements follow a inverted U-curve where too much or too little lead to poorer performance (ironically, one of the examples in Kruschke 2012 was a smart drug which did not affect means but increased standard deviations). "I think you can and you will," says Sarter, but crucially, only for very specific tasks. For example, one of cognitive psychology's most famous findings is that people can typically hold seven items of information in their working memory. Could a drug push the figure up to nine or 10? "Yes. If you're asked to do nothing else, why not? That's a fairly simple function." Increasing incidences of chronic diseases such as diabetes and cancer are also impacting positive growth for the global smart pills market. The above-mentioned factors have increased the need for on-site diagnosis, which can be achieved by smart pills. Moreover, the expanding geriatric population and the resulting increasing in degenerative diseases has increased demand for smart pills Adderall is a mix of 4 amphetamine salts (FDA adverse events), and not much better than the others (but perhaps less addictive); as such, like caffeine or methamphetamine, it is not strictly a nootropic but a cognitive enhancer and can be tricky to use right (for how one should use stimulants, see How To Take Ritalin Correctly). I ordered 10x10mg Adderall IR off Silk Road (Wikipedia). On the 4th day after confirmation from seller, the package arrived. It was a harmless looking little padded mailer. Adderall as promised: 10 blue pills with markings, in a double ziplock baggy (reasonable, it's not cocaine or anything). They matched pretty much exactly the descriptions of the generic I had found online. (Surprisingly, apparently both the brand name and the generic are manufactured by the same pharmacorp.) When I spoke with Jesse Lawler, who hosts the podcast Smart Drugs Smarts, about breakthroughs in brain health and neuroscience, he was unsurprised to hear of my disappointing experience. Many nootropics are supposed to take time to build up in the body before users begin to feel their impact. But even then, says Barry Gordon, a neurology professor at the Johns Hopkins Medical Center, positive results wouldn't necessarily constitute evidence of a pharmacological benefit. Now, what is the expected value (EV) of simply taking iodine, without the additional work of the experiment? 4 cans of 0.15mg x 200 is $20 for 2.1 years' worth or ~$10 a year or a NPV cost of $205 (\frac{10}{\ln 1.05}) versus a 20% chance of $2000 or $400. So the expected value is greater than the NPV cost of taking it, so I should start taking iodine. Jesper Noehr, 30, reels off the ingredients in the chemical cocktail he's been taking every day before work for the past six months. It's a mixture of exotic dietary supplements and research chemicals that he says gives him an edge in his job without ill effects: better memory, more clarity and focus and enhanced problem-solving abilities. "I can keep a lot of things on my mind at once," says Noehr, who is chief technology officer for a San Francisco startup. Using the 21mg patches, I cut them into quarters. What I would do is I would cut out 1 quarter, and then seal the two edges with scotch tape, and put the Pac-Man back into its sleeve. Then the next time I would cut another quarter, seal the new edge, and so on. I thought that 5.25mg might be too much since I initially found 4mg gum to be too much, but it's delivered over a long time and it wound up feeling much more like 1mg gum used regularly. I don't know if the tape worked, but I did not notice any loss of potency. I didn't like them as much as the gum because I would sometimes forget to take off a patch at the end of the day and it would interfere with sleep, and because the onset is much slower and I find I need stimulants more for getting started than for ongoing stimulation so it is better to have gum which can be taken precisely when needed and start acting quickly. (One case where the patches were definitely better than the gum was long car trips where slow onset is fine, since you're most alert at the start.) When I finally ran out of patches in June 2016 (using them sparingly), I ordered gum instead. One possibility is that when an individual takes a drug like noopept, they experience greater alertness and mental clarity. So, while the objective ability to see may not actually improve, the ability to process visual stimuli increases, resulting in the perception of improved vision. This allows individuals to process visual cues more quickly, take in scenes more easily, and allows for the increased perception of smaller details. A total of 14 studies surveyed reasons for using prescription stimulants nonmedically, all but one study confined to student respondents. The most common reasons were related to cognitive enhancement. Different studies worded the multiple-choice alternatives differently, but all of the following appeared among the top reasons for using the drugs: "concentration" or "attention" (Boyd et al., 2006; DeSantis et al., 2008, 2009; Rabiner et al., 2009; Teter et al., 2003, 2006; Teter, McCabe, Cranford, Boyd, & Guthrie, 2005; White et al., 2006); "help memorize," "study," "study habits," or "academic assignments" (Arria et al., 2008; Barrett et al., 2005; Boyd et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; Low & Gendaszek, 2002; Rabiner et al., 2009; Teter et al., 2005, 2006; White et al., 2006); "grades" or "intellectual performance" (Low & Gendaszek, 2002; White et al., 2006); "before tests" or "finals week" (Hall et al., 2005); "alertness" (Boyd et al., 2006; Hall et al., 2005; Teter et al., 2003, 2005, 2006); or "performance" (Novak et al., 2007). However, every survey found other motives mentioned as well. The pills were also taken to "stay awake," "get high," "be able to drink and party longer without feeling drunk," "lose weight," "experiment," and for "recreational purposes." The Stroop task tests the ability to inhibit the overlearned process of reading by presenting color names in colored ink and instructing subjects to either read the word (low need for cognitive control because this is the habitual response to printed words) or name the ink color (high need for cognitive control). Barch and Carter (2005) administered this task to normal control subjects on placebo and d-AMP and found speeding of responses with the drug. However, the speeding was roughly equivalent for the conditions with low and high cognitive control demands, suggesting that the observed facilitation may not have been specific to cognitive control. Natural nootropic supplements derive from various nutritional studies. Research shows the health benefits of isolated vitamins, nutrients, and herbs. By increasing your intake of certain herbal substances, you can enhance brain function. Below is a list of the top categories of natural and herbal nootropics. These supplements are mainstays in many of today's best smart pills. So I eventually got around to ordering another thing of nicotine gum, Habitrol Nicotine Gum, 4mg MINT flavor COATED gum. 96 pieces per box. Gum should be easier to double-blind myself with than nicotine patches - just buy some mint gum. If 4mg is too much, cut the gum in half or whatever. When it arrived, my hopes were borne out: the gum was rectangular and soft, which made it easy to cut into fourths. A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it a reasonable productivity enhancer) as did one of his customers, a pickup artist oddly enough. The former was curious whether it would work for me too and sent me Speciosa Pro's Starter Pack: Test Drive (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom's apparently chewed, but the powders are brewed as a tea. Some critics argue that Modafinil is an expression of that, a symptom of a new 24/7 work routine. But what if the opposite is true? Let's say you could perform a task in significantly less time than usual. You could then use the rest of your time differently, spending it with family, volunteering, or taking part in a leisure activity. And imagine that a drug helped you focus on clearing your desk and inbox before leaving work. Wouldn't that help you relax once you get home? Only two of the eight experiments reviewed in this section found that stimulants enhanced performance, on a nonverbal fluency task in one case and in Raven's Progressive Matrices in the other. The small number of studies of any given type makes it difficult to draw general conclusions about the underlying executive function systems that might be influenced. With the right lifestyle and the right stack of supplements and nootropics, you can enjoy enhanced mental clarity, easier flow, and better vision. The best nootropics for your needs will depend on how much you want to spend, how often you want to take them, and what you want to take them for. Nutritional supplements should be taken daily, for the cumulative effect, but Smart drugs such as noopept and modafinil are usually taken on an as-needed basis, for those times when you are aiming for hyperfocus, better clarity, and better recall, or the ability to process a huge amount of incoming visual information quickly and accurately and to pick up on details that you might otherwise miss. Evidence in support of the neuroprotective effects of flavonoids has increased significantly in recent years, although to date much of this evidence has emerged from animal rather than human studies. Nonetheless, with a view to making recommendations for future good practice, we review 15 existing human dietary intervention studies that have examined the effects of particular types of flavonoid on cognitive performance. The studies employed a total of 55 different cognitive tests covering a broad range of cognitive domains. Most studies incorporated at least one measure of executive function/working memory, with nine reporting significant improvements in performance as a function of flavonoid supplementation compared to a control group. However, some domains were overlooked completely (e.g. implicit memory, prospective memory), and for the most part there was little consistency in terms of the particular cognitive tests used making across study comparisons difficult. Furthermore, there was some confusion concerning what aspects of cognitive function particular tests were actually measuring. Overall, while initial results are encouraging, future studies need to pay careful attention when selecting cognitive measures, especially in terms of ensuring that tasks are actually sensitive enough to detect treatment effects. Vinh Ngo, a San Francisco family practice doctor who specializes in hormone therapy, has become familiar with piracetam and other nootropics through a changing patient base. His office is located in the heart of the city's tech boom and he is increasingly sought out by young, male tech workers who tell him they are interested in cognitive enhancement. In most cases, cognitive enhancers have been used to treat people with neurological or mental disorders, but there is a growing number of healthy, "normal" people who use these substances in hopes of getting smarter. Although there are many companies that make "smart" drinks, smart power bars and diet supplements containing certain "smart" chemicals, there is little evidence to suggest that these products really work. Results from different laboratories show mixed results; some labs show positive effects on memory and learning; other labs show no effects. There are very few well-designed studies using normal healthy people. Smart Pill appears to be a powerful dietary supplement that blends ingredients with proven positive effect on the brain, thus promoting mental health. Some problems like attention disorders, mood disorders, or stress can be addressed with this formula. The high price related to the amount provided for a month can be a minus, but the ingredients used a strong link to brain health. Other supplements that provide the same effect can be found online, so a quick search is advised to find the best-suited supplement for your particular needs. If any problems arise, consult a medical doctor immediately. Unfortunately, cognitive enhancement falls between the stools of research funding, which makes it unlikely that such research programs will be carried out. Disease-oriented funders will, by definition, not support research on normal healthy individuals. The topic intersects with drug abuse research only in the assessment of risk, leaving out the study of potential benefits, as well as the comparative benefits of other enhancement methods. As a fundamentally applied research question, it will not qualify for support by funders of basic science. The pharmaceutical industry would be expected to support such research only if cognitive enhancement were to be considered a legitimate indication by the FDA, which we hope would happen only after considerably more research has illuminated its risks, benefits, and societal impact. Even then, industry would have little incentive to delve into all of the issues raised here, including the comparison of drug effects to nonpharmaceutical means of enhancing cognition. We reached out to several raw material manufacturers and learned that Phosphatidylserine and Huperzine A are in short supply. We also learned that these ingredients can be pricey, incentivizing many companies to cut corners. A company has to have the correct ingredients in the correct proportions in order for a brain health formula to be effective. We learned that not just having the two critical ingredients was important – but, also that having the correct supporting ingredients was essential in order to be effective. As opposed to what it might lead you to believe, Ginkgo Smart is not simply a Ginkgo Biloba supplement. In all actuality, it's much more than that – a nootropic (Well duh, we wouldn't be reviewing it otherwise). Ginkgo Smart has actually been seeing quite some popularity lately, possibly riding on the popularity of Ginkgo Biloba as a supplement, which has been storming through the US lately, and becoming one of the highest selling supplement in the US. We were pleasantly pleased at the fact that it wasn't too hard to find Ginkgo Smart's ingredients… Learn More... The use of cognition-enhancing drugs by healthy individuals in the absence of a medical indication spans numerous controversial issues, including the ethics and fairness of their use, concerns over adverse effects, and the diversion of prescription drugs for nonmedical uses, among others.[1][2] Nonetheless, the international sales of cognition-enhancing supplements exceeded US$1 billion in 2015 when global demand for these compounds grew.[3]
CommonCrawl
Sample records for frequency-selective fading channels Channel and delay estimation for base-station–based cooperative communications in frequency-selective fading channels Hongjun Xu Full Text Available A channel and delay estimation algorithm for both positive and negative delay, based on the distributed Alamouti scheme, has been recently discussed for base-station–based asynchronous cooperative systems in frequency-flat fading channels. This paper extends the algorithm, the maximum likelihood estimator, to work in frequency-selective fading channels. The minimum mean square error (MMSE performance of channel estimation for both packet schemes and normal schemes is discussed in this paper. The symbol error rate (SER performance of equalisation and detection for both time-reversal space-time block code (STBC and single-carrier STBC is also discussed in this paper. The MMSE simulation results demonstrated the superior performance of the packet scheme over the normal scheme with an improvement in performance of up to 6 dB when feedback was used in the frequency-selective channel at a MSE of 3 x 10–2. The SER simulation results showed that, although both the normal and packet schemes achieved similar diversity orders, the packet scheme demonstrated a 1 dB coding gain over the normal scheme at a SER of 10–5. Finally, the SER simulations showed that the frequency-selective fading system outperformed the frequency-flat fading system. A novel reduced-complexity group detection structure in MIMO frequency selective fading channels Qaraqe, Khalid A.; Ahimian, Nariman R.; Alouini, Mohamed-Slim In this paper a novel reduced complexity detection method named modified symbol flipping method is introduced and its advantages on reducing the burden of the calculations at the receiver compared to the optimum maximum likelihood detection method on multiple input- multiple output frequency selective fading channels are explained. The initial concept of the symbol flipping method is derived from a preliminary detection scheme named bit flipping which was introduced in [1]. The detection structure employed in this paper is ing, detection, and cancellation. On the detection stage, the proposed method is employed and the results are compared to the group maximum likelihood detection scheme proposed in [2]. Simulation results show that a 6 dB performance gain can be achieved at the expense of a slight increase in complexity in comparison with the conventional symbol flipping scheme. © 2010 Crown. Qaraqe, Khalid A. Secured Communication over Frequency-Selective Fading Channels: A Practical Vandermonde Precoding Mari Kobayashi Full Text Available We study the frequency-selective broadcast channel with confidential messages (BCC where the transmitter sends a confidential message to receiver 1 and a common message to receivers 1 and 2. In the case of a block transmission of N symbols followed by a guard interval of L symbols, the frequency-selective channel can be modeled as a N×(N+L Toeplitz matrix. For this special type of multiple-input multiple-output channels, we propose a practical Vandermonde precoding that projects the confidential messages in the null space of the channel seen by receiver 2 while superposing the common message. For this scheme, we provide the achievable rate region and characterize the optimal covariance for some special cases of interest. Interestingly, the proposed scheme can be applied to other multiuser scenarios such as the K+1-user frequency-selective BCC with K confidential messages and the two-user frequency-selective BCC with two confidential messages. For each scenario, we provide the secrecy degree of freedom (s.d.o.f. region of the corresponding channel and prove the optimality of the Vandermonde precoding. One of the appealing features of the proposed scheme is that it does not require any specific secrecy encoding technique but can be applied on top of any existing powerful encoding schemes. Debbah Mérouane Full Text Available We study the frequency-selective broadcast channel with confidential messages (BCC where the transmitter sends a confidential message to receiver 1 and a common message to receivers 1 and 2. In the case of a block transmission of symbols followed by a guard interval of symbols, the frequency-selective channel can be modeled as a Toeplitz matrix. For this special type of multiple-input multiple-output channels, we propose a practical Vandermonde precoding that projects the confidential messages in the null space of the channel seen by receiver 2 while superposing the common message. For this scheme, we provide the achievable rate region and characterize the optimal covariance for some special cases of interest. Interestingly, the proposed scheme can be applied to other multiuser scenarios such as the -user frequency-selective BCC with confidential messages and the two-user frequency-selective BCC with two confidential messages. For each scenario, we provide the secrecy degree of freedom (s.d.o.f. region of the corresponding channel and prove the optimality of the Vandermonde precoding. One of the appealing features of the proposed scheme is that it does not require any specific secrecy encoding technique but can be applied on top of any existing powerful encoding schemes. Noise-based frequency offset modulation in wideband frequency-selective fading channels Meijerink, Arjan; Cotton, S.L.; Bentum, Marinus Jan; Scanlon, W.G. A frequency offset modulation scheme using wideband noise carriers is considered. The main advantage of such a scheme is that it enables fast receiver synchronization without channel adaptation, while providing robustness to multipath fading and in-band interference. This is important for low-power Performance Analysis of the IEEE 802.11A WLAN Standard Optimum and Sub-Optimum Receiver in Frequency-Selective, Slowly Fading Nakagami Channels with AWGN and Pulsed Noise Jamming Kalogrias, Christos ... 802.11a wireless local area network (WLAN) standard receiver when the signal is transmitted over a frequency selective, slow fading Nakagami channel in a worst case, pulse-noise jamming environment... Iterative MMSE Detection for MIMO/BLAST DS-CDMA Systems in Frequency Selective Fading Channels - Achieving High Performance in Fully Loaded Systems Silva, João Carlos; Souto, Nuno; Cercas, Francisco; Dinis, Rui A MMSE (Minimum Mean Square Error) DS-CDMA (Direct Sequence-Code Division Multiple Access) receiver coupled with a low-complexity iterative interference suppression algorithm was devised for a MIMO/BLAST (Multiple Input, Multiple Output / Bell Laboratories Layered Space Time) system in order to improve system performance, considering frequency selective fading channels. The scheme is compared against the simple MMSE receiver, for both QPSK and 16QAM modulations, under SISO (Single Input, Single Output) and MIMO systems, the latter with 2Tx by 2Rx and 4Tx by 4Rx (MIMO order 2 and 4 respectively) antennas. To assess its performance in an existing system, the uncoded UMTS HSDPA (High Speed Downlink Packet Access) standard was considered. Performance of Reverse-Link Synchronous DS-CDMA System on a Frequency-Selective Multipath Fading Channel with Imperfect Power Control Duk Kyung Kim Full Text Available We analyze the performance for reverse-link synchronous DS-CDMA system in a frequency-selective Rayleigh fading channel with an imperfect power control scheme. The performance degradation due to power control error (PCE, which is approximated by a log-normally distributed random variable, is estimated as a function of the standard deviation of the PCE. In addition, we investigate the impacts of the multipath intensity profile (MIP shape and the number of resolvable paths on the performance. Finally, the coded bit error performance is evaluated in order to estimate the system capacity. Comparing with the conventional CDMA system, we show an achievable gain of from 59% to 23% for reverse-link synchronous transmission technique (RLSTT in the presence of imperfect power control over asynchronous transmission for BER=10−6. As well, the effect of tradeoff between orthogonality and diversity can be seen according to the number of multipaths, and the tendency is kept even in the presence of PCE. We conclude that the capacity can be further improved via the RLSTT, because the DS-CDMA system is very sensitive to power control imperfections. New Concept of PLC Modems: Multi-Carrier System for Frequency Selective Slow-Fading Channels Based on Layered SCCC Turbocodes J. Zavrtalek Full Text Available The article introduces a novel concept of a PLC modem as a complement to the existing G3 and PRIME standards for communications using medium- or high-voltage overhead or cable lines. The proposed concept is based on the fact that the levels of impulse noise and frequency selectivity are lower on high-voltage lines than on low-voltage ones. Also, the demands for "cost-effective� circuitry design are not so crucial as in the case of modems for low-voltage level. In contract to these positive conditions, however, there is the need to overcome much longer distances and to take into account low SNR on the receiving side. With respect to the listed reasons, our concept makes use of MCM, instead of OFDM. The assumption of low SNR is compensated through the use of an efficient channel coding based on a serially concatenated turbo code. In addition, MCM offers lower latency and PAPR compared to OFDM. Therefore, when using MCM, it is possible to excite the line with higher power. The proposed concept has been verified during experimental transmission of testing data over a real, 5 km long, 22kV overhead line. On Data and Parameter Estimation Using the Variational Bayesian EM-algorithm for Block-fading Frequency-selective MIMO Channels Christensen, Lars P.B.; Larsen, Jan A general Variational Bayesian framework for iterative data and parameter estimation for coherent detection is introduced as a generalization of the EM-algorithm. Explicit solutions are given for MIMO channel estimation with Gaussian prior and noise covariance estimation with inverse-Wishart prior....... Simulation of a GSM-like system provides empirical proof that the VBEM-algorithm is able to provide better performance than the EM-algorithm. However, if the posterior distribution is highly peaked, the VBEM-algorithm approaches the EM-algorithm and the gain disappears. The potential gain is therefore... Frequency selective tunable spin wave channeling in the magnonic network Sadovnikov, A. V., E-mail: [email protected]; Nikitov, S. A. [Laboratory "Metamaterials,� Saratov State University, Saratov 410012 (Russian Federation); Kotel' nikov Institute of Radioengineering and Electronics, Russian Academy of Sciences, Moscow 125009 (Russian Federation); Beginin, E. N.; Odincov, S. A.; Sheshukova, S. E.; Sharaevskii, Yu. P. [Laboratory "Metamaterials,� Saratov State University, Saratov 410012 (Russian Federation); Stognij, A. I. [Scientific-Practical Materials Research Center, National Academy of Sciences of Belarus, 220072 Minsk (Belarus) Using the space-resolved Brillouin light scattering spectroscopy, we study the frequency and wavenumber selective spin-wave channeling. We demonstrate the frequency selective collimation of spin-wave in an array of magnonic waveguides, formed between the adjacent magnonic crystals on the surface of yttrium iron garnet film. We show the control over spin-wave propagation length by the orientation of an in-plane bias magnetic field. Fabricated array of magnonic crystal can be used as a magnonic platform for multidirectional frequency selective signal processing applications in magnonic networks. Distributive estimation of frequency selective channels for massive MIMO systems Zaib, Alam We consider frequency selective channel estimation in the uplink of massive MIMO-OFDM systems, where our major concern is complexity. A low complexity distributed LMMSE algorithm is proposed that attains near optimal channel impulse response (CIR) estimates from noisy observations at receive antenna array. In proposed method, every antenna estimates the CIRs of its neighborhood followed by recursive sharing of estimates with immediate neighbors. At each step, every antenna calculates the weighted average of shared estimates which converges to near optimal LMMSE solution. The simulation results validate the near optimal performance of proposed algorithm in terms of mean square error (MSE). © 2015 EURASIP. Design of Nonuniform Filter Bank Transceivers for Frequency Selective Channels Yuan-Pei Lin Full Text Available In recent years, there has been considerable interest in the theory and design of filter bank transceivers due to their superior frequency response. In many applications, it is desired to have transceivers that can support multiple services with different incoming data rates and different quality-of-service requirements. To meet these requirements, we can either do resource allocation or design transceivers with a nonuniform bandwidth partition. In this paper, we propose a method for the design of nonuniform filter bank transceivers for frequency selective channels. Both frequency response and signal-to-interference ratio (SIR can be incorporated in the transceiver design. Moreover, the technique can be extended to the case of nonuniform filter bank transceivers with rational sampling factors. Simulation results show that nonuniform filter bank transceivers with good filter responses as well as high SIR can be obtained by the proposed design method. On Capacity of the Writing onto Fast Fading Dirt Channel Rini, Stefano; Shamai, Shlomo The "Writing onto Fast Fading Dirt" (WFFD) channel is investigated to study the effects of partial channel knowledge on the capacity of the "writing on dirty paper" channel. The WFFD channel is the Gel'fand-Pinsker channel in which the output is obtained as the sum of the input, white Gaussian noise and a fading-times-state term. The fading-times-state term is equal to the element-wise product of the channel state sequence, known only at the transmitter, and a fast fading process, known only ... Diversity Techniques for Single-Carrier Packet Retransmissions over Frequency-Selective Channels Assimi Abdel-Nasser Full Text Available In data packet communication systems over multipath frequency-selective channels, hybrid automatic repeat request (HARQ protocols are usually used in order to ensure data reliability. For single-carrier packet transmission in slow fading environment, an identical retransmission of the same packet, due to a decoding failure, does not fully exploit the available time diversity in retransmission-based HARQ protocols. In this paper, we compare two transmit diversity techniques, namely, cyclic frequency-shift diversity and bit-interleaving diversity. Both techniques can be integrated in the HARQ scheme in order to improve the performance of the joint detector. Their performance in terms of pairwise error probability is investigated using maximum likelihood detection and decoding. The impact of the channel memory and the modulation order on the performance gain is emphasized. In practice, we use low complexity linear filter-based equalization which can be efficiently implemented in the frequency domain. The use of iterative equalization and decoding is also considered. The performance gain in terms of frame error rate and data throughput is evaluated by numerical simulations. An iterative detection method of MIMO over spatial correlated frequency selective channel: using list sphere decoding for simplification Shi, Zhiping; Yan, Bing In multiple-input multiple-output(MIMO) wireless systems, combining good channel codes(e.g., Non-binary Repeat Accumulate codes) with adaptive turbo equalization is a good option to get better performance and lower complexity under Spatial Correlated Frequency Selective(SCFS) Channel. The key of this method is after joint antennas MMSE detection (JAD/MMSE) based on interruption cancelling using soft information, considering the detection result as an output of a Gaussian equivalent flat fading channel, and performing maximum likelihood detection(ML) to get more correct estimated result. But the using of ML brings great complexity increase, which is not allowed. In this paper, a low complexity method called list sphere decoding is introduced and applied to replace the ML in order to simplify the adaptive iterative turbo equalization system. Adaptive Space-Time-Spreading-Assisted Wideband CDMA Systems Communicating over Dispersive Nakagami- Fading Channels Yang Lie-Liang Full Text Available In this contribution, the performance of wideband code-division multiple-access (W-CDMA systems using space-time-spreading- (STS- based transmit diversity is investigated, when frequency-selective Nakagami- fading channels, multiuser interference, and background noise are considered. The analysis and numerical results suggest that the achievable diversity order is the product of the frequency-selective diversity order and the transmit diversity order. Furthermore, both the transmit diversity and the frequency-selective diversity have the same order of importance. Since W-CDMA signals are subjected to frequency-selective fading, the number of resolvable paths at the receiver may vary over a wide range depending on the transmission environment encountered. It can be shown that, for wireless channels where the frequency selectivity is sufficiently high, transmit diversity may be not necessitated. Under this case, multiple transmission antennas can be leveraged into an increased bitrate. Therefore, an adaptive STS-based transmission scheme is then proposed for improving the throughput of W-CDMA systems. Our numerical results demonstrate that this adaptive STS-based transmission scheme is capable of significantly improving the effective throughput of W-CDMA systems. Specifically, the studied W-CDMA system's bitrate can be increased by a factor of three at the modest cost of requiring an extra 0.4 dB or 1.2 dB transmitted power in the context of the investigated urban or suburban areas, respectively. A new simple model for composite fading channels: Second order statistics and channel capacity Yilmaz, Ferkan In this paper, we introduce the most general composite fading distribution to model the envelope and the power of the received signal in such fading channels as millimeter wave (60 GHz or above) fading channels and free-space optical channels, which we term extended generalized-K (EGK) composite fading distribution. We obtain the second-order statistics of the received signal envelope characterized by the EGK composite fading distribution. Expressions for probability density function, cumulative distribution function, level crossing rate and average fade duration, moments, amount of fading and average capacity are derived. Numerical and computer simulation examples validate the accuracy of the presented mathematical analysis. © 2010 IEEE. Yilmaz, Ferkan; Alouini, Mohamed-Slim In this paper, we introduce the most general composite fading distribution to model the envelope and the power of the received signal in such fading channels as millimeter wave (60 GHz or above) fading channels and free-space optical channels, which Computer models for fading channels with applications to digital transmission Loo, Chun; Secord, Norman The authors describe computer models for Rayleigh, Rician, log-normal, and land-mobile-satellite fading channels. All computer models for the fading channels are based on the manipulation of a white Gaussian random process. This process is approximated by a sum of sinusoids with random phase angle. These models compare very well with analytical models in terms of their probability distribution of envelope and phase of the fading signal. For the land mobile satellite fading channel, results of level crossing rate and average fade duration are given. These results show that the computer models can provide a good coarse estimate of the time statistic of the faded signal. Also, for the land-mobile-satellite fading channel, the results show that a 3-pole Butterworth shaping filter should be used with the model. An example of the application of the land-mobile-satellite fading-channel model to predict the performance of a differential phase-shift keying signal is described. An overview of turbo decoding on fading channels ATILGAN, DoÄŸan A review of turbo coding and decoding has been presented in the literature [1]. In that paper, turbo coding and decoding on AWGN (Additive White Gaussian Noise) channels has been elaborated. In wireless communications, a phenomennon called multipath fading is frequently encountered. Therefore, investigation of efficient techniques to tackle with the destructive effects of fading is essential. Turbo coding has been proven as an efficient channel coding technique for AWGN channels. Some of the ... OPRA capacity bounds for selection diversity over generalized fading channels Hanif, Muhammad Fainan; Yang, Hongchuan; Alouini, Mohamed-Slim , lower and upper bounds on OPRA capacity for selection diversity scheme are presented. These bounds hold for variety of fading channels including log-normal and generalized Gamma distributed models and have very simple analytic expressions for easy Efficient incremental relaying for packet transmission over fading channels Fareed, Muhammad Mehboob; Alouini, Mohamed-Slim; Yang, Hongchuan In this paper, we propose a novel relaying scheme for packet transmission over fading channels, which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes Secret Sharing over Fast-Fading MIMO Wiretap Channels Bloch Matthieu Full Text Available Secret sharing over the fast-fading MIMO wiretap channel is considered. A source and a destination try to share secret information over a fast-fading MIMO channel in the presence of an eavesdropper who also makes channel observations that are different from but correlated to those made by the destination. An interactive, authenticated public channel with unlimited capacity is available to the source and destination for the secret sharing process. This situation is a special case of the "channel model with wiretapper" considered by Ahlswede and Csiszár. An extension of their result to continuous channel alphabets is employed to evaluate the key capacity of the fast-fading MIMO wiretap channel. The effects of spatial dimensionality provided by the use of multiple antennas at the source, destination, and eavesdropper are then investigated. Cooperative Spectrum Sensing over Non-Identical Nakagami Fading Channels Rao, Anlei Previous works in cooperative spectrum sensing assumed that the channels for sensing and reporting are independent identical distributed (i.i.d). A more practical and appropriate assumption, however, should be that the sensing channels and reporting channels are independent but not necessarily identically distributed (i.n.i.d). In this paper, we derive the false-alarm probability and the detection probability of cooperative spectrum sensing with energy fusion over i.n.i.d Nakagami fading channels. Selected numerical results show that cooperative spectrum sensing still gives considerably better performance results even over i.n.i.d fading channels. Space-Time Joint Interference Cancellation Using Fuzzy-Inference-Based Adaptive Filtering Techniques in Frequency-Selective Multipath Channels Hu, Chia-Chang; Lin, Hsuan-Yu; Chen, Yu-Fan; Wen, Jyh-Horng An adaptive minimum mean-square error (MMSE) array receiver based on the fuzzy-logic recursive least-squares (RLS) algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and squared error variation, denoted by ([InlineEquation not available: see fulltext.],[InlineEquation not available: see fulltext.]), into a forgetting factor[InlineEquation not available: see fulltext.]. For the real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS) algorithm using the fuzzy-inference-controlled step-size[InlineEquation not available: see fulltext.]. This receiver is capable of providing both fast convergence/tracking capability as well as small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS) and variable forgetting factor RLS (VFF-RLS) algorithms at least 3 dB and 1.5 dB in bit-error-rate (BER) for multipath fading channels. Chen Yu-Fan Full Text Available An adaptive minimum mean-square error (MMSE array receiver based on the fuzzy-logic recursive least-squares (RLS algorithm is developed for asynchronous DS-CDMA interference suppression in the presence of frequency-selective multipath fading. This receiver employs a fuzzy-logic control mechanism to perform the nonlinear mapping of the squared error and squared error variation, denoted by ( , , into a forgetting factor . For the real-time applicability, a computationally efficient version of the proposed receiver is derived based on the least-mean-square (LMS algorithm using the fuzzy-inference-controlled step-size . This receiver is capable of providing both fast convergence/tracking capability as well as small steady-state misadjustment as compared with conventional LMS- and RLS-based MMSE DS-CDMA receivers. Simulations show that the fuzzy-logic LMS and RLS algorithms outperform, respectively, other variable step-size LMS (VSS-LMS and variable forgetting factor RLS (VFF-RLS algorithms at least 3 dB and 1.5 dB in bit-error-rate (BER for multipath fading channels. Bit Error Rate Analysis for MC-CDMA Systems in Nakagami- Fading Channels Li Zexian Full Text Available Multicarrier code division multiple access (MC-CDMA is a promising technique that combines orthogonal frequency division multiplexing (OFDM with CDMA. In this paper, based on an alternative expression for the -function, characteristic function and Gaussian approximation, we present a new practical technique for determining the bit error rate (BER of multiuser MC-CDMA systems in frequency-selective Nakagami- fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC or equal gain combining (EGC. The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite range integral and an integrand composed of tabulated functions which can be easily computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations. Optimum Combining for Rapidly Fading Channels in Ad Hoc Networks Sonia Furman Full Text Available Research and technology in wireless communication systems such as radar and cellular networks have successfully implemented alternative design approaches that utilize antenna array techniques such as optimum combining, to mitigate the degradation effects of multipath in rapid fading channels. In ad hoc networks, these methods have not yet been exploited primarily due to the complexity inherent in the network's architecture. With the high demand for improved signal link quality, devices configured with omnidirectional antennas can no longer meet the growing need for link quality and spectrum efficiency. This study takes an empirical approach to determine an optimum combining antenna array based on 3 variants of interelement spacing. For rapid fading channels, the simulation results show that the performance in the network of devices retrofitted with our antenna arrays consistently exceeded those with an omnidirectional antenna. Further, with the optimum combiner, the performance increased by over 60% compared to that of an omnidirectional antenna in a rapid fading channel. A Novel Simulation Model for Nonstationary Rice Fading Channels Kaili Jiang Full Text Available In this paper, we propose a new simulator for nonstationary Rice fading channels under nonisotropic scattering scenarios, as well as the improved computation method of simulation parameters. The new simulator can also be applied on generating Rayleigh fading channels by adjusting parameters. The proposed simulator takes into account the smooth transition of fading phases between the adjacent channel states. The time-variant statistical properties of the proposed simulator, that is, the probability density functions (PDFs of envelope and phase, autocorrelation function (ACF, and Doppler power spectrum density (DPSD, are also analyzed and derived. Simulation results have demonstrated that our proposed simulator provides good approximation on the statistical properties with the corresponding theoretical ones, which indicates its usefulness for the performance evaluation and validation of the wireless communication systems under nonstationary and nonisotropic scenarios. Capacity of Fading Channels in the Low Power Regime Benkhelifa, Fatma The low power regime has attracted various researchers in the information theory and communication communities to understand the performance limits of wireless systems. Indeed, the energy consumption is becoming one of the major limiting factors in wireless systems. As such, energy-efficient wireless systems are of major importance to the next generation wireless systems designers. The capacity is a metric that measures the performance limit of a wireless system. The study of the ergodic capacity of some fading channels in the low power regime is the main subject of this thesis. In our study, we consider that the receiver has always a full knowledge of the channel state information. However, we assume that the transmitter has possibly imperfect knowledge of the channel state information, i.e. he knows either perfectly the channel or only an estimated version of the channel. Both radio frequency and free space optical communication channel models are considered. The main contribution of this work is the explicit characterization of how the capacity scales as function of the signal-to-noise ratio in the low power regime. This allows us to characterize the gain due to the perfect knowledge compared to no knowledge of the channel state information at the transmitter. In particular, we show that the gain increases logarithmically for radio frequency communication. However, the gain increases as log2(Pavg) or log4(Pavg) for free-space optical communication, where Pavg is the average power constraint imposed to the input. Furthermore, we characterize the capacity of cascaded fading channels and we applied the result to Rayleigh-product fading channel and to a free-space optical link over gamma-gamma atmospheric turbulence in the presence of pointing errors. Finally, we study the capacity of Nakagami-m fading channel under quality of service constraints, namely the effective capacity. We have shown that the effective capacity converges to Shannon capacity in the very low Low-Complexity Iterative Receiver for Space-Time Coded Signals over Frequency Selective Channels Mohamed Siala Full Text Available We propose a low-complexity turbo-detector scheme for frequency selective multiple-input multiple-output channels. The detection part of the receiver is based on a List-type MAP equalizer which is a state-reduction algorithm of the MAP algorithm using per-survivor technique. This alternative achieves a good tradeoff between performance and complexity provided a small amount of the channel is neglected. In order to induce the good performance of this equalizer, we propose to use a whitened matched filter (WMF which leads to a white-noise "minimum phase� channel model. Simulation results show that the use of the WMF yields significant improvement, particularly over severe channels. Thanks to the iterative turbo processing (detection and decoding are iterated several times, the performance loss due to the use of the suboptimum List-type equalizer is recovered. Wireless Energy Harvesting Using Signals from Multiple Fading Channels Chen, Yunfei In this paper, we study the average, the probability density function and the cumulative distribution function of the harvested power. In the study, the signals are transmitted from multiple sources. The channels are assumed to be either Rician fading or Gamma-shadowed Rician fading. The received signals are then harvested by using either a single harvester for simultaneous transmissions or multiple harvesters for transmissions at different frequencies, antennas or time slots. Both linear and nonlinear models for the energy harvester at the receiver are examined. Numerical results are presented to show that, when a large amount of harvested power is required, a single harvester or the linear range of a practical nonlinear harvester are more efficient, to avoid power outage. Further, the power transfer strategy can be optimized for fixed total power. Specifically, for Rayleigh fading, the optimal strategy is to put the total power at the source with the best channel condition and switch off all other sources, while for general Rician fading, the optimum magnitudes and phases of the transmitting waveforms depend on the channel parameters. Secrecy Capacity Analysis over α−μ Fading Channels Lei, Hongjiang In this work, we study the secrecy capacity of the classic Wyner's model over the α − μ fading channels, where α and μ specify the nonlinearity and clustering of fading channels, respectively. The average secrecy capacity (ASC) is derived in closed-form by using the extended generalized bivariate Fox's Hfunction (EGBFHF). Moreover, the asymptotic analysis of ASC in high signal-to-noise ratio (SNR) regime is conducted. The asymptotic results unveil that the ASC follows the scaling law of Θ(ln p), where p stands for the ratio between the average powers of main channels and eavesdropping channels. Moreover, the ASC can be enhanced by increasing the transmit SNR, while there exists a ceiling of ASC as the SNRs at both sides are improved simultaneously. The accuracy of the analytical results is validated by Monte-Carlo simulations. The numerical results show that rigorous fading channels are beneficial to the secrecy performance, that is, serious nonlinearity (small α) and sparse clustering (small μ) will lead to the improvement of ASC. Lei, Hongjiang; Ansari, Imran Shafique; Pan, Gaofeng; Alomair, Basel; Alouini, Mohamed-Slim Cooperative Orthogonal Space-Time-Frequency Block Codes over a MIMO-OFDM Frequency Selective Channel M. Rezaei Full Text Available In this paper, a cooperative algorithm to improve the orthogonal space-timefrequency block codes (OSTFBC in frequency selective channels for 2*1, 2*2, 4*1, 4*2 MIMO-OFDM systems, is presented. The algorithm of three node, a source node, a relay node and a destination node is formed, and is implemented in two stages. During the first stage, the destination and the relay antennas receive the symbols sent by the source antennas. The destination node and the relay node obtain the decision variables employing time-space-frequency decoding process by the received signals. During the second stage, the relay node transmits decision variables to the destination node. Due to the increasing diversity in the proposed algorithm, decision variables in the destination node are increased to improve system performance. The bit error rate of the proposed algorithm at high SNR is estimated by considering the BPSK modulation. The simulation results show that cooperative orthogonal space-time-frequency block coding, improves system performance and reduces the BER in a frequency selective channel. Radio resource allocation over fading channels under statistical delay constraints Le-Ngoc, Tho This SpringerBrief presents radio resource allocation schemes for buffer-aided communications systems over fading channels under statistical delay constraints in terms of upper-bounded average delay or delay-outage probability. This Brief starts by considering a source-destination communications link with data arriving at the source transmission buffer. The first scenario, the joint optimal data admission control and power allocation problem for throughput maximization is considered, where the source is assumed to have a maximum power and an average delay constraints. The second scenario, optimal power allocation problems for energy harvesting (EH) communications systems under average delay or delay-outage constraints are explored, where the EH source harvests random amounts of energy from renewable energy sources, and stores the harvested energy in a battery during data transmission. Online resource allocation algorithms are developed when the statistical knowledge of the random channel fading, data arrivals... Optimized Irregular Low-Density Parity-Check Codes for Multicarrier Modulations over Frequency-Selective Channels Valérian Mannoni Full Text Available This paper deals with optimized channel coding for OFDM transmissions (COFDM over frequency-selective channels using irregular low-density parity-check (LDPC codes. Firstly, we introduce a new characterization of the LDPC code irregularity called "irregularity profile.� Then, using this parameterization, we derive a new criterion based on the minimization of the transmission bit error probability to design an irregular LDPC code suited to the frequency selectivity of the channel. The optimization of this criterion is done using the Gaussian approximation technique. Simulations illustrate the good performance of our approach for different transmission channels. Hanif, Muhammad Fainan Channel side information at the transmitter can increase the average capacity by enabling optimal power and rate adaptation. The resulting optimal power and rate adaptation (OPRA) capacity rarely has a closed-form analytic expression. In this paper, lower and upper bounds on OPRA capacity for selection diversity scheme are presented. These bounds hold for variety of fading channels including log-normal and generalized Gamma distributed models and have very simple analytic expressions for easy evaluation even for kth best path selection. Some selected numerical results show that the newly proposed bounds closely approximate the actual OPRA capacity. © 2014 IEEE. Performance Analysis of OFDM in Frequency Selective, Slowly Fading Nakagami Channels Count, Patrick ...) as the modulation technique of choice. Given the incredible growth in the industry as manifest by the seemingly infinite demand for wireless products and services and the accompanying need for superior performance of these systems... In an effort to offer faster, more reliable wireless communications services to the public, many wireless standardization committees have, in recent years, adopted Orthogonal Frequency Division Multiplexing (OFDM... On the capacity of Rician fading channels with full channel state information at low SNR Rezki, Zouheir The capacity of flat Rayleigh fading channels with full channel state information (CSI) at the transmitter and at the receiver at asymptotically low SNR has been recently shown to scale essentially as SNR log (1/SNR). In this paper, we investigate the Rician fading channel capacity with full CSI, and show that the capacity of this channel scales essentially as 1/1+K SNR log (1 /SNR), where K is the Rician factor. This characterization includes perfect CSI at both the transmitter and the receiver or noisy CSI at the transmitter and perfect CSI at the receiver. We also show that one-bit CSI at the transmitter is enough to achieve this asymptotic capacity using an On-Off power control scheme. Our framework may be seen as a generalization of previous works as it captures the Rayleigh fading channel as a special case by letting K goes to zero. © 2012 IEEE. Blind CP-OFDM and ZP-OFDM Parameter Estimation in Frequency Selective Channels Vincent Le Nir Full Text Available A cognitive radio system needs accurate knowledge of the radio spectrum it operates in. Blind modulation recognition techniques have been proposed to discriminate between single-carrier and multicarrier modulations and to estimate their parameters. Some powerful techniques use autocorrelation- and cyclic autocorrelation-based features of the transmitted signal applying to OFDM signals using a Cyclic Prefix time guard interval (CP-OFDM. In this paper, we propose a blind parameter estimation technique based on a power autocorrelation feature applying to OFDM signals using a Zero Padding time guard interval (ZP-OFDM which in particular excludes the use of the autocorrelation- and cyclic autocorrelation-based techniques. The proposed technique leads to an efficient estimation of the symbol duration and zero padding duration in frequency selective channels, and is insensitive to receiver phase and frequency offsets. Simulation results are given for WiMAX and WiMedia signals using realistic Stanford University Interim (SUI and Ultra-Wideband (UWB IEEE 802.15.4a channel models, respectively. Equivalence of Linear MMSE Detection in DS-CDMA and MC-CDMA Systems over Time and Frequency Selective Channels Tamer A. Kadous Full Text Available The goal of this paper is to compare the performance of the linear minimum mean square error (MMSE detector for a class of code division multiple access (CDMA systems in time and frequency selective channels. Specifically, we consider direct sequence (DS-CDMA, multicarrier (MC-CDMA, and the MC-DS-CDMA systems. Two key tools are used in our development. First, a general time-frequency framework that includes the different CDMA systems as special cases. Second, the duality between time and frequency domains that is used to derive equivalences between the different CDMA systems operating over purely frequency selective and purely time selective channels. We then combine the insights obtained from these special cases to assess the performance of CDMA systems over time and frequency selective channels. We provide sufficient conditions for the codes employed by the CDMA systems for the equivalences to hold. Numerical results are presented to illustrate the results. Asymptotic analysis of multicell massive MIMO over Rician fading channels Sanguinetti, Luca; Kammoun, Abla; Debbah, Merouane This work considers the downlink of a multicell massive MIMO system in which L base stations (BSs) of N antennas each communicate with K single-antenna user equipments randomly positioned in the coverage area. Within this setting, we are interested in evaluating the sum rate of the system when MRT and RZF are employed under the assumption that each intracell link forms a MIMO Rician uncorrelated fading channel. The analysis is conducted assuming that N and K grow large with a non-trivial ratio N/K under the assumption that the data transmission in each cell is affected by channel estimation errors, pilot contamination, and an arbitrary large scale attenuation. Numerical results are used to validate the asymptotic analysis in the finite system regime and to evaluate the network performance under different settings. The asymptotic results are also instrumental to get insights into the interplay among system parameters. Sanguinetti, Luca Asymptotic analysis of downlink MISO systems over Rician fading channels Falconet, Hugo In this work, we focus on the ergodic sum rate in the downlink of a single-cell large-scale multi-user MIMO system in which the base station employs N antennas to communicate with K single-antenna user equipments. A regularized zero-forcing (RZF) scheme is used for precoding under the assumption that each link forms a spatially correlated MIMO Rician fading channel. The analysis is conducted assuming N and K grow large with a non trivial ratio and perfect channel state information is available at the base station. Recent results from random matrix theory and large system analysis are used to compute an asymptotic expression of the signal-to-interference-plus-noise ratio as a function of the system parameters, the spatial correlation matrix and the Rician factor. Numerical results are used to evaluate the performance gap in the finite system regime under different operating conditions. © 2016 IEEE. Falconet, Hugo; Sanguinetti, Luca; Kammoun, Abla; Debbah, Merouane Compensating for Channel Fading in DS-CDMA Communication Systems Employing ICA Neural Network Detectors David Overbye Full Text Available In this paper we examine the impact of channel fading on the bit error rate of a DS-CDMA communication system. The system employs detectors that incorporate neural networks effecting methods of independent component analysis (ICA, subspace estimation of channel noise, and Hopfield type neural networks. The Rayleigh fading channel model is used. When employed in a Rayleigh fading environment, the ICA neural network detectors that give superior performance in a flat fading channel did not retain this superior performance. We then present a new method of compensating for channel fading based on the incorporation of priors in the ICA neural network learning algorithms. When the ICA neural network detectors were compensated using the incorporation of priors, they give significantly better performance than the traditional detectors and the uncompensated ICA detectors. Keywords: CDMA, Multi-user Detection, Rayleigh Fading, Multipath Detection, Independent Component Analysis, Prior Probability Hebbian Learning, Natural Gradient Discrete Multiwavelet Critical-Sampling Transform-Based OFDM System over Rayleigh Fading Channels Sameer A. Dawood Full Text Available Discrete multiwavelet critical-sampling transform (DMWCST has been proposed instead of fast Fourier transform (FFT in the realization of the orthogonal frequency division multiplexing (OFDM system. The proposed structure further reduces the level of interference and improves the bandwidth efficiency through the elimination of the cyclic prefix due to the good orthogonality and time-frequency localization properties of the multiwavelet transform. The proposed system was simulated using MATLAB to allow various parameters of the system to be varied and tested. The performance of DMWCST-based OFDM (DMWCST-OFDM was compared with that of the discrete wavelet transform-based OFDM (DWT-OFDM and the traditional FFT-based OFDM (FFT-OFDM over flat fading and frequency-selective fading channels. Results obtained indicate that the performance of the proposed DMWCST-OFDM system achieves significant improvement compared to those of DWT-OFDM and FFT-OFDM systems. DMWCST improves the performance of the OFDM system by a factor of 1.5–2.5 dB and 13–15.5 dB compared with the DWT and FFT, respectively. Therefore the proposed system offers higher data rate in wireless mobile communications. On Channel Estimation for OFDM/TDM Using MMSE-FDE in a Fast Fading Channel Gacanin Haris Full Text Available Abstract MMSE-FDE can improve the transmission performance of OFDM combined with time division multiplexing (OFDM/TDM, but knowledge of the channel state information and the noise variance is required to compute the MMSE weight. In this paper, a performance evaluation of OFDM/TDM using MMSE-FDE with pilot-assisted channel estimation over a fast fading channel is presented. To improve the tracking ability against fast fading a robust pilot-assisted channel estimation is presented that uses time-domain filtering on a slot-by-slot basis and frequency-domain interpolation. We derive the mean square error (MSE of the channel estimator and then discuss a tradeoff between improving the tracking ability against fading and the noise reduction. The achievable bit error rate (BER performance is evaluated by computer simulation and compared with conventional OFDM. It is shown that the OFDM/TDM using MMSE-FDE achieves a lower BER and a better tracking ability against fast fading in comparison with conventional OFDM. Fareed, Muhammad Mehboob In this paper, we propose a novel relaying scheme for packet transmission over fading channels, which improves the spectral efficiency of cooperative diversity systems by utilizing limited feedback from the destination. Our scheme capitalizes on the fact that relaying is only required when direct transmission suffers deep fading. We calculate the packet error rate for the proposed efficient incremental relaying (EIR) scheme with both amplify and forward and decode and forward relaying. We compare the performance of the EIR scheme with the threshold-based incremental relaying (TIR) scheme. It is shown that the efficiency of the TIR scheme is better for lower values of the threshold. However, the efficiency of the TIR scheme for higher values of threshold is outperformed by the EIR. In addition, three new threshold-based adaptive EIR are devised to further improve the efficiency of the EIR scheme. We calculate the packet error rate and the efficiency of these new schemes to provide the analytical insight. © 2014 IEEE. On the Delay-Energy Tradeoff in Multiuser Fading Channels Ralf R. Müller Full Text Available We consider the delay-energy tradeoff on a fading channel with multiuser diversity. For fixed arbitrary rates of the users, the total transmitted energy is minimized subject to a delay constraint. To achieve this goal we propose a scheme which schedules a subset of all users simultaneously. The scheduled users are allocated power to guarantee successful separation at the detector by successive decoding. In this way, we can benefit from both multiuser diversity and the near-far situation via scheduling and simultaneous transmission, respectively. We analytically show that when the number of users goes to infinity the energy required to guarantee the required user rates can be made as small as required at the cost of a higher delay "delay-energy tradeoff�. We explicitly compute the delay under the proposed scheduling policy and discuss how delay differentiation can be achieved. We extend the results to multiband multiaccess channel. Finally, all the results can be generalized in a straightforward fashion to broadcast channel due to the Gaussian multiaccess-broadcast channel duality. Chen, Yunfei; Zhao, Nan; Alouini, Mohamed-Slim fading or Gamma-shadowed Rician fading. The received signals are then harvested by using either a single harvester for simultaneous transmissions or multiple harvesters for transmissions at different frequencies, antennas or time slots. Both linear Combined diversity and improved energy detection in cooperative spectrum sensing with faded reporting channels Srinivas Nallagonda Full Text Available In this paper we evaluate the performance of cooperative spectrum sensing (CSS where each cognitive radio (CR employs an improved energy detector (IED with multiple antennas and uses selection combining (SC for detecting the primary user (PU in noisy and faded sensing (S channels. We derive an expression for the probability of false alarm and expressions for probability of missed detection in non-faded (AWGN and Rayleigh faded sensing environments in terms of cumulative distribution function (CDF. Each CR transmits its decision about PU via noisy and faded reporting (R channel to fusion center (FC. In this paper we assume that S-channels are noisy and Rayleigh faded while several cases of fading are considered for R-channels such as: (i Hoyt (or Nakagami-q, (ii Rayleigh, (iii Rician (or Nakagami-n, and (iv Weibull. A Binary Symmetric channel (BSC with a fixed error probability (r in the R-channel is also considered. The impact of fading in R-channel, S-channel and several network parameters such as IED parameter, normalized detection threshold, number of CRs, and number of antennas on missed detection and total error probability is assessed. The effects of Hoyt, Rician, and Weibull fading parameters on overall performance of IED-CSS are also highlighted. Multicarrier chaotic communications in multipath fading channels without channel estimation Wang, Shilian, E-mail: [email protected]; Zhang, Zhili [College of Electrical Science and Engineering, National University of Defense Technology, Changsha, 410073, P R China (China) A multi-carrier chaotic shift keying(MC-CSK) communication scheme with low probability of interception(LPI) is proposed in this article. We apply chaotic spreading sequences in the frequency domain, mapping a different chip of a chaotic sequence to an individual orthogonal frequency division multiplexing(OFDM) subcarrier. In each block size of $M$ OFDM symbols, we use one pilot OFDM symbol inserted time-spaced in all-frequency to transmit the reference chaotic signal and use the other M-1 OFDM symbols to transmit the information-bearing signals each spreaded by the reference chaotic signal. At the receiver, we construct a differential detector after DFT and recover the information bits from the correlations between the pilot OFDM symbol and the other M-1 OFDM symbols in each block size of M. Performance analysis and computer simulations show that the MC-CSK outperforms differential chaos shift keying(DCSK) in AWGN channels with high bandwidth efficiency for the block size of M=2 and that the MC-CSK exploits effectively the frequent diversity of the multipath channel. Shilian Wang Full Text Available A multi-carrier chaotic shift keying(MC-CSK communication scheme with low probability of interception(LPI is proposed in this article. We apply chaotic spreading sequences in the frequency domain, mapping a different chip of a chaotic sequence to an individual orthogonal frequency division multiplexing(OFDM subcarrier. In each block size of $M$ OFDM symbols, we use one pilot OFDM symbol inserted time-spaced in all-frequency to transmit the reference chaotic signal and use the other M-1 OFDM symbols to transmit the information-bearing signals each spreaded by the reference chaotic signal. At the receiver, we construct a differential detector after DFT and recover the information bits from the correlations between the pilot OFDM symbol and the other M-1 OFDM symbols in each block size of M. Performance analysis and computer simulations show that the MC-CSK outperforms differential chaos shift keying(DCSK in AWGN channels with high bandwidth efficiency for the block size of M=2 and that the MC-CSK exploits effectively the frequent diversity of the multipath channel. On the low SNR capacity of MIMO fading channels with imperfect channel state information The capacity of multiple-input multiple-output (MIMO) Rayleigh fading channels with full knowledge of channel state information (CSI) at both the transmitter and the receiver (CSI-TR) has been shown recently to scale at low signal-to-noise ratio (SNR) essentially as SNR log(1/SNR), independently of the number of transmit and receive antennas. In this paper, we investigate the ergodic capacity of MIMO Rayleigh fading channel with estimated channel state information at the transmitter (CSI-T) and possibly imperfect channel state information at the receiver (CSI-R). Our framework can be seen as a generalization of previous works as it can capture the perfect CSI-TR as a special case when the estimation error variance goes to zero. In this paper, we mainly focus on the low SNR regime, and we show that the capacity scales as (1-α) SNR log(1/SNR), where α is the estimation error variance. This characterization shows the loss of performance due to error estimation over the perfect channel state information at both the transmitter and the receiver. As a by-product of our new analysis, we show that our framework can be also extended to characterize the capacity of MIMO Rician fading channels at low SNR with possibly imperfect CSI-T and CSI-R. © 1972-2012 IEEE. The capacity of Multiple Input Multiple Output (MIMO) Rayleigh fading channels with full knowledge of channel state information (CSI) at both the transmitter and the receiver (CSI-TR) has been shown recently to scale at low Signal-to-Noise Ratio (SNR) essentially as SNR log(1=SNR), independently of the number of transmit and receive antennas. In this paper, we investigate the ergodic capacity of MIMO Rayleigh fading channel with estimated channel state information at the transmitter (CSI-T) and possibly imperfect channel state information at the receiver (CSI-R). Our framework can be seen as a generalization of previous works as it can capture the perfect CSI-TR as a special case when the estimation error variance goes to zero. In our work, we mainly focus on the low SNR regime and we show that the capacity scales as (1-α) SNR log(1=SNR), where α is the estimation error variance. This characterization shows the loss of performance due to error estimation over the perfect channel state information at both the transmitter and the receiver. As a by-product of our new analysis, we show that our framework can also be extended to characterize the capacity of MIMO Rician fading channels at low SNR with possibly imperfect CSI-T and CSI-R. © 2014 IFIP. Analysis of D2D Communications over Gamma/Nakagami Fading Channels Z. Hussain Full Text Available In this paper, we investigate the outage probability, channel capacity and symbol error rate (SER performance of device-to-device (D2D communication systems. The D2D communication system is affected by several co-channel interferers. Gamma fading channel is considered for the D2D communication system. The channel for the co-channel interference is assumed to be Nakagami faded. An expression for the probability density function (PDF of the signal-to-interference ratio (SIR is presented. The PDF is a function of distances between various devices in the D2D system, path-loss, channel fading conditions and signal powers. Based on the PDF expression, we present the expressions for the outage, channel capacity and SER. With the help of numerical results the performance of D2D communication system is discussed under various conditions of interference, path-loss and channel fading. Higher order capacity statistics of multi-hop transmission systems over Rayleigh fading channels Yilmaz, Ferkan; Tabassum, Hina; Alouini, Mohamed-Slim In this paper, we present an exact analytical expression to evaluate the higher order statistics of the channel capacity for amplify and forward (AF) multihop transmission systems operating over Rayleigh fading channels. Furthermore, we present On limits of Wireless Communications in a Fading Environment: a General Parameterization Quantifying Performance in Fading Channel Amit Grover Full Text Available The reliable services along with high throughput can be achieved by using wireless communication systems. These systems also provides a wide coverage because of their features, no doubt MIMO Communication System [1] is one among them. Features provided by these systems ensure the improved system coverage and increased data transmission rate by considering multiple numbers of transmitter and receiver antennas. In this article, the concept of equalization has been considered and finally the performance of the MIMO Systems in Rician flat fading [5] channel is compared with the Rayleigh flat fading channel. It has also been observed that the performance of these Systems in Rician Flat Fading Channel is the best as compare to the Rayleigh Flat Fading Channel [10]. It has been concluded that the successive interference methods provide better performance as compare to others, but their complexity is high. Simulation results shows that ML provides the better performance in comparison to other equalizers but Sphere decoder provides the best performance. On the capacity of multiaccess fading channels with full channel state information at low power regime We study the throughput capacity region of the Gaussian multiaccess (MAC) fading channel with perfect channel state information (CSI) at the receiver (CSI-R) and at the transmitters (CSI-T), at low power regime. We show that it has a multidimensional rectangle structure and thus is simply characterized by single user capacity points. More specifically, we show that at low power regime, the boundary surface of the capacity region shrinks to a single point corresponding to the sum rate maximizer and that the coordinates of this point coincide with single user capacity bounds. Inspired from this result, we propose an on-off scheme, compute its achievable rate, and provide a necessary condition on the fading channels under which this scheme achieves single user capacity bounds of the MAC channel at asymptotically low power regime. We argue that this necessary condition characterizes a class of fading that encompasses all known wireless channels, where the capacity region of the MAC channel has a simple expression in terms of users\\' average power constraints only. © 2013 IEEE. An MGF-based capacity analysis of equal gain combining over fading channels Exact average capacity results for L-branch coherent equal-gain combining (EGC) in correlated and uncorrelated fading channels are not known. This paper develops a novel framework (approach) for capacity analysis of L-branch EGC in generalized fading channels. In addition, Gamma shadowed generalized Nakagami-m fading model is proposed in order to statistically model the fading environments in high frequencies such as 60 GHz and above. Some simulations are carried out and then the obtained results are accentuated by means of numerical analysis based on the proposed framework. Numerical and simulation results, performed to verify the correctness of the proposed framework, are in perfect agreement. ©2010 IEEE. An information-guided channel-hopping scheme for block-fading channels with estimation errors Yang, Yuli Information-guided channel-hopping technique employing multiple transmit antennas was previously proposed for supporting high data rate transmission over fading channels. This scheme achieves higher data rates than some mature schemes, such as the well-known cyclic transmit antenna selection and space-time block coding, by exploiting the independence character of multiple channels, which effectively results in having an additional information transmitting channel. Moreover, maximum likelihood decoding may be performed by simply decoupling the signals conveyed by the different mapping methods. In this paper, we investigate the achievable spectral efficiency of this scheme in the case of having channel estimation errors, with optimum pilot overhead for minimum meansquare error channel estimation, when transmitting over blockfading channels. Our numerical results further substantiate the robustness of the presented scheme, even with imperfect channel state information. ©2010 IEEE. On the capacity of cognitive radio under limited channel state information over fading channels A spectrum-sharing communication system where the secondary user is aware of the instantaneous channel state information (CSI) of the secondary link, but knows only the statistics and an estimated version of the secondary transmitter-primary receiver (ST-PR) link, is investigated. The optimum power profile and the ergodic capacity of the secondary link are derived for general fading channels (with continuous probability density function) under average and peak transmit-power constraints and with respect to two different interference constraints: an interference outage constraint and a signal-to-interference (SI) outage constraint. When applied to Rayleigh fading channels, our results show, for instance, that the interference constraint is harmful at high-power regime, whereas at low-power regime, it has a marginal impact and no-interference performance may be achieved. © 2011 IEEE. Analysis of Coded FHSS Systems with Multiple Access Interference over Generalized Fading Channels Salam A. Zummo Full Text Available We study the effect of interference on the performance of coded FHSS systems. This is achieved by modeling the physical channel in these systems as a block fading channel. In the derivation of the bit error probability over Nakagami fading channels, we use the exact statistics of the multiple access interference (MAI in FHSS systems. Due to the mathematically intractable expression of the Rician distribution, we use the Gaussian approximation to derive the error probability of coded FHSS over Rician fading channel. The effect of pilot-aided channel estimation is studied for Rician fading channels using the Gaussian approximation. From this, the optimal hopping rate in coded FHSS is approximated. Results show that the performance loss due to interference increases as the hopping rate decreases. Asymptotic SER performance comparison of MPSK and MDPSK in wireless fading channels Song, Xuegui We propose a general framework to investigate asymptotic relative performance between M-ary phase-shift keying (MPSK) and M-ary differential phase-shift keying (MDPSK) in wireless fading channels. Using this framework, we provide an alternative derivation for the closed-form expression of the asymptotic performance loss of MDPSK w.r.t. MPSK in an additive white Gaussian noise channel. The same performance loss is also shown to be true for the lognormal fading channels. Song, Xuegui; Yang, Fan; Cheng, Julian; Alouini, Mohamed-Slim Cooperative Cognitive Radio Systems over Nakagami-m Fading Channels Hyadi, Amal This thesis aims to investigate the incorporation of cooperative techniques in cognitive radio networks over Nakagami-m fading channels. These last years, spectrum sharing mechanisms has gained a lot of interest in the wireless communication domain. Using cooperation in a cognitive set up make the use of spectrum much more efficient. Moreover, it helps to extend the coverage area of the cognitive network and also to reduce the transmitting power and, thus, the generated interference. In this work, we consider two particular scenarios for cooperative cognitive radio systems. The first scenario consider multihop regenerative relaying in an underlay cognitive set up. The cooperation is performed in the secondary system, in the presence of multiple primary users. Both interference power and peak power constraints are taking into account. Closed-form expressions for the statistical characteristics and multiple end- to-end performance metrics are derived. Different scenarios are presented to illustrate the obtained results and Monte Carlo simulations confirm the accuracy of our analytical derivations. In the second part of this work, we consider an overlay cognitive network with the spectrally efficient two-phase two-way relaying protocol. Two relay selection techniques, optimizing both the primary and the secondary communication, are presented. The overall outage performance is investigated and an optimal power allocation scheme, that ameliorate the outage performance of the system, is proposed. Numerical simulations are presented to illustrate and compare the obtained results. On the capacity of nakagami-m fading Channels with full channel state information at low SNR The capacity of flat Rayleigh fading channels with full channel state information (CSI) at the transmitter and at the receiver at asymptotically low SNR has been recently shown to scale essentially as SNR log(1/SNR)}. In this paper, we investigate the Nakagami-m fading channel capacity with full CSI, and show that the capacity of this channel scales essentially as m/ Omega SNR log(1/SNR), where m is the Nakagami-m fading parameter and where Ω is the channel mean-square. We also show that one-bit CSI at the transmitter is enough to achieve this asymptotic capacity using an On-Off power control scheme. Our framework may be seen as a generalization of previous works as it captures the Rayleigh fading channel as a special case by taking m=1. © 2012 IEEE. Delay-limited capacity of fading multiple access and broadcast channels in the low power regime Rezki, Zouheir; Alouini, Mohamed-Slim show that for fading channels where the MAC capacity region is strictly positive, it has a multidimensional rectangle structure and thus is simply characterized by single user capacity points. More specifically, we show that at low power, the boundary Opportunistic relaying in multipath and slow fading channel: Relay selection and optimal relay selection period Sungjoon Park,; Stark, Wayne E. In this paper we present opportunistic relay communication strategies of decode and forward relaying. The channel that we are considering includes pathloss, shadowing, and fast fading effects. We find a simple outage probability formula Capacity bounds for kth best path selection over generalized fading channels Exact ergodic capacity calculation for fading wireless channels typically involves time-consuming numerical evaluation of infinite integrals. In this paper, lower and upper bounds on ergodic capacity for kth best path are presented. These bounds Product of the powers of generalized Nakagami-m variates and performance of cascaded fading channels -identically distributed GNM random variables, for the purpose of modeling the cascaded fading channels. In particular, using the Fox's H function, we derive the probability density function, the cumulative distribution function, the moment generating function A tone-aided dual vestigial sideband system for digital communications on fading channels Hladik, Stephen M.; Saulnier, Gary J.; Rafferty, William A spectrally efficient tone-aided dual vestigial sideband (TA/DVSB) system for digital data communications on fading channels is presented and described analytically. This PSK (phase-shift-keying) system incorporates a feed-forward, tone-aided demodulation technique to compensate for Doppler frequency shift and channel- induced, multipath fading. In contrast to other tone-in-band-type systems, receiver synchronization is derived from the complete data VSBs. Simulation results for the Rician fading channel are presented. These results demonstrate the receiver's ability to mitigate performance degradation due to fading and to obtain proper data carrier synchronization, suggesting that the proposed TA/DVSB system has promise for this application. Simulated BER (bit-error rate) data indicate that the TA/DVSB system effectively alleviates the channel distortions of the land mobile satellite application. A Unified Simulation Approach for the Fast Outage Capacity Evaluation over Generalized Fading Channels Rached, Nadhir B.; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul The outage capacity (OC) is among the most important performance metrics of communication systems over fading channels. The evaluation of the OC, when equal gain combining (EGC) or maximum ratio combining (MRC) diversity techniques are employed Ear-to-Ear On-Body Channel Fading in the ISM-band for Tangentially-Polarized Antennas Kvist, Søren Helstrup; Thaysen, Jesper; Jakobsen, Kaj Bjarne The ear-to-ear on-body channel fading has been studied in the ISM-band. The ear-to-ear path gain was measured on six persons in an indoor environment for a duration of 200 s. The channel fading has been characterized in terms of empirical cumulative distribution functions (CDF), average fade... High SNR BER comparison of coherent and differentially coherent modulation schemes in lognormal fading channels Song, Xuegui; Cheng, Julian; Alouini, Mohamed-Slim Using an auxiliary random variable technique, we prove that binary differential phase-shift keying and binary phase-shift keying have the same asymptotic bit-error rate performance in lognormal fading channels. We also show that differential quaternary phase-shift keying is exactly 2.32 dB worse than quaternary phase-shift keying over the lognormal fading channels in high signal-to-noise ratio regimes. Exact Symbol Error Probability of Square M-QAM Signaling over Generalized Fading Channels subject to Additive Generalized Gaussian Noise Soury, Hamza This paper considers the average symbol error probability of square Quadrature Amplitude Modulation (QAM) coherent signaling over flat fading channels subject to additive generalized Gaussian noise. More specifically, a generic closedform expression in terms of the Fox H function and the bivariate Fox H function is offered for the extended generalized-K fading case. Simplifications for some special fading distributions such as generalized-K fading, Nakagami-m fading, and Rayleigh fading and special additive noise distributions such as Gaussian and Laplacian noise are then presented. Finally, the mathematical formalism is illustrated by some numerical examples verified by computer based simulations for a variety of fading and additive noise parameters. Analysis of multipath channel fading techniques in wireless communication systems Mahender, Kommabatla; Kumar, Tipparti Anil; Ramesh, K. S. Multipath fading occurs in any environment where there is multipath propagation and there is some movement of elements within the radio communications system. This may include the radio transmitter or receiver position, or in the elements that give rise to the reflections. The multipath fading can often be relatively deep, i.e. the signals fade completely away, whereas at other times the fading may not cause the signal to fall below a useable strength. Multipath fading may also cause distortion to the radio signal. As the various paths that can be taken by the signals vary in length, the signal transmitted at a particular instance will arrive at the receiver over a spread of times. This can cause problems with phase distortion and inter symbol interference when data transmissions are made. As a result, it may be necessary to incorporate features within the radio communications system that enables the effects of these problems to be minimized. This paper analyses the effects of various types of multipath fading in wireless transmission system. General Switch-and-Stay Combing for Space Diversity over Rayleigh Fading Channels Yawgeng A. Chau Full Text Available Three multibranch switch-and-stay combining (MSSC schemes are analyzed for Rayleigh fading channels, where different decision statistics for antenna switching (i.e., switch statistic are used. Let a and r denote the fading factor and the received baseband signal of a diversity branch, respectively. In contrast to the traditional MSSC that uses the faded signal-to-noise ratio (SNR of diversity branches as the corresponding switch statistic, to enhance the receiver performance, |r|, |ar|, and a new linear combination of a and |r| are used as switch statistics of the three MSSC schemes, respectively. For performance evaluation, the bit error rate (BER of BPSK is derived for the three MSSC schemes over both independent-and-identical distributed (i.i.d. and independent-and-nonidentical distributed (i.n.d. Rayleigh fading channels. To pursue optimal performance, the locally optimal switch threshold (ST of each MSSC scheme is obtained for general i.n.d. fading channels. In addition, the locally optimal ST becomes the globally optimal ST for i.i.d. channels. Numerical results based on the analysis and simulations are presented. In contrast to the MSSC over i.i.d. fading channels, we will show that the performance of MSSC schemes can be improved by increasing the number of branches, if i.n.d. channels are considered. EFFECTS OF RICIAN FADING ON THE OPERATION OF AERONAUTICAL SATELLITE OFDM CHANNEL Volodymyr Kharchenko Full Text Available The aim of this study is to investigate the influence of Rician fading on messages transmission via the aeronautical satellite OFDM channel with adaptive modulation and the development of a method for estimating the parameters of such a channel. Methods: To study the effect of Rician fading on messages transmission via aeronautical satellite OFDM channel with adaptive modulation the original model of the communication channel "Aircraft-Satellite-Ground Station� was built using software package MATLAB Sіmulіnk. The model includes "Aircraft Transmitter�, "Uplink/Downlink Path�, "Satellite Transponder�, and "Ground Station Receiver�. Each modulator block in the modulation bank performs convolutional coding and puncturing using code rates of ½, ²/3, and ¾, data interleaving, BPSK, QPSK, 16-QAM, and 64-QAM modulation. Results: Dependences of Estimated channel SNR on the ratio between the power of the LOS component and the diffuse component, on the downlink gain and delay in the diffuse component for different Doppler spectrum types and Doppler frequency offsets were obtained. A method for estimating the parameters of the satellite channels with fading was proposed. Discussion: The realistic model of aeronautical satellite OFDM link with Rician fading is developed for the first time on a basis of IEEE 802.11a standard and used for channel parameters evaluation. Proposed in this article approach can be considered as a method for estimating parameters of the channel with fading. In this paper, we analyze the fading statistics of a generic fading distribution, termed the N-product Generalized Nakagami-m (GNM) distribution (N*GNM distribution), constructed as the product of the power of N statistically independent and non-identically distributed GNM random variables, for the purpose of modeling the cascaded fading channels. In particular, using the Fox\\'s H function, we derive the probability density function, the cumulative distribution function, the moment generating function and the moments of such channels in closed-form. These derived results are a convenient tool to statistically model the cascaded GNM fading channels and to analyze the performance of digital communication systems over these kinds of channels. As such, generic closed-form expressions for the amount of fading, the outage probability, the capacity, the outage capacity and the average bit error probabilities of digital communications systems over cascaded GNM fading channels are presented. Numerical and simulation results, performed to verify the correctness of the proposed formulation, are in perfect agreement. In this paper, we present an exact analytical expression to evaluate the higher order statistics of the channel capacity for amplify and forward (AF) multihop transmission systems operating over Rayleigh fading channels. Furthermore, we present simple and efficient closed-form expression to the higher order moments of the channel capacity of dual hop transmission system with Rayleigh fading channels. In order to analyze the behavior of the higher order capacity statistics and investigate the usefulness of the mathematical analysis, some selected numerical and simulation results are presented. Our results are found to be in perfect agreement. © 2012 IEEE. Sungjoon Park, In this paper we present opportunistic relay communication strategies of decode and forward relaying. The channel that we are considering includes pathloss, shadowing, and fast fading effects. We find a simple outage probability formula for opportunistic relaying in the channel, and validate the results by comparing it with the exact outage probability. Also, we suggest a new relay selection algorithm that incorporates shadowing. We consider a protocol of broadcasting the channel gain of the previously selected relay. This saves resources in slow fading channel by reducing collisions in relay selection. We further investigate the optimal relay selection period to maximize the throughput while avoiding selection overhead. © 2011 IEEE. Adaptive Jamming Suppression in Coherent FFH System Using Weighted Equal Gain Combining Receiver over Fading Channels with Imperfect CSI Yishan He Full Text Available Fast frequency hopping (FFH is commonly used as an antijamming communication method. In this paper, we propose efficient adaptive jamming suppression schemes for binary phase shift keying (BPSK based coherent FFH system, namely, weighted equal gain combining (W-EGC with the optimum and suboptimum weighting coefficient. We analyze the bit error ratio (BER of EGC and W-EGC receivers with partial band noise jamming (PBNJ, frequency selective Rayleigh fading, and channel estimation errors. Particularly, closed-form BER expressions are presented with diversity order two. Our analysis is verified by simulations. It is shown that W-EGC receivers significantly outperform EGC. As compared to the maximum likelihood (ML receiver in conventional noncoherent frequency shift keying (FSK based FFH, coherent FFH/BPSK W-EGC receivers also show significant advantages in terms of BER. Moreover, W-EGC receivers greatly reduce the hostile jammers' jamming efficiency. Robust Automatic Modulation Classification Technique for Fading Channels via Deep Neural Network Jung Hwan Lee Full Text Available In this paper, we propose a deep neural network (DNN-based automatic modulation classification (AMC for digital communications. While conventional AMC techniques perform well for additive white Gaussian noise (AWGN channels, classification accuracy degrades for fading channels where the amplitude and phase of channel gain change in time. The key contributions of this paper are in two phases. First, we analyze the effectiveness of a variety of statistical features for AMC task in fading channels. We reveal that the features that are shown to be effective for fading channels are different from those known to be good for AWGN channels. Second, we introduce a new enhanced AMC technique based on DNN method. We use the extensive and diverse set of statistical features found in our study for the DNN-based classifier. The fully connected feedforward network with four hidden layers are trained to classify the modulation class for several fading scenarios. Numerical evaluation shows that the proposed technique offers significant performance gain over the existing AMC methods in fading channels. On the capacity of multiple access and broadcast fading channels with full channel state information at low SNR We study the throughput capacity region of the Gaussian multi-access (MAC) fading channel with perfect channel state information (CSI) at the receiver and at the transmitters, at low power regime. We show that it has a multidimensional rectangle structure and thus is simply characterized by single user capacity points.More specifically, we show that at low power regime, the boundary surface of the capacity region shrinks to a single point corresponding to the sum rate maximizer and that the coordinates of this point coincide with single user capacity bounds. Inspired from this result, we propose an on-off scheme, compute its achievable rate, and show that this scheme achieves single user capacity bounds of the MAC channel for a wide class of fading channels at asymptotically low power regime. We argue that this class of fading encompasses all known wireless channels for which the capacity region of the MAC channel has even a simpler expression in terms of users\\' average power constraints only. Using the duality of Gaussian MAC and broadcast channels (BC), we deduce a simple characterization of the BC capacity region at low power regime and show that for a class of fading channels (including Rayleigh fading), time-sharing is asymptotically optimal. © 2014 IEEE. Wireless Transmission of Big Data: A Transmission Time Analysis over Fading Channel Wang, Wen-Jing; Yang, Hong-Chuan; Alouini, Mohamed-Slim In this paper, we investigate the transmission time of a large amount of data over fading wireless channel with adaptive modulation and coding (AMC). Unlike traditional transmission systems, where the transmission time of a fixed amount of data is typically regarded as a constant, the transmission time with AMC becomes a random variable, as the transmission rate varies with the fading channel condition. To facilitate the design and optimization of wireless transmission schemes for big data applications, we present an analytical framework to determine statistical characterizations for the transmission time of big data with AMC. In particular, we derive the exact statistics of transmission time over block fading channels. The probability mass function (PMF) and cumulative distribution function (CDF) of transmission time are obtained for both slow and fast fading scenarios. We further extend our analysis to Markov channel, where transmission time becomes the sum of a sequence of exponentially distributed time slots. Analytical expression for the probability density function (PDF) of transmission time is derived for both fast fading and slow fading scenarios. These analytical results are essential to the optimal design and performance analysis of future wireless transmission systems for big data applications. Wang, Wen-Jing Continuous-variable quantum key distribution in uniform fast-fading channels Papanastasiou, Panagiotis; Weedbrook, Christian; Pirandola, Stefano We investigate the performance of several continuous-variable quantum key distribution protocols in the presence of uniform fading channels. These are lossy channels whose transmissivity changes according to a uniform probability distribution. We assume the worst-case scenario where an eavesdropper induces a fast-fading process, where she chooses the instantaneous transmissivity while the remote parties may only detect the mean statistical effect. We analyze coherent-state protocols in various configurations, including the one-way switching protocol in reverse reconciliation, the measurement-device-independent protocol in the symmetric configuration, and its extension to a three-party network. We show that, regardless of the advantage given to the eavesdropper (control of the fading), these protocols can still achieve high rates under realistic attacks, within reasonable values for the variance of the probability distribution associated with the fading process. Channel allocation and rate adaptation for relayed transmission over correlated fading channels Hwang, Kyusung We consider, in this paper, channel allocation and rate adaptation scheme for relayed transmission over correlated fading channels via cross-layer design. Specifically, jointly considering the data link layer buffer occupancy and channel quality at both the source and relay nodes, we develop an optimal channel allocation and rate adaptation policy for a dual-hop relayed transmission. As such the overall transmit power for the relayed system is minimized while a target packet dropping rate (PDR) due to buffer over flows is guaranteed. In order to find such an optimal policy, the channel allocation and rate adaptation transmission framework is formulated as a constraint Markov decision process (CMDP). The PDR performance of the optimal policy is compared with that of two conventional suboptimal schemes, namely the channel quality based and the buffer occupancy based channel allocation schemes. Numerical results show that for a given power budget, the optimal scheme requires significantly less power than the conventional schemes in order to maintain a target PDR. ©2009 IEEE. A unified MGF-based capacity analysis of diversity combiners over generalized fading channels Unified exact ergodic capacity results for L-branch coherent diversity combiners including equal-gain combining (EGC) and maximal-ratio combining (MRC) are not known. This paper develops a novel generic framework for the capacity analysis of L-branch EGC/MRC over generalized fading channels. The framework is used to derive new results for the gamma-shadowed generalized Nakagami-m fading model which can be a suitable model for the fading environments encountered by high frequency (60 GHz and above) communications. The mathematical formalism is illustrated with some selected numerical and simulation results confirming the correctness of our newly proposed framework. © 2012 IEEE. Average bit error probability of binary coherent signaling over generalized fading channels subject to additive generalized gaussian noise This letter considers the average bit error probability of binary coherent signaling over flat fading channels subject to additive generalized Gaussian noise. More specifically, a generic closed form expression in terms of the Fox\\'s H function is offered for the extended generalized-K fading case. Simplifications for some special fading distributions such as generalized-K fading and Nakagami-m fading and special additive noise distributions such as Gaussian and Laplacian noise are then presented. Finally, the mathematical formalism is illustrated by some numerical examples verified by computer based simulations for a variety of fading and additive noise parameters. © 2012 IEEE. A Novel Criterion for Optimum MultilevelCoding Systems in Mobile Fading Channels YUAN Dongfeng; WANG Chengxiang; YAO Qi; CAO Zhigang A novel criterion that is "capac-ity rule" and "mapping rule" for the design of op-timum MLC scheme over mobile fading channels isproposed.According to this theory,the performanceof multilevel coding with multistage decoding schemes(MLC/MSD) in mobile fading channels is investi-gated,in which BCH codes are chosen as componentcodes,and three mapping strategies with 8ASK mod-ulation are used.Numerical results indicate that whencode rates of component codes in MLC scheme are de-signed based on "capacity rule",the performance ofthe system with block partitioning (BP) is optimumfor Rayleigh fading channels,while the performance ofthe system with Ungerboeck partioning (UP) is bestfor AWGN channels. Capacity limits of spectrum-sharing systems over hyper-fading channels Ekin, Sabit Cognitive radio (CR) with spectrum-sharing feature is a promising technique to address the spectrum under-utilization problem in dynamically changing environments. In this paper, the achievable capacity gain of spectrum-sharing systems over dynamic fading environments is studied. To perform a general analysis, a theoretical fading model called hyper-fading model that is suitable to the dynamic nature of CR channel is proposed. Closed-form expressions of probability density function (PDF) and cumulative density function (CDF) of the signal-to-noise ratio (SNR) for secondary users (SUs) in spectrum-sharing systems are derived. In addition, the capacity gains achievable with spectrum-sharing systems in high and low power regions are obtained. The effects of different fading figures, average fading powers, interference temperatures, peak powers of secondary transmitters, and numbers of SUs on the achievable capacity are investigated. The analytical and simulation results show that the fading figure of the channel between SUs and primary base-station (PBS), which describes the diversity of the channel, does not contribute significantly to the system performance gain. © 2011 John Wiley & Sons, Ltd. Fading Characteristics of Wireless Channel on High-Speed Railway in Hilly Terrain Scenario Fengyu Luan Full Text Available This paper focuses on the fading characteristics of wireless channel on High-Speed Railway (HSR in hilly terrain scenario. Due to the rapid speed, the fading characteristics of HSR channel are highly correlated with time or Transmit-Receive distance and have their own special property. To investigate the fading characteristics, the measurement is conducted on the Guangzhou-Shenzhen passenger-dedicated line in China with the speed of 295 km/h in the data-collection area at 2.4 GHz. From the measured data, the amplitude of each path is estimated by using the Subspace-Alternating Generalized Expectation-Maximization (SAGE algorithm along with other parameters of channel impulse responses. Then the fading parameters, including path loss, shadow fading, and K-factor, are analysed. With the numerical results in the measurement and analysis, the fading characteristics have been revealed and modelled. It is supposed that this work has a promotion for HSR communication system design and improvement. Ekin, Sabit; Yilmaz, Ferkan; Ç elebi, Hasari Burak; Qaraqe, Khalid A.; Alouini, Mohamed-Slim; Serpedin, Erchin The capacity of the cascaded fading channel in the low power regime In this paper, we present a simple way to compute the ergodic capacity of cascaded channels with perfect channel state information at both the transmitter and the receiver. We apply our generic results to the Rayleigh-double fading channel, and to the free-space optical channel in the presence of pointing errors and we express their low signal-to-noise ratio capacities. We mainly focus on the low signal-to-noise ratio range. On the low SNR capacity of maximum ratio combining over rician fading channels with full channel state information In this letter, we study the ergodic capacity of a maximum ratio combining (MRC) Rician fading channel with full channel state information (CSI) at the transmitter and at the receiver. We focus on the low Signal-to-Noise Ratio (SNR) regime and we show that the capacity scales as L ΩK+L SNRx log(1SNR), where Ω is the expected channel gain per branch, K is the Rician fading factor, and L is the number of diversity branches. We show that one-bit CSI feedback at the transmitter is enough to achieve this capacity using an on-off power control scheme. Our framework can be seen as a generalization of recently established results regarding the fading-channels capacity characterization in the low-SNR regime. © 2012 IEEE. Benkhelifa, Fatma; Rezki, Zouheir; Alouini, Mohamed-Slim Entanglement of Gaussian states and the applicability to quantum key distribution over fading channels Usenko, Vladyslav C; Filip, Radim; Heim, Bettina; Peuntinger, Christian; Wittmann, Christoffer; Marquardt, Christoph; Leuchs, Gerd Entanglement properties of Gaussian states of light as well as the security of continuous variable quantum key distribution with Gaussian states in free-space fading channels are studied. These qualities are shown to be sensitive to the statistical properties of the transmittance distribution in the cases when entanglement is strong or when channel excess noise is present. Fading, i.e. transmission fluctuations, caused by beam wandering due to atmospheric turbulence, is a frequent challenge in free-space communication. We introduce a method of fading discrimination and subsequent post-selection of the corresponding sub-states and show that it can improve the entanglement resource and restore the security of the key distribution over a realistic fading link. Furthermore, the optimal post-selection strategy in combination with an optimized entangled resource is shown to drastically increase the protocol's robustness to excess noise, which is confirmed for experimentally measured fading channel characteristics. The stability of the result against finite data ensemble size and imperfect channel estimation is also addressed. (paper) Multiple-Symbol Decision-Feedback Space-Time Differential Decoding in Fading Channels Wang Xiaodong Full Text Available Space-time differential coding (STDC is an effective technique for exploiting transmitter diversity while it does not require the channel state information at the receiver. However, like conventional differential modulation schemes, it exhibits an error floor in fading channels. In this paper, we develop an STDC decoding technique based on multiple-symbol detection and decision-feedback, which makes use of the second-order statistic of the fading processes and has a very low computational complexity. This decoding method can significantly lower the error floor of the conventional STDC decoding algorithm, especially in fast fading channels. The application of the proposed multiple-symbol decision-feedback STDC decoding technique in orthogonal frequency-division multiplexing (OFDM system is also discussed. Exact capacity analysis of multihop transmission over amplify-and-forward relay fading channels Yilmaz, Ferkan; Kucur, Oǧuz; Alouini, Mohamed-Slim In this paper, we propose an analytical framework on the exact computation of the average capacity of multihop transmission over amplify-and-forward relay fading channels. Our approach relies on the algebraic combination of Mellin and Laplace transforms to obtain exact single integral expressions which can be easily computed by Gauss-Chebyshev Quadrature (GCQ) rule. As such, the derived results are a convenient tool to analyze the average capacity of multihop transmission over amplify-and-forward relay fading channels. As an application of the analytical framework on the exact computation of the average capacity of multihop transmission, some examples are accentuated for generalized Nakagami-m fading channels. Numerical and simulation results, performed to verify the correctness of the proposed formulation, are in perfect agreement. ©2010 IEEE. Space-Time Trellis Coded 8PSK Schemes for Rapid Rayleigh Fading Channels Full Text Available This paper presents the design of 8PSK space-time (ST trellis codes suitable for rapid fading channels. The proposed codes utilize the design criteria of ST codes over rapid fading channels. Two different approaches have been used. The first approach maximizes the symbol-wise Hamming distance (HD between signals leaving from or entering to the same encoder′s state. In the second approach, set partitioning based on maximizing the sum of squared Euclidean distances (SSED between the ST signals is performed; then, the branch-wise HD is maximized. The proposed codes were simulated over independent and correlated Rayleigh fading channels. Coding gains up to 4 dB have been observed over other ST trellis codes of the same complexity. Partial relay selection based on shadowing side information over generalized composite fading channels In this paper, in contrast to the relay selection protocols available in the literature, we propose a partial relay selection protocol utilizing only the shadowing side information of the relays instead of their full channel side information in order to select a relay in a dual-hop relaying system through the available limited feedback channels and power budget. We then presented an exact unified performance expression combining the average bit error probability, ergodic capacity, and moments-generating function of the proposed partial relay selection over generalized fading channels. Referring to the unified performance expression introduced in [1], we explicitly offer a generic unified performance expression that can be easily calculated and that is applicable to a wide variety of fading scenarios. Finally, as an illustration of the mathematical formalism, some numerical and simulation results are generated for an extended generalized-K fading environment, and these numerical and simulation results are shown to be in perfect agreement. © 2011 IEEE. Achievable capacity of a spectrum sharing system over hyper fading channels Cognitive radio with spectrum sharing feature is a promising technique to address the spectrum under-utilization problem in dynamically changing environments. In this paper, achievable capacity gain of spectrum sharing systems over dynamic fading environments is studied. For the analysis, a theoretical fading model called hyper fading model that is suitable to the dynamic nature of cognitive radio channel is proposed. Closed-form expression of probability density function (PDF) and cumulative density function (CDF) of the signal-to-noise ratio (SNR) for secondary users in spectrum sharing systems are derived. In addition, the capacity gains achievable with spectrum sharing systems in high and low power regions are obtained. Numerical simulations are performed to study the effects of different fading figures, average powers, interference temperature, and number of secondary users on the achievable capacity. Turbo coding, turbo equalisation and space-time coding for transmission over fading channels Hanzo, L; Yeap, B Against the backdrop of the emerging 3G wireless personal communications standards and broadband access network standard proposals, this volume covers a range of coding and transmission aspects for transmission over fading wireless channels. It presents the most important classic channel coding issues and also the exciting advances of the last decade, such as turbo coding, turbo equalisation and space-time coding. It endeavours to be the first book with explicit emphasis on channel coding for transmission over wireless channels. Divided into 4 parts: Part 1 - explains the necessary background for novices. It aims to be both an easy reading text book and a deep research monograph. Part 2 - provides detailed coverage of turbo conventional and turbo block coding considering the known decoding algorithms and their performance over Gaussian as well as narrowband and wideband fading channels. Part 3 - comprehensively discusses both space-time block and space-time trellis coding for the first time in literature. Par... Outage performance of reactive cooperation in Nakagami-m fading channels Benjillali, Mustapha In this paper, we investigate the outage performance of Decode-and-Forward with reactive relaying in dual-hop cooperetive Nakagaml-m fading links. The destination, based on the umque knowledge of local second hop channel state information, selects the best relay to increase the chances of cooperation when the direct link is also available. After deriving the exact distribution of the variables of interest, the outage probability of the system - with and without the direct link - is obtained in closed-form, and the ε-outage capacity is derived in the particular c.se wh.ere the channel model is reduced to a Rayleigh fading. Simulation results confirm the accuracy of our analysis for a large selection of system and fading parameters. Outage performance of Decode-and-Forward partial selection in Nakagami-m fading channels In this paper, we investigate the outage performance of Decode-and-Forward with partial selection relaying in dualhop cooperative Nakagami-m fading links. The source, based on the unique knowledge of local first hop channel state information, selects the best relay to increase the chances of successful decoding and hence the possibility of cooperation when the direct link is also available. After deriving the exact distribution of the sum of two gamma variates with the same shape parameter, the outage probability of the system-with and without the direct link-is obtained in closed-form. We also derive the ε-outage capacity in different particular cases, and the obtained results- when the channel model is reduced to a Rayleigh fading-are either new or correspond to those previously obtained in other works. Simulation results confirm the accuracy of our analysis for a large selection of system and fading parameters. © 2009 IEEE. On the Ergodic Capacity of Dual-Branch Correlated Log-Normal Fading Channels with Applications Al-Quwaiee, Hessa; Alouini, Mohamed-Slim Closed-form expressions of the ergodic capacity of independent or correlated diversity branches over Log-Normal fading channels are not available in the literature. Thus, it is become of an interest to investigate the behavior of such metric at high Asymptotic analysis for Nakagami-m fading channels with relay selection Zhong, Caijun; Wong, Kaikit; Jin, Shi; Alouini, Mohamed-Slim; Ratnarajah, Tharm In this paper, we analyze the asymptotic outage probability performance of both decode-and-forward (DF) and amplify-and-forward (AF) relaying systems using partial relay selection and the "best" relay selection schemes for Nakagami-m fading channels Benjillali, Mustapha; Alouini, Mohamed-Slim In this paper, we investigate the outage performance of Decode-and-Forward with reactive relaying in dual-hop cooperetive Nakagaml-m fading links. The destination, based on the umque knowledge of local second hop channel state information, selects Modeling of Non-WSSUS Double-Rayleigh Fading Channels for Vehicular Communications Carlos A. Gutiérrez Full Text Available This paper deals with the modeling of nonstationary time-frequency (TF dispersive multipath fading channels for vehicle-to-vehicle (V2V communication systems. As a main contribution, the paper presents a novel geometry-based statistical channel model that facilitates the analysis of the nonstationarities of V2V fading channels arising at a small-scale level due to the time-varying nature of the propagation delays. This new geometrical channel model has been formulated following the principles of plane wave propagation (PWP and assuming that the transmitted signal reaches the receiver antenna through double interactions with multiple interfering objects (IOs randomly located in the propagation area. As a consequence of such interactions, the first-order statistics of the channel model's envelope are shown to follow a worse-than-Rayleigh distribution; specifically, they follow a double-Rayleigh distribution. General expressions are derived for the envelope and phase distributions, four-dimensional (4D TF correlation function (TF-CF, and TF-dependent delay and Doppler profiles of the proposed channel model. Such expressions are valid regardless of the underlying geometry of the propagation area. Furthermore, a closed-form solution of the 4D TF-CF is presented for the particular case of the geometrical two-ring scattering model. The obtained results provide new theoretical insights into the correlation and spectral properties of small-scale nonstationary V2V double-Rayleigh fading channels. We study delay-limited (also called zero-outage) capacity region of the fading multi-access channel (MAC) with Gaussian noise and perfect channel state information (CSI) at the receiver and at the transmitters (CSI-TR), in the low-power regime. We show that for fading channels where the MAC capacity region is strictly positive, it has a multidimensional rectangle structure and thus is simply characterized by single user capacity points. More specifically, we show that at low power, the boundary surface of the capacity region shrinks to a single point corresponding to the sum-rate maximizer and that the coordinates of this point coincide with single user capacity bounds. Using the duality of the Gaussian MAC and broadcast channels (BC), we show that time-sharing (or time division multiple access (TDMA)) is asymptotically optimal. © 2015 IEEE. Composite and Cascaded Generalized-K Fading Channel Modeling and Their Diversity and Performance Analysis Ansari, Imran Shafique The introduction of new schemes that are based on the communication among nodes has motivated the use of composite fading models due to the fact that the nodes experience different multipath fading and shadowing statistics, which subsequently determines the required statistics for the performance analysis of different transceivers. The end-to-end signal-to-noise-ratio (SNR) statistics plays an essential role in the determination of the performance of cascaded digital communication systems. In this thesis, a closed-form expression for the probability density function (PDF) of the end-end SNR for independent but not necessarily identically distributed (i.n.i.d.) cascaded generalized-K (GK) composite fading channels is derived. The developed PDF expression in terms of the Meijer-G function allows the derivation of subsequent performance metrics, applicable to different modulation schemes, including outage probability, bit error rate for coherent as well as non-coherent systems, and average channel capacity that provides insights into the performance of a digital communication system operating in N cascaded GK composite fading environment. Another line of research that was motivated by the introduction of composite fading channels is the error performance. Error performance is one of the main performance measures and derivation of its closed-form expression has proved to be quite involved for certain systems. Hence, in this thesis, a unified closed-form expression, applicable to different binary modulation schemes, for the bit error rate of dual-branch selection diversity based systems undergoing i.n.i.d. GK fading is derived in terms of the extended generalized bivariate Meijer G-function. Symbol Error Probability of DF Relay Selection over Arbitrary Nakagami-m Fading Channels George C. Alexandropoulos Full Text Available We present a new analytical expression for the moment generating function (MGF of the end-to-end signal-to-noise ratio of dual-hop decode-and-forward (DF relaying systems with relay selection when operating over Nakagami-m fading channels. The derived MGF expression, which is valid for arbitrary values of the fading parameters of both hops, is subsequently utilized to evaluate the average symbol error probability (ASEP of M-ary phase shift keying modulation for the considered DF relaying scheme under various asymmetric fading conditions. It is shown that the MGF-based ASEP performance evaluation results are in excellent agreement with equivalent ones obtained by means of computer simulations, thus validating the correctness of the presented MGF expression. Closed-form Capacity Expressions for the α-μ Fading Channel with SC Diversity under Different Adaptive Transmission Strategies Mohamed, Refaat; Ismail, Mahmoud H.; Newagy, Fatma; Mourad, Heba M. Stemming from the fact that the α-μ fading distribution is one of the very general fading models used in the literature to describe the small scale fading phenomenon, in this paper, closed-form expressions for the Shannon capacity of the α-μ fading channel operating under four main adaptive transmission strategies are derived assuming integer values for μ. These expressions are derived for the case of no diversity as well as for selection combining diversity with independent and identically distributed branches. The obtained expressions reduce to those previously derived in the literature for the Weibull as well as the Rayleigh fading cases, which are both special cases of the α-μ channel. Numerical results are presented for the capacity under the four adaptive transmission strategies and the effect of the fading parameter as well as the number of diversity branches is studied. On Amplify-and-Forward Relaying Over Hyper-Rayleigh Fading Channels S. H. Alvi Full Text Available Relayed transmission holds promise for the next generation of wireless communication systems due to the performance gains it can provide over non-cooperative systems. Recently hyper-Rayleigh fading, which represents fading conditions more severe than Rayleigh fading, has received attention in the context of many practical communication scenarios. Though power allocation for Amplify-and-Forward (AF relaying networks has been studied in the literature, a theoretical analysis of the power allocation problem for hyper-Rayleigh fading channels is a novel contribution of this work. We develop an optimal power allocation (OPA strategy for a dual-hop AF relaying network in which the relay-destination link experiences hyper-Rayleigh fading. A new closed-form expression for the average signal-to-noise ratio (SNR at destination is derived and it is shown to provide a new upper-bound on the average SNR at destination, which outperforms a previously proposed upper-bound based on the well-known harmonic-geometric mean inequality. An OPA across the source and relay nodes, subject to a sum-power constraint, is proposed and it is shown to provide measurable performance gains in average SNR and SNR outage at the destination relative to the case of equal power allocation. Rached, Nadhir B. The outage capacity (OC) is among the most important performance metrics of communication systems over fading channels. The evaluation of the OC, when equal gain combining (EGC) or maximum ratio combining (MRC) diversity techniques are employed, boils down to computing the cumulative distribution function (CDF) of the sum of channel envelopes (equivalently amplitudes) for EGC or channel gains (equivalently squared enveloped/ amplitudes) for MRC. Closed-form expressions of the CDF of the sum of many generalized fading variates are generally unknown and constitute open problems. We develop a unified hazard rate twisting Importance Sampling (IS) based approach to efficiently estimate the CDF of the sum of independent arbitrary variates. The proposed IS estimator is shown to achieve an asymptotic optimality criterion, which clearly guarantees its efficiency. Some selected simulation results are also shown to illustrate the substantial computational gain achieved by the proposed IS scheme over crude Monte Carlo simulations. Transmit selection for imperfect threshold-based receive MRC in Rayleigh fading channels Radaydeh, Redha Mahmoud Mesleh The performance of multiple-antenna diversity systems in which the receiver combines signal replicas per thresholdbased maximal ratio combining (MRC) and the transmitter uses only a single antenna according to receive combined signal strength is studied. The impact of imperfect channel estimation is considered when the received signal replicas undergo independent and flat multipath fading. The analysis is applicable for arbitrary transmit antenna selection when the multiple-antenna channels experience identically distributed and non-identically distributed Rayleigh fading conditions. New closed-form expressions for the combined SNR statistics and some performance measures are presented. The system models adopted herein and the presented analytical results can be used to study the performance of different system architectures under various channel conditions when the implementation complexity is of interest. © 2009 IEEE. The outage capacity (OC) is among the most important performance metrics of communication systems over fading channels. The evaluation of the OC, when Equal Gain Combining (EGC) or Maximum Ratio Combining (MRC) diversity techniques are employed, boils down to computing the Cumulative Distribution Function (CDF) of the sum of channel envelopes (equivalently amplitudes) for EGC or channel gain (equivalently squared enveloped/amplitudes) for MRC. Closed-form expressions of the CDF of the sum of many generalized fading variates are generally unknown and constitute open problems. In this paper, we develop a unified hazard rate twisting Importance Sampling (IS) based approach to efficiently estimate the CDF of the sum of independent arbitrary variates. The proposed IS estimator is shown to achieve an asymptotic optimality criterion, which clearly guarantees its efficiency. Some selected simulation results are also shown to illustrate the substantial computational gain achieved by the proposed IS scheme over crude Monte-Carlo simulations. Design and Performance Analysis of MISO-ORM-DCSK System over Rayleigh Fading Channels Gang Zhang Full Text Available A novel chaotic communication system, named Orthogonality-based Reference Modulated-Differential Chaos Shift Keying (ORM-DCSK, is proposed to enhance the performance of RM-DCSK. By designing an orthogonal chaotic generator (OCG, the intrasignal interference components in RM-DCSK are eliminated. Also, the signal frame format is expanded so the average bit energy is reduced. As a result, the proposed system has less interference in decision variables. Furthermore, to investigate the bit error rate (BER performance over Rayleigh fading channels, the MISO-ORM-DCSK is studied. The BER expressions of the new system are derived and analyzed over AWGN channel and multipath Rayleigh fading channel. All simulation results not only show that the proposed system can obtain significant improvement but also verify the analysis in theory. Performance analysis of power-efficient adaptive interference cancelation in fading channels Radaydeh, Redha Mahmoud Mesleh; Alouini, Mohamed-Slim This paper analyzes the performance of a -steering scheme for highly correlated receive antennas in the presence of statistically unordered co-channel interferers over multipath fading channels. An adaptive activation of receive antennas according to the interfering signals fading conditions is considered in the analysis. Analytical expressions for various system performance measures, including the outage probability, average error probability of different signaling schemes, and raw moments of the combined signal-to-interference-plus-noise ratio (SINR) are obtained in exact forms. Numerical and simulation results for the performance-complexity tradeoff of this scheme is presented and then compared with that of full-size arbitrary interference cancelation and no cancelation scenarios. ©2010 IEEE. Adaptive Modulation with Best User Selection over Non-Identical Nakagami Fading Channels In this paper, we analyze the performance of adaptive modulation with single-cell multiuser scheduling over independent but not identical distributed (i.n.i.d.) Nakagami fading channels. Closed-form expressions are derived for the average channel capacity, spectral efficiency, and bit-error-rate (BER) for both constant-power variable-rate and variable-power variable-rate uncoded M-ary quadrature amplitude modulation (M-QAM) schemes. We also study the impact of time delay on the average BER of adaptive M-QAM. Selected numerical results show that the multiuser diversity brings a considerably better performance even over i.n.i.d. fading environments. Partial PIC-MRC Receiver Design for Single Carrier Block Transmission System over Multipath Fading Channels Juinn-Horng Deng Full Text Available Single carrier block transmission (SCBT system has become one of the most popular modulation systems due to its low peak-to-average power ratio (PAPR, and it is gradually considered to be used for uplink wireless communication systems. In this paper, a low complexity partial parallel interference cancellation (PIC with maximum ratio combining (MRC technology is proposed to use for receiver to combat the intersymbol interference (ISI problem over multipath fading channel. With the aid of MRC scheme, the proposed partial PIC technique can effectively perform the interference cancellation and acquire the benefit of time diversity gain. Finally, the proposed system can be extended to use for multiple antenna systems to provide excellent performance. Simulation results reveal that the proposed low complexity partial PIC-MRC SIMO system can provide robust performance and outperform the conventional PIC and the iterative frequency domain decision feedback equalizer (FD-DFE systems over multipath fading channel environment. Capacity of spectrum sharing Cognitive Radio systems over Nakagami fading channels at low SNR Sboui, Lokman In this paper, we study the ergodic capacity of Cognitive Radio (CR) spectrum sharing systems at low power regime. We focus on Nakagami fading channels. We formally define the low power regime and present closed form expressions of the capacity in the low power regime under various types of interference and/or power constraints, depending on the available channel state information (CSI) of the cross link (CL) between the secondary user transmitter and the primary user receiver. We explicitly characterize two regimes where either the interference constraint or the power constraint dictates the optimal power profile. Our framework also highlights the effects of different fading parameters on the secondary link ergodic capacity. Interestingly, we show that the low power regime analysis provides a specific insight on the capacity behavior of CR that has not been reported by previous studies. © 2013 IEEE. Space-Time Water-Filling for Composite MIMO Fading Channels Full Text Available We analyze the ergodic capacity and channel outage probability for a composite MIMO channel model, which includes both fast fading and shadowing effects. The ergodic capacity and exact channel outage probability with space-time water-filling can be evaluated through numerical integrations, which can be further simplified by using approximated empirical eigenvalue and maximal eigenvalue distribution of MIMO fading channels. We also compare the performance of space-time water-filling with spatial water-filling. For MIMO channels with small shadowing effects, spatial water-filling performs very close to space-time water-filling in terms of ergodic capacity. For MIMO channels with large shadowing effects, however, space-time water-filling achieves significantly higher capacity per antenna than spatial water-filling at low to moderate SNR regimes, but with a much higher channel outage probability. We show that the analytical capacity and outage probability results agree very well with those obtained from Monte Carlo simulations. Physical Layer Security for Cooperative Relaying Over Generalized-K Fading Channels Wu, Ling; Yang, Liang; Chen, Jianchao; Alouini, Mohamed-Slim In this letter, we analyze the secrecy performance for a cooperative diversity system with amplify-and-forward (AF) over generalized-K fading channels. More specifically, some lower bounds on the secrecy performance, such as secure outage probability (SOP), average secrecy capacity (ASC), and strictly positive secrecy capacity (SPSC), are derived in closed-form. Finally, we provide simulation results to verify the accuracy of our analytical results. Wu, Ling Analytical evaluation of adaptive-modulation-based opportunistic cognitive radio in nakagami-m fading channels Chen, Yunfei; Alouini, Mohamed-Slim; Tang, Liang; Khan, Fahdahmed The performance of adaptive modulation for cognitive radio with opportunistic access is analyzed by considering the effects of spectrum sensing, primary user (PU) traffic, and time delay for Nakagami- m fading channels. Both the adaptive continuous rate scheme and the adaptive discrete rate scheme are considered. Numerical examples are presented to quantify the effects of spectrum sensing, PU traffic, and time delay for different system parameters. © 1967-2012 IEEE. Exact ergodic capacity calculation for fading wireless channels typically involves time-consuming numerical evaluation of infinite integrals. In this paper, lower and upper bounds on ergodic capacity for kth best path are presented. These bounds have simple analytic expressions which allow their fast evaluation. Numerical results show that the newly proposed bounds closely approximate the exact ergodic capacity for a large variety of system configurations. © 1997-2012 IEEE. Novel asymptotic results on the high-order statistics of the channel capacity over generalized fading channels The exact analysis of the higher-order statistics of the channel capacity (i.e., higher-order ergodic capacity) often leads to complicated expressions involving advanced special functions. In this paper, we provide a generic framework for the computation of the higher-order statistics of the channel capacity over generalized fading channels. As such, this novel framework for the higher-order statistics results in simple, closed-form expressions which are shown to be asymptotically tight bounds in the high signal-to-noise ratio (SNR) regime of a variety of fading environment. In addition, it reveals the existence of differences (i.e., constant capacity gaps in log-domain) among different fading environments. By asymptotically tight bound we mean that the high SNR limit of the difference between the actual higher-order statistics of the channel capacity and its asymptotic bound (i.e., lower bound) tends to zero. The mathematical formalism is illustrated with some selected numerical examples that validate the correctness of our newly derived results. © 2012 IEEE. Achievable rate of spectrum sharing cognitive radio systems over fading channels at low-power regime We study the achievable rate of cognitive radio (CR) spectrum sharing systems at the low-power regime for general fading channels and then for Nakagami fading. We formally define the low-power regime and present the corresponding closed-form expressions of the achievable rate lower bound under various types of interference and/or power constraints, depending on the available channel state information of the cross link (CL) between the secondary-user transmitter and the primary-user receiver. We explicitly characterize two regimes where either the interference constraint or the power constraint dictates the optimal power profile. Our framework also highlights the effects of different fading parameters on the secondary link (SL) ergodic achievable rate. We also study more realistic scenarios when there is either 1-bit quantized channel feedback from the CL alone or 2-bit feedback from both the CL and the SL and propose simple power control schemes and show that these schemes achieve the previously achieved rate at the low-power regime. Interestingly, we show that the low-power regime analysis provides a specific insight into the maximum achievable rate behavior of CR that has not been reported by previous studies. Progressive transmission of images over fading channels using rate-compatible LDPC codes. Pan, Xiang; Banihashemi, Amir H; Cuhadar, Aysegul In this paper, we propose a combined source/channel coding scheme for transmission of images over fading channels. The proposed scheme employs rate-compatible low-density parity-check codes along with embedded image coders such as JPEG2000 and set partitioning in hierarchical trees (SPIHT). The assignment of channel coding rates to source packets is performed by a fast trellis-based algorithm. We examine the performance of the proposed scheme over correlated and uncorrelated Rayleigh flat-fading channels with and without side information. Simulation results for the expected peak signal-to-noise ratio of reconstructed images, which are within 1 dB of the capacity upper bound over a wide range of channel signal-to-noise ratios, show considerable improvement compared to existing results under similar conditions. We also study the sensitivity of the proposed scheme in the presence of channel estimation error at the transmitter and demonstrate that under most conditions our scheme is more robust compared to existing schemes. Exact Symbol Error Probability of Cross-QAM in AWGN and Fading Channels Zhang Xi-chun Full Text Available The exact symbol error probability (SEP performance of -ary cross quadrature amplitude modulation (QAM in additive white Gaussian noise (AWGN channel and fading channels, including Rayleigh, Nakagami-m, Rice, and Nakagami-q (Hoyt channels, is analyzed. The obtained closed-form SEP expressions contain a finite (in proportion to sum of single integrals with finite limits and an integrand composed of elementary (exponential, trigonometric, and/or power functions, thus readily enabling numerical evaluation. Particularly, Gaussian -function is a special case of these integrals and is included in the SEP expressions. Simple and very precise approximations, which contain only Gaussian -function for AWGN channel and contain three terms of the single integrals mentioned above for fading channels, respectively, are also given. The analytical expressions show excellent agreement with the simulation results, and numerical evaluation with the proposed expressions reveals that cross QAM can obtain at least 1.1 dB gain compared to rectangular QAM when SEP < 0.3 in all the considered channels. Relay selection in cooperative communication systems over continuous time-varying fading channel Ke Geng Full Text Available In this paper, we study relay selection under outdated channel state information (CSI in a decode-and-forward (DF cooperative system. Unlike previous researches on cooperative communication under outdated CSI, we consider that the channel varies continuously over time, i.e., the channel not only changes between relay selection and data transmission but also changes during data transmission. Thus the level of accuracy of the CSI used in relay selection degrades with data transmission. We first evaluate the packet error rate (PER of the cooperative system under continuous time-varying fading channel, and find that the PER performance deteriorates more seriously under continuous time-varying fading channel than when the channel is assumed to be constant during data transmission. Then, we propose a repeated relay selection (RRS strategy to improve the PER performance, in which the forwarded data is divided into multiple segments and relay is reselected before the transmission of each segment based on the updated CSI. Finally, we propose a combined relay selection (CRS strategy which takes advantage of three different relay selection strategies to further mitigate the impact of outdated CSI. Maximizing Expected Achievable Rates for Block-Fading Buffer-Aided Relay Channels Shaqfeh, Mohammad In this paper, the long-term average achievable rate over block-fading buffer-aided relay channels is maximized using a hybrid scheme that combines three essential transmission strategies, which are decode-and-forward, compress-and-forward, and direct transmission. The proposed hybrid scheme is dynamically adapted based on the channel state information. The integration and optimization of these three strategies provide a more generic and fundamental solution and give better achievable rates than the known schemes in the literature. Despite the large number of optimization variables, the proposed hybrid scheme can be optimized using simple closed-form formulas that are easy to apply in practical relay systems. This includes adjusting the transmission rate and compression when compress-and-forward is the selected strategy based on the channel conditions. Furthermore, in this paper, the hybrid scheme is applied to three different models of the Gaussian block-fading buffer-aided relay channels, depending on whether the relay is half or full duplex and whether the source and the relay have orthogonal or non-orthogonal channel access. Several numerical examples are provided to demonstrate the achievable rate results and compare them to the upper bounds of the ergodic capacity for each one of the three channel models under consideration. Shaqfeh, Mohammad; Zafar, Ammar; Alnuweiri, Hussein; Alouini, Mohamed-Slim On the computation of the higher-order statistics of the channel capacity over generalized fading channels The higher-order statistics (HOS) of the channel capacity μn=E[logn (1+γ end)], where n ∈ N denotes the order of the statistics, has received relatively little attention in the literature, due in part to the intractability of its analysis. In this letter, we propose a novel and unified analysis, which is based on the moment generating function (MGF) technique, to exactly compute the HOS of the channel capacity. More precisely, our mathematical formalism can be readily applied to maximal-ratio-combining (MRC) receivers operating in generalized fading environments. The mathematical formalism is illustrated by some numerical examples focusing on the correlated generalized fading environments. © 2012 IEEE. An Adaptive Channel Estimation Algorithm Using Time-Frequency Polynomial Model for OFDM with Fading Multipath Channels Liu KJ Ray Full Text Available Orthogonal frequency division multiplexing (OFDM is an effective technique for the future 3G communications because of its great immunity to impulse noise and intersymbol interference. The channel estimation is a crucial aspect in the design of OFDM systems. In this work, we propose a channel estimation algorithm based on a time-frequency polynomial model of the fading multipath channels. The algorithm exploits the correlation of the channel responses in both time and frequency domains and hence reduce more noise than the methods using only time or frequency polynomial model. The estimator is also more robust compared to the existing methods based on Fourier transform. The simulation shows that it has more than improvement in terms of mean-squared estimation error under some practical channel conditions. The algorithm needs little prior knowledge about the delay and fading properties of the channel. The algorithm can be implemented recursively and can adjust itself to follow the variation of the channel statistics. Secure Multiple-Antenna Block-Fading Wiretap Channels with Limited CSI Feedback In this paper, we investigate the ergodic secrecy capacity of a block-fading wiretap channel with limited channel knowledge at the transmitter. We consider that the legitimate receiver, the eavesdropper and the transmitter are equipped with multiple antennas and that the receiving nodes are aware of their respective channel matrices. The transmitter, on the other hand, is only provided by a B-bit feedback of the main channel state information. The feedback bits are sent by the legitimate receiver, at the beginning of each fading block, over an error-free public link with limited capacity. The statistics of the main and the eavesdropper channel state information are known at all nodes. Assuming an average transmit power constraint, we establish upper and lower bounds on the ergodic secrecy capacity. Then, we present a framework to design the optimal codebooks for feedback and transmission. In addition, we show that the proposed lower and upper bounds coincide asymptotically as the capacity of the feedback link becomes large, i.e. $B \\ ightarrow \\\\infty$ ; hence, fully characterizing the ergodic secrecy capacity in this case. Besides, we analyze the asymptotic behavior of the presented secrecy rates, at high Signal-to-Noise Ratio (SNR), and evaluate the gap between the bounds. Hyadi, Amal; Rezki, Zouheir; Alouini, Mohamed-Slim In this paper, we investigate the ergodic secrecy capacity of a block-fading wiretap channel with limited channel knowledge at the transmitter. We consider that the legitimate receiver, the eavesdropper and the transmitter are equipped with multiple antennas and that the receiving nodes are aware of their respective channel matrices. The transmitter, on the other hand, is only provided by a B-bit feedback of the main channel state information. The feedback bits are sent by the legitimate receiver, at the beginning of each fading block, over an error-free public link with limited capacity. The statistics of the main and the eavesdropper channel state information are known at all nodes. Assuming an average transmit power constraint, we establish upper and lower bounds on the ergodic secrecy capacity. Then, we present a framework to design the optimal codebooks for feedback and transmission. In addition, we show that the proposed lower and upper bounds coincide asymptotically as the capacity of the feedback link becomes large, i.e. $B \\rightarrow \\infty$ ; hence, fully characterizing the ergodic secrecy capacity in this case. Besides, we analyze the asymptotic behavior of the presented secrecy rates, at high Signal-to-Noise Ratio (SNR), and evaluate the gap between the bounds. Hard Fusion Based Spectrum Sensing over Mobile Fading Channels in Cognitive Vehicular Networks. Qian, Xiaomin; Hao, Li; Ni, Dadong; Tran, Quang Thanh An explosive growth in vehicular wireless applications gives rise to spectrum resource starvation. Cognitive radio has been used in vehicular networks to mitigate the impending spectrum starvation problem by allowing vehicles to fully exploit spectrum opportunities unoccupied by licensed users. Efficient and effective detection of licensed user is a critical issue to realize cognitive radio applications. However, spectrum sensing in vehicular environments is a very challenging task due to vehicle mobility. For instance, vehicle mobility has a large effect on the wireless channel, thereby impacting the detection performance of spectrum sensing. Thus, gargantuan efforts have been made in order to analyze the fading properties of mobile radio channel in vehicular environments. Indeed, numerous studies have demonstrated that the wireless channel in vehicular environments can be characterized by a temporally correlated Rayleigh fading. In this paper, we focus on energy detection for spectrum sensing and a counting rule for cooperative sensing based on Neyman-Pearson criteria. Further, we go into the effect of the sensing and reporting channel conditions on the sensing performance under the temporally correlated Rayleigh channel. For local and cooperative sensing, we derive some alternative expressions for the average probability of misdetection. The pertinent numerical and simulating results are provided to further validate our theoretical analyses under a variety of scenarios. Efficient Cooperative Protocols for Full-Duplex Relaying over Nakagami-m Fading Channels Khafagy, Mohammad Galal In this work, efficient protocols are studied for full-duplex relaying (FDR) with loopback interference over Nakagami-m block fading channels. Recently, a selective decodeand- forward (DF) protocol was proposed for FDR, and was shown to outperform existing protocols in terms of outage over Rayleigh-fading channels. In this work, we propose an incremental selective DF protocol that offers additional power savings, yet yields the same outage performance. We evaluate their outage performance over independent non-identically distributed Nakagami-m fading links, and study their relative performance in terms of the signal-to-noise ratio cumulative distribution function via closed-form expressions. The offered diversity gain is also derived. In addition, we study their performance relative to their half-duplex counterparts, as well as known non-selective FDR protocols. We corroborate our theoretical results with simulation, and confirm that selective cooperation protocols outperform the known non-selective protocols in terms of outage. Finally, we show that depending on the loopback interference level, the proposed protocols can outperform their half-duplex counterparts when high spectral efficiencies are targeted. Khafagy, Mohammad Galal; Tammam, Amr; Alouini, Mohamed-Slim; Aissa, Sonia On the outage capacity of the block fading channel at low-power regime Outage performance of the M-block fading with additive white Gaussian noise (BF-AWGN) is investigated at low-power regime. We consider delay-constrained constant-rate communications with perfect channel state information (CSI) at both the transmitter and the receiver (CSI-TR), under a short-term power constraint. We show that selection diversity that allocates all the power to the strongest block is asymptotically optimal. Then, we provide a simple characterization of the outage probability in the regime of interest. We quantify the reward due to CSI-TR over the constant-rate constant-power scheme and show that this reward increases with the delay constraint. For instance, for Rayleigh fading, we find that a power gain up to 4.3 dB is achievable. © 2014 IEEE. Unified Importance Sampling Schemes for Efficient Simulation of Outage Capacity over Generalized Fading Channels The outage capacity (OC) is among the most important performance metrics of communication systems operating over fading channels. Of interest in the present paper is the evaluation of the OC at the output of the Equal Gain Combining (EGC) and the Maximum Ratio Combining (MRC) receivers. In this case, it can be seen that this problem turns out to be that of computing the Cumulative Distribution Function (CDF) for the sum of independent random variables. Since finding a closedform expression for the CDF of the sum distribution is out of reach for a wide class of commonly used distributions, methods based on Monte Carlo (MC) simulations take pride of price. In order to allow for the estimation of the operating range of small outage probabilities, it is of paramount importance to develop fast and efficient estimation methods as naive Monte Carlo (MC) simulations would require high computational complexity. In this line, we propose in this work two unified, yet efficient, hazard rate twisting Importance Sampling (IS) based approaches that efficiently estimate the OC of MRC or EGC diversity techniques over generalized independent fading channels. The first estimator is shown to possess the asymptotic optimality criterion and applies for arbitrary fading models, whereas the second one achieves the well-desired bounded relative error property for the majority of the well-known fading variates. Moreover, the second estimator is shown to achieve the asymptotic optimality property under the particular Log-normal environment. Some selected simulation results are finally provided in order to illustrate the substantial computational gain achieved by the proposed IS schemes over naive MC simulations. Optimal Training for Time-Selective Wireless Fading Channels Using Cutoff Rate Tong Lang Full Text Available We consider the optimal allocation of resources—power and bandwidth—between training and data transmissions for single-user time-selective Rayleigh flat-fading channels under the cutoff rate criterion. The transmitter exploits statistical channel state information (CSI in the form of the channel Doppler spectrum to embed pilot symbols into the transmission stream. At the receiver, instantaneous, though imperfect, CSI is acquired through minimum mean-square estimation of the channel based on some set of pilot observations. We compute the ergodic cutoff rate for this scenario. Assuming estimator-based interleaving and -PSK inputs, we study two special cases in-depth. First, we derive the optimal resource allocation for the Gauss-Markov correlation model. Next, we validate and refine these insights by studying resource allocation for the Jakes model. A Novel Simulator of Nonstationary Random MIMO Channels in Rayleigh Fading Scenarios Qiuming Zhu Full Text Available For simulations of nonstationary multiple-input multiple-output (MIMO Rayleigh fading channels in time-variant scattering environments, a novel channel simulator is proposed based on the superposition of chirp signals. This new method has the advantages of low complexity and implementation simplicity as the sum of sinusoids (SOS method. In order to reproduce realistic time varying statistics for dynamic channels, an efficient parameter computation method is also proposed for updating the frequency parameters of employed chirp signals. Simulation results indicate that the proposed simulator is effective in generating nonstationary MIMO channels with close approximation of the time-variant statistical characteristics in accordance with the expected theoretical counterparts. Performance of Multilevel Coding Schemes with Different Decoding Methods and Mapping Strategies in Mobile Fading Channels Based on "capacity rule", the perfor-mance of multilevel coding (MLC) schemes with dif-ferent set partitioning strategies and decoding meth-ods in AWGN and Rayleigh fading channels is investi-gated, in which BCH codes are chosen as componentcodes and 8ASK modulation is used. Numerical re-sults indicate that MLC scheme with UP strategy canobtain optimal performance in AWGN channels andBP is the best mapping strategy for Rayleigh fadingchannels. BP strategy is of good robustness in bothkinds of channels to realize an optimum MLC system.Multistage decoding (MSD) is a sub-optimal decodingmethod of MLC for both channels. For Ungerboeckpartitioning (UP) and mixed partitioning (MP) strat-egy, MSD is strongly recommended to use for MLCsystem, while for BP strategy, PDL is suggested to useas a simple decoding method compared with MSD. Modeling and Simulation of MIMO Mobile-to-Mobile Wireless Fading Channels Gholamreza Bakhshi Full Text Available Analysis and design of multielement antenna systems in mobile fading channels require a model for the space-time cross-correlation among the links of the underlying multipleinput multiple-output (MIMO Mobile-to-Mobile (M-to-M communication channels. In this paper, we propose the modified geometrical two-ring model, a MIMO channel reference model for M-to-M communication systems. This model is based on the extension of single-bounce two-ring scattering model for flat fading channel under the assumption that the transmitter and the receiver are moving. Assuming single-bounce scattering model in both isotropic and nonisotropic environment, a closed-form expression for the space-time cross-correlation function (CCF between any two subchannels is derived. The proposed model provides an important framework in M-to-M system design, where includes many existing correlation models as special cases. Also, two realizable statistical simulation models are proposed for simulating both isotropic and nonisotropic reference model. The realizable simulation models are based on Sum-of-Sinusoids (SoS simulation model. Finally, the correctness of the proposed simulation models is shown via different simulation scenarios. Performance analysis of MIMO wireless optical communication system with Q-ary PPM over correlated log-normal fading channel Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over spatial correlated log-normal fading channel is analyzed in terms of its un-coded bit error rate and ergodic channel capacity. The analysis is based on the Wilkinson's method which approximates the distribution of a sum of correlated log-normal random variables to a log-normal random variable. The analytical and simulation results corroborate the increment of correlation coefficients among sub-channels lead to system performance degradation. Moreover, the receiver diversity has better performance in resistance of spatial correlation caused channel fading. A novel unified expression for the capacity and bit error probability of wireless communication systems over generalized fading channels Analysis of the average binary error probabilities (ABEP) and average capacity (AC) of wireless communications systems over generalized fading channels have been considered separately in past years. This paper introduces a novel moment generating function (MGF)-based unified expression for the ABEP and AC of single and multiple link communications with maximal ratio combining. In addition, this paper proposes the hyper-Fox\\'s H fading model as a unified fading distribution of a majority of the well-known generalized fading environments. As such, the authors offer a generic unified performance expression that can be easily calculated, and that is applicable to a wide variety of fading scenarios. The mathematical formulism is illustrated with some selected numerical examples that validate the correctness of the authors\\' newly derived results. © 1972-2012 IEEE. Throughput and Delay Analysis of HARQ with Code Combining over Double Rayleigh Fading Channels Chelli, Ali This paper proposes the use of hybrid automatic repeat request (HARQ) with code combining (HARQ-CC) to offer reliable communications over double Rayleigh channels. The double Rayleigh fading channel is of particular interest to vehicle-to-vehicle communication systems as well as amplify-and-forward relaying and keyhole channels. This work studies the performance of HARQ-CC over double Rayleigh channels from an information theoretic perspective. Analytical approximations are derived for the $\\\\epsilon$-outage capacity, the average number of transmissions, and the throughput of HARQ-CC. Moreover, we evaluate the delay experienced by Poisson arriving packets for HARQ-CC. We provide analytical expressions for the average waiting time, the packets sojourn time, the average consumed power, and the energy efficiency. In our investigation, we take into account the impact of imperfect feedback on different performance metrics. Additionally, we explore the tradeoff between energy efficiency and the throughput. The proposed scheme is shown to maintain the outage probability below a specified threshold $\\\\epsilon$ which ensures the link reliability. Meanwhile, HARQ-CC adapts implicitly the transmission rate to the channel conditions such that the throughput is maximized. Our results demonstrate that HARQ-CC allows improving the achievable communication rate compared to fixed time diversity schemes. To maximize the throughput of HARQ-CC, the rate per HARQ round should be less than the rate required to meet the outage constraint. Our investigation of the performance of HARQ-CC over Rayleigh and double Rayleigh channels shows that double Rayleigh channels have a higher severity of fading and result in a larger degradation of the throughput. Our analysis reveals that HARQ with incremental redundancy (HARQ-IR) achieves a larger throughput compared to HARQ-CC, while HARQ-CC is simpler to implement, has a lower decoding Energy-harvesting in cooperative AF relaying networks over log-normal fading channels Rabie, Khaled M.; Salem, Abdelhamid; Alsusa, Emad; Alouini, Mohamed-Slim Energy-harvesting (EH) and wireless power transfer are increasingly becoming a promising source of power in future wireless networks and have recently attracted a considerable amount of research, particularly on cooperative two-hop relay networks in Rayleigh fading channels. In contrast, this paper investigates the performance of wireless power transfer based two-hop cooperative relaying systems in indoor channels characterized by log-normal fading. Specifically, two EH protocols are considered here, namely, time switching relaying (TSR) and power splitting relaying (PSR). Our findings include accurate analytical expressions for the ergodic capacity and ergodic outage probability for the two aforementioned protocols. Monte Carlo simulations are used throughout to confirm the accuracy of our analysis. The results show that increasing the channel variance will always provide better ergodic capacity performance. It is also shown that a good selection of the EH time in the TSR protocol, and the power splitting factor in the PTS protocol, is the key to achieve the best system performance. © 2016 IEEE. Rabie, Khaled M. BER EVALUATION OF LDPC CODES WITH GMSK IN NAKAGAMI FADING CHANNEL Surbhi Sharma Full Text Available LDPC codes (Low Density Parity Check Codes have already proved its efficacy while showing its performance near to the Shannon limit. Channel coding schemes are spectrally inefficient as using an unfiltered binary data stream to modulate an RF carrier that will produce an RF spectrum of considerable bandwidth. Techniques have been developed to improve this bandwidth inefficiency or spectral efficiency, and ease detection. GMSK or Gaussian-filtered Minimum Shift Keying uses a Gaussian Filter of an appropriate bandwidth so as to make system spectrally efficient. A Nakagami model provides a better explanation to less and more severe conditions than the Rayleigh and Rician model and provide a better fit to the mobile communication channel data. In this paper we have demonstrated the performance of Low Density Parity Check codes with GMSK modulation (BT product=0.25 technique in Nakagami fading channel. In results it is shown that average bit error rate decreases as the 'm' parameter increases (Less fading. Large-Scale Fading and Time Dispersion Parameters of UWB Channel in Underground Mines Abdellah Chehri Full Text Available RF channel measurements in underground mines have important applications in the field of mobile communications for improving operational efficiency and worker safety. This paper presents an experimental study of the ultra wideband (UWB radio channel, based on extensive sounding campaigns covering the underground mine environment. Measurements were carried out in the frequency band of 2–5 GHz. Various communication links were considered including both line-of-sight (LOS and non-LOS (NLOS scenarios. In this paper, we are interested in more details of the variations of the RMS delay spread and mean excess delay with Tx/Rx separation, and the variation of RMS with mean excess. The distance dependency of path loss and shadowing fading statistics is also investigated. To give an idea about the behaviour of UWB channel in underground mines, a comparison of our approach with other published works is given including path loss exponent, shadow fading variance, mean excess delay, and RMS delay spread. Methods and Apparatuses for Signaling with Geometric Constellations in a Raleigh Fading Channel Barsoum, Maged F. (Inventor); Jones, Christopher R. (Inventor) Communication systems are described that use signal constellations, which have unequally spaced (i.e. `geometrically` shaped) points. In many embodiments, the communication systems use specific geometric constellations that are capacity optimized at a specific SNR, over the Raleigh fading channel. In addition, ranges within which the constellation points of a capacity optimized constellation can be perturbed and are still likely to achieve a given percentage of the optimal capacity increase compared to a constellation that maximizes d.sub.min, are also described. Capacity measures that are used in the selection of the location of constellation points include, but are not limited to, parallel decode (PD) capacity and joint capacity. Performance Analysis of Iterative Decoding Algorithms for PEG LDPC Codes in Nakagami Fading Channels O. Al Rasheed Full Text Available In this paper we give a comparative analysis of decoding algorithms of Low Density Parity Check (LDPC codes in a channel with the Nakagami distribution of the fading envelope. We consider the Progressive Edge-Growth (PEG method and Improved PEG method for the parity check matrix construction, which can be used to avoid short girths, small trapping sets and a high level of error floor. A comparative analysis of several classes of LDPC codes in various propagation conditions and decoded using different decoding algorithms is also presented. Error Probability of Binary and -ary Signals with Spatial Diversity in Nakagami- (Hoyt Fading Channels Duong Trung Q Full Text Available We analyze the exact average symbol error probability (SEP of binary and -ary signals with spatial diversity in Nakagami- (Hoyt fading channels. The maximal-ratio combining and orthogonal space-time block coding are considered as diversity techniques for single-input multiple-output and multiple-input multiple-output systems, respectively. We obtain the average SEP in terms of the Lauricella multivariate hypergeometric function . The analysis is verified by comparing with Monte Carlo simulations and we further show that our general SEP expressions particularize to the previously known results for Rayleigh ( = 1 and single-input single-output (SISO Nakagami- cases. System Performance of Concatenated STBC and Block Turbo Codes in Dispersive Fading Channels Kam Tai Chan Full Text Available A new scheme of concatenating the block turbo code (BTC with the space-time block code (STBC for an OFDM system in dispersive fading channels is investigated in this paper. The good error correcting capability of BTC and the large diversity gain characteristics of STBC can be achieved simultaneously. The resulting receiver outperforms the iterative convolutional Turbo receiver with maximum- a-posteriori-probability expectation maximization (MAP-EM algorithm. Because of its ability to perform the encoding and decoding processes in parallel, the proposed system is easy to implement in real time. Outage probability analysis of wireless sensor networks in the presence of channel fading and spatial correlation Al-Murad, Tamim M. Evaluating the reliability of wireless sensor networks is becoming more important as theses networks are being used in crucial applications. The outage probability defined as the probability that the error in the system exceeds a maximum acceptable threshold has recently been used as a measure of the reliability of such systems. In this work we find the outage probability of wireless sensor network in different scenarios of distributed sensing where sensors\\' readings are affected by spatial correlation and in the presence of channel fading. © 2011 IEEE. On the capacity of multiple access and broadcast fading Channels with full channel state information at low power regime We study the throughput capacity region of the Gaussian multi-access (MAC) fading channel with perfect channel state information (CSI) at the receiver and at the transmitters (CSI-TR), at low power regime. We show that it has a multidimensional rectangle structure and thus is simply characterized by single user capacity points. More specifically, we show that at low power regime, the boundary surface of the capacity region shrinks to a single point corresponding to the sum-rate maximizer and that the coordinates of this point coincide with single user capacity bounds. Using the duality of Gaussian MAC and broadcast channels (BC), we provide a simple characterization of the BC capacity region at low power regime. © 2013 IEEE. Performance Analysis of Ad Hoc Dispersed Spectrum Cognitive Radio Networks over Fading Channels Mohammad Muneer Full Text Available Cognitive radio systems can utilize dispersed spectrum, and thus such approach is known as dispersed spectrum cognitive radio systems. In this paper, we first provide the performance analysis of such systems over fading channels. We derive the average symbol error probability of dispersed spectrum cognitive radio systems for two cases, where the channel for each frequency diversity band experiences independent and dependent Nakagami- fading. In addition, the derivation is extended to include the effects of modulation type and order by considering M-ary phase-shift keying ( -PSK and M-ary quadrature amplitude modulation -QAM schemes. We then consider the deployment of such cognitive radio systems in an ad hoc fashion. We consider an ad hoc dispersed spectrum cognitive radio network, where the nodes are assumed to be distributed in three dimension (3D. We derive the effective transport capacity considering a cubic grid distribution. Numerical results are presented to verify the theoretical analysis and show the performance of such networks. Joint Scheduling for Dual-Hop Block-Fading Broadcast Channels Zafar, Ammar In this paper, we propose joint user-and-hop scheduling over dual-hop block-fading broadcast channels in order to exploit multi-user diversity gains and multi-hop diversity gains all together. To achieve this objective, the first and second hops are scheduled opportunistically based on the channel state information and as a prerequisite we assume that the relay, which is half-duplex and operates using decode-and-forward, is capable of storing the received packets from the source until the channel condition of the destined user becomes good to be scheduled. We formulate the joint scheduling problem as maximizing the weighted sum of the long term achievable rates by the users under a stability constraint, which means that on the long term the rate received by the relay should equal the rate transmitted by it, in addition to constant or variable power constraints. We show that this problem is equivalent to a single-hop broadcast channel by treating the source as a virtual user with an optimal priority weight that maintains the stability constraint. We show how to obtain the source weight either off-line based on channel statistics or on real-time based on channel measurements. Furthermore, we consider special cases including the maximum sum rate scheduler and the proportional fair scheduler. We demonstrate via numerical results that our proposed joint scheduling scheme enlarges the rate region as compared with a scheme that employs multi-user scheduling alone. Performance of equal gain combining with quantized phases in rayleigh fading channels Rizvi, Umar H. In this paper, we analyze the error probability of equal gain combining with quantized channel phase compensation for binary phase shift keying signalling over Rayleigh fading channels. The probability density and characteristic functions of the combined signal amplitude are derived and used to compute the analytic expressions for the bit error probability in dependance of the number of quantization levels L, the number of diversity branches N-R and the average received signal-to-noise ratio. The analysis is utilized to outline the trade-off between N-R and L and to compare the performance with non-coherent binary frequency shift keying and differential binary phase shift keying schemes under diversity reception. © 2011 IEEE. Asymptotic Performance Analysis of the k-th Best Link Selection over Wireless Fading Channels: An Extreme Value Theory Approach Al-Badarneh, Yazan Hussein We consider a general selection-diversity (SD) scheme in which the k-th best link is selected from a number of links. We use extreme value theory (EVT) to derive simple closed-form asymptotic expressions for the average throughput, effective throughput and average bit error probability (BEP) for the k-th best link over various channel models that are widely used to characterize fading in wireless communication systems. As an application example, we consider the Weibull fading channel model and verify the accuracy of the derived asymptotic expressions through Monte Carlo simulations. A novel framework on exact average symbol error probabilities of multihop transmission over amplify-and-forward relay fading channels In this paper, we propose an analytical framework on the exact computation of the average symbol error probabilities (ASEP) of multihop transmission over generalized fading channels when an arbitrary number of amplify-and-forward relays is used. Our approach relies on moment generating function (MGF) framework to obtain exact single integral expressions which can be easily computed by Gauss-Chebyshev Quadrature (GCQ) rule. As such, the derived results are a convenient tool to analyze the ASEP performance of multihop transmission over amplify-and-forward relay fading channels. Numerical and simulation results, performed to verify the correctness of the proposed formulation, are in perfect agreement. © 2010 IEEE. Al-Badarneh, Yazan Hussein; Georghiades, Costas; Alouini, Mohamed-Slim On the Capacity of the Dirty Paper Channel with Fast Fading and Discrete Channel States Rini, Stefano; Shitz, Shlomo Shamai The "writing dirty paper" capacity result crucially dependents on the perfect channel knowledge at the transmitter as the presence of even a small uncertainty in the channel realization gravely hampers the ability of the transmitter to pre-code its transmission against the channel state. This is particularly disappointing as it implies that interference pre-coding in practical systems is effective only when the channel estimates at the users have very high precision, a condition which is gene... Exact Performance Analysis of Dual-Hop Semi-Blind AF Relaying over Arbitrary Nakagami-m Fading Channels Xia, Minghua; Xing, Chengwen; Wu, Yik-Chung; Aissa, Sonia Relay transmission is promising for future wireless systems due to its significant cooperative diversity gain. The performance of dual-hop semi-blind amplify-and-forward (AF) relaying systems was extensively investigated, for transmissions over Rayleigh fading channels or Nakagami-𝑚 fading channels with integer fading parameter. For the general Nakagami-𝑚 fading with arbitrary 𝑚 values, the exact closed-form system performance analysis is more challenging. In this paper, we explicitly derive the moment generation function (MGF), probability density function (PDF) and moments of the end-to-end signal-to-noise ratio (SNR) over arbitrary Nakagami-𝑚 fading channels with semi-blind AF relay. With these results, the system performance evaluation in terms of outage probability, average symbol error probability, ergodic capacity and diversity order, is conducted. The analysis developed in this paper applies to any semi-blind AF relaying systems with fixed relay gain, and two major strategies for computing the relay gain are compared in terms of system performance. All analytical results are corroborated by simulation results and they are shown to be efficient tools to evaluate system performance. Xia, Minghua Relay transmission is promising for future wireless systems due to its significant cooperative diversity gain. The performance of dual-hop semi-blind amplify-and-forward (AF) relaying systems was extensively investigated, for transmissions over Rayleigh fading channels or Nakagami- fading channels with integer fading parameter. For the general Nakagami- fading with arbitrary values, the exact closed-form system performance analysis is more challenging. In this paper, we explicitly derive the moment generation function (MGF), probability density function (PDF) and moments of the end-to-end signal-to-noise ratio (SNR) over arbitrary Nakagami- fading channels with semi-blind AF relay. With these results, the system performance evaluation in terms of outage probability, average symbol error probability, ergodic capacity and diversity order, is conducted. The analysis developed in this paper applies to any semi-blind AF relaying systems with fixed relay gain, and two major strategies for computing the relay gain are compared in terms of system performance. All analytical results are corroborated by simulation results and they are shown to be efficient tools to evaluate system performance. Joint nonbinary low-density parity-check codes and modulation diversity over fading channels Shi, Zhiping; Li, Tiffany Jing; Zhang, Zhongpei A joint exploitation of coding and diversity techniques to achieve efficient, reliable wireless transmission is considered. The system comprises a powerful non-binary low-density parity-check (LDPC) code that will be soft-decoded to supply strong error protection, a quadratic amplitude modulator (QAM) that directly takes in the non-binary LDPC symbols and a modulation diversity operator that will provide power- and bandwidth-efficient diversity gain. By relaxing the rate of the modulation diversity rotation matrices to below 1, we show that a better rate allocation can be arranged between the LDPC codes and the modulation diversity, which brings significant performance gain over previous systems. To facilitate the design and evaluation of the relaxed modulation diversity rotation matrices, based on a set of criteria, three practical design methods are given and their point pairwise error rate are analyzed. With EXIT chart, we investigate the convergence between demodulator and decoder.A rate match method is presented based on EXIT analysis. Through analysis and simulations, we show that our strategies are very effective in combating random fading and strong noise on fading channels. Exact capture probability analysis of GSC receivers over i.n.d. Rayleigh fading channels Nam, Sungsik A closed-form expression of the capture probability of generalized selection combining (GSC) RAKE receivers was introduced in [1]. The idea behind this new performance metric is to quantify how the remaining set of uncombined paths affects the overall performance both in terms of loss in power and increase in interference levels. In this previous work, the assumption was made that the fading is both independent and identically distributed from path to path. However, the average strength of each path is different in reality. In order to derive a closed-form expression of the capture probability over independent and non-identically distributed (i.n.d.) fading channels, we need to derive the joint statistics of ordered non-identical exponential variates. With this motivation in mind, we first provide in this paper some new order statistics results in terms of both moment generating function (MGF) and probability density function (PDF) expressions under an i.n.d. assumption and then derive a new exact closed-form expression for the capture probability GSC RAKE receivers in this more realistic scenario. © 2013 IEEE. Analytical bounds on the area spectral efficiency of uplink heterogeneous networks over generalized fading channels Shakir, Muhammad Heterogeneous networks (HetNets) are envisioned to enable next-generation cellular networks by providing higher spectral and energy efficiency. A HetNet is typically composed of multiple radio access technologies where several low-power low-cost operators or user-deployed small-cell base stations (SBSs) complement the macrocell network. In this paper, we consider a two-tier HetNet where the SBSs are arranged around the edge of the reference macrocell such that the resultant configuration is referred to as cell-on-edge (COE). Each mobile user in a small cell is considered capable of adapting its uplink transmit power according to a location-based slow power control mechanism. The COE configuration is observed to increase the uplink area spectral efficiency (ASE) and energy efficiency while reducing the cochannel interference power. A moment-generating-function (MGF)-based approach has been exploited to derive the analytical bounds on the uplink ASE of the COE configuration. The derived expressions are generalized for any composite fading distribution, and closed-form expressions are presented for the generalized- K fading channels. Simulation results are included to support the analysis and to show the efficacy of the COE configuration. A comparative performance analysis is also provided to demonstrate the improvements in the performance of cell-edge users of the COE configuration compared with that of macro-only networks (MoNets) and other unplanned deployment strategies. © 2013 IEEE. The introduction of new schemes that are based on the communication among nodes has motivated the use of composite fading models due to the fact that the nodes experience different multipath fading and shadowing statistics, which subsequently Upper Bounds on the BER Performance of MTCM-STBC Schemes over Shadowed Rician Fading Channels Full Text Available Space-time block coding (STBC provides substantial diversity advantages with a low decoding complexity. However, these codes are not designed to achieve coding gains. Outer codes should be concatenated with STBC to provide additional coding gain. In this paper, we analyze the performance of concatenated trellis-coded STBC schemes over shadowed Rician frequency-flat fading channels. We derive an exact pairwise error probability (PEP expression that reveals the dominant factors affecting performance. Based on the derived PEP, in conjunction with the transfer function technique, we also present upper bounds on the bit error rate (BER, which are further shown to be tight through a Monte-Carlo simulation study. Consumption Factor Optimization for Multihop Relaying over Nakagami-m Fading channels Randrianantenaina, Itsikiantsoa In this paper, the energy efficiency of multihop relaying over Nakagami-m fading channels is investigated. The "consumption factor�, adopted as a metric to evaluate the energy efficiency, is derived for both amplify-and-forward and decodeand- forward relaying strategies. Then, based on the obtained expressions, we propose a power allocation strategy maximizing the consumption factor. In addition, a sub-optimal, low complexity, power allocation algorithm is proposed and analyzed, and the obtained power allocation scheme is compared in terms of energy efficiency to other power allocation schemes from the literature. Analytical and simulation results confirm the accuracy of our derivations, and assess the performance gains of the proposed approach. On SIP Session setup delay for VoIP services over correlated fading channels Fathi, Hanane; Chakraborty, Shyam S.; Prasad, Ramjee In this paper, the session setup delay of the session initiation protocol (SIP) is studied. The transmissions on both the forward and reverse channel are assumed to experience Markovian errors. The session setup delay is evaluated for different transport protocols, and with the use of the radio...... link protocol (RLP). An adaptive retransmission timer is used to optimize SIP performances. Using numerical results, we find that SIP over user datagram protocol (UDP) instead of transport control protocol (TCP) can make the session setup up to 30% shorter. Also, RLP drastically reduces the session...... setup delay down to 4 to 5 s, even in environments with high frame error rates (10%) and significant correlation in the fading process (fDT=0.02). SIP is compared with its competitor H.323. SIP session setup delay with compressed messages outperforms H.323 session setup delay.... How Equalization Techniques Affect the TCP Performance of MC-CDMA Systems in Correlated Fading Channels Giacomo Leonardi Full Text Available This paper investigates the impact of several equalization techniques for multicarrier code division multiple access systems on the performance at both lower and upper layers (i.e., physical and TCP layers. Classical techniques such as maximal ratio combining, equal gain combining, orthogonality restoring combining, minimum mean square error, as well as a partial equalization (PE are investigated in time- and frequency-correlated fading channels with various numbers of interferers. Their impact on the performance at upper level is then studied. The results are obtained through an integrated simulation platform carefully reproducing all main aspects affecting the quality of service perceived by the final user, allowing an investigation of the real gain produced by signal processing techniques at TCP level. Transmit power optimization for green multihop relaying over Nakagami-m fading channels In this paper, we investigate the optimal transmit power strategy to maximize the energy efficiency of a multihop relaying network. Considering the communication between a source and a destination through multiple Amplify-and-Forward relays, we first give the expression of the total instantaneous system energy consumption. Then, we define the energy efficiency in our context and obtain its expression in closed-form when the communication is over Nakagami-m fading channels. The analysis yields to the derivation of a global transmit power strategy where each individual node is contributing to the end-to-end overall energy efficiency. Numercial results are presented to illustrate the analysis. Comparison with Monte Carlo simulation results confirms the accuracy of our derivations, and assesses the gains of the proposed power optimization strategy. © 2014 IEEE. Error rates of a full-duplex system over EGK fading channels subject to laplacian interference This paper develops a mathematical paradigm to study downlink error rates and throughput for half-duplex (HD) terminals served by a full-duplex (FD) base station (BS). Particularly, we study the dominant intra-cell interferer problem that appears between HD users scheduled on the same FD-channel. The distribution of the dominant interference is first characterized via its distribution function, which is derived in closed-form. Assuming Nakagami-m fading, the probability of error for different modulation schemes is studied and a unified closed-form expression for the average symbol error rate is derived. To this end, we show the effective downlink throughput gain, harvested by employing FD communication at a BS that serves HD users, as a function of the signal-to-interference-ratio when compared to an idealized HD interference and noise free BS operation. Zhong, Caijun In this paper, we analyze the asymptotic outage probability performance of both decode-and-forward (DF) and amplify-and-forward (AF) relaying systems using partial relay selection and the "best" relay selection schemes for Nakagami-m fading channels. We derive their respective outage probability expressions in the asymptotic high signal-to-noise ratio (SNR) regime, from which the diversity order and coding gain are analyzed. In addition, we investigate the impact of power allocation between the source and relay terminals and derive the diversity-multiplexing tradeoff (DMT) for these relay selection systems. The theoretical findings suggest that partial relay selection can improve the diversity of the system and can achieve the same DMT as the "best" relay selection scheme under certain conditions. © 2011 IEEE. Outage and Capacity Performance Evaluation of Distributed MIMO Systems over a Composite Fading Channel Wenjie Peng Full Text Available The exact closed-form expressions regarding the outage probability and capacity of distributed MIMO (DMIMO systems over a composite fading channel are derived. This is achieved firstly by using a lognormal approximation to a gamma-lognormal distribution when a mobile station (MS in the cell is in a fixed position, and the so-called maximum ratio transmission/selected combining (MRT-SC and selected transmission/maximum ratio combining (ST-MRC schemes are adopted in uplink and downlink, respectively. Then, based on a newly proposed nonuniform MS cell distribution model, which is more consistent with the MS cell hotspot distribution in an actual communication environment, the average outage probability and capacity formulas are further derived. Finally, the accuracy of the approximation method and the rationality of the corresponding theoretical analysis regarding the system performance are proven and illustrated by computer simulations. Al-Quwaiee, Hessa Closed-form expressions of the ergodic capacity of independent or correlated diversity branches over Log-Normal fading channels are not available in the literature. Thus, it is become of an interest to investigate the behavior of such metric at high signal-to-noise (SNR). In this work, we propose simple closed-form asymptotic expressions of the ergodic capacity of dual-branch correlated Log- Normal corresponding to selection combining, and switch-and-stay combining. Furthermore, we capitalize on these new results to find new asymptotic ergodic capacity of correlated dual- branch free-space optical communication system under the impact of pointing error with both heterodyne and intensity modulation/direct detection. © 2015 IEEE. Randrianantenaina, Itsikiantsoa; Benjillali, Mustapha; Alouini, Mohamed-Slim Energy-Efficient Optimization for HARQ Schemes over Time-Correlated Fading Channels Shi, Zheng Energy efficiency of three common hybrid automatic repeat request (HARQ) schemes including Type I HARQ, HARQ with chase combining (HARQ-CC) and HARQ with incremental redundancy (HARQ-IR), is analyzed and joint power allocation and rate selection to maximize the energy efficiency is investigated in this paper. Unlike prior literature, time-correlated fading channels is considered and two widely concerned quality of service (QoS) constraints, i.e., outage and goodput constraints, are also considered in the optimization, which further differentiates this work from prior ones. Using a unified expression of asymptotic outage probabilities, optimal transmission powers and optimal rate are derived in closed-forms to maximize the energy efficiency while satisfying the QoS constraints. These closed-form solutions then enable a thorough analysis of the maximal energy efficiencies of various HARQ schemes. It is revealed that with low outage constraint, the maximal energy efficiency achieved by Type I HARQ is $\\\\frac{1}{4\\\\ln2}$ bits/J, while HARQ-CC and HARQ-IR can achieve the same maximal energy efficiency as $\\\\frac{\\\\kappa_\\\\infty}{4\\\\ln2}$ bits/J where $\\\\kappa_\\\\infty = 1.6617$. Moreover, time correlation in the fading channels has a negative impact on the energy efficiency, while large maximal allowable number of transmissions is favorable for the improvement of energy efficiency. The effectiveness of the energy-efficient optimization is verified by extensive simulations and the results also show that HARQ-CC can achieve the best tradeoff between energy efficiency and spectral efficiency among the three HARQ schemes. New Exact and Asymptotic Results of Dual-Branch MRC over Correlated Nakagami-m Fading Channels We present in this paper a new performance analysis results of dual-branch maximal-ratio combining over correlated Nakagami-m fading channels with arbitrary fading parameter. In particular, we derive exact closed-form expressions of the outage probability, the average bit error rate, and the ergodic capacity in terms of the extended generalized bivariate Meijer G- function. Moreover, we also provide simple closed- form asymptotic expressions in the high signal-to- noise ratio regime of these three fundamental performance measures. © 2015 IEEE. Performance and delay analysis of hybrid ARQ with incremental redundancy over double rayleigh fading channels In this paper, we study the performance of hybrid automatic repeat request (HARQ) with incremental redundancy over double Rayleigh channels, a common model for the fading amplitude of vehicle-to-vehicle communication systems. We investigate the performance of HARQ from an information theoretic perspective. Analytical expressions are derived for the \\\\epsilon-outage capacity, the average number of transmissions, and the average transmission rate of HARQ with incremental redundancy assuming a maximum number of HARQ rounds. Moreover, we evaluate the delay experienced by Poisson arriving packets for HARQ with incremental redundancy. We provide analytical expressions for the expected waiting time, the packet\\'s sojourn time in the queue, the average consumed power, and the energy efficiency. In our study, the communication rate per HARQ round is adjusted to the average signal-to-noise ratio (SNR) such that a target outage probability is not exceeded. This setting conforms with communication systems in which a quality of service is expected regardless of the channel conditions. Our analysis underscores the importance of HARQ in improving the spectral efficiency and reliability of communication systems. We demonstrate as well that the explored HARQ scheme achieves full diversity. Additionally, we investigate the tradeoff between energy efficiency and spectral efficiency. Performance Analysis of Beamforming in MU-MIMO Systems for Rayleigh Fading Channels Hassan, Ahmad K. This paper characterizes the performance metrics of MU-MIMO systems under Rayleigh fading channels in the presence of both cochannel interference and additive noise with unknown channel state information and known correlation matrices. In the first task, we derive analytical expressions for the cumulative distribution function of the instantaneous signal-to-interference-plus-noise ratio (SINR) for any deterministic beamvectors. As a second task, exact closed-form expressions are derived for the instantaneous capacity, the upper bound on ergodic capacity, and the Gram-Schmidt orthogonalization-based ergodic capacity for similar intra-cell correlation coefficients. Finally, we present the utility of several structured-diagonalization techniques, which can achieve the tractability for the approximate solution of ergodic capacity for both similar as well as different intra-cell correlation matrices. The novelty of this paper is to formulate the received SINR in terms of indefinite quadratic forms, which allows us to use complex residue theory to characterize the system behavior. The analytical expressions obtained closely match simulation results. Hassan, Ahmad K.; Moinuddin, Muhammad; Al-Saggaf, Ubaid M.; Al-Naffouri, Tareq Y. Hwang, Kyusung; Hossain, Md Jahangir; Ko, Youngchai; Alouini, Mohamed-Slim at both the source and relay nodes, we develop an optimal channel allocation and rate adaptation policy for a dual-hop relayed transmission. As such the overall transmit power for the relayed system is minimized while a target packet dropping rate (PDR Bit Error Rate Performance Analysis of a Threshold-Based Generalized Selection Combining Scheme in Nakagami Fading Channels Kousa Maan Full Text Available The severity of fading on mobile communication channels calls for the combining of multiple diversity sources to achieve acceptable error rate performance. Traditional approaches perform the combining of the different diversity sources using either the conventional selective diversity combining (CSC, equal-gain combining (EGC, or maximal-ratio combining (MRC schemes. CSC and MRC are the two extremes of compromise between performance quality and complexity. Some researches have proposed a generalized selection combining scheme (GSC that combines the best branches out of the available diversity resources ( . In this paper, we analyze a generalized selection combining scheme based on a threshold criterion rather than a fixed-size subset of the best channels. In this scheme, only those diversity branches whose energy levels are above a specified threshold are combined. Closed-form analytical solutions for the BER performances of this scheme over Nakagami fading channels are derived. We also discuss the merits of this scheme over GSC. Performance Analysis of DPSK Signals with Selection Combining and Convolutional Coding in Fading Channel Ong, Choon The performance analysis of a differential phase shift keyed (DPSK) communications system, operating in a Rayleigh fading environment, employing convolutional coding and diversity processing is presented... A Novel OFDM Channel Estimation Algorithm with ICI Mitigation over Fast Fading Channels C. Tao Full Text Available Orthogonal frequency-division multiplexing (OFDM is well-known as a high-bit-rate transmission technique, but the Doppler frequency offset due to the high speed movement destroys the orthogonality of the subcarriers resulting in the intercarrier interference (ICI, and degrades the performance of the system at the same time. In this paper a novel OFDM channel estimation algorithm with ICI mitigation based on the ICI self-cancellation scheme is proposed. With this method, a more accurate channel estimation is obtained by comb-type double pilots and then ICI coefficients can be obtained to mitigate the ICI on each subcarrier under the assumption that the channel impulse response (CIR varies in a linear fashion. The theoretical analysis and simulation results show that the bit error rate (BER and spectral efficiency performances are improved significantly under high-speed mobility conditions (350 km/h – 500 km/h in comparison to ZHAO's ICI self-cancellation scheme. Effective capacity of Nakagami-m fading channels with full channel state information in the low power regime The effective capacity have been introduced by Wu and Neji as a link-layer model supporting statistical delay QoS requirements. In this paper, we propose to study the effective capacity of a Nakagami-m fading channel with full channel state information (CSI) at both the transmitter and at the receiver. We focus on the low Signal-to-Noise Ratio (SNR) regime. We show that the effective capacity for any arbitrary but finite statistically delay Quality of Service (QoS) exponent θ, scales essentially as S NRlog(1/SNR) exactly as the ergodic capacity, independently of any QoS constraint. We also characterize the minimum energy required for reliable communication, and the wideband slope to show that our results are in agreement with results established recently by Gursoy et al. We also propose an on-off power control scheme that achieves the capacity asymptotically using only one bit CSI feedback at the transmitter. Finally, some numerical results are presented to show the accuracy of our asymptotic results. © 2013 IEEE. Delay-distribution-dependent H∞ state estimation for delayed neural networks with (x,v)-dependent noises and fading channels. Sheng, Li; Wang, Zidong; Tian, Engang; Alsaadi, Fuad E This paper deals with the H ∞ state estimation problem for a class of discrete-time neural networks with stochastic delays subject to state- and disturbance-dependent noises (also called (x,v)-dependent noises) and fading channels. The time-varying stochastic delay takes values on certain intervals with known probability distributions. The system measurement is transmitted through fading channels described by the Rice fading model. The aim of the addressed problem is to design a state estimator such that the estimation performance is guaranteed in the mean-square sense against admissible stochastic time-delays, stochastic noises as well as stochastic fading signals. By employing the stochastic analysis approach combined with the Kronecker product, several delay-distribution-dependent conditions are derived to ensure that the error dynamics of the neuron states is stochastically stable with prescribed H ∞ performance. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved. A novel ergodic capacity analysis of diversity combining and multihop transmission systems over generalized composite fading channels Ergodic capacity is an important performance measure associated with reliable communication at the highest rate at which information can be sent over the channel with a negligible probability of error. In the shadow of this definition, diversity receivers (such as selection combining, equal-gain combining and maximal-ratio combining) and transmission techniques (such as cascaded fading channels, amplify-and-forward multihop transmission) are deployed in mitigating various performance impairing effects such as fading and shadowing in digital radio communication links. However, the exact analysis of ergodic capacity is in general not always possible for all of these forms of diversity receivers and transmission techniques over generalized composite fading environments due to it\\'s mathematical intractability. In the literature, published papers concerning the exact analysis of ergodic capacity have been therefore scarce (i.e., only [1] and [2]) when compared to those concerning the exact analysis of average symbol error probability. In addition, they are essentially targeting to the ergodic capacity of the maximal ratio combining diversity receivers and are not readily applicable to the capacity analysis of the other diversity combiners / transmission techniques. In this paper, we propose a novel moment generating function-based approach for the exact ergodic capacity analysis of both diversity receivers and transmission techniques over generalized composite fading environments. As such, we demonstrate how to simultaneously treat the ergodic capacity analysis of all forms of both diversity receivers and multihop transmission techniques. © 2012 IEEE. Improving 3D-Turbo Code's BER Performance with a BICM System over Rayleigh Fading Channel R. Yao Full Text Available Classical Turbo code suffers from high error floor due to its small Minimum Hamming Distance (MHD. Newly-proposed 3D-Turbo code can effectively increase the MHD and achieve a lower error floor by adding a rate-1 post encoder. In 3D-Turbo codes, part of the parity bits from the classical Turbo encoder are further encoded through the post encoder. In this paper, a novel Bit-Interleaved Coded Modulation (BICM system is proposed by combining rotated mapping Quadrature Amplitude Modulation (QAM and 3D-Turbo code to improve the Bit Error Rate (BER performance of 3D-Turbo code over Raleigh fading channel. A key-bit protection scheme and a Two-Dimension (2D iterative soft demodulating-decoding algorithm are developed for the proposed BICM system. Simulation results show that the proposed system can obtain about 0.8-1.0 dB gain at BER of 10^{-6}, compared with the existing BICM system with Gray mapping QAM. Full-Duplex Relaying with Improper Gaussian Signaling over Nakagami-m Fading Channels Gaafar, Mohamed We study the potential employment of improper Gaussian signaling (IGS) in full-duplex relaying (FDR) with non-negligible residual self-interference (RSI) under Nakagami- m fading. IGS is recently shown to outperform traditional proper Gaussian signaling (PGS) in several interference-limited settings. In this work, IGS is employed as an attempt to alleviate RSI. We use two performance metrics, namely, the outage probability and the ergodic rate. First, we provide upper and lower bounds for the system performance in terms of the relay transmit power and circularity coefficient, a measure of the signal impropriety. Then, we numerically optimize the relay signal parameters based only on the channel statistics to improve the system performance. Based on the analysis, IGS allows FDR to operate even with high RSI. The results show that IGS can leverage higher power budgets to enhance the performance, meanwhile it relieves RSI impact via tuning the signal impropriety. Interestingly, one-dimensional optimization of the circularity coefficient, with maximum relay power, offers a similar performance as the joint optimization, which reduces the optimization complexity. From a throughput standpoint, it is shown that IGS-FDR can outperform not only PGS-FDR, but also half-duplex relaying with/without maximum ratio combining over certain regions of the target source rate. MC-DS-CDMA System based on DWT and STBC in ITU Multipath Fading Channels Model Nader Abdullah Khadam Full Text Available In this paper, the performance of multicarrier direct sequence code division multiple access (MC-DS-CDMA in fixed MC-DS-CDMA and Mobile MC-DS-CDMA applications have been improved by using the compensations of space time block coding and Discrete Fast Fourier transforms (FFT or Discrete Wavelets transform DWT. These MC-DS-CDMA systems had been simulated using MATLAB 2015a. Through simulation of the proposed system, various parameters can be changed and tested. The Bit Error Rate (BERs of these systems are obtained over wide range of signal to noise ratio. All simulation results had been compared with each other using different subcarrier size of FFT or DWT with STBC for 1,2,3 and 4 antennas in transmitter and under different ITU multipath fading channels and different Doppler frequencies (fd. The proposed structures of STBC-MC-DS-CDMA system based on (DWT batter than based on (FFT in varies Doppler frequencies and subcarrier size. Also, proposed system with STBC based on 4 transmitters better than other systems based on 1 or 2 or 3 transmitters in all Doppler frequencies and subcarrier size in all simulation results. Energy Efficient and Performance Analysis of Multihop Wireless Communication Over Nakagami-m Fading Channel The concept of multihop communications (where the source communicates with the destination via many intermediate nodes) has been revisited and adapted to mitigate wireless channel impairments and ensure broader coverage. It has been shown in the literature that, in addition to extending coverage, overcoming shadowing and reducing the transmit power, multihop communications can increase the capacity of the network at a low additional cost. On the other hand, the problem of energy efficiency is one of the current biggest challenges towards green radio communications. Morevover, electromagnetic radiation is at its limit in many contexts, while for battery-powered devices, transmit and circuit energy consumption has to be minimized for better battery lifetime and performance. In this work, the performance of multihop communication over Nakagami-m fading is investigated for both cases without and with diversity combining. Closed form expressions of the average ergodic capacity are derived for each of these cases. Then, an expression of the outage probability is obtained using the inverse of Laplace transform and the average bit error rate is bounded using the Moment-Generating-Function approach. The energy efficiency is analyzed using the "consumption factor" as a metric, and it is derived in closed-form. And based on the obtained expressions, we propose a power allocation strategy maximizing this consumption factor. Exact capture probability analysis of GSC receivers over Rayleigh fading channel For third generation systems and ultrawideband systems, RAKE receivers have been introduced due to the advantage of RAKE receivers which is their ability to combine different replicas of the transmitted signal arriving at different delays in a rich multipath environment. In principle, RAKE receivers combine all resolvable paths which gives the best performance in a rich diversity environment. However, this is usually costly in terms of hardware required as the number of RAKE fingers increases. Therefore, generalized selection combining (GSC) RAKE reception was proposed and has been studied by many researcher as an alternative to the classical two fundamental diversity schemes: maximal ratio combining and selection combining. Previous work on performance analyses of GSC RAKE receivers based on the signal to noise ratio focused on the development of methodologies to derive exact closedform expressions for various performance measures. However, the remaining set of uncombined paths affect the overall performance both in terms of loss in power. Therefore, to have a full understanding of the performance of GSC RAKE receivers, we introduce in this paper the notion of capture probability, which is defined as the ratio of the captured power (essentially combined paths power) to that of the total available power. The major difficulty in these problems is to derive some joint statistics of ordered exponential variates. With this motivation in mind, we capitalize in this paper on some new order statistics results to derive exact closed-form expressions for the capture probability over independent and identically distributed Rayleigh fading channels. © 2010 IEEE. Exact Outage Probability of Dual-Hop CSI-Assisted AF Relaying Over Nakagami-m Fading Channels In this correspondence, considering dual-hop channel state information (CSI)-assisted amplify-and-forward (AF) relaying over Nakagami- m fading channels, the cumulative distribution function (CDF) of the end-to-end signal-to-noise ratio (SNR) is derived. In particular, when the fading shape factors m1 and m2 at consecutive hops take non-integer values, the bivariate H-function and G -function are exploited to obtain an exact analytical expression for the CDF. The obtained CDF is then applied to evaluate the outage performance of the system under study. The analytical results of outage probability coincide exactly with Monte-Carlo simulation results and outperform the previously reported upper bounds in the low and medium SNR regions. Novel MGF-based expressions for the average bit error probability of binary signalling over generalized fading channels The main idea in the moment generating function (MGF) approach is to alternatively express the conditional bit error probability (BEP) in a desired exponential form so that possibly multi-fold performance averaging is readily converted into a computationally efficient single-fold averaging - sometimes into a closed-form - by means of using the MGF of the signal-to-noise ratio. However, as presented in [1] and specifically indicated in [2] and also to the best of our knowledge, there does not exist an MGF-based approach in the literature to represent Wojnar\\'s generic BEP expression in a desired exponential form. This paper presents novel MGF-based expressions for calculating the average BEP of binary signalling over generalized fading channels, specifically by expressing Wojnar\\'s generic BEP expression in a desirable exponential form. We also propose MGF-based expressions to explore the amount of dispersion in the BEP for binary signalling over generalized fading channels. Log-Likelihood Ratio Calculation for Iterative Decoding on Rayleigh Fading Channels Using Padé Approximation Gou Hosoya Full Text Available Approximate calculation of channel log-likelihood ratio (LLR for wireless channels using Padé approximation is presented. LLR is used as an input of iterative decoding for powerful error-correcting codes such as low-density parity-check (LDPC codes or turbo codes. Due to the lack of knowledge of the channel state information of a wireless fading channel, such as uncorrelated fiat Rayleigh fading channels, calculations of exact LLR for these channels are quite complicated for a practical implementation. The previous work, an LLR calculation using the Taylor approximation, quickly becomes inaccurate as the channel output leaves some derivative point. This becomes a big problem when higher order modulation scheme is employed. To overcome this problem, a new LLR approximation using Padé approximation, which expresses the original function by a rational form of two polynomials with the same total number of coefficients of the Taylor series and can accelerate the Taylor approximation, is devised. By applying the proposed approximation to the iterative decoding and the LDPC codes with some modulation schemes, we show the effectiveness of the proposed methods by simulation results and analysis based on the density evolution. Combined Rate and Power Allocation with Link Scheduling in Wireless Data Packet Relay Networks with Fading Channels Subhrakanti Dey; Minyi Huang We consider a joint rate and power control problem in a wireless data traffic relay network with fading channels. The optimization problem is formulated in terms of power and rate selection, and link transmission scheduling. The objective is to seek high aggregate utility of the relay node when taking into account buffer load management and power constraints. The optimal solution for a single transmitting source is computed by a two-layer dynamic programming algorithm which leads to optimal ... On the performance of free-space optical wireless communication systems over double generalized gamma fading channels Starting with the double generalized Gamma (GG) model that was proposed in [1] to describe turbulence-induced fading in free-space optical (FSO) systems, we propose a new unified model which accounts for the impact of pointing errors and type of receiver detector. Based on this new unified model, we study the performance of FSO links operating over these kind of channels. All our analytical results are verified using computer based Monte-Carlo simulations. © 2014 IEEE. Al-Quwaiee, Hessa; Ansari, Imran Shafique; Alouini, Mohamed-Slim An exact power series formula of the outage probability with noise and interference over generalized fading channels In this paper, we develop a generalized momentbased approach for the evaluation of the outage probability (OP) in the presence of co-channel interference and additive white Gaussian noise. The proposed method allows the evaluation of the OP of the signal-to-interference-plus-noise ratio by a power series expansion in the threshold value. Its main advantage is that it does not require a particular distribution for the interference channels. The only necessary ingredients are a power series expansion for the cumulative distribution function of the desired user power and the cross-moments of the interferers\\' powers. These requirements are easily met in many practical fading models, for which the OP might not be obtained in closed-form expression. For a sake of illustration, we consider the application of our method to the Rician fading environment. Under this setting, we carry out a convergence study of the proposed power series and corroborate the validity of our method for different values of fading parameters and various numbers of co-channel interferers. Cognitive radio with spectrum sharing feature is a promising technique to address the spectrum under-utilization problem in dynamically changing environments. In this paper, achievable capacity gain of spectrum sharing systems over dynamic fading A study of optimization problem for amplify-and-forward relaying over weibull fading channels Ikki, Salama Said; Aissa, Sonia This paper addresses the power allocation and relay positioning problems in amplify-and-forward cooperative networks operating in Weibull fading environments. We study adaptive power allocation (PA) with fixed relay location, optimal relay location experience identically distributed and non-identically distributed Rayleigh fading conditions. New closed-form expressions for the combined SNR statistics and some performance measures are presented. The system models adopted herein and the presented Gaafar, Mohamed; Khafagy, Mohammad Galal; Amin, Osama; Schaefer, Rafael F.; Alouini, Mohamed-Slim We study the potential employment of improper Gaussian signaling (IGS) in full-duplex relaying (FDR) with non-negligible residual self-interference (RSI) under Nakagami- m fading. IGS is recently shown to outperform traditional proper Gaussian On Secrecy Outage of Relay Selection in Underlay Cognitive Radio Networks over Nakagami-m Fading Channels Lei, Hongjiang; Zhang, Huan; Ansari, Imran Shafique; Ren, Zhi; Pan, Gaofeng; Qaraqe, Khalid A.; Alouini, Mohamed-Slim In this paper, the secrecy outage performance of an underlay cognitive decode-and-forward relay network over independent but not necessarily identical distributed (i.n.i.d) Nakagami-m fading channels is investigated, in which the secondary user transmitter communicates with the secondary destination via relays, and an eavesdropper attempts to overhear the information. Based on whether the channel state information (CSI) of the wiretap links is available or not, we analyze the secrecy outage performance with optimal relay selection (ORS) and suboptimal relay selection (SRS) schemes, and multiple relays combining scheme (MRC) scheme is considered for comparison purpose. The exact and asymptotic closed-form expressions for the secrecy outage probability with three different relay selection schemes are derived and verified by Monte-Carlo simulations. The numerical results illustrate that ORS scheme always outperforms SRS and MRC schemes, and SRS scheme is better than MRC scheme in the lower fading parameters scenario. Furthermore, through asymptotic analysis, we find that these three different schemes achieve the same secrecy diversity order, which is determined by the number of the relays, and the fading parameters of the links among the relays and the destination. A Combined Antenna Arrays and Reverse-Link Synchronous DS-CDMA System over Frequency-Selective Fading Channels with Power Control Error Yong-Seok Kim Full Text Available An improved antenna array (AA has been introduced, in which reverse-link synchronous transmission technique (RLSTT is incorporated to effectively make better an estimation of covariance matrices at a beamformer-RAKE receiver. While RLSTT is effective in the first finger at the RAKE receiver in order to reject multiple-access interference (MAI, the beamformer estimates the desired user's complex weights, enhancing its signal and reducing cochannel interference (CCI from the other directions. In this work, it is attempted to provide a comprehensive analysis of user capacity which reflects several important factors such as the shape of multipath intensity profile (MIP, the number of antennas, and power control error (PCE. Theoretical analysis, confirmed by the simulations, demonstrates that the orthogonality provided by employing RLSTT along with AA may make the DS-CDMA system insensitive to the PCE even with fewer numbers of antennas. Performance analysis of cognitive multihop relaying with m-QAM detect-and-forward in Nakagami-m fading channels Benjillali, Mustapha; Hyadi, Amal; Da Costa, Daniel Benevides Da; Alouini, Mohamed-Slim In this work, we investigate the performance of cognitive multihop regenerative relaying systems in the "underlay" spectrum sharing scenario. The multiple relays perform "detect-and-forward" relaying strategy to convey a message with an order m quadrature amplitude modulation (m-QAM) from the source to the destination over independent but not necessarily identical Nakagami-m fading channels. We adopt a closed-form analysis framework based on univariate and bivariate Meijer G-functions to derive the end-to-end error performance (in terms of bit and symbol error rates), the outage probability, and the ergodic capacity. Various numerical examples are presented to illustrate the results with a large combination of system and fading parameters, and simulation results confirm the accuracy of our closed-form analysis. © 2013 IEEE. Optimal Performance Monitoring of Hybrid Mid-Infrared Wavelength MIMO Free Space Optical and RF Wireless Networks in Fading Channels Schmidt, Barnet Michael An optimal performance monitoring metric for a hybrid free space optical and radio-frequency (RF) wireless network, the Outage Capacity Objective Function, is analytically developed and studied. Current and traditional methods of performance monitoring of both optical and RF wireless networks are centered on measurement of physical layer parameters, the most common being signal-to-noise ratio, error rate, Q factor, and eye diagrams, occasionally combined with link-layer measurements such as data throughput, retransmission rate, and/or lost packet rate. Network management systems frequently attempt to predict or forestall network failures by observing degradations of these parameters and to attempt mitigation (such as offloading traffic, increasing transmitter power, reducing the data rate, or combinations thereof) prior to the failure. These methods are limited by the frequent low sensitivity of the physical layer parameters to the atmospheric optical conditions (measured by optical signal-to-noise ratio) and the radio frequency fading channel conditions (measured by signal-to-interference ratio). As a result of low sensitivity, measurements of this type frequently are unable to predict impending failures sufficiently in advance for the network management system to take corrective action prior to the failure. We derive and apply an optimal measure of hybrid network performance based on the outage capacity of the hybrid optical and RF channel, the outage capacity objective function. The objective function provides high sensitivity and reliable failure prediction, and considers both the effects of atmospheric optical impairments on the performance of the free space optical segment as well as the effect of RF channel impairments on the radio frequency segment. The radio frequency segment analysis considers the three most common RF channel fading statistics: Rayleigh, Ricean, and Nakagami-m. The novel application of information theory to the underlying physics of the Code Tracking Algorithms for Mitigating Multipath Effects in Fading Channels for Satellite-Based Positioning Markku Renfors Full Text Available The ever-increasing public interest in location and positioning services has originated a demand for higher performance global navigation satellite systems (GNSSs. In order to achieve this incremental performance, the estimation of line-of-sight (LOS delay with high accuracy is a prerequisite for all GNSSs. The delay lock loops (DLLs and their enhanced variants (i.e., feedback code tracking loops are the structures of choice for the commercial GNSS receivers, but their performance in severe multipath scenarios is still rather limited. In addition, the new satellite positioning system proposals specify the use of a new modulation, the binary offset carrier (BOC modulation, which triggers a new challenge in the code tracking stage. Therefore, in order to meet this emerging challenge and to improve the accuracy of the delay estimation in severe multipath scenarios, this paper analyzes feedback as well as feedforward code tracking algorithms and proposes the peak tracking (PT methods, which are combinations of both feedback and feedforward structures and utilize the inherent advantages of both structures. We propose and analyze here two variants of PT algorithm: PT with second-order differentiation (Diff2, and PT with Teager Kaiser (TK operator, which will be denoted herein as PT(Diff2 and PT(TK, respectively. In addition to the proposal of the PT methods, the authors propose also an improved early-late-slope (IELS multipath elimination technique which is shown to provide very good mean-time-to-lose-lock (MTLL performance. An implementation of a noncoherent multipath estimating delay locked loop (MEDLL structure is also presented. We also incorporate here an extensive review of the existing feedback and feedforward delay estimation algorithms for direct sequence code division multiple access (DS-CDMA signals in satellite fading channels, by taking into account the impact of binary phase shift keying (BPSK as well as the newly proposed BOC modulation A Unified Approach to the Analysis of Multicarrier DS-CDMA over Nakagami-$M$ Fading Channels Yang, L-L.; Hanzo, L. A class of unified multicarrier DS-CDMA (MC DS-CDMA) schemes is defined and its performance is considered over multipath Nakagami-$M$ fading channels. The spacing between two adjacent subcarriers of the unified MC DS-CDMA scheme is a variable, allowing us to gain insight into the effects of the spacing on the bit error rate (BER) performance of MC DS-CDMA systems. This unified MC DS-CDMA scheme includes the subclasses of multitone DS-CDMA and orthogonal MC DS-CDMA as special cases. The optimu... SSC Diversity Receiver over Correlated Fading Channels in the Presence of Cochannel Interference Panić StefanR Full Text Available This paper studies the performances of a dual-branch switched-and-stay combining (SSC diversity receiver, operating over correlated fading in the presence of cochannel interference (CCI. Very useful, novel, infinite series expressions are obtained for the output signal to interference ratio's (SIR's probability density function (PDF and cumulative distribution function (CDF. The performance analysis is based on an outage probability (OP and an average bit error probability (ASEP criteria. ASEP is efficiently evaluated for modulation schemes such as noncoherent frequency-shift keying (NCFSK and binary differentially phase-shift keying (BDPSK. The effects of various parameters, such as input SIR unbalance, the level of correlation between received desired signals and interferences, nonlinearity of the environment, and fading severity on systems performances are graphically presented and analyzed. Cross-layer designed adaptive modulation algorithm with packet combining and truncated ARQ over MIMO Nakagami fading channels Aniba, Ghassane This paper presents an optimal adaptive modulation (AM) algorithm designed using a cross-layer approach which combines truncated automatic repeat request (ARQ) protocol and packet combining. Transmissions are performed over multiple-input multiple-output (MIMO) Nakagami fading channels, and retransmitted packets are not necessarily modulated using the same modulation format as in the initial transmission. Compared to traditional approach, cross-layer design based on the coupling across the physical and link layers, has proven to yield better performance in wireless communications. However, there is a lack for the performance analysis and evaluation of such design when the ARQ protocol is used in conjunction with packet combining. Indeed, previous works addressed the link layer performance of AM with truncated ARQ but without packet combining. In addition, previously proposed AM algorithms are not optimal and can provide poor performance when packet combining is implemented. Herein, we first show that the packet loss rate (PLR) resulting from the combining of packets modulated with different constellations can be well approximated by an exponential function. This model is then used in the design of an optimal AM algorithm for systems employing packet combining, truncated ARQ and MIMO antenna configurations, considering transmission over Nakagami fading channels. Numerical results are provided for operation with or without packet combining, and show the enhanced performance and efficiency of the proposed algorithm in comparison with existing ones. © 2011 IEEE. Performance of DS-UWB in MB-OFDM and multi-user interference over Nakagami-m fading channels Mehbodniya, Abolfazl The mutual interference between the two ultra wideband (UWB) technologies, which use the same frequency spectrum, will be a matter of concern in the near future. In this context, we present a performance analysis of direct-sequence (DS) UWB communication in the presence of multiband orthogonal frequency division multiplexing (MB-OFDM) UWB interfering transmissions. The channel fading is modeled according to Nakagami-m distribution, and multi-user interference is taken into account. The DS-UWB system performance is evaluated in terms of bit error rate (BER). Specifically, using the characteristic function approach, an analytical expression for the average BER is derived conditioned on the channel impulse response. Numerical and simulation results are provided and compared for different coexistence scenarios. © 2011 John Wiley & Sons, Ltd. Frequency selective bolometers Kowitt, M.S.; Fixsen, D.J.; Goldin, A. We propose a concept for radiometry in the millimeter, the submillimeter, and the far-IR spectral regions, the frequency selective bolometer (FSB). This system uses a bolometer as a coupled element of a tuned quasi-optical interference filter in which the absorption, the transmission......-dimensional transmission-line model. Instruments based on FSB technology should have several advantages over current multiband bolometric radiometers including smaller and more compact cryogenic optics; reduced demands on cryostat size and weight, high coupling efficiency, minimum constraints on the geometry in the focal... Channel estimation for physical layer network coding systems Gao, Feifei; Wang, Gongpu This SpringerBrief presents channel estimation strategies for the physical later network coding (PLNC) systems. Along with a review of PLNC architectures, this brief examines new challenges brought by the special structure of bi-directional two-hop transmissions that are different from the traditional point-to-point systems and unidirectional relay systems. The authors discuss the channel estimation strategies over typical fading scenarios, including frequency flat fading, frequency selective fading and time selective fading, as well as future research directions. Chapters explore the performa A performance study of two hop transmission in mixed underlay RF and FSO fading channels In this work, we present the performance analysis of a dual-hop transmission system composed of asymmetric radio frequency (RF) and free-space optical (FSO) links in underlay cognitive networks. For the RF link, we consider an underlay cognitive network where the secondary users share the spectrum with licensed primary users, where indoor femtocells act as a practical example for such networks. More specifically, we assume that the RF link is subject to an interference constraint. The FSO link accounts for pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection (IM/DD) as well as heterodyne detection). On the other hand, RF link is modeled by the Rayleigh fading distribution that applies power control to maintain the interference at the primary network below a specific threshold whereas the FSO link is modeled by a unified Gamma-Gamma fading distribution. With this model, we derive new exact closed-form expressions for the cumulative distribution function, the probability density function, the moment generating function, and the moments of the end-to-end signal-to-interference plus noise ratio of these systems in terms of the Meijer\\'s G functions. We then capitalize on these results to offer new exact closed-form expressions for the outage probability, the higher-order amount of fading, and the average error rate for binary and Mary modulation schemes, all in terms of Meijer\\'s G functions. All our new analytical results are verified via computer-based Monte-Carlo simulations and are illustrated by some selected numerical results. In this work, the performance of multihop communication over Nakagami-m fading is investigated for both cases without and with diversity combining. Closed form expressions of the average ergodic capacity are derived for each of these cases. Then, an expression of the outage probability is obtained using the inverse of Laplace transform and the average bit error rate is bounded using the Moment-Generating-Function approach. The energy efficiency is analyzed using the "consumption factor" as a metric, and it is derived in closed-form. And based on the obtained expressions, we propose a power allocation strategy maximizing this consumption factor. Low complexity iterative MLSE equalization of M-QAM signals in extremely long rayleigh fading channels Myburgh, HC Full Text Available long channels. Its computational complexity is linear in the data block length and approximately independent of the channel memory length, whereas conventional equalization algorithms have computational complexity linear in the data block length... Secret-key agreement over spatially correlated fast-fading multiple-antenna channels with public discussion Zorgui, Marwen We consider secret-key agreement with public discussion over multiple-input multiple-output (MIMO) Rayleigh fast-fading channels under correlated environment. We assume that transmit, legitimate receiver and eavesdropper antennas are correlated. The legitimate receiver and the eavesdropper are assumed to have perfect channel knowledge while the transmitter has only knowledge of the correlation matrices. First, we derive the expression of the secret-key capacity under the considered setup. Then, we prove that the optimal transmit strategy achieving the secret-key capacity consists in transmitting independent Gaussian signals along the eingenvectors of the transmit correlation matrix. The powers allocated to each channel mode are determined as the solution to a numerical optimization problem that we derive. A necessary and sufficient condition for beamforming (i.e., transmitting along the strongest channel mode) to be capacity-achieving is derived. Finally, we analyze the impact of correlation matrices on the system performance and provide closed-form expressions of the gain/loss due to correlation in the high power regime. Analysis of a Combined Antenna Arrays and Reverse-Link Synchronous DS-CDMA System over Multipath Rician Fading Channels Kim Yong-Seok Full Text Available We present the BER analysis of antenna array (AA receiver in reverse-link asynchronous multipath Rician channels and analyze the performance of an improved AA system which applies a reverse-link synchronous transmission technique (RLSTT in order to effectively make a better estimation of covariance matrices at a beamformer-RAKE receiver. In this work, we provide a comprehensive analysis of user capacity which reflects several important factors such as the ratio of the specular component power to the Rayleigh fading power, the shape of multipath intensity profile, and the number of antennas. Theoretical analysis demonstrates that for the case of a strong specular path's power or for a high decay factor, the employment of RLSTT along with AA has the potential of improving the achievable capacity by an order of magnitude. Subhrakanti Dey Full Text Available We consider a joint rate and power control problem in a wireless data traffic relay network with fading channels. The optimization problem is formulated in terms of power and rate selection, and link transmission scheduling. The objective is to seek high aggregate utility of the relay node when taking into account buffer load management and power constraints. The optimal solution for a single transmitting source is computed by a two-layer dynamic programming algorithm which leads to optimal power, rate, and transmission time allocation at the wireless links. We further consider an optimal power allocation problem for multiple transmitting sources in the same framework. Performances of the resource allocation algorithms including the effect of buffer load control are illustrated via extensive simulation studies. Performance Analysis of 5G Transmission over Fading Channels with Random IG Distributed LOS Components Dejan Jaksic Full Text Available Mathematical modelling of the behavior of the radio propagation at mmWave bands is crucial to the development of transmission and reception algorithms of new 5G systems. In this study we will model 5G propagation in nondeterministic line-of-sight (LOS conditions, when the random nature of LOS component ratio will be observed as Inverse Gamma (IG distributed process. Closed-form expressions will be presented for the probability density function (PDF and cumulative distribution function (CDF of such random process. Further, closed-form expressions will be provided for important performance measures such as level crossing rate (LCR and average fade duration (AFD. Capitalizing on proposed expressions, LCR and AFD will be discussed in the function of transmission parameters. Ikki, Salama Said This paper addresses the power allocation and relay positioning problems in amplify-and-forward cooperative networks operating in Weibull fading environments. We study adaptive power allocation (PA) with fixed relay location, optimal relay location with fixed power allocation, and joint optimization of the PA and relay location under total transmit power constraint, in order to minimize the outage probability and average error probability at high signal-to-noise ratios (SNR). Analytical results are validated by numerical simulations and comparisons between the different optimization schemes and their performance are provided. Results show that optimum PA brings only coding gain, while optimum relay location yields, in addition to the latter, diversity gains as well. Also, joint optimization improves both, the diversity gain and coding gain. Furthermore, results illustrate that the analyzed adaptive algorithms outperform uniform schemes. ©2010 IEEE. On the Sum of Gamma Random Variates With Application to the Performance of Maximal Ratio Combining over Nakagami-m Fading Channels The probability distribution function (PDF) and cumulative density function of the sum of L independent but not necessarily identically distributed gamma variates, applicable to maximal ratio combining receiver outputs or in other words applicable to the performance analysis of diversity combining receivers operating over Nakagami-m fading channels, is presented in closed form in terms of Meijer G-function and Fox H-bar-function for integer valued fading parameters and non-integer valued fading parameters, respectively. Further analysis, particularly on bit error rate via PDF-based approach, too is represented in closed form in terms of Meijer G-function and Fox H-bar-function for integer-order fading parameters, and extended Fox H-bar-function (H-hat) for non-integer-order fading parameters. The proposed results complement previous results that are either evolved in closed-form, or expressed in terms of infinite sums or higher order derivatives of the fading parameter m. Ansari, Imran Shafique; Yilmaz, Ferkan; Alouini, Mohamed-Slim; Kucur, Oguz Minimum Probability of Error-Based Equalization Algorithms for Fading Channels Janos Levendovszky Full Text Available Novel channel equalizer algorithms are introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithms are based on newly derived bounds on the probability of error (PE and guarantee better performance than the traditional zero forcing (ZF or minimum mean square error (MMSE algorithms. The new equalization methods require channel state information which is obtained by a fast adaptive channel identification algorithm. As a result, the combined convergence time needed for channel identification and PE minimization still remains smaller than the convergence time of traditional adaptive algorithms, yielding real-time equalization. The performance of the new algorithms is tested by extensive simulations on standard mobile channels. Channel Characteristics and Performance of MIMO E-SDM Systems in an Indoor Time-Varying Fading Environment Huu Phu Bui Full Text Available Multiple-input multiple-output (MIMO systems employ advanced signal processing techniques. However, the performance is affected by propagation environments and antenna characteristics. The main contributions of the paper are to investigate Doppler spectrum based on measured data in a typical meeting room and to evaluate the performance of MIMO systems based on an eigenbeam-space division multiplexing (E-SDM technique in an indoor time-varying fading environment, which has various distributions of scatterers, line-of-sight wave existence, and mutual coupling effect among antennas. We confirm that due to the mutual coupling among antennas, patterns of antenna elements are changed and different from an omnidirectional one of a single antenna. Results based on the measured channel data in our measurement campaigns show that received power, channel autocorrelation, and Doppler spectrum are dependent not only on the direction of terminal motion but also on the antenna configuration. Even in the obstructed-line-of-sight environment, observed Doppler spectrum is quite different from the theoretical U-shaped Jakes one. In addition, it has been also shown that a channel change during the time interval between the transmit weight matrix determination and the actual data transmission can degrade the performance of MIMO E-SDM systems. On the sum of Gamma-Gamma variates with application to the fast outage probability evaluation over fading channels Ben Issaid, Chaouki; Rached, Nadhir B.; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul The Gamma-Gamma distribution has recently emerged in a number of applications ranging from modeling scattering and reverbation in sonar and radar systems to modeling atmospheric turbulence in wireless optical channels. In this respect, assessing the outage probability achieved by some diversity techniques over this kind of channels is of major practical importance. In many circumstances, this is intimately related to the difficult question of analyzing the statistics of a sum of Gamma-Gamma random variables. Answering this question is not a simple matter. This is essentially because outage probabilities encountered in practice are often very small, and hence the use of classical Monte Carlo methods is not a reasonable choice. This lies behind the main motivation of the present work. In particular, this paper proposes a new approach to estimate the left tail of the sum of independent and identically distributed Gamma-Gamma variates. More specifically, we propose a mean-shift importance sampling scheme that efficiently evaluates the outage probability of L-branch maximum ratio combining diversity receivers over Gamma-Gamma fading channels. The proposed estimator satisfies the well-known bounded relative error criterion. We show the accuracy and the efficiency of our approach compared to naive Monte Carlo via some selected numerical simulations. Ben Issaid, Chaouki Chelli, Ali; Zedini, Emna; Alouini, Mohamed-Slim; Patzold, Matthias Uwe; Balasingham, Ilangko -to-vehicle communication systems as well as amplify-and-forward relaying and keyhole channels. This work studies the performance of HARQ-CC over double Rayleigh channels from an information theoretic perspective. Analytical approximations are derived for the Zafar, Ammar; AlNuweiri, Hussein; Alouini, Mohamed-Slim; Shaqfeh, Mohammad weight that maintains the stability constraint. We show how to obtain the source weight either off-line based on channel statistics or on real-time based on channel measurements. Furthermore, we consider special cases including the maximum sum rate Eigenstructures of MIMO Fading Channel Correlation Matrices and Optimum Linear Precoding Designs for Maximum Ergodic Capacity Hamid Reza Bahrami Full Text Available The ergodic capacity of MIMO frequency-flat and -selective channels depends greatly on the eigenvalue distribution of spatial correlation matrices. Knowing the eigenstructure of correlation matrices at the transmitter is very important to enhance the capacity of the system. This fact becomes of great importance in MIMO wireless systems where because of the fast changing nature of the underlying channel, full channel knowledge is difficult to obtain at the transmitter. In this paper, we first investigate the effect of eigenvalues distribution of spatial correlation matrices on the capacity of frequency-flat and -selective channels. Next, we introduce a practical scheme known as linear precoding that can enhance the ergodic capacity of the channel by changing the eigenstructure of the channel by applying a linear transformation. We derive the structures of precoders using eigenvalue decomposition and linear algebra techniques in both cases and show their similarities from an algebraic point of view. Simulations show the ability of this technique to change the eigenstructure of the channel, and hence enhance the ergodic capacity considerably. Cooperative jamming power control to enhance secrecy communications of AF Relaying systems for Rayleigh fading channel Park, Kihong; Alouini, Mohamed-Slim allocation; (i) rate-optimal power allocation and (ii) outage-optimal power allocation. More specifically, without the instantaneous channel knowledge for the eavesdropper side, the outage probability of the secrecy rate is minimized with respect Blind cooperative diversity using distributed space-time coding in block fading channels Tourki, Kamel; Alouini, Mohamed-Slim; Deneire, Luc Mobile users with single antennas can still take advantage of spatial diversity through cooperative space-time encoded transmission. In this paper, we consider a scheme in which a relay chooses to cooperate only if its source-relay channel Low SNR capacity for MIMO Rician and Rayleigh-product fading channels with single co-channel interferer and noise Zhong, Caijun; Jin, Shi; Wong, Kaikit; Alouini, Mohamed-Slim; Ratnarajah, Tharm . Exact analytical expressions for the minimum energy per information bit, {Eb/N0min, and wideband slope, S0, are derived for both channels. Our results show that the minimum energy per information bit is the same for both channels while their wideband Performance Analysis of a Six-Port Receiver in a WCDMA Communication System including a Multipath Fading Channel A. O. Olopade Full Text Available Third generation communication systems require receivers with wide bandwidth of operation to support high transmission rates and are also reconfigurable to support various communication standards with different frequency bands. An ideal software defined radio (SDR will be the absolute answer to this requirement but it is not achievable with the current level of technology. This paper proposes the use of a six-port receiver (SPR front-end (FE in a WCDMA communication system. A WCDMA end-to-end physical layer MATLAB demo which includes a multipath channel distortion block is used to determine the viability of the six-port based receiver. The WCDMA signal after passing through a multipath channel is received using a constructed SPR FE. The baseband signal is then calibrated and corrected in MATLAB. The six-port receiver performance is measured in terms of bit error rate (BER. The signal-to-noise ratio (SNR of the transmitted IQ data is varied and the BER profile of the communication system is plotted. The effect of the multipath fading on the receiver performance and the accuracy of the calibration algorithm are obtained by comparing two different measured BER curves for different calibration techniques to the simulated BER curve of an ideal receiver. This paper studies the ergodic capacity of multiple-input multiple-output (MIMO) systems with a single co-channel interferer in the low signal-to-noise-ratio (SNR) regime. Two MIMO models namely Rician and Rayleigh-product channels are investigated. Exact analytical expressions for the minimum energy per information bit, {Eb/N0min, and wideband slope, S0, are derived for both channels. Our results show that the minimum energy per information bit is the same for both channels while their wideband slopes differ significantly. Further, the impact of the numbers of transmit and receive antennas, the Rician K factor, the channel mean matrix and the interference-to-noise-ratio (INR) on the capacity, is addressed. Results indicate that interference degrades the capacity by increasing the required minimum energy per information bit and reducing the wideband slope. Simulation results validate our analytical results. © 2010 IEEE. Relay Selections for Security and Reliability in Mobile Communication Networks over Nakagami-m Fading Channels Hongji Huang Full Text Available This paper studies the relay selection schemes in mobile communication system over Nakagami-m channel. To make efficient use of licensed spectrum, both single relay selection (SRS scheme and multirelays selection (MRS scheme over the Nakagami-m channel are proposed. Also, the intercept probability (IP and outage probability (OP of the proposed SRS and MRS for the communication links depending on realistic spectrum sensing are derived. Furthermore, this paper assesses the manifestation of conventional direct transmission scheme to compare with the proposed SRS and MRS ones based on the Nakagami-m channel, and the security-reliability trade-off (SRT performance of the proposed schemes and the conventional schemes is well investigated. Additionally, the SRT of the proposed SRS and MRS schemes is demonstrated better than that of direct transmission scheme over the Nakagami-m channel, which can protect the communication transmissions against eavesdropping attacks. Additionally, simulation results show that our proposed relay selection schemes achieve better SRT performance than that of conventional direct transmission over the Nakagami-m channel. Cross-Layer Framework for Multiuser Real Time H.264/AVC Video Encoding and Transmission over Block Fading MIMO Channels Using Outage Probability Slavche Pejoski Full Text Available We present a framework for cross-layer optimized real time multiuser encoding of video using a single layer H.264/AVC and transmission over MIMO wireless channels. In the proposed cross-layer adaptation, the channel of every user is characterized by the probability density function of its channel mutual information and the performance of the H.264/AVC encoder is modeled by a rate distortion model that takes into account the channel errors. These models are used during the resource allocation of the available slots in a TDMA MIMO communication system with capacity achieving channel codes. This framework allows for adaptation to the statistics of the wireless channel and to the available resources in the system and utilization of the multiuser diversity of the transmitted video sequences. We show the effectiveness of the proposed framework for video transmission over Rayleigh MIMO block fading channels, when channel distribution information is available at the transmitter. The exact analysis of the higher-order statistics of the channel capacity (i.e., higher-order ergodic capacity) often leads to complicated expressions involving advanced special functions. In this paper, we provide a generic framework Park, Kihong In this paper, we investigate secrecy communications in two-hop wireless relaying networks which consist of one source, one amplify-and-forward (AF) relay, one legitimate destination, and one eavesdropper. To prevent the eavesdropper from intercepting the source message, we make the destination send the intended noise to the AF relay during the first phase. This is referred to as cooperative jamming. According to the channel information at the destination, we address two types of jamming power allocation; (i) rate-optimal power allocation and (ii) outage-optimal power allocation. More specifically, without the instantaneous channel knowledge for the eavesdropper side, the outage probability of the secrecy rate is minimized with respect to the intended noise power level. We show that the outage-optimal allocation gives almost the same outage probability as the rateoptimal one. In addition, the jamming power consumption can be significantly reduced compared to the fixed and rate-optimal power allocation methods. © 2012 IEEE. On Multiple Users Scheduling Using Superposition Coding over Rayleigh Fading Channels In this letter, numerical results are provided to analyze the gains of multiple users scheduling via superposition coding with successive interference cancellation in comparison with the conventional single user scheduling in Rayleigh blockfading broadcast channels. The information-theoretic optimal power, rate and decoding order allocation for the superposition coding scheme are considered and the corresponding histogram for the optimal number of scheduled users is evaluated. Results show that at optimality there is a high probability that only two or three users are scheduled per channel transmission block. Numerical results for the gains of multiple users scheduling in terms of the long term throughput under hard and proportional fairness as well as for fixed merit weights for the users are also provided. These results show that the performance gain of multiple users scheduling over single user scheduling increases when the total number of users in the network increases, and it can exceed 10% for high number of users function (MGF)-based unified expression for the ABEP and AC of single and multiple link communications with maximal ratio combining. In addition, this paper proposes the hyper-Fox's H fading model as a unified fading distribution of a majority of the well Performance of Generalized Multicarrier DS-CDMA over Nakagami-$m$ Fading Channels A class of generalized multicarrier direct sequence code-division multiple-access (MC DS-CDMA) schemes is defined and its performance is considered over multipath Nakagamifading channels. The spacing between two adjacent subcarriers of the generalized MC DS-CDMA is a variable, allowing us to gain insight into the effects of the spacing on the bit error rate (BER) performance of MC DS-CDMA systems. This generalized MC DS-CDMA scheme includes the subclasses of multitone DS-CDMA and orthogonal M... Frequency and spatial correlation functions in a fading communication channel through the ionosphere Liu, C.H.; Yeh, K.C. Equations for the two-frequency two-position mutual coherence functions are derived under the usual parabolic and Markov approximations. These equations are then solved numerically. It is shown that the mutual coherence functions occur naturally in the study of pulse distortion through a random communication channel and in the investigation of signal correlations. Contour plots of correlation functions show the possibility of having equal values at two frequency separations for a given spatial separation. This behavior is explainable in terms of overlapping Fresnel zones Joint statistics of partial sums of ordered exponential variates and performance of GSC RAKE receivers over rayleigh fading channel Spread spectrum receivers with generalized selection combining (GSC) RAKE reception were proposed and have been studied as alternatives to the classical two fundamental schemes: maximal ratio combining and selection combining because the number of diversity paths increases with the transmission bandwidth. Previous work on performance analyses of GSC RAKE receivers based on the signal to noise ratio focused on the development of methodologies to derive exact closed-form expressions for various performance measures. However, some open problems related to the performance evaluation of GSC RAKE receivers still remain to be solved such as the exact performance analysis of the capture probability and an exact assessment of the impact of self-interference on GSC RAKE receivers. The major difficulty in these problems is to derive some joint statistics of ordered exponential variates. With this motivation in mind, we capitalize in this paper on some new order statistics results to derive exact closed-form expressions for the capture probability and outage probability of GSC RAKE receivers subject to self-interference over independent and identically distributed Rayleigh fading channels, and compare it to that of partial RAKE receivers. © 2011 IEEE. Exact closed form expressions for outage probability of GSC receivers over Rayleigh fading channel subject to self-interference Previous work on performance analyses of generalized selection combining (GSC) RAKE receivers based on the signal to noise ratio focused on the development of methodologies to derive exact closed-form expressions for various performance measures. However, some open problems related to the performance evaluation of GSC RAKE receivers still remain to be solved such that an assessment of the impact of self-interference on the performance of GSC RAKE receivers. To have a full and exact understanding of the performance of GSC RAKE receivers, the outage probability of GSC RAKE receivers needs to be analyzed as closed-form expressions. The major difficulty in this problem is to derive some joint statistics of ordered exponential variates. With this motivation in mind, we capitalize in this paper on some new order statistics results to derive exact closed-form expressions for outage probability of GSC RAKE receivers subject to self-interference over independent and identically distributed Rayleigh fading channels. © 2010 IEEE. Outage Performance Analysis of Relay Selection Schemes in Wireless Energy Harvesting Cooperative Networks over Non-Identical Rayleigh Fading Channels. Do, Nhu Tri; Bao, Vo Nguyen Quoc; An, Beongku In this paper, we study relay selection in decode-and-forward wireless energy harvesting cooperative networks. In contrast to conventional cooperative networks, the relays harvest energy from the source's radio-frequency radiation and then use that energy to forward the source information. Considering power splitting receiver architecture used at relays to harvest energy, we are concerned with the performance of two popular relay selection schemes, namely, partial relay selection (PRS) scheme and optimal relay selection (ORS) scheme. In particular, we analyze the system performance in terms of outage probability (OP) over independent and non-identical (i.n.i.d.) Rayleigh fading channels. We derive the closed-form approximations for the system outage probabilities of both schemes and validate the analysis by the Monte-Carlo simulation. The numerical results provide comprehensive performance comparison between the PRS and ORS schemes and reveal the effect of wireless energy harvesting on the outage performances of both schemes. Additionally, we also show the advantages and drawbacks of the wireless energy harvesting cooperative networks and compare to the conventional cooperative networks. Tourki, Kamel Mobile users with single antennas can still take advantage of spatial diversity through cooperative space-time encoded transmission. In this paper, we consider a scheme in which a relay chooses to cooperate only if its source-relay channel is of an acceptable quality and we evaluate the usefulness of relaying when the source acts blindly and ignores the decision of the relays whether they may cooperate or not. In our study, we consider the regenerative relays in which the decisions to cooperate are based on a signal-to-noise ratio (SNR) threshold and consider the impact of the possible erroneously detected and transmitted data at the relays. We derive the end-to-end bit-error rate (BER) expression and its approximation for binary phase-shift keying modulation and look at two power allocation strategies between the source and the relays in order to minimize the end-to-end BER at the destination for high SNR. Some selected performance results show that computer simulations based results coincide well with our analytical results. © 2010 IEEE. DFT-based channel estimation and noise variance estimation techniques for single-carrier FDMA Huang, G; Nix, AR; Armour, SMD Practical frequency domain equalization (FDE) systems generally require knowledge of the channel and the noise variance to equalize the received signal in a frequency-selective fading channel. Accurate channel estimate and noise variance estimate are thus desirable to improve receiver performance. In this paper we investigate the performance of the denoise channel estimator and the approximate linear minimum mean square error (A-LMMSE) channel estimator with channel power delay profile (PDP) ... Nam, Sungsik; Hasna, Mazen Omar; Alouini, Mohamed-Slim in mind, we capitalize in this paper on some new order statistics results to derive exact closed-form expressions for outage probability of GSC RAKE receivers subject to self-interference over independent and identically distributed Rayleigh fading Statistical analysis on finger replacement schemes for RAKE receivers in the soft handover region with multiple BSs over i.n.d. fading channels Nam, Sung Sik A new finger replacement technique which is applicable for RAKE receivers in the soft handover region has been proposed and studied under the ideal assumption that the fading is both independent and identically distributed from path to path. To supplement our previous work, we present a general comprehensive framework for the performance assessment of the proposed finger replacement schemes operating over independent but non-identically distributed (i.n.d.) faded paths. To accomplish this object, we derive new closed-form expressions for the target key statistics which are composed of i.n.d. exponential random variables. With these new expressions, the performance analysis of various wireless communication systems over more practical channel environments can be possible. On the Symbol Error Rate of M-ary MPSK over Generalized Fading Channels with Additive Laplacian Noise This work considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations [1]. This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox\\'s H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations. © 2014 IEEE. Soury, Hamza; Alouini, Mohamed-Slim This paper considers the symbol error rate of M-ary phase shift keying (MPSK) constellations over extended Generalized-K fading with Laplacian noise and using a minimum distance detector. A generic closed form expression of the conditional and the average probability of error is obtained and simplified in terms of the Fox's H function. More simplifications to well known functions for some special cases of fading are also presented. Finally, the mathematical formalism is validated with some numerical results examples done by computer based simulations. © 2014 IEEE. Binaural frequency selectivity in humans. Verhey, Jesko L; van de Par, Steven Several behavioural studies in humans have shown that listening to sounds with two ears that is binaural hearing, provides the human auditory system with extra information on the sound source that is not available when sounds are only perceived through one ear that is monaurally. Binaural processing involves the analysis of phase and level differences between the two ear signals. As monaural cochlea processing (in each ear) precedes the neural stages responsible for binaural processing properties it is reasonable to assume that properties of the cochlea may also be observed in binaural processing. A main characteristic of cochlea processing is its frequency selectivity. In psychoacoustics, there is an ongoing discussion on the frequency selectivity of the binaural auditory system. While some psychoacoustic experiments seem to indicate poorer frequency selectivity of the binaural system than that of the monaural processing others seem to indicate the same frequency selectivity for monaural and binaural processing. This study provides an overview of these seemingly controversial results and the different explanations that were provided to account for the different results. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd. Event-Triggered Asynchronous Guaranteed Cost Control for Markov Jump Discrete-Time Neural Networks With Distributed Delay and Channel Fading. Yan, Huaicheng; Zhang, Hao; Yang, Fuwen; Zhan, Xisheng; Peng, Chen This paper is concerned with the guaranteed cost control problem for a class of Markov jump discrete-time neural networks (NNs) with event-triggered mechanism, asynchronous jumping, and fading channels. The Markov jump NNs are introduced to be close to reality, where the modes of the NNs and guaranteed cost controller are determined by two mutually independent Markov chains. The asynchronous phenomenon is considered, which increases the difficulty of designing required mode-dependent controller. The event-triggered mechanism is designed by comparing the relative measurement error with the last triggered state at the process of data transmission, which is used to eliminate dispensable transmission and reduce the networked energy consumption. In addition, the signal fading is considered for the effect of signal reflection and shadow in wireless networks, which is modeled by the novel Rice fading models. Some novel sufficient conditions are obtained to guarantee that the closed-loop system reaches a specified cost value under the designed jumping state feedback control law in terms of linear matrix inequalities. Finally, some simulation results are provided to illustrate the effectiveness of the proposed method. ASEP of MIMO System with MMSE-OSIC Detection over Weibull-Gamma Fading Channel Subject to AWGGN Keerti Tiwari Full Text Available Ordered successive interference cancellation (OSIC is adopted with minimum mean square error (MMSE detection to enhance the multiple-input multiple-output (MIMO system performance. The optimum detection technique improves the error rate performance but increases system complexity. Therefore, MMSE-OSIC detection is used which reduces error rate compared to traditional MMSE with low complexity. The system performance is analyzed in composite fading environment that includes multipath and shadowing effects known as Weibull-Gamma (WG fading. Along with the composite fading, a generalized noise that is additive white generalized Gaussian noise (AWGGN is considered to show the impact of wireless scenario. This noise model includes various forms of noise as special cases such as impulsive, Gamma, Laplacian, Gaussian, and uniform. Consequently, generalized Q-function is used to model noise. The average symbol error probability (ASEP of MIMO system is computed for 16-quadrature amplitude modulation (16-QAM using MMSE-OSIC detection in WG fading perturbed by AWGGN. Analytical expressions are given in terms of Fox-H function (FHF. These expressions demonstrate the best fit to simulation results. Asymptotic Performance Analysis of Two-Way Relaying FSO Networks with Nonzero Boresight Pointing Errors Over Double-Generalized Gamma Fading Channels Yang, Liang; Alouini, Mohamed-Slim; Ansari, Imran Shafique In this correspondence, an asymptotic performance analysis for two-way relaying free-space optical (FSO) communication systems with nonzero boresight pointing errors over double-generalized gamma fading channels is presented. Assuming amplify-and-forward (AF) relaying, two nodes having the FSO ability can communicate with each other through the optical links. With this setup, an approximate cumulative distribution function (CDF) expression for the overall signal-to-noise ratio (SNR) is presented. With this statistic distribution, we derive the asymptotic analytical results for the outage probability and average bit error rate. Furthermore, we provide the asymptotic average capacity analysis for high SNR by using the momentsbased method. Yang, Liang Fading Skies Sio, Betsy Menson A sky fading from blue to white to red at the horizon, and water darkening from light to midnight blue. Strong diagonals slashing through the image, drawing a viewer's eyes deeper into the picture, and delicate trees poised to convey a sense of beauty. These are the fascinating strengths of the ukiyo-e woodblock prints of Japanese artist Ando… Iterative Pilot-Layer Aided Channel Estimation with Emphasis on Interleave-Division Multiple Access Systems Schoeneich Hendrik; Hoeher Peter Adam Channel estimation schemes suitable for interleave-division multiple access (IDMA) systems are presented. Training and data are superimposed. Training-based and semiblind linear channel estimators are derived and their performance is discussed and compared. Monte Carlo simulation results are presented showing that the derived channel estimators in conjunction with a superimposed pilot sequence and chip-by-chip processing are able to track fast-fading frequency-selective channels. As opposed ... RLS Channel Estimation with Adaptive Forgetting Factor for DS-CDMA Frequency-Domain Equalization Kojima, Yohei; Tomeba, Hiromichi; Takeda, Kazuaki; Adachi, Fumiyuki Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can increase the downlink bit error rate (BER) performance of DS-CDMA beyond that possible with conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. Recently, we proposed a pilot-assisted channel estimation (CE) based on the MMSE criterion. Using MMSE-CE, the channel estimation accuracy is almost insensitive to the pilot chip sequence, and a good BER performance is achieved. In this paper, we propose a channel estimation scheme using one-tap recursive least square (RLS) algorithm, where the forgetting factor is adapted to the changing channel condition by the least mean square (LMS)algorithm, for DS-CDMA with FDE. We evaluate the BER performance using RLS-CE with adaptive forgetting factor in a frequency-selective fast Rayleigh fading channel by computer simulation. Analysis of Probability of Non-zero Secrecy Capacity for Multi-hop Networks in Presence of Hardware Impairments over Nakagami-m Fading Channels T.-T. Phu Full Text Available In this paper, we evaluate probability of non-zero secrecy capacity of multi-hop relay networks over Nakagami-m fading channels in presence of hardware impairments. In the considered protocol, a source attempts to transmit its data to a destination by using multi-hop randomize-and-forward (RF strategy. The data transmitted by the source and relays are overheard by an eavesdropper. For performance evaluation, we derive exact expressions of probability of non-zero secrecy capacity (PoNSC, which are expressed by sums of infinite series of exponential functions and exponential integral functions. We then perform Monte Carlo simulations to verify the theoretical analysis. On the Fast and Precise Evaluation of the Outage Probability of Diversity Receivers Over α−μ, κ−μ, and η−μ Fading Channels Ben Issaid, Chaouki; Alouini, Mohamed-Slim; Tempone, Raul In this paper, we are interested in determining the cumulative distribution function of the sum of α−μ, κ−μ, and η−μ random variables in the setting of rare event simulations. To this end, we present a simple and efficient importance sampling approach. The main result of this work is the bounded relative error property of the proposed estimators. Capitalizing on this result, we accurately estimate the outage probability of multibranch maximum ratio combining and equal gain diversity receivers over α−μ, κ−μ, and η−μ fading channels. Selected numerical simulations are discussed to show the robustness of our estimators compared to naive Monte Carlo estimators. On the Efficient Simulation of the Distribution of the Sum of Gamma-Gamma Variates with Application to the Outage Probability Evaluation Over Fading Channels The Gamma-Gamma distribution has recently emerged in a number of applications ranging from modeling scattering and reverberation in sonar and radar systems to modeling atmospheric turbulence in wireless optical channels. In this respect, assessing the outage probability achieved by some diversity techniques over this kind of channels is of major practical importance. In many circumstances, this is related to the difficult question of analyzing the statistics of a sum of Gamma- Gamma random variables. Answering this question is not a simple matter. This is essentially because outage probabilities encountered in practice are often very small, and hence the use of classical Monte Carlo methods is not a reasonable choice. This lies behind the main motivation of the present work. In particular, this paper proposes a new approach to estimate the left tail of the sum of Gamma-Gamma variates. More specifically, we propose robust importance sampling schemes that efficiently evaluates the outage probability of diversity receivers over Gamma-Gamma fading channels. The proposed estimators satisfy the well-known bounded relative error criterion for both maximum ratio combining and equal gain combining cases. We show the accuracy and the efficiency of our approach compared to naive Monte Carlo via some selected numerical simulations. The Gamma-Gamma distribution has recently emerged in a number of applications ranging from modeling scattering and reverbation in sonar and radar systems to modeling atmospheric turbulence in wireless optical channels. In this respect, assessing the outage probability achieved by some diversity techniques over this kind of channels is of major practical importance. In many circumstances, this is intimately related to the difficult question of analyzing the statistics of a sum of Gamma-Gamma random variables. Answering this question is not a simple matter. This is essentially because outage probabilities encountered in practice are often very small, and hence the use of classical Monte Carlo methods is not a reasonable choice. This lies behind the main motivation of the present work. In particular, this paper proposes a new approach to estimate the left tail of the sum of Gamma-Gamma variates. More specifically, we propose a mean-shift importance sampling scheme that efficiently evaluates the outage probability of L-branch maximum ratio combining diversity receivers over Gamma-Gamma fading channels. The proposed estimator satisfies the well-known bounded relative error criterion, a well-desired property characterizing the robustness of importance sampling schemes, for both identically and non-identically independent distributed cases. We show the accuracy and the efficiency of our approach compared to naive Monte Carlo via some selected numerical simulations. Ben Issaid, Chaouki; Ben Rached, Nadhir; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul Frequency domain based LS channel estimation in OFDM based Power line communications Bogdanović, Mario This paper is focused on low voltage power line communication (PLC) realization with an emphasis on channel estimation techniques. The Orthogonal Frequency Division Multiplexing (OFDM) scheme is preferred technology in PLC systems because of its effective combat with frequency selective fading properties of PLC channel. As the channel estimation is one of the crucial problems in OFDM based PLC system because of a problematic area of PLC signal attenuation and interference, the improved LS est... Ergodic capacity is an important performance measure associated with reliable communication at the highest rate at which information can be sent over the channel with a negligible probability of error. In the shadow of this definition, diversity Zorgui, Marwen; Rezki, Zouheir; Alomair, Basel; Alouini, Mohamed-Slim numerical optimization problem that we derive. A necessary and sufficient condition for beamforming (i.e., transmitting along the strongest channel mode) to be capacity-achieving is derived. Finally, we analyze the impact of correlation matrices Modeling cellular networks in fading environments with dominant specular components AlAmmouri, Ahmad Stochastic geometry (SG) has been widely accepted as a fundamental tool for modeling and analyzing cellular networks. However, the fading models used with SG analysis are mainly confined to the simplistic Rayleigh fading, which is extended to the Nakagami-m fading in some special cases. However, neither the Rayleigh nor the Nakagami-m accounts for dominant specular components (DSCs) which may appear in realistic fading channels. In this paper, we present a tractable model for cellular networks with generalized two-ray (GTR) fading channel. The GTR fading explicitly accounts for two DSCs in addition to the diffuse components and offers high flexibility to capture diverse fading channels that appear in realistic outdoor/indoor wireless communication scenarios. It also encompasses the famous Rayleigh and Rician fading as special cases. To this end, the prominent effect of DSCs is highlighted in terms of average spectral efficiency. © 2016 IEEE. Pilot-Assisted Channel Estimation for Orthogonal Multi-Carrier DS-CDMA with Frequency-Domain Equalization Shima, Tomoyuki; Tomeba, Hiromichi; Adachi, Fumiyuki Orthogonal multi-carrier direct sequence code division multiple access (orthogonal MC DS-CDMA) is a combination of time-domain spreading and orthogonal frequency division multiplexing (OFDM). In orthogonal MC DS-CDMA, the frequency diversity gain can be obtained by applying frequency-domain equalization (FDE) based on minimum mean square error (MMSE) criterion to a block of OFDM symbols and can improve the bit error rate (BER) performance in a severe frequency-selective fading channel. FDE requires an accurate estimate of the channel gain. The channel gain can be estimated by removing the pilot modulation in the frequency domain. In this paper, we propose a pilot-assisted channel estimation suitable for orthogonal MC DS-CDMA with FDE and evaluate, by computer simulation, the BER performance in a frequency-selective Rayleigh fading channel. Outage Performance Analysis of Relay Selection Schemes in Wireless Energy Harvesting Cooperative Networks over Non-Identical Rayleigh Fading Channels †In this paper, we study relay selection in decode-and-forward wireless energy harvesting cooperative networks. In contrast to conventional cooperative networks, the relays harvest energy from the source's radio-frequency radiation and then use that energy to forward the source information. Considering power splitting receiver architecture used at relays to harvest energy, we are concerned with the performance of two popular relay selection schemes, namely, partial relay selection (PRS) scheme and optimal relay selection (ORS) scheme. In particular, we analyze the system performance in terms of outage probability (OP) over independent and non-identical (i.n.i.d.) Rayleigh fading channels. We derive the closed-form approximations for the system outage probabilities of both schemes and validate the analysis by the Monte-Carlo simulation. The numerical results provide comprehensive performance comparison between the PRS and ORS schemes and reveal the effect of wireless energy harvesting on the outage performances of both schemes. Additionally, we also show the advantages and drawbacks of the wireless energy harvesting cooperative networks and compare to the conventional cooperative networks. PMID:26927119 Indoor radio channel modeling and mitigation of fading effects using linear and circular polarized antennas in combination for smart home system at 868 MHz Wunderlich, S.; Welpot, M.; Gaspard, I. The markets for smart home products and services are expected to grow over the next years, driven by the increasing demands of homeowners considering energy monitoring, management, environmental controls and security. Many of these new systems will be installed in existing homes and offices and therefore using radio based systems for cost reduction. A drawback of radio based systems in indoor environments are fading effects which lead to a high variance of the received signal strength and thereby to a difficult predictability of the encountered path loss of the various communication links. For that reason it is necessary to derive a statistical path loss model which can be used to plan a reliable and cost effective radio network. This paper presents the results of a measurement campaign, which was performed in six buildings to deduce realistic radio channel models for a high variety of indoor radio propagation scenarios in the short range devices (SRD) band at 868 MHz. Furthermore, a potential concept to reduce the variance of the received signal strength using a circular polarized (CP) patch antenna in combination with a linear polarized antenna in an one-to-one communication link is presented. On the performance of dual-hop mixed RF/FSO wireless communication system in urban area over aggregated exponentiated Weibull fading channels with pointing errors Wang, Yue; Wang, Ping; Liu, Xiaoxia; Cao, Tian The performance of decode-and-forward dual-hop mixed radio frequency / free-space optical system in urban area is studied. The RF link is modeled by the Nakagami-m distribution and the FSO link is described by the composite exponentiated Weibull (EW) fading channels with nonzero boresight pointing errors (NBPE). For comparison, the ABER results without pointing errors (PE) and those with zero boresight pointing errors (ZBPE) are also provided. The closed-form expression for the average bit error rate (ABER) in RF link is derived with the help of hypergeometric function, and that in FSO link is obtained by Meijer's G and generalized Gauss-Laguerre quadrature functions. Then, the end-to-end ABERs with binary phase shift keying modulation are achieved on the basis of the computed ABER results of RF and FSO links. The end-to-end ABER performance is further analyzed with different Nakagami-m parameters, turbulence strengths, receiver aperture sizes and boresight displacements. The result shows that with ZBPE and NBPE considered, FSO link suffers a severe ABER degradation and becomes the dominant limitation of the mixed RF/FSO system in urban area. However, aperture averaging can bring significant ABER improvement of this system. Monte Carlo simulation is provided to confirm the validity of the analytical ABER expressions. Scheduling for dual-hop block-fading channels with two source-user pairs sharing one relay In this paper, we maximize the achievable rate region of a dual-hop network with two sources serving two users independently through a single shared relay. We formulate the problem as maximizing the sum of the weighted long term average throughputs of the two users under stability constraints on the long term throughputs of the source-user pairs. In order to solve the problem, we propose a joint user-and-hop scheduling scheme, which schedules the first or second hop opportunistically based on instantaneous channel state information, in order to exploit multiuser diversity and multihop diversity gains. Numerical results show that the proposed joint scheduling scheme enhances the achievable rate region as compared to a scheme that employs multi-user scheduling on the second-hop alone. Copyright © 2013 by the Institute of Electrical and Electronic Engineers, Inc. Performance Analysis of Wavelet Channel Coding in COST207-based Channel Models on Simulated Radio-over-Fiber Systems at the W-Band Cavalcante, Lucas Costa Pereira; Silveira, Luiz F. Q.; Rommel, Simon Millimeter wave communications based on photonic technologies have gained increased attention to provide optic fiber-like capacity in wireless environments. However, the new hybrid fiber-wireless channel represents new challenges in terms of signal transmission performance analysis. Traditionally......, such systems use diversity schemes in combination with digital signal processing (DSP) techniques to overcome effects such as fading and inter-symbol interference (ISI). Wavelet Channel Coding (WCC) has emerged as a technique to minimize the fading effects of wireless channels, which is a mayor challenge...... in systems operating in the millimeter wave regime. This work takes the WCC one step beyond by performance evaluation in terms of bit error probability, over time-varying, frequency-selective multipath Rayleigh fading channels. The adopted propagation model follows the COST207 norm, the main international... Schoeneich Hendrik Full Text Available Channel estimation schemes suitable for interleave-division multiple access (IDMA systems are presented. Training and data are superimposed. Training-based and semiblind linear channel estimators are derived and their performance is discussed and compared. Monte Carlo simulation results are presented showing that the derived channel estimators in conjunction with a superimposed pilot sequence and chip-by-chip processing are able to track fast-fading frequency-selective channels. As opposed to conventional channel estimation techniques, the BER performance even improves with increasing Doppler spread for typical system parameters. An error performance close to the case of perfect channel knowledge can be achieved with high power efficiency. Nam, Sung Sik; Ko, Young-Chai; Alouini, Mohamed-Slim; Choi, Seyeong . To supplement our previous work, we present a general comprehensive framework for the performance assessment of the proposed finger replacement schemes operating over independent but non-identically distributed (i.n.d.) faded paths. To accomplish this object, we 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance of direct sequence code division multiple access (DS-CDMA) than the conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot assisted MMSE-CE is confirmed by computer simulation. Channel Estimation in DCT-Based OFDM Wang, Yulin; Zhang, Gengxin; Xie, Zhidong; Hu, Jing This paper derives the channel estimation of a discrete cosine transform- (DCT-) based orthogonal frequency-division multiplexing (OFDM) system over a frequency-selective multipath fading channel. Channel estimation has been proved to improve system throughput and performance by allowing for coherent demodulation. Pilot-aided methods are traditionally used to learn the channel response. Least square (LS) and mean square error estimators (MMSE) are investigated. We also study a compressed sensing (CS) based channel estimation, which takes the sparse property of wireless channel into account. Simulation results have shown that the CS based channel estimation is expected to have better performance than LS. However MMSE can achieve optimal performance because of prior knowledge of the channel statistic. PMID:24757439 Fading and shadowing in wireless systems Shankar, P Mohana This book offers a comprehensive overview of fading and shadowing in wireless channels. A number of statistical models including simple, hybrid, compound and cascaded ones are presented along with a detailed discussion of diversity techniques employed to mitigate the effects of fading and shadowing. The effects of co-channel interference before and after the implementation of diversity are also analyzed. To facilitate easy understanding of the models and the analysis, the background on probability and random variables is presented with relevant derivations of densities of the sums, products, ratios as well as order statistics of random variables. The book also provides material on digital modems of interest in wireless systems. The updated edition expands the background materials on probability by offering sections on Laplace and Mellin transforms, parameter estimation, statistical testing and receiver operating characteristics. Newer models for fading, shadowing and shadowed fading are included along with th... Fading and interference mitigation in wireless communications Panic, Stefan; Anastasov, Jelena; Spalevic, Petar The rapid advancement of various wireless communication system services has created the need to analyze the possibility of their performance improvement. Introducing the basic principles of digital communications performance analysis and its mathematical formalization, Fading and Interference Mitigation in Wireless Communications will help you stay up to date with recent developments in the performance analysis of space diversity reception over fading channels in the presence of cochannel interference.The book presents a unified method for computing the performance of digital communication sys Frequency selectivity at very low centre frequencies Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Marquardt, Torsten measurements based on OAE suppression techniques and notched-noise masking data psychophysically measured for centre frequencies in the range 50-125 Hz, this study examines how individual differences in frequency selectivity, as well as in masking, may occur at very low CFs due to individual differences... In recent decades, growth in the field of wireless communications has led to an exponential rise in the number of journals catering to the research community. Still unmet, however, is the need to fully and comprehensively understand the effects of channel degradation brought on by the statistical fluctuations in the channel. These fluctuations mainly manifest as variations in signal power observed in the channel generally modeled using a variety of probability distributions, both in straight forms as well as in compound forms. While the former might explain some of the effects, it is the latter, namely, the compound models, which incorporate both short term and long term power fluctuations in the channel, explain the much more complex nature of the signals in these channels. Fading and Shadowing in Wireless Systems offers a pedagogical approach to the topic, with insight into the modeling and analysis of fading and shadowing. Beginning with statistical background and digital communications, the book is formul... Alammouri, Ahmad; Elsawy, Hesham; Salem, Ahmed Sultan; Di Renzo, Marco; Alouini, Mohamed-Slim to the Nakagami-m fading in some special cases. However, neither the Rayleigh nor the Nakagami-m accounts for dominant specular components (DSCs) which may appear in realistic fading channels. In this paper, we present a tractable model for cellular networks Theoretical analysis and simulation of a code division multiple access system (cdma for secure signal transmission in wideband channels Stevan M. Berber Full Text Available Chaotic spreading sequences can increase secrecy and resistance to interception in signal transmission. Chaos-based CDMA systems have been well investigated in the case of flat fading and noise presence in the channel. However, these systems operating in wideband channels, characterized by the frequency selective fading and white Gaussian noise, have not been investigated to the level of understanding their practical applications. This paper presents a detailed mathematical model of a CDMA system based on chaotic spreading sequences. In a theoretical analysis, all signals are represented in the discrete time domain. Using the theory of discrete time stochastic processes, the probability of error expressions are derived in a closed form for a multi-user chaos based CDMA system. For the sake of comparison, the expressions for the probability of error are derived separately for narrowband and wideband channels. The application of the system interleaving technique is investigated in particular, which showed that this technique can substantially improve probability of error in the system. The system is simulated and the findings of the simulation confirmed theoretically expected results. Possible improvements in the probability of bit error due to multipath channel nature, with and without interleavers, are quantified depending on the random delay and the number of users in the system. In the analyzed system, a simplified version of the wideband channel model, proposed for modern wideband wireless networks, is used. Introduction Over the past years, the demand for wireless communications has increased substantially due to advancements in mobile communication systems and networks. Following these increasing demands, modern communication systems require the ability to handle a large number of users to process and transmit wideband signals through complex frequency selective channels. One of the techniques for transmission of multi-user signals is the Multivariate η-μ fading distribution with arbitrary correlation model Ghareeb, Ibrahim; Atiani, Amani An extensive analysis for the multivariate ? distribution with arbitrary correlation is presented, where novel analytical expressions for the multivariate probability density function, cumulative distribution function and moment generating function (MGF) of arbitrarily correlated and not necessarily identically distributed ? power random variables are derived. Also, this paper provides exact-form expression for the MGF of the instantaneous signal-to-noise ratio at the combiner output in a diversity reception system with maximal-ratio combining and post-detection equal-gain combining operating in slow frequency nonselective arbitrarily correlated not necessarily identically distributed ?-fading channels. The average bit error probability of differentially detected quadrature phase shift keying signals with post-detection diversity reception system over arbitrarily correlated and not necessarily identical fading parameters ?-fading channels is determined by using the MGF-based approach. The effect of fading correlation between diversity branches, fading severity parameters and diversity level is studied. On Outage Performance of Spectrum-Sharing Communication over M-Block Fading Alabbasi, AbdulRahman; Rezki, Zouheir; Shihada, Basem In this paper, we consider a cognitive radio system in which a block-fading channel is assumed. Each transmission frame consists of M blocks and each block undergoes a different channel gain. Instantaneous channel state information about Performance Analysis of Rayleigh Fading and Cochannel Interference in Wireless Communication Gao, Chunjun ...) performance of adaptive arrays for wireless communications over fading channels in the presence of cochannel interference, particularly the case when the number of interference sources exceeds... Frequency Selective Surface for Structural Health Monitoring Norlyana Azemi, Saidatul; Mustaffa, Farzana Hazira Wan; Faizal Jamlos, Mohd; Abdullah Al-Hadi, Azremi; Soh, Ping Jack Structural health monitoring (SHM) technologies have attained attention to monitor civil structures. SHM sensor systems have been used in various civil structures such as bridges, buildings, tunnels and so on. However the previous sensor for SHM is wired and encounter with problem to cover large areas. Therefore, wireless sensor was introduced for SHM to reduce network connecting problem. Wireless sensors for Structural Health monitoring are new technology and have many advantages to overcome the drawback of conventional and wired sensor. This project proposed passive wireless SHM sensor using frequency selective surface (FSS) as an alternative to conventional sensors. The electromagnetic wave characteristic of FSS will change by geometrical changes of FSS due to mechanical strain or structural failure. The changes feature is used as a sensing function without any connecting wires. Two type of design which are circular ring and square loop along with the transmission and reflection characteristics of SHM using FSS were discussed in this project. A simulation process has shown that incident angle characteristics can be use as a data for SHM application. the outage probability achieved by some diversity techniques over this kind of channels is of major practical importance. In many circumstances, this is related to the difficult question of analyzing the statistics of a sum of Gamma- Gamma random variables Digital Communications Over Non-Fading and Fading Channels Hernandez, Jr, Jose H .... In this thesis, the probability of bit error expressions for many common digital modulation techniques, both binary and non-binary, are inverted to obtain expressions for the required signal-to-noise... A study of the impact of frequency selectivity on link adaptive wireless LAN systems Armour, SMD; Doufexi, A; Nix, AR; Bull, DR Wireless local area networks (WLANs) supporting broadband multimedia communication are being developed and standardized around the world. The HIPERLAN/2, 802.11a and HiSWANa standards provide channel adaptive data rates between 6 and 54 Mbps in the 5GHz radio band. The link adaptation mechanism is not specified in the standards. In this paper the performance of the HIPERLAN/2 system is evaluated in terms of throughput in a range of test channels with different degrees of frequency selectivity... Multiuser Diversity with Adaptive Modulation in Non-Identically Distributed Nakagami Fading Environments Rao, Anlei; Alouini, Mohamed-Slim In this paper, we analyze the performance of adaptive modulation with single-cell multiuser scheduling over independent but not identical distributed (i.n.i.d.) Nakagami fading channels. Closed-form expressions are derived for the average channel Chip-Level Channel Equalization in WCDMA Downlink Kari Hooli Full Text Available The most important third generation (3G cellular communications standard is based on wideband CDMA (WCDMA. Receivers based on TDMA style channel equalization at the chip level have been proposed for a WCDMA downlink employing long spreading sequences to ensure adequate performance even with a high number of active users. These receivers equalize the channel prior to despreading, thus restoring the orthogonality of users and resulting in multiple-access interference (MAI suppression. In this paper, an overview of chip-level channel equalizers is delivered with special attention to adaptation methods suitable for the WCDMA downlink. Numerical examples on the equalizers′ performance are given in Rayleigh fading frequency-selective channels. Error Probability of Binary and -ary Signals with Spatial Diversity in Nakagami- (Hoyt) Fading Channels Duong Trung Q; Shin Hyundong; Hong Een-Kee We analyze the exact average symbol error probability (SEP) of binary and -ary signals with spatial diversity in Nakagami- (Hoyt) fading channels. The maximal-ratio combining and orthogonal space-time block coding are considered as diversity techniques for single-input multiple-output and multiple-input multiple-output systems, respectively. We obtain the average SEP in terms of the Lauricella multivariate hypergeometric function . The analysis is verified by comparing with Monte Carlo simu... Physical Layer Design in Wireless Sensor Networks for Fading Mitigation Nuo Chen Full Text Available This paper presents the theoretical analysis, simulation results and suggests design in digital technology of a physical layer for wireless sensor networks. The proposed design is able to mitigate fading inside communication channel. To mitigate fading the chip interleaving technique is proposed. For the proposed theoretical model of physical layer, a rigorous mathematical analysis is conducted, where all signals are presented and processed in discrete time domain form which is suitable for further direct processing necessary for devices design in digital technology. Three different channels are used to investigate characteristics of the physical layer: additive white Gaussian noise channel (AWGN, AWG noise and flat fading channel and AWG noise and flat fading channel with interleaver and deinterleaver blocks in the receiver and transmitter respectively. Firstly, the mathematical model of communication system representing physical layer is developed based on the discrete time domain signal representation and processing. In the existing theory, these signals and their processing are represented in continuous time form, which is not suitable for direct implementation in digital technology. Secondly, the expressions for the probability of chip, symbol and bit error are derived. Thirdly, the communication system simulators are developed in MATLAB. The simulation results confirmed theoretical findings. Frequency selective surfaces integrated with phased array antennas Monni, S. Frequency Selective Surfaces (FSS's) are periodic arrays of patches and/or slots etched on a metal plate, having frequency and angular ??ltering properties. The FSS response to an excitation (for example a plane wave) is characterized in terms of its re ection and transmission coe??cient, and Consequences of peripheral frequency selectivity for nonsimultaneous masking Duifhuis, H. The frequency selectivity of the peripheral ear (e.g., at the VIIIth nerve level) is so acute that onset and offset transients in responses to short signals produce a nonnegligible extension of the signal duration. Thus, peripheral excitation patterns produced by signals which were separated in time Characteristics of spectro-temporal modulation frequency selectivity in humans. Oetjen, Arne; Verhey, Jesko L There is increasing evidence that the auditory system shows frequency selectivity for spectro-temporal modulations. A recent study of the authors has shown spectro-temporal modulation masking patterns that were in agreement with the hypothesis of spectro-temporal modulation filters in the human auditory system [Oetjen and Verhey (2015). J. Acoust. Soc. Am. 137(2), 714-723]. In the present study, that experimental data and additional data were used to model this spectro-temporal frequency selectivity. The additional data were collected to investigate to what extent the spectro-temporal modulation-frequency selectivity results from a combination of a purely temporal amplitude-modulation filter and a purely spectral amplitude-modulation filter. In contrast to the previous study, thresholds were measured for masker and target modulations with opposite directions, i.e., an upward pointing target modulation and a downward pointing masker modulation. The comparison of this data set with previous corresponding data with the same direction from target and masker modulations indicate that a specific spectro-temporal modulation filter is required to simulate all aspects of spectro-temporal modulation frequency selectivity. A model using a modified Gabor filter with a purely temporal and a purely spectral filter predicts the spectro-temporal modulation masking data. Cavity-soliton laser with frequency-selective feedback Scroggie, A. J.; Firth, W. J.; Oppo, G.-L. We present a coupled-cavity model of a laser with frequency-selective feedback, and use it to analyze and explain the existence of stationary and dynamic spatial solitons in the device. Particular features of soliton addressing in this system are discussed. We demonstrate the advantages of our model with respect to the common Lang-Kobayashi approximation. Frequency Selective Surfaces for extended Bandwidth backing reflector functions Pasian, M.; Neto, A.; Monni, S.; Ettorre, M.; Gerini, G. This paper deals with the use of Frequency Selective Surfaces (FSS) to increase the Efficiency × Bandwidth product in Ultra-Wide Band (UWB) antenna arrays whose efficiency is limited by the front-to-back ratio. If the backing reflector is realized in one metal plane solution its location will be An Alternative Method to Compute the Bit Error Probability of Modulation Schemes Subject to Nakagami- Fading Madeiro Francisco Full Text Available Abstract This paper presents an alternative method for determining exact expressions for the bit error probability (BEP of modulation schemes subject to Nakagami- fading. In this method, the Nakagami- fading channel is seen as an additive noise channel whose noise is modeled as the ratio between Gaussian and Nakagami- random variables. The method consists of using the cumulative density function of the resulting noise to obtain closed-form expressions for the BEP of modulation schemes subject to Nakagami- fading. In particular, the proposed method is used to obtain closed-form expressions for the BEP of -ary quadrature amplitude modulation ( -QAM, -ary pulse amplitude modulation ( -PAM, and rectangular quadrature amplitude modulation ( -QAM under Nakagami- fading. The main contribution of this paper is to show that this alternative method can be used to reduce the computational complexity for detecting signals in the presence of fading. Earth-Space Links and Fade-Duration Statistics Davarian, Faramaz In recent years, fade-duration statistics have been the subject of several experimental investigations. A good knowledge of the fade-duration distribution is important for the assessment of a satellite communication system's channel dynamics: What is a typical link outage duration? How often do link outages exceeding a given duration occur? Unfortunately there is yet no model that can universally answer the above questions. The available field measurements mainly come from temperate climatic zones and only from a few sites. Furthermore, the available statistics are also limited in the choice of frequency and path elevation angle. Yet, much can be learned from the available information. For example, we now know that the fade-duration distribution is approximately lognormal. Under certain conditions, we can even determine the median and other percentiles of the distribution. This paper reviews the available data obtained by several experimenters in different parts of the world. Areas of emphasis are mobile and fixed satellite links. Fades in mobile links are due to roadside-tree shadowing, whereas fades in fixed links are due to rain attenuation. On the MIMO Capacity for Distributed System under Composite Rayleigh/Rician Fading and Shadowing Santiago González-Aurioles Full Text Available Wireless channels are commonly affected by short-term fading and long-term fading (shadowing. The shadowing effects must be taken into account also when mobility is present in the wireless scenario. Using a composite fading model, the total channel capacity can be studied for a scenario with short-term Rayleigh fading along with shadowing. This work provides quantitative results for these kinds of scenarios with Rayleigh fading and shadowing, considering also multiple-input and multiple-output systems, which have not been previously reported. In addition, the channel capacity has been studied in depth in its relation with the shadowing level, signal to noise ratio, and the number of elements in the multiple-input and multiple-output system. Moreover, the channel performance with shadowing has been compared to the one without it. Furthermore, Rician model with shadowing is studied and its results are reported. In addition, correlated and experimental results are provided. It is identified that the distributed MIMO systems can benefit from shadowing in Rician channels. This advantage has not been reported previously. This type of fading is proposed for massive MIMO by others and our results open the door to emulate massive MIMO on a reverberation chamber. Compressive Sensing Based Bayesian Sparse Channel Estimation for OFDM Communication Systems: High Performance and Low Complexity Xu, Li; Shan, Lin; Adachi, Fumiyuki In orthogonal frequency division modulation (OFDM) communication systems, channel state information (CSI) is required at receiver due to the fact that frequency-selective fading channel leads to disgusting intersymbol interference (ISI) over data transmission. Broadband channel model is often described by very few dominant channel taps and they can be probed by compressive sensing based sparse channel estimation (SCE) methods, for example, orthogonal matching pursuit algorithm, which can take the advantage of sparse structure effectively in the channel as for prior information. However, these developed methods are vulnerable to both noise interference and column coherence of training signal matrix. In other words, the primary objective of these conventional methods is to catch the dominant channel taps without a report of posterior channel uncertainty. To improve the estimation performance, we proposed a compressive sensing based Bayesian sparse channel estimation (BSCE) method which cannot only exploit the channel sparsity but also mitigate the unexpected channel uncertainty without scarifying any computational complexity. The proposed method can reveal potential ambiguity among multiple channel estimators that are ambiguous due to observation noise or correlation interference among columns in the training matrix. Computer simulations show that proposed method can improve the estimation performance when comparing with conventional SCE methods. PMID:24983012 Analysis of Various Frequency Selective Shielding Glass by FDTD method ç¬ å¶‹, 善憲; Kasashima, Yoshinori A frequency Selective shielding (FSS) glass is a print of many same size antennas on a sheet of glass, and it has high shielding properties for one specific frequency. This time, the author analyzed characteristics of various FSSs whose antenna types are different by FDTD method. The antenna types are cross dipole, circular loop, square loop, circular patch, and square patch. As the result, the FSSs can be composed of the various types of the antennas, and the FSSs have broad-band shielding c... Frequency selective bolometer development at Argonne National Laboratory Datesman, Aaron; Pearson, John; Wang, Gensheng; Yefremenko, Volodymyr; Divan, Ralu; Downes, Thomas; Chang, Clarence; McMahon, Jeff; Meyer, Stephan; Carlstrom, John; Logan, Daniel; Perera, Thushara; Wilson, Grant; Novosad, Valentyn We discuss the development, at Argonne National Laboratory, of a four-pixel camera suitable for photometry of distant dusty galaxies located by Spitzer and SCUBA, and for study of other millimeter-wave sources such as ultra-luminous infrared galaxies, the Sunyaev-Zeldovich (SZ) effect in clusters, and galactic dust. Utilizing Frequency Selective Bolometers (FSBs) with superconducting Transition-Edge Sensors (TESs), each of the camera's four pixels is sensitive to four colors, with frequency bands centered approximately at 150, 220, 270, and 360 GHz. The current generation of these devices utilizes proximity effect superconducting bilayers of Mo/Au or Ti/Au for TESs, along with frequency selective circuitry on membranes of silicon nitride 1 cm across and 1 micron thick. The operational properties of these devices are determined by this circuitry, along with thermal control structures etched into the membranes. These etched structures do not perforate the membrane, so that the device is both comparatively robust mechanically and carefully tailored in terms of its thermal transport properties. In this paper, we report on development of the superconducting bilayer TES technology and characterization of the FSB stacks. This includes the use of new materials, the design and testing of thermal control structures, the introduction of desirable thermal properties using buried layers of crystalline silicon underneath the membrane, detector stability control, and optical and thermal test results. The scientific motivation, FSB design, FSB fabrication, and measurement results are discussed. Effect of efferent activation on binaural frequency selectivity. Verhey, Jesko L; Kordus, Monika; Drga, Vit; Yasin, Ifat Binaural notched-noise experiments indicate a reduced frequency selectivity of the binaural system compared to monaural processing. The present study investigates how auditory efferent activation (via the medial olivocochlear system) affects binaural frequency selectivity in normal-hearing listeners. Thresholds were measured for a 1-kHz signal embedded in a diotic notched-noise masker for various notch widths. The signal was either presented in phase (diotic) or in antiphase (dichotic), gated with the noise. Stimulus duration was 25 ms, in order to avoid efferent activation due to the masker or the signal. A bandpass-filtered noise precursor was presented prior to the masker and signal stimuli to activate the efferent system. The silent interval between the precursor and the masker-signal complex was 50 ms. For comparison, thresholds for detectability of the masked signal were also measured in a baseline condition without the precursor and, in addition, without the masker. On average, the results of the baseline condition indicate an effectively wider binaural filter, as expected. For both signal phases, the addition of the precursor results in effectively wider filters, which is in agreement with the hypothesis that cochlear gain is reduced due to the presence of the precursor. Copyright © 2017 Elsevier B.V. All rights reserved. Secure Broadcasting with Uncertain Channel State Information Hyadi, Amal; Rezki, Zouheir; Khisti, Ashish; Alouini, Mohamed-Slim -main CSI are also analyzed. Analytical derivations and numerical results are presented to illustrate the obtained expressions for the case of independent and identically distributed Rayleigh fading channels. Observations of anomalous fading in maiolica Bowman, S.G.E. In the course of an authenticity study on Italian maiolica (tin-glazed earthenware of the Renaissance period), storage at elevated temperature was used to accelerate anomalous fading. Substantial levels of fading were observed in about half of the samples, and in these cases the variation of fading with glow curve temperature accounted for the lack of an equivalent dose plateau. Some evidence was found for a difference in the fading between alpha and beta induced thermoluminescence (TL). More importantly, some samples with unstable natural TL were found: the implications of this for dating and the circumvention of fading are discussed. (author) MGF Approach to the Analysis of Generalized Two-Ray Fading Models Rao, Milind; Lopez-Martinez, F. Javier; Alouini, Mohamed-Slim; Goldsmith, Andrea We analyze a class of Generalized Two-Ray (GTR) fading channels that consist of two line of sight (LOS) components with random phase plus a diffuse component. We derive a closedform expression for the moment generating function (MGF) of the signal-to-noise ratio (SNR) for this model, which greatly simplifies its analysis. This expression arises from the observation that the GTR fading model can be expressed in terms of a conditional underlying Rician distribution. We illustrate the approach to derive simple expressions for statistics and performance metrics of interest such as the amount of fading, the level crossing rate, the symbol error rate, and the ergodic capacity in GTR fading channels. We also show that the effect of considering a more general distribution for the phase difference between the LOS components has an impact on the average SNR. Mechanical stress-controlled tunable active frequency-selective surface Huang, Bo-Cin; Hong, Jian-Wei; Lo, Cheng-Yao This study proposes a tunable active frequency-selective surface (AFSS) realized by mechanically expanding or contracting a split-ring resonator (SRR) array. The proposed AFSS transfers mechanical stress from its elastic substrate to the top of the SRR, thereby achieving electromagnetic (EM) modulation without the need for an additional external power supply, meeting the requirements for the target application: the invisibility cloak. The operating mechanism of the proposed AFSS differs from those of other AFSSs, supporting modulations in arbitrary frequencies in the target range. The proposed stress-controlled or strain-induced EM modulation proves the existence of an identical and linear relationship between the strain gradient and the frequency shift, implying its suitability for other EM modulation ranges and applications. Nanoparticle array based optical frequency selective surfaces: theory and design. Saeidi, Chiya; van der Weide, Daniel We demonstrate a synthesis procedure for designing a bandstop optical frequency selective surface (FSS) composed of nanoparticle (NP) elements. The proposed FSS uses two-dimensional (2-D) periodic arrays of NPs with subwavelength unit-cell dimensions. We derive equivalent circuit for a nanoparticle array (NPA) using the closed-form solution for a 2-D NPA excited by a plane wave in the limit of the dipole approximation, which includes contribution from both individual and collective plasmon modes. Using the extracted equivalent circuit, we demonstrate synthesis of an optical FSS using cascaded NPA layers as coupled resonators, which we validate with both circuit model and full-wave simulation for a third-order Butterworth bandstop prototype. Frequency selective surfaces based high performance microstrip antenna Narayan, Shiv; Jha, Rakesh Mohan This book focuses on performance enhancement of printed antennas using frequency selective surfaces (FSS) technology. The growing demand of stealth technology in strategic areas requires high-performance low-RCS (radar cross section) antennas. Such requirements may be accomplished by incorporating FSS into the antenna structure either in its ground plane or as the superstrate, due to the filter characteristics of FSS structure. In view of this, a novel approach based on FSS technology is presented in this book to enhance the performance of printed antennas including out-of-band structural RCS reduction. In this endeavor, the EM design of microstrip patch antennas (MPA) loaded with FSS-based (i) high impedance surface (HIS) ground plane, and (ii) the superstrates are discussed in detail. The EM analysis of proposed FSS-based antenna structures have been carried out using transmission line analogy, in combination with the reciprocity theorem. Further, various types of novel FSS structures are considered in desi... Optically transparent frequency selective surfaces on flexible thin plastic substrates Dewani, Aliya A., E-mail: [email protected]; O'Keefe, Steven G.; Thiel, David V.; Galehdar, Amir [School Of Electrical Engineering, Griffith University, Brisbane, 4111 (Australia) A novel 2D simple low cost frequency selective surface was screen printed on thin (0.21 mm), flexible transparent plastic substrate (relative permittivity 3.2). It was designed, fabricated and tested in the frequency range 10-20 GHz. The plane wave transmission and reflection coefficients agreed with numerical modelling. The effective permittivity and thickness of the backing sheet has a significant effect on the frequency characteristics. The stop band frequency reduced from 15GHz (no backing) to 12.5GHz with polycarbonate. The plastic substrate thickness beyond 1.8mm has minimal effect on the resonant frequency. While the inner element spacing controls the stop-band frequency, the substrate thickness controls the bandwidth. The screen printing technique provided a simple, low cost FSS fabrication method to produce flexible, conformal, optically transparent and bio-degradable FSS structures which can find their use in electromagnetic shielding and filtering applications in radomes, reflector antennas, beam splitters and polarizers. Dewani, Aliya A.; O'Keefe, Steven G.; Thiel, David V.; Galehdar, Amir Tunable antenna radome based on graphene frequency selective surface Qu, Meijun; Rao, Menglou; Li, Shufang; Deng, Li In this paper, a graphene-based frequency selective surface (FSS) is proposed. The proposed FSS exhibits a tunable bandpass filtering characteristic due to the alterable conductivity of the graphene strips which is controlled by chemical potential. Based on the reconfigurable bandpass property of the proposed FSS, a cylindrical antenna radome is designed using the FSS unit cells. A conventional omnidirectional dipole can realize a two-beam directional pattern when it is placed into the proposed antenna radome. Forward and backward endfire radiations of the dipole loaded with the radome is realized by properly adjusting the chemical potential. The proposed antenna radome is extremely promising for beam-scanning in terahertz and mid-infrared plasmonic devices and systems when the gain of a conventional antenna needs to be enhanced. Cooperative AF Relaying in Spectrum-Sharing Systems: Performance Analysis under Average Interference Power Constraints and Nakagami-m Fading Xia, Minghua; Aissa, Sonia the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-$m$ fading parameters of interference channels (for mathematical A Study of an Iterative Channel Estimation Scheme of FS-FBMC System YongJu Won Full Text Available A filter bank multicarrier on offset-quadrature amplitude modulation (FBMC/OQAM system is an alternative multicarrier modulation scheme that does not need cyclic prefix (CP even in the presence of a multipath fading channel by the properties of prototype filter. The FBMC/OQAM system can be implemented either by using the poly-phase network with fast fourier transform (PPN-FFT or by using the extended FFT on a frequency-spreading (FS domain. In this paper, we propose an iterative channel estimation scheme for each sub channel of a FBMC/OQAM system over a frequency-spreading domain. The proposed scheme first estimates the channel using the received pilot signal in the subchannel domain and interpolates the estimated channel to fine frequency-spreading domain. Then the channel compensated FS domain pilot is despread again to modify the channel state information (CSI estimation. Computer simulation shows that the proposed method outperforms the conventional FBMC/OQAM channel estimator in a frequency selective channel. Fade Mitigation Techniques at Ka-Band Dissanayake, Asoka (Editor) Rain fading is the dominant propagation impairment affecting Ka-band satellite links and rain fade mitigation is a key element in the design of Ka-band satellite networks. Some of the common fade mitigation techniques include: power control, diversity, adaptive coding, and resource sharing. The Advanced Communications Technology Satellite (ACTS) provides an excellent opportunity to develop and test Ka-band rain impairment amelioration techniques. Up-link power control and diversity are discussed in this paper. Dual-band frequency selective surface with large band separation and stable performance Zhou, Hang; Qu, Shao-Bo; Peng, Wei-Dong; Lin, Bao-Qin; Wang, Jia-Fu; Ma, Hua; Zhang, Jie-Qiu; Bai, Peng; Wang, Xu-Hua; Xu, Zhuo A new technique of designing a dual-band frequency selective surface with large band separation is presented. This technique is based on a delicately designed topology of L- and Ku-band microwave filters. The two band-pass responses are generated by a capacitively-loaded square-loop frequency selective surface and an aperture-coupled frequency selective surface, respectively. A Faraday cage is located between the two frequency selective surface structures to eliminate undesired couplings. Based on this technique, a dual-band frequency selective surface with large band separation is designed, which possesses large band separation, high selectivity, and stable performance under various incident angles and different polarizations. Aliya A. Dewani Full Text Available A novel 2D simple low cost frequency selective surface was screen printed on thin (0.21 mm, flexible transparent plastic substrate (relative permittivity 3.2. It was designed, fabricated and tested in the frequency range 10-20 GHz. The plane wave transmission and reflection coefficients agreed with numerical modelling. The effective permittivity and thickness of the backing sheet has a significant effect on the frequency characteristics. The stop band frequency reduced from 15GHz (no backing to 12.5GHz with polycarbonate. The plastic substrate thickness beyond 1.8mm has minimal effect on the resonant frequency. While the inner element spacing controls the stop-band frequency, the substrate thickness controls the bandwidth. The screen printing technique provided a simple, low cost FSS fabrication method to produce flexible, conformal, optically transparent and bio-degradable FSS structures which can find their use in electromagnetic shielding and filtering applications in radomes, reflector antennas, beam splitters and polarizers. Spectro-temporal modulation masking patterns reveal frequency selectivity. The present study investigated the possibility that the human auditory system demonstrates frequency selectivity to spectro-temporal amplitude modulations. Threshold modulation depth for detecting sinusoidal spectro-temporal modulations was measured using a generalized masked threshold pattern paradigm with narrowband masker modulations. Four target spectro-temporal modulations were examined, differing in their temporal and spectral modulation frequencies: a temporal modulation of -8, 8, or 16 Hz combined with a spectral modulation of 1 cycle/octave and a temporal modulation of 4 Hz combined with a spectral modulation of 0.5 cycles/octave. The temporal center frequencies of the masker modulation ranged from 0.25 to 4 times the target temporal modulation. The spectral masker-modulation center-frequencies were 0, 0.5, 1, 1.5, and 2 times the target spectral modulation. For all target modulations, the pattern of average thresholds for the eight normal-hearing listeners was consistent with the hypothesis of a spectro-temporal modulation filter. Such a pattern of modulation-frequency sensitivity was predicted on the basis of psychoacoustical data for purely temporal amplitude modulations and purely spectral amplitude modulations. An analysis of separability indicates that, for the present data set, selectivity in the spectro-temporal modulation domain can be described by a combination of a purely spectral and a purely temporal modulation filter function. Substrate integrated ferrite phase shifters and active frequency selective surfaces Cahill, B.M. There are two distinct parts to this thesis; the first investigates the use of ferrite tiles in the construction of printed phase shifting transmission lines, culminating in the design of two compact electromagnetic controlled beam steered patch and slot antenna arrays. The second part investigates the use of active frequency selective surfaces (AFSS), which are later used to cover a uPVC constructed enclosure. Field intensity measurements are taken from within the enclosure to determine the dynamic screening effectiveness. Trans Tech G-350 Ferrite is investigated to determine its application in printed microstrip and stripline phase shifting transmission lines. 50-Ohm transmission lines are constructed using the ferrite tile and interfaced to Rogers RT Duroid 5870 substrate. Scattering parameter measurements are made under the application of variable magnetic fields to the ferrite. Later, two types of planar microwave beam steering antennas are constructed. The first uses the ferrites integrated into the Duroid as microstrip lines with 3 patch antennas as the radiating elements. The second uses stripline transmission lines, with slot antennas as the radiating sources etched into the ground plane of the triplate. Beam steering is achieved by the application of an external electromagnet. An AFSS is constructed by the interposition of PIN diodes into a dipole FSS array. Transmission response measurements are then made for various angles of electromagnetic wave incidence. Two states of operation exist: when a current is passed through the diodes and when the diodes are switched off. These two states form a high pass and band stop space filter respectively. An enclosure covered with the AFSS is constructed and externally illuminated in the range 2.0 - 2.8GHz. A probe antenna inside the enclosure positioned at various locations through out the volume is used to establish the effective screening action of the AFSS in 3 dimensional space. (author) Performance of Cooperative Spectrum Sensing over Non-Identical Fading Environments Different from previous works in cooperative spec- trum sensing that assumed the sensing channels independent identically distributed (i.i.d.), we investigate in this paper the independent but not identically distributed (i.n.i.d.) situations. In particular, we derive the false-alarm probability and the detection probability of cooperative spectrum sensing with the scheme of energy fusion over i.n.i.d. Rayleigh, Nakagami, and Rician fading channels. From the selected numerical results, we can see that cooperative spectrum sensing still gives considerably better performance even over i.n.i.d. fading environments. Performance analysis of a finite radon transform in OFDM system under different channel models Dawood, Sameer A.; Anuar, M. S.; Fayadh, Rashid A. [School of Computer and Communication Engineering, Universiti Malaysia Perlis (UniMAP) Pauh Putra, 02000 Arau, Parlis (Malaysia); Malek, F.; Abdullah, Farrah Salwani [School of Electrical System Engineering, Universiti Malaysia Perlis (UniMAP) Pauh Putra, 02000 Arau, Parlis (Malaysia) In this paper, a class of discrete Radon transforms namely Finite Radon Transform (FRAT) was proposed as a modulation technique in the realization of Orthogonal Frequency Division Multiplexing (OFDM). The proposed FRAT operates as a data mapper in the OFDM transceiver instead of the conventional phase shift mapping and quadrature amplitude mapping that are usually used with the standard OFDM based on Fast Fourier Transform (FFT), by the way that ensure increasing the orthogonality of the system. The Fourier domain approach was found here to be the more suitable way for obtaining the forward and inverse FRAT. This structure resulted in a more suitable realization of conventional FFT- OFDM. It was shown that this application increases the orthogonality significantly in this case due to the use of Inverse Fast Fourier Transform (IFFT) twice, namely, in the data mapping and in the sub-carrier modulation also due to the use of an efficient algorithm in determining the FRAT coefficients called the optimal ordering method. The proposed approach was tested and compared with conventional OFDM, for additive white Gaussian noise (AWGN) channel, flat fading channel, and multi-path frequency selective fading channel. The obtained results showed that the proposed system has improved the bit error rate (BER) performance by reducing inter-symbol interference (ISI) and inter-carrier interference (ICI), comparing with conventional OFDM system. Dawood, Sameer A.; Anuar, M. S.; Fayadh, Rashid A.; Malek, F.; Abdullah, Farrah Salwani In this paper, a class of discrete Radon transforms namely Finite Radon Transform (FRAT) was proposed as a modulation technique in the realization of Orthogonal Frequency Division Multiplexing (OFDM). The proposed FRAT operates as a data mapper in the OFDM transceiver instead of the conventional phase shift mapping and quadrature amplitude mapping that are usually used with the standard OFDM based on Fast Fourier Transform (FFT), by the way that ensure increasing the orthogonality of the system. The Fourier domain approach was found here to be the more suitable way for obtaining the forward and inverse FRAT. This structure resulted in a more suitable realization of conventional FFT- OFDM. It was shown that this application increases the orthogonality significantly in this case due to the use of Inverse Fast Fourier Transform (IFFT) twice, namely, in the data mapping and in the sub-carrier modulation also due to the use of an efficient algorithm in determining the FRAT coefficients called the optimal ordering method. The proposed approach was tested and compared with conventional OFDM, for additive white Gaussian noise (AWGN) channel, flat fading channel, and multi-path frequency selective fading channel. The obtained results showed that the proposed system has improved the bit error rate (BER) performance by reducing inter-symbol interference (ISI) and inter-carrier interference (ICI), comparing with conventional OFDM system Performance Evaluation of Proportional Fair Scheduling Algorithm with Measured Channels Sørensen, Troels Bundgaard; Pons, Manuel Rubio subjected to measured channel traces. Specifically, we applied measured signal fading recorded from GSM cell phone users making calls on an indoor wireless office system. Different from reference channel models, these measured channels have much more irregular fading between users, which as we show... When does fading enhance perceptual category learning? Pashler, Harold; Mozer, Michael C Training that uses exaggerated versions of a stimulus discrimination (fading) has sometimes been found to enhance category learning, mostly in studies involving animals and impaired populations. However, little is known about whether and when fading facilitates learning for typical individuals. This issue was explored in 7 experiments. In Experiments 1 and 2, observers discriminated stimuli based on a single sensory continuum (time duration and line length, respectively). Adaptive fading dramatically improved performance in training (unsurprisingly) but did not enhance learning as assessed in a final test. The same was true for nonadaptive linear fading (Experiment 3). However, when variation in length (predicting category membership) was embedded among other (category-irrelevant) variation, fading dramatically enhanced not only performance in training but also learning as assessed in a final test (Experiments 4 and 5). Fading also helped learners to acquire a color saturation discrimination amid category-irrelevant variation in hue and brightness, although this learning proved transitory after feedback was withdrawn (Experiment 7). Theoretical implications are discussed, and we argue that fading should have practical utility in naturalistic category learning tasks, which involve extremely high dimensional stimuli and many irrelevant dimensions. PsycINFO Database Record (c) 2013 APA, all rights reserved. Distributed detection in UWB sensor networks under non-orthogonal Nakagami-m fading Mehbodniya, Abolfazl; Bielefeld, Daniel; Aissa, Sonia; Mathar, Rudolf; Adachi, Fumiyuki . In this paper, we present an opportunistic power assignment strategy for distributed detection in parallel fusion WSNs, considering a Nakagami-m fading model for the communication channel and time-hopping (TH) UWB for the transmitter circuit of the sensor nodes Zhou Hang; Qu Shao-Bo; Lin Bao-Qin; Wang Jia-Fu; Ma Hua; Zhang Jie-Qiu; Peng Wei-Dong; Bai Peng; Wang Xu-Hua; Xu Zhuo A new technique of designing a dual-band frequency selective surface with large band separation is presented. This technique is based on a delicately designed topology of L- and Ku-band microwave filters. The two band-pass responses are generated by a capacitively-loaded square-loop frequency selective surface and an aperture-coupled frequency selective surface, respectively. A Faraday cage is located between the two frequency selective surface structures to eliminate undesired couplings. Based on this technique, a dual-band frequency selective surface with large band separation is designed, which possesses large band separation, high selectivity, and stable performance under various incident angles and different polarizations. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics) Alabbasi, AbdulRahman In this paper, we consider a cognitive radio system in which a block-fading channel is assumed. Each transmission frame consists of M blocks and each block undergoes a different channel gain. Instantaneous channel state information about the interference links remains unknown to the primary and secondary users. We minimize the secondary user\\'s targeted outage probability over the block-fading channels. To protect the primary user, a statistical constraint on its targeted outage probability is enforced. The secondary user\\'s targeted outage region and the corresponding optimal power are derived. We also propose two sub-optimal power strategies and derive compact expressions for the corresponding outage probabilities. These probabilities are shown to be asymptotic lower and upper bounds on the outage probability. Utilizing these bounds, we derive the exact diversity order of the secondary user outage probability. Selected numerical results are presented to characterize the system\\'s behavior. MGF approach to the capacity analysis of Generalized Two-Ray fading models Rao, Milind We propose a class of Generalized Two-Ray (GTR) fading channels that consists of two line of sight (LOS) components with random phase and a diffuse component. Observing that the GTR fading model can be expressed in terms of the underlying Rician distribution, we derive a closed-form expression for the moment generating function (MGF) of the signal-to-noise ratio (SNR) of this model. We then employ this approach to compute the ergodic capacity with receiver side information. The impact of the underlying phase difference between the LOS components on the average SNR of the signal received is also illustrated. © 2015 IEEE. Application of high Tc superconductors as frequency selective surfaces: Experiment and theory Dawei Zhang; Yahya Rahmat-Samii; Fetterman, H.R. YBa 2 Cu 3 O 7-x and Tl 2 CaBa 2 Cu 2 O 8 high temperature superconducting thin films were utilized to fabricate frequency selective surfaces (FSS) at millimeter-wave frequencies (75--110 GHz). An analytical/numerical model was applied, using a Floquet expansion and the Method of Moments, to analyze bandstop superconducting frequency selective surfaces. Experimental results were compared with the model, and showed a good agreement with resonant frequency prediction with an accuracy of better than 1%. The use of the superconducting frequency selective surfaces as quasi-optical millimeter-wave bandpass filters was also demonstrated Observation of auroral fading before breakup Pellinen, R.J.; Heikkila, W.J. We have obtained detailed observations of the onset of auroral breakup using a variety of instruments with time resolution of some tens of seconds. Rapid sequences of all-sky photographs, and fast meridian scans by photometers, show that breakup is usually preceded by moderate brightening, followed by fading of the auroral brightness lasting one or two minutes, before the actual breakup itself. At the time of the fading there is a brief darkening of the poleward sky. Often the breakup is preceded by one or more rapid intensifications, each one preceded by local fading. Pseudo-breakups may also occur without the development of a major event. A bonafide breakup may begin on the fading arc, on an adjacent arc, or in an entirely new region nearby. This optical activity is closely correlated with the development of auroral radar echoes, suggesting that variations in the ionospheric and magnetospheric electric and magnetic fields are responsible for the observed auroral variations. Data from the IMS magnetometer network provide some indication of a correlated response by the local auroral and ionospheric current, although this could be partly due to changes in conductivity. Riometer recordings show a slow decrease in ionsperic radio wave absorption over a period of about ten minutes prior to breakup, with the largest decrease essentially to quiet-time values in the region of auroral fading and subsequent breakup. The implications of these observations regarding the trigger mechanism for the expansion phase of a magnetospheric substorm are discussed. (author) Multisource full waveform inversion of marine streamer data with frequency selection Huang, Yunsong; Schuster, Gerard T. Multisource migration with frequency selection is now extended to multisource full waveform inversion (FWI) of supergathers for marine streamer data. There are three advantages of this approach compared to conventional FWI for marine streamer data. 1. The multisource FWI method with frequency selection is computationally more efficient than conventional FWI. 2. A supergather requires more than an order of magnitude less storage than the the original data. 3. Frequency selection overcomes the acquisition mismatch between the observed data and the simulated multisource supergathers for marine data. This mismatch problem has prevented the efficient application of FWI to marine geometries in the space-time domain. Preliminary result of applying multisource FWI with frequency selection to a synthetic marine data set suggests it is at least four times more efficient than standard FWI. Error Rates of M-PAM and M-QAM in Generalized Fading and Generalized Gaussian Noise Environments This letter investigates the average symbol error probability (ASEP) of pulse amplitude modulation and quadrature amplitude modulation coherent signaling over flat fading channels subject to additive white generalized Gaussian noise. The new ASEP results are derived in a generic closed-form in terms of the Fox H function and the bivariate Fox H function for the extended generalized-K fading case. The utility of this new general closed-form is that it includes some special fading distributions, like the Generalized-K, Nakagami-m, and Rayleigh fading and special noise distributions such as Gaussian and Laplacian. Some of these special cases are also treated and are shown to yield simplified results. Full Waveform Inversion with Multisource Frequency Selection of Marine Streamer Data The theory and practice of multisource full waveform inversion of marine supergathers are described with a frequency-selection strategy. The key enabling property of frequency selection is that it eliminates the crosstalk among sources, thus overcoming the aperture mismatch of marine multisource inversion. Tests on multisource full waveform inversion of synthetic marine data and Gulf of Mexico data show speedups of 4× and 8×, respectively, compared to conventional full waveform inversion. Huang, Yunsong Effects of fading and spatial correlation on node selection for estimation in Wireless Sensor Networks In densely deployed sensor networks, correlation among measurements may be high. Spatial sampling through node selection is usually used to minimize this correlation and to save energy consumption. However because of the fading nature of the wireless channels, extra care should be taken when performing this sampling. In this paper, we develop expressions for the distortion which include the channel effects. The asymptotic behavior of the distortion as the number of sensors or total transmit power increase without bound is also investigated. Further, based on the channel and position information we propose and test several node selection schemes. Performance of Digital Communications over Selective Fading Channels. and we assume that the "" receiver has compensated for the mean path delay td . Equivalently, we let d - : - 0 in order to establish a time reference...Pv dl g(V v( 4bd 2G2 E 211v/T2 (3.37) N0 4'. .4 which implies that iv is the key parameter in determining the signal-to-noise ratio for single-pulse...in (4.8) are defined by R-77 7777 7777,077 7-7.7 70 jnht(-b )/ Td Kv(h,bi,:) = , v(t) v(t-4) e h i dt (4.10a) and R=hbit v(t) v(t+T-4) e•j uh t (1- b k– fading channels: a finite state Markov modelling approach C Priyanka Feb 7, 2018 ... 1 Department of Electronics and Communication Engineering, SRM University, ... probability; level crossing rate; state-transition probability; state-time duration. 1. ... statistics of the mobile radio signals are described by dif-. . Using cooperation in a cognitive set up make the use of spectrum much more efficient. Moreover, it helps to extend the coverage area of the cognitive network and also to reduce the transmitting power and, thus, the generated interference. In this work In this paper, we analyze the performance of adaptive modulation with single-cell multiuser scheduling over independent but not identical distributed (i.n.i.d.) Nakagami fading channels. Closed-form expressions are derived for the average channel capacity, spectral efficiency, and bit-error-rate (BER) for both constant-power variable-rate and variable-power variable-rate uncoded/coded M-ary quadrature amplitude modulation (M-QAM) schemes. We also study the impact of time delay on the average BER of adaptive M-QAM. Selected numerical results show that the multiuser diversity brings a considerably better performance even over i.n.i.d. fading environments. Several attractive features of ultra wideband (UWB) communications make it a good candidate for physical-layer of wireless sensor networks (WSN). These features include low power consumption, low complexity and low cost of implementation. In this paper, we present an opportunistic power assignment strategy for distributed detection in parallel fusion WSNs, considering a Nakagami-m fading model for the communication channel and time-hopping (TH) UWB for the transmitter circuit of the sensor nodes. In a parallel fusion WSN, local decisions are made by local sensors and transmitted through wireless channels to a fusion center. The fusion center processes the information and makes the final decision. Simulation results are provided for the global probability of detection error and relative performance gain to evaluate the efficiency of the proposed power assignment strategy in different fading environments. © 2011 IEEE. LDPC concatenated space-time block coded system in multipath fading environment: Analysis and evaluation Full Text Available Irregular low-density parity-check (LDPC codes have been found to show exceptionally good performance for single antenna systems over a wide class of channels. In this paper, the performance of LDPC codes with multiple antenna systems is investigated in flat Rayleigh and Rician fading channels for different modulation schemes. The focus of attention is mainly on the concatenation of irregular LDPC codes with complex orthogonal space-time codes. Iterative decoding is carried out with a density evolution method that sets a threshold above which the code performs well. For the proposed concatenated system, the simulation results show that the QAM technique achieves a higher coding gain of 8.8 dB and 3.2 dB over the QPSK technique in Rician (LOS and Rayleigh (NLOS faded environments respectively. Multi-carrier Communications over Time-varying Acoustic Channels Aval, Yashar M. Acoustic communication is an enabling technology for many autonomous undersea systems, such as those used for ocean monitoring, offshore oil and gas industry, aquaculture, or port security. There are three main challenges in achieving reliable high-rate underwater communication: the bandwidth of acoustic channels is extremely limited, the propagation delays are long, and the Doppler distortions are more pronounced than those found in wireless radio channels. In this dissertation we focus on assessing the fundamental limitations of acoustic communication, and designing efficient signal processing methods that cam overcome these limitations. We address the fundamental question of acoustic channel capacity (achievable rate) for single-input-multi-output (SIMO) acoustic channels using a per-path Rician fading model, and focusing on two scenarios: narrowband channels where the channel statistics can be approximated as frequency- independent, and wideband channels where the nominal path loss is frequency-dependent. In each scenario, we compare several candidate power allocation techniques, and show that assigning uniform power across all frequencies for the first scenario, and assigning uniform power across a selected frequency-band for the second scenario, are the best practical choices in most cases, because the long propagation delay renders the feedback information outdated for power allocation based on the estimated channel response. We quantify our results using the channel information extracted form the 2010 Mobile Acoustic Communications Experiment (MACE'10). Next, we focus on achieving reliable high-rate communication over underwater acoustic channels. Specifically, we investigate orthogonal frequency division multiplexing (OFDM) as the state-of-the-art technique for dealing with frequency-selective multipath channels, and propose a class of methods that compensate for the time-variation of the underwater acoustic channel. These methods are based on multiple Multipass Channel Estimation and Joint Multiuser Detection and Equalization for MIMO Long-Code DS/CDMA Systems Buzzi Stefano Full Text Available The problem of joint channel estimation, equalization, and multiuser detection for a multiantenna DS/CDMA system operating over a frequency-selective fading channel and adopting long aperiodic spreading codes is considered in this paper. First of all, we present several channel estimation and multiuser data detection schemes suited for multiantenna long-code DS/CDMA systems. Then, a multipass strategy, wherein the data detection and the channel estimation procedures exchange information in a recursive fashion, is introduced and analyzed for the proposed scenario. Remarkably, this strategy provides, at the price of some attendant computational complexity increase, excellent performance even when very short training sequences are transmitted, and thus couples together the conflicting advantages of both trained and blind systems, that is, good performance and no wasted bandwidth, respectively. Space-time coded systems are also considered, and it is shown that the multipass strategy provides excellent results for such systems also. Likewise, it is also shown that excellent performance is achieved also when each user adopts the same spreading code for all of its transmit antennas. The validity of the proposed procedure is corroborated by both simulation results and analytical findings. In particular, it is shown that adopting the multipass strategy results in a remarkable reduction of the channel estimation mean-square error and of the optimal length of the training sequence. Monocular depth effects on perceptual fading. Hsu, Li-Chuan; Kramer, Peter; Yeh, Su-Ling After prolonged viewing, a static target among moving non-targets is perceived to repeatedly disappear and reappear. An uncrossed stereoscopic disparity of the target facilitates this Motion-Induced Blindness (MIB). Here we test whether monocular depth cues can affect MIB too, and whether they can also affect perceptual fading in static displays. Experiment 1 reveals an effect of interposition: more MIB when the target appears partially covered by, than when it appears to cover, its surroundings. Experiment 2 shows that the effect is indeed due to interposition and not to the target's contours. Experiment 3 induces depth with the watercolor illusion and replicates Experiment 1. Experiments 4 and 5 replicate Experiments 1 and 3 without the use of motion. Since almost any stimulus contains a monocular depth cue, we conclude that perceived depth affects perceptual fading in almost any stimulus, whether dynamic or static. Copyright 2010 Elsevier Ltd. All rights reserved. Spatial Tuning of a RF Frequency Selective Surface through Origami (Postprint) computational tools to systematically predict optimal folds. 15. SUBJECT TERMS origami, frequency selective surface, tuning, radio frequency 16...experimental study and motivates the development of computational tools to systematically predict optimal fold patterns for targeted frequency response...folding motions. The precise mapping of origami presents a novel method to spatially tune radio frequency (RF) devices, including adaptive antennas Detection of optic nerve lesions in optic neuritis using frequency-selective fat-saturation sequences Miller, D.H.; MacManus, D.G.; Bartlett, P.A.; Kapoor, R.; Morrissey, S.P.; Moseley, I.F. MRI was performed on seven patients with acute optic neuritis, using two sequences which suppress the signal from orbital fat: frequency-selective fat-saturation and inversion recovery with a short inversion time. Lesions were seen on both sequences in all the symptomatic optic nerves studied. (orig.) On the Secrecy Capacity Region of the Block-Fading BCC with Limited CSI Feedback In this work, we examine the secrecy capacity region of the block-fading broadcast channel with confidential messages (BCC) when the transmitter has limited knowledge of the channel. In particular, we consider a two-user communication system where the transmitter has one common message to be transmitted to both users and one confidential message intended to only one of them. The confidential message has to be kept secret from the other user to whom the information is not intended. The transmitter is not aware of the channel state information (CSI) of neither channel and is only provided by limited CSI feedback sent at the beginning of each fading block. Assuming an error-free feedback link, we characterize the secrecy capacity region of this channel and show that even with a 1-bit CSI feedback, a positive secrecy rate can still be achieved. Then, we look at the case where the feedback link is not error- free and is rather a binary erasure channel (BEC). In the latter case, we provide an achievable secrecy rate region and show that as long as the erasure event is not a probability 1 event, the transmitter can still transmit the confidential information with a positive secrecy rate. Channel Modelling for Multiprobe Over-the-Air MIMO Testing Pekka Kyösti a fading emulator, an anechoic chamber, and multiple probes. Creation of a propagation environment inside an anechoic chamber requires unconventional radio channel modelling, namely, a specific mapping of the original models onto the probe antennas. We introduce two novel methods to generate fading emulator channel coefficients; the prefaded signals synthesis and the plane wave synthesis. To verify both methods we present a set of simulation results. We also show that the geometric description is a prerequisite for the original channel model. Ergodic Capacity for the SIMO Nakagami- Channel Vagenas EfstathiosD Full Text Available This paper presents closed-form expressions for the ergodic channel capacity of SIMO (single-input and multiple output wireless systems operating in a Nakagami- fading channel. As the performance of SIMO channel is closely related to the diversity combining techniques, we present closed-form expressions for the capacity of maximal ratio combining (MRC, equal gain combining (EGC, selection combining (SC, and switch and stay (SSC diversity systems operating in Nakagami- fading channels. Also, the ergodic capacity of a SIMO system in a Nakagami- fading channel without any diversity technique is derived. The latter scenario is further investigated for a large amount of receive antennas. Finally, numerical results are presented for illustration. Optimal Superimposed Training Sequences for Channel Estimation in MIMO-OFDM Systems Ratnam V. Raja Kumar Full Text Available In this work an iterative time domain Least Squares (LS based channel estimation method using superimposed training (ST for a Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM system over time varying frequency selective fading channels is proposed. The performance of the channel estimator is analyzed in terms of the Mean Square Estimation Error (MSEE and its impact on the uncoded Bit Error Rate (BER of the MIMO-OFDM system is studied. A new selection criterion for the training sequences that jointly optimizes the MSEE and the BER of the OFDM system is proposed. Chirp based sequences are proposed and shown to satisfy the same. These are compared with the other sequences proposed in the literature and are found to yield a superior performance. The sequences, one for each transmitting antenna, offers fairness through providing equal interference in all the data carriers unlike earlier proposals. The effectiveness of the mathematical analysis presented is demonstrated through a comparison with the simulation studies. Experimental studies are carried out to study and validate the improved performance of the proposed scheme. The scheme is applied to the IEEE 802.16e OFDM standard and a case is made with the required design of the sequence. Frequency Selective Properties of Coaxial Transmission Lines Loaded with Combined Artificial Inclusions Francisco Falcone Full Text Available The properties of a modified coaxial transmission line by periodic inclusions will be discussed. The introduction of split ring resonators, conductor stubs, air gaps, and combination of these gives rise to new frequency selective properties, such as stopband or passband behavior, observable in planar as well as volumetric metamaterial structures. These results envisage new potential applications and implementation of devices in coaxial transmission line technology. Design of an anti-Rician-fading modem for mobile satellite communication systems Kojima, Toshiharu; Ishizu, Fumio; Miyake, Makoto; Murakami, Keishi; Fujino, Tadashi To design a demodulator applicable to mobile satellite communication systems using differential phase shift keying modulation, we have developed key technologies including an anti-Rician-fading demodulation scheme, an initial acquisition scheme, automatic gain control (AGC), automatic frequency control (AFC), and bit timing recovery (BTR). Using these technologies, we have developed one-chip digital signal processor (DSP) modem for mobile terminal, which is compact, of light weight, and of low power consumption. Results of performance test show that the developed DSP modem achieves good performance in terms of bit error ratio in mobile satellite communication environment, i.e., Rician fading channel. It is also shown that the initial acquisition scheme acquires received signal rapidly even if the carrier-to-noise power ratio (CNR) of the received signal is considerably low. Least-squares reverse time migration of marine data with frequency-selection encoding Dai, Wei The phase-encoding technique can sometimes increase the efficiency of the least-squares reverse time migration (LSRTM) by more than one order of magnitude. However, traditional random encoding functions require all the encoded shots to share the same receiver locations, thus limiting the usage to seismic surveys with a fixed spread geometry. We implement a frequency-selection encoding strategy that accommodates data with a marine streamer geometry. The encoding functions are delta functions in the frequency domain, so that all the en- coded shots have unique non-overlapping frequency content, and the receivers can distinguish the wavefield from each shot with a unique frequency band. Since the encoding functions are orthogonal to each other, there will be no crosstalk between different shots during modeling and migration. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is compara- ble to conventional RTM for both the Marmousi2 model and a marine data set recorded in the Gulf of Mexico. With more iterations, the LSRTM image quality is further improved. We conclude that LSRTM with frequency-selection is an efficient migration method that can sometimes produce more focused images than conventional RTM. Mechanical frequency selectivity of an artificial basilar membrane using a beam array with narrow supports Kim, Sangwon; Jang, Jongmoon; Choi, Hongsoo; Song, Won Joon; Jang, Jeong Hun The study presented in this paper assessed the frequency selectivity of an artificial basilar membrane (ABM) constructed using a piezoelectric beam array with narrow supports. Three ABM samples were constructed. Each ABM contained 16 beams with various lengths in a one-dimensional array. To experimentally assess the frequency selectivity of the ABM, mechanical vibration induced either by an electrical or an acoustic stimulus was measured with a scanning laser-Doppler vibrometer. The electro-mechanical and acousto-mechanical transfer functions were defined for the same purpose. The tonotopy of each beam array was visualized by post-processing the experimental results. Finite element analyses were conducted to numerically compute the resonance frequencies, identify the associated vibrational modes, and evaluate the harmonic responses of the beams. The influence of the residual stresses existing in the beams was reflected in the geometric models by introducing three different levels of arc-shaped lateral deformations in the beams. The harmonic analyses revealed that each beam of the ABM samples presented independent band-pass characteristics. The experiments and simulations commonly showed a frequency selectivity of the fabricated ABMs in the range of 2–20 kHz. Therefore, the device is suitable for development of a totally implantable artificial cochlea, implementing a mechanical frequency analyzer. This work is part of research to develop a prototype of a totally implantable artificial cochlea. (paper) The phase-encoding technique can sometimes increase the efficiency of the least-squares reverse time migration (LSRTM) by more than one order of magnitude. However, traditional random encoding functions require all the encoded shots to share the same receiver locations, thus limiting the usage to seismic surveys with a fixed spread geometry. We implement a frequency-selection encoding strategy that accommodates data with a marine streamer geometry. The encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique nonoverlapping frequency content, and the receivers can distinguish the wavefield from each shot with a unique frequency band. Because the encoding functions are orthogonal to each other, there will be no crosstalk between different shots during modeling and migration. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is comparable to conventional RTM for the Marmousi2 model and a marine data set recorded in the Gulf of Mexico. With more iterations, the LSRTM image quality is further improved by suppressing migration artifacts, balancing reflector amplitudes, and enhancing the spatial resolution. We conclude that LSRTM with frequency-selection is an efficient migration method that can sometimes produce more focused images than conventional RTM. © 2013 Society of Exploration Geophysicists. A Bayesian matching pursuit based scheduling algorithm for feedback reduction in MIMO broadcast channels Shibli, Hussain J.; Eltayeb, Mohammed E.; Al-Naffouri, Tareq Y. challenges are faced during uplink transmission. First of all, the statistics of the noisy and fading feedback channels are unknown at the base station (BS) and channel training is usually required from all users. Secondly, the amount of network resources Efficient Weibull channel model for salinity induced turbulent underwater wireless optical communications Oubei, Hassan M.; Zedini, Emna; Elafandy, Rami T.; Kammoun, Abla; Ng, Tien Khee; Alouini, Mohamed-Slim; Ooi, Boon S. Recent advances in underwater wireless optical communications necessitate a better understanding of the underwater channel. We propose the Weibull model to characterize the fading of salinity induced turbulent underwater wireless optical channels Channel characterization for high-speed W-band wireless communication links Rommel, Simon; Cavalcante, Lucas Costa Pereira; Vegas Olmos, Juan José We present and discuss results from an experimental characterization of the W-band indoor wireless channel, including both large and small scale fading phenomena as well as corresponding channel parameters and their impact on system performance.... Temperature dependency of cupular mechanics and hair cell frequency selectivity in the fish canal lateral line organ Wiersinga-Post, JEC; van Netten, SM The mechanical frequency selectivity of the cupula located in the supraorbital lateral line canal and the frequency selectivity of the hair cells driven by the cupula were measured simultaneously in vivo. Laser interferometry was used to measure cupular mechanics and extracellular receptor Experimental study of the helicopter-mobile radioelectrical channel and possible extension to the satellite-mobile channel Blanchetiere-Ciarletti, V.; Sylvain, M.; Lemenn, P. The use of satellite seems to be an answer to the radioelectrical covering problem for the mobile communications, particularly in the low populated areas. Frequency bands at 1.5 and 2.5 GHz have been dedicated to these future services. Satellite-mobile links will be much more affected by propagation phenomena than the existing links between satellites and fixed stations. The reasons for that are twofold: The probable use of LEO (Low-Earth-Orbit) satellites instead of GEO; such satellites will have to be received at relatively low elevation to limit their number; the use of mobile communication terminals with small and non directive antennas that must work in various environments instead of terrestrian stations located at carefully chosen places and equipped with large diameter paraboloids. These propagation phenomena mainly consist in the fading of the signal level (shadowing of the link), and a frequency selective fading due to multipath propagation. The experience run by C.R.P.E. is aimed at a better understanding of the satellite-mobile propagation channel at fixed frequency as well as on a large band. In this paper, we discuss preliminary results from a series of propagation measurements performed (by lack of any experimental satellite) on an experimental radio link at 1.45 GHz on a of 20 MHz bandwidth between a helicopter flying at a height of 2 km and a mobile receiver. The whole experiment has been run in a rural environment in Brittany (France). In a first part, we illustrate the quality of the data collected during the experiment on a typical case study and give a possible physical interpretation of the observed phenomena. Then we present statistical results concerning the various characteristics (attenuation and delay spreads) of the propagation channel. Finally, we discuss the problem of using a helicopter (flying at a height of 2 km) as a substitute for a satellite at about 1000 km and try to estimate to what extent it is possible to use the data Flash-induced fading: Dependence on colour and shape similarity Vergeer, M.L.T.; Lier, R.J. van We investigated the effects of perceptual grouping by colour and shape similarity on flash-induced perceptual fading. This flash-induced fading effect (Kanai et al, 2003 Journal of Cognitive Neuroscience 15 664 - 672) is considered as a time-locked variant of the Troxler effect. In the original Modeling and Analysis of Cellular CDMA Forward Channel Tighe, Jan In this thesis, we develop the forward channel model for a DS-CDMA cellular system operating in a slow-flat Rayleigh fading and log normal shadowing environment, which incorporates the extended Hata... Autonomous detection of ISO fade point with color laser printers Yan, Ni; Maggard, Eric; Fothergill, Roberta; Jessome, Renee J.; Allebach, Jan P. Image quality assessment is a very important field in image processing. Human observation is slow and subjective, it also requires strict environment setup for the psychological test 1. Thus developing algorithms to match desired human experiments is always in need. Many studies have focused on detecting the fading phenomenon after the materials are printed, that is to monitor the persistence of the color ink 2-4. However, fading is also a common artifact produced by printing systems when the cartridges run low. We want to develop an automatic system to monitor cartridge life and report fading defects when they appear. In this paper, we first describe a psychological experiment that studies the human perspective on printed fading pages. Then we propose an algorithm based on Color Space Projection and K-means clustering to predict the visibility of fading defects. At last, we integrate the psychological experiment result with our algorithm to give a machine learning tool that monitors cartridge life. Secure Path Selection under Random Fading Furqan Jameel Full Text Available Application-oriented Wireless Sensor Networks (WSNs promises to be one of the most useful technologies of this century. However, secure communication between nodes in WSNs is still an unresolved issue. In this context, we propose two protocols (i.e. Optimal Secure Path (OSP and Sub-optimal Secure Path (SSP to minimize the outage probability of secrecy capacity in the presence of multiple eavesdroppers. We consider dissimilar fading at the main and wiretap link and provide detailed evaluation of the impact of Nakagami-m and Rician-K factors on the secrecy performance of WSNs. Extensive simulations are performed to validate our findings. Although the optimal scheme ensures more security, yet the sub-optimal scheme proves to be a more practical approach to secure wireless links. Analysis of Broad-band Frequency Selective Shielding Glass by FDTD method A frequency Selective shielding (FSS) glass is a print of many same size antennas on a sheet of glass, and it has high shielding properties for one specific frequency. In the past, the author analyzed theoretically the characteristics of the FSS, as a large scale array antenna. The FSS has narrow-band shielding characteristics. This time, the author analyzed accurately the characteristics of a FSS glass being a print of many same size dipole antennas on a sheet of glass by FDTD method. As the... Spatial Frequency Selectivity Is Impaired in Dopamine D2 Receptor Knockout Mice Souza, Bruno Oliveira Ferreira; Abou Rjeili, Mira; Quintana, Clémentine; Beaulieu, Jean M.; Casanova, Christian Dopamine is a neurotransmitter implicated in several brain functions, including vision. In the present study, we investigated the impacts of the lack of D2 dopamine receptors on the structure and function of the primary visual cortex (V1) of D2-KO mice using optical imaging of intrinsic signals. Retinotopic maps were generated in order to measure anatomo-functional parameters such as V1 shape, cortical magnification factor, scatter, and ocular dominance. Contrast sensitivity and spatial frequency selectivity (SF) functions were computed from responses to drifting gratings. When compared to control mice, none of the parameters of the retinotopic maps were affected by D2 receptor loss of function. While the contrast sensitivity function of D2-KO mice did not differ from their wild-type counterparts, SF selectivity function was significantly affected as the optimal SF and the high cut-off frequency (p D2-KO than in WT mice. These findings show that the lack of function of D2 dopamine receptors had no influence on cortical structure whereas it had a significant impact on the spatial frequency selectivity and high cut-off. Taken together, our results suggest that D2 receptors play a specific role on the processing of spatial features in early visual cortex while they do not seem to participate in its development. PMID:29379422 Frequency-Selective Attention in Auditory Scenes Recruits Frequency Representations Throughout Human Superior Temporal Cortex. Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: [email protected]. Video error concealment using block matching and frequency selective extrapolation algorithms P. K., Rajani; Khaparde, Arti Error Concealment (EC) is a technique at the decoder side to hide the transmission errors. It is done by analyzing the spatial or temporal information from available video frames. It is very important to recover distorted video because they are used for various applications such as video-telephone, video-conference, TV, DVD, internet video streaming, video games etc .Retransmission-based and resilient-based methods, are also used for error removal. But these methods add delay and redundant data. So error concealment is the best option for error hiding. In this paper, the error concealment methods such as Block Matching error concealment algorithm is compared with Frequency Selective Extrapolation algorithm. Both the works are based on concealment of manually error video frames as input. The parameter used for objective quality measurement was PSNR (Peak Signal to Noise Ratio) and SSIM(Structural Similarity Index). The original video frames along with error video frames are compared with both the Error concealment algorithms. According to simulation results, Frequency Selective Extrapolation is showing better quality measures such as 48% improved PSNR and 94% increased SSIM than Block Matching Algorithm. Temporal and Spatial Characterization of GPS Fading From Ionospheric Irregularities Under Low latitude De Paula, E. R.; Moraes, A. D. O.; Vani, B. C.; Sobral, J. H. A.; Abdu, M. A.; Galera Monico, J. F. The ionosphere over the peak of the anomaly represents a treat for navigation systems based on GNSS. Brazilian territory is mostly under this harsh layer for satellite communication in general and in particular for navigation, like GPS users, where their receivers tracking performance are damaged under scintillation conditions. Ionospheric scintillation is responsible for significant degradation in the accuracy of navigation and positioning. Phase shifts accompanied by amplitude fades significantly degrades the signal-to-noise ratio of the received signal disrupting the channel and loosing navigation performance. The stronger the scintillations are, more difficulty will be for the GNSS receiver tracking loops to recover the phase and code replicas. These phenomena under specific geophysical conditions may severely affect the system availability and positioning. In this work the temporal characteristics of amplitude scintillation will be analyzed at the three available GPS frequencies, L1, L2C and L5. Aspect like fading duration and depth will be evaluated and compared among the three available frequencies for the current solar cycle. This work will use GPS scintillation data recorded during six months of data during 2014 to 2015 at three stations under Brazilian territory located near the southern crest of the equatorial ionization anomaly. The analysis will be performed focusing on the fading profile of the three frequencies comparing how the fading of those signals behave statistically between them. Aspects like loss of lock, spatial orientation between the signal across the ionospheric irregularity will also be discussed showing how much more susceptible the new frequencies might be in comparison to the widely used and studied L1 frequency. Fade detector for the FODA-TDMA access scheme Celandroni, Nedo; Ferro, Erina; Marzoli, Antonio The First in first out Ordered Demand Assignment-Time Division Multiple Access (FODA-TDMA) satellite access scheme designed for simultaneous transmissions of real time data, like packetized voice and slow-scan images (stream traffic) and data coming from standard EDP applications, such as bulk data tansfer, interactive computer access, mailing, data base enquiry and updating (datagram traffic) is described. When deep fades are experienced due to rain attenuation, the system is able to counter the fade. Techniques to detect the fade are presented. Improved CT-detection of acute bowel ischemia using frequency selective non-linear image blending. Schneeweiss, Sven; Esser, Michael; Thaiss, Wolfgang; Boesmueller, Hans; Ditt, Hendrik; Nikolau, Konstantin; Horger, Marius Computed tomography (CT) as a fast and reliable diagnostic technique is the imaging modality of choice for acute bowel ischemia. However, diagnostic is often difficult mainly due to low attenuation differences between ischemic and perfused segments. To compare the diagnostic efficacy of a new post-processing tool based on frequency selective non-linear blending with that of conventional linear contrast-enhanced CT (CECT) image blending for the detection of bowel ischemia. Twenty-seven consecutive patients (19 women; mean age = 73.7 years, age range = 50-94 years) with acute bowel ischemia were scanned using multidetector CT (120 kV; 100-200 mAs). Pre-contrast and portal venous scans (65-70 s delay) were acquired. All patients underwent surgery for acute bowel ischemia and intraoperative diagnosis as well as histologic evaluation of explanted bowel segments was considered "gold standard." First, two radiologists read the conventional CECT images in which linear blending was adapted for optimal contrast, and second (three weeks later) the frequency selective non-linear blending (F-NLB) image. Attenuation values were compared, both in the involved and non-involved bowel segments creating ratios between unenhanced and CECT. The mean attenuation difference between ischemic and non-ischemic wall in the portal venous scan was 69.54 HU (reader 2 = 69.01 HU) higher for F-NLB compared with conventional CECT. Also, the attenuation ratio between contrast-enhanced and pre-contrast CT data for the non-ischemic walls showed significantly higher values for the F-NLB image (CECT: reader 1 = 2.11 (reader 2 = 3.36), F-NLB: reader 1 = 4.46 (reader 2 = 4.98)]. Sensitivity in detecting ischemic areas increased significantly for both readers using F-NLB (CECT: reader 1/2 = 53%/65% versus F-NLB: reader 1/2 = 62%/75%). Frequency selective non-linear blending improves detection of bowel ischemia compared with conventional CECT by increasing New Closed-Form Results on Ordered Statistics of Partial Sums of Gamma Random Variables and its Application to Performance Evaluation in the Presence of Nakagami Fading Nam, Sung Sik; Ko, Young-Chai; Alouini, Mohamed-Slim in the literature. In addition, as a feasible application example in which our new offered derived closed-form results can be applied is presented. In particular, we analyze the outage performance of the finger replacement schemes over Nakagami fading channels On the ergodic secrecy capacity of the wiretap channel under imperfect main channel estimation Rezki, Zouheir; Khisti, Ashish J.; Alouini, Mohamed-Slim imperfect main channel estimation at the transmitter. Inner and outer bounds on the ergodic secrecy capacity are derived for a class of independent identically distributed (i.i.d.) fading channels. The achievable rate is a simple on-off scheme using a All-dielectric metamaterial frequency selective surface based on spatial arrangement ceramic resonators Li, Liyang; Wang, Jun; Feng, Mingde; Ma, Hua; Wang, Jiafu; Du, Hongliang; Qu, Shaobo In this paper, we demonstrate a method of designing all-dielectric metamaterial frequency selective surface (FSS) with ceramic resonators in spatial arrangement. Compared with the traditional way, spatial arrangement provides a flexible way to handle the permutation and combination of different ceramic resonators. With this method, the resonance response can be adjusted easily to achieve pass/stop band effects. As an example, a stop band spatial arrangement all-dielectric metamaterial FSS is designed. Its working band is in 11.65-12.23GHz. By adjusting permittivity and geometrical parameters of ceramic resonators, we can easily modulate the resonances, band pass or band stop characteristic, as well as the working band. A Comparison Between Jerusalem Cross and Square Patch Frequency Selective Surfaces for Low Profile Antenna Applications Cure, David; Weller, Thomas; Miranda, Felix A. In this paper, a comparison between Jerusalem Cross (JC) and Square Patch (SP) based Frequency Selected Surfaces (FSS) for low profile antenna applications is presented. The comparison is aimed at understanding the performance of low profile antennas backed by high impedance surfaces. In particular, an end loaded planar open sleeve dipole (ELPOSD) antenna is examined due to the various parameters within its configuration, offering significant design flexibility and a wide operating bandwidth. Measured data of the antennas demonstrate that increasing the number of unit cells improves the fractional bandwidth. The antenna bandwidth increased from 0.8% to 1.8% and from 0.8% to 2.7% for the JC and SP structures, respectively. The number of unit cells was increased from 48 to 80 for the JC-FSS and from 24 to 48 for the SP-FSS. Passive Frequency Selective Surface Array as a Diffuser for Destroying Millimeter Wave Coherence Saiful Islam Full Text Available This paper presents the design, construction, and testing of grounded frequency selective surface (FSS array as a diffuser for destroying millimeter wave coherence which is used to eliminate speckle in active millimeter wave imaging. To create stochastically independent illumination patterns, we proposed a diffuser based on random-phase distributions obtained by changing the incident frequency. The random-phase diffuser was obtained by mixing up the phase relations between the cells of a deterministic function (e.g., beam splitter. The slot length of FSS is the main design parameter used to optimize the phase shifting properties of the array. The critical parameters of the diffuser array design, such as phase relation with slot lengths, losses, and bandwidth, are discussed. We designed the FSS arrays with finite integral technique (FIT, fabricated by etching technique, and characterized the S-parameters with a free-space MVNA, and measured the radiation patterns with a BWO in motorized setup. Generation of radio vortex beams with designable polarization using anisotropic frequency selective surface Yang, Jin; Zhang, Cheng; Ma, Hui Feng; Zhao, Jie; Dai, Jun Yan; Yuan, Wei; Yang, Liu Xi; Cheng, Qiang; Cui, Tie Jun We propose a strategy to convert a linearly polarized wave from a single point source to an orbital angular momentum (OAM) wave by arbitrary polarization via an anisotropic frequency selective surface (FSS) in the microwave frequency. By tailoring the geometries of FSS elements, reflection-phases in x and y polarizations are engineered and encoded independently, which allows us to design the eventual polarization state of the generated OAM vortex beam by elaborately selecting individual coding sequences for each polarization. Two types of FSSs are designed and experimentally characterized to demonstrate the capability of OAM generation with circular and linear polarizations, respectively, showing excellent performance in a wide bandwidth from 14 to 16 GHz. This method provides opportunities for polarization multiplexing in microwave OAM communication systems. Fine-structure processing, frequency selectivity and speech perception in hearing-impaired listeners Strelcyk, Olaf; Dau, Torsten Hearing-impaired people often experience great difficulty with speech communication when background noise is present, even if reduced audibility has been compensated for. Other impairment factors must be involved. In order to minimize confounding effects, the subjects participating in this study...... consisted of groups with homogeneous, symmetric audiograms. The perceptual listening experiments assessed the intelligibility of full-spectrum as well as low-pass filtered speech in the presence of stationary and fluctuating interferers, the individual's frequency selectivity and the integrity of temporal...... modulation were obtained. In addition, these binaural and monaural thresholds were measured in a stationary background noise in order to assess the persistence of the fine-structure processing to interfering noise. Apart from elevated speech reception thresholds, the hearing impaired listeners showed poorer... Analytical Solution for Elliptical Cloaks Based on The Frequency Selective Surface E. Ghasemi Mizuji Full Text Available In this paper the elliptical dielectric cylinder which is covered with FSS cloak is considered. Frequency selective surface cloak which Alu named it mantle cloak is one of the recent techniques for cloaking. In this method an appropriate FSS can act as cloaking device for suppressing the scattering of object in the desired frequency. With using this method the dimension of the cloaks is extremely reduced. By this proposed structure, the RCS of elliptical cylinder is reduced about 10-20 dB and designed cloak has an appropriate performance. The analytical solution for the wave in each layer is presented and with using simulation, the electric field and the scattering pattern has been drawn. On the application of frequency selective common mode feedback for multifrequency EIT. Langlois, Peter J; Wu, Yu; Bayford, Richard H; Demosthenous, Andreas Common mode voltages are frequently a problem in electrical impedance tomography (EIT) and other bioimpedance applications. To reduce their amplitude common mode feedback is employed. Formalised analyses of both current and voltage feedback is presented in this paper for current drives. Common mode effects due to imbalances caused by the current drives, the electrode connections to the body load and the introduction of the body impedance to ground are considered. Frequency selective narrowband common mode feedback previously proposed to provide feedback stability is examined. As a step towards multifrequency applications the use of narrowband feedback is experimentally demonstrated for two simultaneous current drives. Measured results using standard available components show a reduction of 62 dB for current feedback and 31 dB for voltage feedback. Frequencies ranged from 50 kHz to 1 MHz. An Automated Fading Procedure to Alter Sexual Responsiveness in Pedophiles Laws, D. R.; Pawlowski, A. V. An automated stimulus fading procedure was used to strengthen sexual responsiveness to adult stimuli in two pedophiles. The degree of responsiveness was indicated by changes in the penile response. Implications for future research are discussed. (Author) A reason of fast and deep fading of centimeter wave Danzan, D.; Damdinsuren, E.; Hiamjav, J.; Chuluunbaatar, Ch.; Battulga, S. First discovered experimentally exactly correlation between of appearance and of disappearance of optical mirage and fast and deep fading of horizontal polarization of centimeter wave. Proved the interference of the straight and reflected rays from the thin layer of air in mirage a reason of this fading. The physical parameters data of the layer of mirage: change of dielectric permeability and n/ h gradient of refraction index of air in this layer are been showed Wireless Power Transfer in Cooperative DF Relaying Networks with Log-Normal Fading Energy-harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic however focuses on Rayleigh fading channels which represents outdoor environments. Unlike these studies, in this paper we analyze the performance of wireless power transfer in two-hop decode-and- forward (DF) cooperative relaying systems in indoor channels characterized by log-normal fading. Three well-known EH protocols are considered in our evaluations: a) time switching relaying (TSR), b) power splitting relaying (PSR) and c) ideal relaying receiver (IRR). The performance is evaluated in terms of the ergodic outage probability for which we derive accurate analytical expressions for the three systems under consideration. Results reveal that careful selection of the EH time and power splitting factors in the TSR- and PSR-based system are important to optimize performance. It is also presented that the optimized PSR system has near- ideal performance and that increasing the source transmit power and/or the energy harvester efficiency can further improve performance. Outage Analysis of Spectrum-Sharing over M-Block Fading with Sensing Information Future wireless technologies, such as, 5G, are expected to support real-time applications with high data throughput, e.g., holographic meetings. From a bandwidth perspective, cognitive radio is a promising technology to enhance the system's throughput via sharing the licensed spectrum. From a delay perspective, it is well known that increasing the number of decoding blocks will improve the system robustness against errors, while increasing the delay. Therefore, optimally allocating the resources to determine the tradeoff of tuning the length of decoding blocks while sharing the spectrum is a critical challenge for future wireless systems. In this work, we minimize the targeted outage probability over the block-fading channels while utilizing the spectrum-sharing concept. The secondary user's outage region and the corresponding optimal power are derived, over twoblocks and M-blocks fading channels. We propose two suboptimal power strategies and derive the associated asymptotic lower and upper bounds on the outage probability with tractable expressions. These bounds allow us to derive the exact diversity order of the secondary user's outage probability. To further enhance the system's performance, we also investigate the impact of including the sensing information on the outage problem. The outage problem is then solved via proposing an alternating optimization algorithm, which utilizes the verified strict quasiconvex structure of the problem. Selected numerical results are presented to characterize the system's behavior and show the improvements of several sharing concepts. Rabie, Khaled M.; Adebisi, Bamidele; Alouini, Mohamed-Slim or noisy CSI at the transmitter and perfect CSI at the receiver. We also show that one-bit CSI at the transmitter is enough to achieve this asymptotic capacity using an On-Off power control scheme. Our framework may be seen as a generalization of previous multidimensional rectangle structure and thus is simply characterized by single user capacity points. More specifically, we show that at low power regime, the boundary surface of the capacity region shrinks to a single point corresponding to the sum rate maximizer MIMO Fading Emulator Development with FPGA and Its Application to Performance Evaluation of Mobile Radio Systems Yoshio Karasawa Full Text Available We present four new developments for a multiple-input multiple-output (MIMO over-the-air measurement system based on our previous studies. The first two developments relate to the channel model for multipath environment generation. One is a further simplification of the circuit configuration without performance degradation by reducing the number of delay generation units, which dominate the performance limit when implementing the circuit on a field-programmable gate array (FPGA. The other is to realize spatial correlation characteristics among the input ports on the transmission side, whereas the previously proposed channel model did not consider this correlation. The third development involves the details of implementing the MIMO fading emulator on an FPGA as a two-stage scheme. The fourth is the demonstration of application examples of the developed system. Frequency Selective Non-Linear Blending to Improve Image Quality in Liver CT. Bongers, M N; Bier, G; Kloth, C; Schabel, C; Fritz, J; Nikolaou, K; Horger, M Purpose: To evaluate the effects of a new frequency selective non-linear blending (NLB) algorithm on the contrast resolution of liver CT with low intravascular concentration of iodine contrast. Materials and Methods: Our local ethics committee approved this retrospective study. The informed consent requirement was waived. CT exams of 25 patients (60 % female, mean age: 65 ± 16 years of age) with late phase CT scans of the liver were included as a model for poor intrahepatic vascular contrast enhancement. Optimal post-processing settings to enhance the contrast of hepatic vessels were determined. Outcome variables included signal-to-noise (SNR) and contrast-to-noise ratios (CNR) of hepatic vessels and SNR of liver parenchyma of standard and post-processed images. Image quality was quantified by two independent readers using Likert scales. Results: The post-processing settings for the visualization of hepatic vasculature were optimal at a center of 115HU, delta of 25HU, and slope of 5. Image noise was statistically indifferent between standard and post-processed images. The CNR between the hepatic vasculature (HV) and liver parenchyma could be significantly increased for liver veins (CNR Standard 1.62 ± 1.10, CNR NLB 3.6 ± 2.94, p = 0.0002) and portal veins (CNR Standard 1.31 ± 0.85, CNR NLB 2.42 ± 3.03, p = 0.046). The SNR of liver parenchyma was significantly higher on post-processed images (SNR NLB 11.26 ± 3.16, SNR Standard 8.85 ± 2.27, p = 0.008). The overall image quality and depiction of HV were significantly higher on post-processed images (NLB DHV : 4 [3 - 4.75], S tandardDHV : 2 [1.3 - 2.5], p = algorithm increases the contrast resolution of liver CT and can improve the visibility of the hepatic vasculature in the setting of a low contrast ratio between vessels and the parenchyma. Key Points: • Using the new frequency selective non-linear blending algorithm is feasible in contrast Performance of super-orthogonal space-time trellis code in a multipath environment Sokoya, OA Full Text Available This paper investigates the performance of Super-Orthogonal Space-time Trellis Code (SOSTTC) designed primarily for non-frequency selective (i.e. flat) fading channel but now applied to a frequency selective fading channel. A new decoding trellis... Frequency selective non-linear blending to improve image quality in liver CT Bongers, M.N.; Bier, G.; Kloth, C.; Schabel, C.; Nikolaou, K.; Horger, M.; Fritz, J. To evaluate the effects of a new frequency selective non-linear blending (NLB) algorithm on the contrast resolution of liver CT with low intravascular concentration of iodine contrast. Our local ethics committee approved this retrospective study. The informed consent requirement was waived. CT exams of 25 patients (60% female, mean age: 65±16 years of age) with late phase CT scans of the liver were included as a model for poor intrahepatic vascular contrast enhancement. Optimal post-processing settings to enhance the contrast of hepatic vessels were determined. Outcome variables included signal-to-noise (SNR) and contrast-to-noise ratios (CNR) of hepatic vessels and SNR of liver parenchyma of standard and post-processed images. Image quality was quantified by two independent readers using Likert scales. The post-processing settings for the visualization of hepatic vasculature were optimal at a center of 115HU, delta of 25HU, and slope of 5. Image noise was statistically indifferent between standard and post-processed images. The CNR between the hepatic vasculature (HV) and liver parenchyma could be significantly increased for liver veins (CNR Standard 1.62±1.10, CNR NLB 3.6±2.94, p=0.0002) and portal veins (CNR Standard 1.31±0.85, CNR NLB 2.42±3.03, p=0.046). The SNR of liver parenchyma was significantly higher on post-processed images (SNR NLB 11.26±3.16, SNR Standard 8.85± 2.27, p=0.008). The overall image quality and depiction of HV were significantly higher on post-processed images (NLB DHV : 4 [3-4.75], S tandardDHV : 2 [1.3-2.5], p=<0.0001; NLBIQ : 4 [4-4], StandardIQ : 2 [2-3], p=<0.0001). The use of a frequency selective non-linear blending algorithm increases the contrast resolution of liver CT and can improve the visibility of the hepatic vasculature in the setting of a low contrast ratio between vessels and the parenchyma. Bongers, M.N.; Bier, G.; Kloth, C.; Schabel, C.; Nikolaou, K.; Horger, M. [University Hospital of Tuebingen (Germany). Dept. of Diagnostic and Interventional Radiology; Fritz, J. [Johns Hopkins University School of Medicine, Baltimore, MD (United States). Russell H. Morgan Dept. of Radiology and Radiological Science To evaluate the effects of a new frequency selective non-linear blending (NLB) algorithm on the contrast resolution of liver CT with low intravascular concentration of iodine contrast. Our local ethics committee approved this retrospective study. The informed consent requirement was waived. CT exams of 25 patients (60% female, mean age: 65±16 years of age) with late phase CT scans of the liver were included as a model for poor intrahepatic vascular contrast enhancement. Optimal post-processing settings to enhance the contrast of hepatic vessels were determined. Outcome variables included signal-to-noise (SNR) and contrast-to-noise ratios (CNR) of hepatic vessels and SNR of liver parenchyma of standard and post-processed images. Image quality was quantified by two independent readers using Likert scales. The post-processing settings for the visualization of hepatic vasculature were optimal at a center of 115HU, delta of 25HU, and slope of 5. Image noise was statistically indifferent between standard and post-processed images. The CNR between the hepatic vasculature (HV) and liver parenchyma could be significantly increased for liver veins (CNR{sub Standard} 1.62±1.10, CNR{sub NLB} 3.6±2.94, p=0.0002) and portal veins (CNR{sub Standard} 1.31±0.85, CNR{sub NLB} 2.42±3.03, p=0.046). The SNR of liver parenchyma was significantly higher on post-processed images (SNR{sub NLB} 11.26±3.16, SNR{sub Standard} 8.85± 2.27, p=0.008). The overall image quality and depiction of HV were significantly higher on post-processed images (NLB{sub DHV}: 4 [3-4.75], S{sub tandardDHV}: 2 [1.3-2.5], p=<0.0001; {sub NLBIQ}: 4 [4-4], {sub StandardIQ}: 2 [2-3], p=<0.0001). The use of a frequency selective non-linear blending algorithm increases the contrast resolution of liver CT and can improve the visibility of the hepatic vasculature in the setting of a low contrast ratio between vessels and the parenchyma. Frequency selective surface based passive wireless sensor for structural health monitoring Jang, Sang-Dong; Kang, Byung-Woo; Kim, Jaehwan Wireless sensor networks or ubiquitous sensor networks are a promising technology giving useful information to people. In particular, the chipless passive wireless sensor is one of the most important developments in wireless sensor technology because it is compact and does not need a battery or chip for the sensor operation. So it has many possibilities for use in various types of sensor system with economical efficiency and robustness in harsh environmental conditions. This sensor uses an electromagnetic resonance frequency or phase angle shift associated with a geometrical change of the sensor tag or an impedance change of the sensor. In this paper, a chipless passive wireless structural health monitoring (SHM) sensor is made using a frequency selective surface (FSS). The cross type FSS is introduced, and its SHM principle is explained. The electromagnetic characteristics of the FSS are simulated in terms of transmission and reflection coefficients using simulation software, and an experimental verification is conducted. The electromagnetic characteristic change of the FSS in the presence of mechanical strain or a structural crack is investigated by means of simulation and experiment. Since large-area structures can be covered by deploying FSS, it is possible to detect the location of any cracks. (paper) Wideband RCS Reduction of Microstrip Array Antenna Based on Absorptive Frequency Selective Surface and Microstrip Resonators Jingjing Xue Full Text Available An approach for wideband radar cross section (RCS reduction of a microstrip array antenna is presented and discussed. The scheme is based on the microstrip resonators and absorptive frequency selective surface (AFSS with a wideband absorptive property over the low band 1.9–7.5 GHz and a transmission characteristic at high frequency 11.05 GHz. The AFSS is designed to realize the out-of-band RCS reduction and preserve the radiation performance simultaneously, and it is placed above the antenna with the operating frequency of 11.05 GHz. Moreover, the microstrip resonators are loaded to obtain the in-band RCS reduction. As a result, a significant RCS reduction from 1.5 GHz to 13 GHz for both types of polarization has been accomplished. Compared with the reference antenna, the simulated results exhibit that the monostatic RCS of the proposed array antenna in x- and y-polarization can be reduced as much as 17.6 dB and 21.5 dB, respectively. And the measured results agree well with the simulated ones. Analysis and applications of a frequency selective surface via a random distribution method Xie Shao-Yi; Huang Jing-Jian; Yuan Nai-Chang; Liu Li-Guo A novel frequency selective surface (FSS) for reducing radar cross section (RCS) is proposed in this paper. This FSS is based on the random distribution method, so it can be called random surface. In this paper, the stacked patches serving as periodic elements are employed for RCS reduction. Previous work has demonstrated the efficiency by utilizing the microstrip patches, especially for the reflectarray. First, the relevant theory of the method is described. Then a sample of a three-layer variable-sized stacked patch random surface with a dimension of 260 mm×260 mm is simulated, fabricated, and measured in order to demonstrate the validity of the proposed design. For the normal incidence, the 8-dB RCS reduction can be achieved both by the simulation and the measurement in 8 GHz–13 GHz. The oblique incidence of 30° is also investigated, in which the 7-dB RCS reduction can be obtained in a frequency range of 8 GHz–14 GHz. (condensed matter: electronic structure, electrical, magnetic, and optical properties) Ferrite Film Loaded Frequency Selective Metamaterials for Sub-GHz Applications Bo Gao Full Text Available Electromagnetic metamaterials are constructed with sub-wavelength structures that exhibit particular electromagnetic properties under a certain frequency range. Because the form-factor of the substructures has to be comparable to the wavelength of the operating frequency, few papers have discussed the metamaterials under GHz frequency. In this paper, we developed an innovative method to reduce the resonant frequency of metamaterals. By integrating the meta-structures with ferrite materials of higher permeability, the cell size of the meta-structure can be scaled down. This paper describes the methodology, design, and development of low-profile GHz ferrite loaded metamaterials. A ferrite film with a permeability of 20 could reduce the resonant frequency of metamaterials by up to 50%. A prototype has been fabricated and the measurement data align well with the simulation results. Because of the lowered operational frequency, the proposed ferrite loaded metamaterials offer more flexibility for various sub-GHz microwave applications, such as cloaks, absorbers, and frequency selective surfaces. A reconfigurable frequency-selective surface for dual-mode multi-band filtering applications Majidzadeh, Maryam; Ghobadi, Changiz; Nourinia, Javad A reconfigurable single-layer frequency-selective surface (FSS) with dual-mode multi-band modes of operation is presented. The proposed structure is printed on a compact 10 × 10 mm2 FR4 substrate with the thickness of 1.6 mm. A simple square loop is printed on the front side while another one along with two defected vertical arms is deployed on the backside. To realise the reconfiguration, two pin diodes are embedded on the backside square loop. Suitable insertion of conductive elements along with pin diodes yields in dual-mode multi-band rejection of applicable in service frequency ranges. The first operating mode due to diodes' 'ON' state provides rejection of 2.4 GHz WLAN in 2-3 GHz, 5.2/5.8 GHz WLAN and X band in 5-12 GHz, and a part of Ku band in 13.9-16 GHz. In diodes 'OFF' state, the FSS blocks WLAN in 4-7.3 GHz, X band in 8-12.7 GHz as well as part of Ku band in 13.7-16.7 GHz. As well, high attenuation of incident waves is observed by a high shielding effectiveness (SE) in the blocked frequency bands. Also, a stable behaviour against different polarisations and angles of incidence is obtained. Comprehensive studies are conducted on a fabricated prototype to assess its performance from which encouraging results are obtained. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading Yahampath, Pradeepa Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission however is optimal at all CSNRs, if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. Analog part uses linear encoding to transmit the quantization error which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals. Rain Fade Compensation for Ka-Band Communications Satellites Mitchell, W. Carl; Nguyen, Lan; Dissanayake, Asoka; Markey, Brian; Le, Anh This report provides a review and evaluation of rain fade measurement and compensation techniques for Ka-band satellite systems. This report includes a description of and cost estimates for performing three rain fade measurement and compensation experiments. The first experiment deals with rain fade measurement techniques while the second one covers the rain fade compensation techniques. The third experiment addresses a feedback flow control technique for the ABR service (for ATM-based traffic). The following conclusions were observed in this report; a sufficient system signal margin should be allocated for all carriers in a network, that is a fixed clear-sky margin should be typically in the range of 4-5 dB and should be more like 15 dB in the up link for moderate and heavy rain zones; to obtain a higher system margin it is desirable to combine the uplink power control technique with the technique that implements the source information rate and FEC code rate changes resulting in a 4-5 dB increase in the dynamic part of the system margin. The experiments would assess the feasibility of the fade measurements and compensation techniques, and ABR feedback control technique. Performance Analysis of Iterative Channel Estimation and Multiuser Detection in Multipath DS-CDMA Channels Li, Husheng; Betz, Sharon M.; Poor, H. Vincent This paper examines the performance of decision feedback based iterative channel estimation and multiuser detection in channel coded aperiodic DS-CDMA systems operating over multipath fading channels. First, explicit expressions describing the performance of channel estimation and parallel interference cancellation based multiuser detection are developed. These results are then combined to characterize the evolution of the performance of a system that iterates among channel estimation, multiuser detection and channel decoding. Sufficient conditions for convergence of this system to a unique fixed point are developed. Fast Faraday fading of long range satellite signals. Heron, M. L. 20 MHz radio signals have been received during the day from satellite Beacon-B when it was below the optical horizon by using a bank of narrow filters to improve the signal to noise ratio. The Faraday fading rate becomes constant, under these conditions, at a level determined by the plasma frequency just below the F-layer peak. Variations in the Faraday fading rate reveal fluctuations in the electron density near the peak, while the rate of attaining the constant level depends on the shape of the electron density profile. Novel receivers for AF relaying with distributed STBC using cascaded and disintegrated channel estimation Khan, Fahd Ahmed; Chen, Yunfei; Alouini, Mohamed-Slim history in the receivers. The decision history is also incorporated in the existing Euclidean metric to improve its performance. Simulation results show that, for 16-quadrature-amplitude-modulation in a Rayleigh fading channel, a performance gain of up Exact performance of cooperative spectrum sensing for cognitive radios with quantized information under imperfect reporting channels Ben Ghorbel, Mahdi; Nam, Haewoon; Alouini, Mohamed-Slim between the sensing channels to overcome the fading and shadowing effects which allows reduction of miss-detection and false alarm probabilities. Information can be exchanged between cooperating users in different formats from the binary hard information A Framework for Uplink Intercell Interference Modeling with Channel-Based Scheduling Tabassum, Hina; Yilmaz, Ferkan; Dawy, Zaher; Alouini, Mohamed-Slim This paper presents a novel framework for modeling the uplink intercell interference(ICI) in a multiuser cellular network. The proposed framework assists in quantifying the impact of various fading channel models and state-of-the-art scheduling K Coverage Probability of 5G Wireless Cognitive Radio Network under Shadow Fading Effects Ankur S. Kang Full Text Available Land mobile communication is burdened with typical propagation constraints due to the channel characteristics in radio systems.Also,the propagation characteristics vary form place to place and also as the mobile unit moves,from time to time.Hence,the tramsmission path between transmitter and receiver varies from simple direct LOS to the one which is severely obstructed by buildings,foliage and terrain.Multipath propagation and shadow fading effects affect the signal strength of an arbitrary Transmitter-Receiver due to the rapid fluctuations in the phase and amplitude of signal which also determines the average power over an area of tens or hundreds of meters.Shadowing introduces additional fluctuations,so the received local mean power varies around the area –mean.The present section deals with the performance analysis of fifth generation wireless cognitive radio network on the basis of signal and interference level based k coverage probability under the shadow fading effects. Fading Evaluation in the 60 GHz Band in Line-of-Sight Conditions J. Reig Full Text Available An exhaustive analysis of the small-scale fading amplitude in the 60 GHz band is addressed for line-of-sight conditions (LOS. From a measurement campaign carried out in a laboratory, we have estimated the distribution of the small-scale fading amplitude over a bandwidth of 9 GHz. From the measured data, we have estimated the parameters of the Rayleigh, Rice, Nakagami-m, Weibull, and α-μ distributions for the small-scale amplitudes. The test of Kolmogorov-Smirnov (K-S for each frequency bin is used to evaluate the performance of such statistical distributions. Moreover, the distributions of the main estimated parameters for such distributions are calculated and approximated for lognormal statistics in some cases. The matching of the above distributions to the experimental distribution has also been analyzed for the lower tail of the cumulative distribution function (CDF. These parameters offer information about the narrowband channel behavior that is useful for a better knowledge of the propagation characteristics at 60 GHz. Adaptive Combined Source and Channel Decoding with Modulation ... In this paper, an adaptive system employing combined source and channel decoding with modulation is proposed for slow Rayleigh fading channels. Huffman code is used as the source code and Convolutional code is used for error control. The adaptive scheme employs a family of Convolutional codes of different rates ... Complex wireless transmission systems require multi-dimensional joint statistical techniques for performance evaluation. Here, we first present the exact closed-form results on order statistics of any arbitrary partial sums of Gamma random variables with the closedform results of core functions specialized for independent and identically distributed Nakagami-m fading channels based on a moment generating function-based unified analytical framework. These both exact closed-form results have never been published in the literature. In addition, as a feasible application example in which our new offered derived closed-form results can be applied is presented. In particular, we analyze the outage performance of the finger replacement schemes over Nakagami fading channels as an application of our method. Note that these analysis results are directly applicable to several applications, such as millimeter-wave communication systems in which an antenna diversity scheme operates using an finger replacement schemes-like combining scheme, and other fading scenarios. Note also that the statistical results can provide potential solutions for ordered statistics in any other research topics based on Gamma distributions or other advanced wireless communications research topics in the presence of Nakagami fading. On the performance of multiuser scheduling with post-examining under non-identical fading Gaaloul, Fakhreddine; Alouini, Mohamed-Slim; Radaydeh, Redha M.; Yang, Hong-Chuan We investigate the performance of a multiuser downlink access scheme based on a post-selection switch and examine algorithm. The studied scheme sequentially switches over the users that experience independent and non-identically distributed fading conditions, and selects a single user with an acceptable channel quality as compared to a pre-selected signal-to-noise ratio (SNR) threshold. If no one of the users can satisfy the target channel quality, the base station (BS) takes the advantage of the knowledge of all users channels and serves the user with the best channel quality among all users. This scheme reduces considerably the feedback load but offers a lower average spectral efficiency (ASE) as compared to that of the full feedback system with instantaneous best user selection. On the other hand, it improves the system performances, such as outage probability and average bit error rate (BER), as compared to a system that is based on a standard switching scheme without post-selection. Numerical results for the ASE average BER, and average feed back load, are presented for the cases of outdated and non-outdated rate-adaptive modulation scheme operating over independent and non-identically distributed users. Gaaloul, Fakhreddine Least-squares Migration and Full Waveform Inversion with Multisource Frequency Selection Multisource Least-Squares Migration (LSM) of phase-encoded supergathers has shown great promise in reducing the computational cost of conventional migration. But for the marine acquisition geometry this approach faces the challenge of erroneous misfit due to the mismatch between the limited number of live traces/shot recorded in the field and the pervasive number of traces generated by the finite-difference modeling method. To tackle this mismatch problem, I present a frequency selection strategy with LSM of supergathers. The key idea is, at each LSM iteration, to assign a unique frequency band to each shot gather, so that the spectral overlap among those shots—and therefore their crosstallk—is zero. Consequently, each receiver can unambiguously identify and then discount the superfluous sources—those that are not associated with the receiver in marine acquisition. To compare with standard migration, I apply the proposed method to 2D SEG/EAGE salt model and obtain better resolved images computed at about 1/8 the cost; results for 3D SEG/EAGE salt model, with Ocean Bottom Seismometer (OBS) survey, show a speedup of 40×. This strategy is next extended to multisource Full Waveform Inversion (FWI) of supergathers for marine streamer data, with the same advantages of computational efficiency and storage savings. In the Finite-Difference Time-Domain (FDTD) method, to mitigate spectral leakage due to delayed onsets of sine waves detected at receivers, I double the simulation time and retain only the second half of the simulated records. To compare with standard FWI, I apply the proposed method to 2D velocity model of SEG/EAGE salt and to Gulf Of Mexico (GOM) field data, and obtain a speedup of about 4× and 8×. Formulas are then derived for the resolution limits of various constituent wavepaths pertaining to FWI: diving waves, primary reflections, diffractions, and multiple reflections. They suggest that inverting multiples can provide some low and intermediate Further investigations on 'non-fading' in K-Feldspar Jain, Mayank; Buylaert, Jan-Pieter; Thomsen, Kristina Jørkov , suggesting that the higher the stimulation temperature in post IR-IRSL methods, the greater the ability to access distant electron hole pairs. We further find that the delayed off-time signal in time-resolved IRSL has immense potential for sampling non-fading signal and should be explored further... GPS Multipath Fade Measurements to Determine L-Band Ground Reflectivity Properties Kavak, Adnan; Xu, Guang-Han; Vogel, Wolfhard J. In personal satellite communications, especially when the line-of-sight is clear, ground specular reflected signals along with direct signals are received by low gain, almost omni-directional subscriber antennas. A six-channel, C/A code processing, GPS receiver with an almost omni-directional patch antenna was used to take measurements over three types of ground to characterize 1.575 GHz specular ground reflections and ground dielectric properties. Fade measurements were taken over grass, asphalt, and lake water surfaces by placing the antenna in a vertical position at a fixed height from the ground. Electrical characteristics (conductivity and dielectric constant) of these surfaces (grass, asphalt, lake water) were obtained by matching computer simulations to the experimental results. Since the electromagnetic spectrum resource becomes more and more scarce, improving spectral efficiency is extremely important for the sustainable development of wireless communication systems and services. Integrating cooperative relaying techniques into spectrum-sharing cognitive radio systems sheds new light on higher spectral efficiency. In this paper, we analyze the end-to-end performance of cooperative amplify-and-forward (AF) relaying in spectrum-sharing systems. In order to achieve the optimal end-to-end performance, the transmit powers of the secondary source and the relays are optimized with respect to average interference power constraints at primary users and Nakagami-$m$ fading parameters of interference channels (for mathematical tractability, the desired channels from secondary source to relay and from relay to secondary destination are assumed to be subject to Rayleigh fading). Also, both partial and opportunistic relay-selection strategies are exploited to further enhance system performance. Based on the exact distribution functions of the end-to-end signal-to-noise ratio (SNR) obtained herein, the outage probability, average symbol error probability, diversity order, and ergodic capacity of the system under study are analytically investigated. Our results show that system performance is dominated by the resource constraints and it improves slowly with increasing average SNR. Furthermore, larger Nakagami-m fading parameter on interference channels deteriorates system performance slightly. On the other hand, when interference power constraints are stringent, opportunistic relay selection can be exploited to improve system performance significantly. All analytical results are corroborated by simulation results and they are shown to be efficient tools for exact evaluation of system performance. Frequency-Selective Signal Sensing with Sub-Nyquist Uniform Sampling Scheme Pierzchlewski, Jacek; Arildsen, Thomas In this paper the authors discuss a problem of acquisition and reconstruction of a signal polluted by adjacent- channel interference. The authors propose a method to find a sub-Nyquist uniform sampling pattern which allows for correct reconstruction of selected frequencies. The method is inspired...... by the Restricted Isometry Property, which is known from the field of compressed sensing. Then, compressed sensing is used to successfully reconstruct a wanted signal even if some of the uniform samples were randomly lost, e. g. due to ADC saturation. An experiment which tests the proposed method in practice... Half-Duplex and Full-Duplex AF and DF Relaying with Energy-Harvesting in Log-Normal Fading Energy-harvesting (EH) and wireless power transfer in cooperative relaying networks have recently attracted a considerable amount of research attention. Most of the existing work on this topic however focuses on Rayleigh fading channels, which represent outdoor environments. In contrast, this paper is dedicated to analyze the performance of dual-hop relaying systems with EH over indoor channels characterized by log-normal fading. Both half-duplex (HD) and full-duplex (FD) relaying mechanisms are studied in this work with decode-and-forward (DF) and amplify-and-forward (AF) relaying protocols. In addition, three EH schemes are investigated, namely, time switching relaying, power splitting relaying and ideal relaying receiver which serves as a lower bound. The system performance is evaluated in terms of the ergodic outage probability for which we derive accurate analytical expressions. Monte Carlo simulations are provided throughout to validate the accuracy of our analysis. Results reveal that, in both HD and FD scenarios, AF relaying performs only slightly worse than DF relaying which can make the former a more efficient solution when the processing energy cost at the DF relay is taken into account. It is also shown that FD relaying systems can generally outperform HD relaying schemes as long as the loop-back interference in FD is relatively small. Furthermore, increasing the variance of the log-normal channel has shown to deteriorate the performance in all the relaying and EH protocols considered. Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225 Oubei, Hassan M. Recent advances in underwater wireless optical communications necessitate a better understanding of the underwater channel. We propose the Weibull model to characterize the fading of salinity induced turbulent underwater wireless optical channels. The model shows an excellent agreement with the measured data under all channel conditions. On the secrecy capacity of the wiretap channel with imperfect main channel estimation We study the secrecy capacity of fast fading channels under imperfect main channel (between the transmitter and the legitimate receiver) estimation at the transmitter. Lower and upper bounds on the ergodic secrecy capacity are derived for a class of independent identically distributed (i.i.d.) fading channels. The achievable rate follows from a standard wiretap code in which a simple on-off power control is employed along with a Gaussian input. The upper bound is obtained using an appropriate correlation scheme of the main and eavesdropper channels and is the best known upper bound so far. The upper and lower bounds coincide with recently derived ones in case of perfect main CSI. Furthermore, the upper bound is tight in case of no main CSI, where the secrecy capacity is equal to zero. Asymptotic analysis at high and low signal-to-noise ratio (SNR) is also given. At high SNR, we show that the capacity is bounded by providing upper and lower bounds that depend on the channel estimation error. At low SNR, however, we prove that the secrecy capacity is asymptotically equal to the capacity of the main channel as if there were no secrecy constraint. Numerical results are provided for i.i.d. Rayleigh fading channels. Performance of fading multi-user diversity for underlay cognitive networksy Khan, Fahd Ahmed Having multiple secondary users (SUs) can be exploited to achieve multiuser diversity and improve the throughput of the underlay secondary network. In the cognitive setting, satisfying the interference constraint is essential, and thus, a scheduling scheme is considered where some SUs are preselected based on the low interference power. From this subset, the SU that yields the highest throughput is selected for transmission. This scheduling scheme helps to lower the interference power while giving good throughput. For an independent but not identically distributed Nakagami-m fading channel, we obtain exact closed-form expressions of the capacity of this scheduling scheme. Furthermore, the scheduling time of SUs is characterized and closed-form expressions for the mean time after which a SU is selected for transmission are obtained. Numerical simulations are performed to corroborate the derived analytical results. Our results show that at low interference threshold, increasing transmit power of the SUs is not beneficial and results in reduced capacity. Furthermore, the channel idle time (i.e. time that no user is utilizing the channel) reduces with increasing the number of SUs. © 2013 IEEE. On the secrecy capacity of the broadcast wiretap channel with imperfect channel state information In this paper, we consider secure broadcasting over fast fading channels. Assuming imperfect main channel state information (CSI) at the transmitter, we first provide an upper and a lower bounds on the ergodic secrecy capacity when a common message is broadcasted to multiple legitimate receivers in the presence of one eavesdropper. For this case, we show that the secrecy rate is limited by the legitimate receiver having, on average, the worst main channel link. Then, we present an expression for the achievable secrecy sum-rate when each legitimate receiver is interested in an independent message. The special cases of high SNR, perfect and no-main CSI are also analyzed. Numerical results are presented to illustrate the obtained results for the case of independent but not necessarily identically distributed Rayleigh fading channels. FADES: A tool for automated fault analysis of complex systems Wood, C. FADES is an Expert System for performing fault analyses on complex connected systems. By using a graphical editor to draw components and link them together the FADES system allows the analyst to describe a given system. The knowledge base created is used to qualitatively simulate the system behaviour. By inducing all possible component failures in the system and determining their effects, a set of facts is built up. These facts are then used to create Fault Trees, or FMEA tables. The facts may also be used for explanation effects and to generate diagnostic rules allowing system instrumentation to be optimised. The prototype system has been built and tested and is preently undergoing testing by users. All comments from these trials will be used to tailor the system to the requirements of the user so that the end product performs the exact task required Troxler Fading, Eye Movements, and Retinal Ganglion Cell Properties Romain Bachy Full Text Available We present four movies demonstrating the effect of flicker and blur on the magnitude and speed of adaptation for foveal and peripheral vision along the three color axes that isolate retinal ganglion cells projecting to magno, parvo, and konio layers of the LGN. The demonstrations support the eye movement hypothesis for Troxler fading for brightness and color, and demonstrate the effects of flicker and blur on adaptation of each class of retinal ganglion cells. Global solutions to the equation of thermoelasticity with fading memory Okada, Mari; Kawashima, Shuichi We consider the initial-history value problem for the one-dimensional equation of thermoelasticity with fading memory. It is proved that if the data are smooth and small, then a unique smooth solution exists globally in time and converges to the constant equilibrium state as time goes to infinity. Our proof is based on a technical energy method which makes use of the strict convexity of the entropy function and the properties of strongly positive definite kernels. Radiation degradation of molasses pigment. The fading color and product Sawai, Teruko; Sekiguchi, Masayuki; Tanabe, Hiroko; Sawai, Takeshi Water demand in Tokyo has increased rapidly. Because of the scarcity of water supply sources within the city, Tokyo is dependent on the water from other prefectures. Recycling of municipal effluent is an effective means of coping with water shortage in Tokyo. We have studied the radiation treatment of waste water for recycling. In this paper, the radiation decomposition methods for fading color of molasses pigment in the effluent from the sewage treatment plant and in the food industry wastwater were reported. The refractory organic substances (molasses pigment) in samples were degraded by gamma irradiation. The COD values decreased and the dark brown color faded with increasing dosage. The high molecular weight components of molasses pigment were degraded to lower molecular weight substances and were decomposed finally to carbon dioxide. The organic acids, such as formic acid, acetic acid, oxalic acid, citric acid and succinic acid were measured as intermediates of radiolytic decomposition. When we added hydrogen peroxide in samples to the gamma irradiation process, the dark brown color of molasses pigment faded with greater efficiency. (author) Analysis of Fade Dynamic at Ku-Band in Malaysia Siat Ling Jong Full Text Available This work investigates fade dynamics of satellite communication systems in equatorial heavy rain region based on a one year of Ku-band propagation measurement campaign carried out in Universiti Teknologi Malaysia (UTM, Johor, Malaysia. First order statistics of rain attenuation are deduced and the results are found to be in good agreement with those obtained from other beacon measurements gathered within the same area (Kuala Lumpur. Moreover, the fade duration and slope statistics of the satellite signal variations are also carefully derived and subsequently compared with the ITU-R recommendation model. Such information is useful for the system operator and radio communication engineer for the design of appropriate fade mitigation techniques as well as the quality of service that could be offered to the user (according to the time interval for a typical day. Further evaluation on the performances of several ITU-R models in the heavy rain region are needed based on the measurement database available of this climatic region. DISK-RELATED BURSTS AND FADES IN YOUNG STARS Findeisen, Krzysztof; Hillenbrand, Lynne; Levitan, David; Sesar, Branimir; Ofek, Eran; Laher, Russ; Surace, Jason We present first results from a new, multiyear, time domain survey of young stars in the North America Nebula complex using the Palomar Transient Factory. Our survey is providing an unprecedented view of aperiodic variability in young stars on timescales of days to years. The analyzed sample covers R PTF ≈ 13.5-18 and spans a range of mid-infrared color, with larger-amplitude optical variables (exceeding 0.4 mag root mean squared) more likely to have mid-infrared evidence for circumstellar material. This paper characterizes infrared excess stars with distinct bursts above or fades below a baseline of lower-level variability, identifying 41 examples. The light curves exhibit a remarkable diversity of amplitudes, timescales, and morphologies, with a continuum of behaviors that cannot be classified into distinct groups. Among the bursters, we identify three particularly promising sources that may represent theoretically predicted short-timescale accretion instabilities. Finally, we find that fading behavior is approximately twice as common as bursting behavior on timescales of days to years, although the bursting and fading duty cycle for individual objects often varies from year to year. Transmission of Voice Signal: BER Performance Analysis of Different FEC Schemes Based OFDM System over Various Channels Rashed, Md. Golam; Kabir, M. Hasnat; Reza, Md. Selim; Islam, Md. Matiqul; Shams, Rifat Ara; Masum, Saleh; Ullah, Sheikh Enayet In this paper, we investigate the impact of Forward Error Correction (FEC) codes namely Cyclic Redundancy Code and Convolution Code on the performance of OFDM wireless communication system for speech signal transmission over both AWGN and fading (Rayleigh and Rician) channels in term of Bit Error Probability. The simulation has been done in conjunction with QPSK digital modulation and compared with uncoded resultstal modulation. In the fading channels, it is found via computer simulation that... The ergodic secrecy capacity of the wiretap channel is known when the main channel (between the transmitter and the legitimate receiver) state information (CSI) is perfect at the transmitter and the coherence period is sufficiently large to enable random coding arguments in each block. In a fast fading scenario, when the codeword length spans many coherence periods, the secrecy capacity is still not known. In this paper, we present a framework that characterizes this secrecy capacity under imperfect main channel estimation at the transmitter. Inner and outer bounds on the ergodic secrecy capacity are derived for a class of independent identically distributed (i.i.d.) fading channels. The achievable rate is a simple on-off scheme using a Gaussian input. The upper bound is obtained using an appropriate correlation scheme of the main and the eavesdropper channels. The upper and the lower bounds coincide with recently derived ones in the perfect main CSI extreme. Furthermore, the lower bound matches the upper bound in no main CSI extreme, where the secrecy capacity is equal to zero. Numerical results are provided for independent identically distributed (i.i.d.) Rayleigh fading channels. © 2011 IEEE. Fading AGN Candidates: AGN Histories and Outflow Signatures Keel, William C.; Maksym, W. Peter [Department of Physics and Astronomy, University of Alabama, Box 870324, Tuscaloosa, AL 35487 (United States); Lintott, Chris J. [Astrophysics, Oxford University and Adler Planetarium, 1300 S. Lakeshore Drive, Chicago, IL 60605 (United States); Bennert, Vardha N.; Scott, Bryan; Showley, Charles; Flatland, Kelsi [Physics Department, California Polytechnic State University, San Luis Obispo, CA 93407 (United States); Chojnowski, S. Drew [Department of Astronomy, New Mexico State University, P.O. Box 30001, MSC 4500, Las Cruces, NM 88003-8001 (United States); Moiseev, Alexei; Smirnova, Aleksandrina [Special Astrophysical Observatory, Russian Academy of Sciences, Nizhny Arkhyz, 369167 (Russian Federation); Schawinski, Kevin; Sartori, Lia F. [Institute for Astronomy, ETH Zürich, Wolfgang-Pauli-Straße 27, CH-8093 Zurich (Switzerland); Urry, C. Megan [Department of Physics, Yale University, P.O. Box 208120, New Haven, CT 06520-8120 (United States); Pancoast, Anna [Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States); Schirmer, Mischa, E-mail: [email protected] [Gemini Observatory, La Serena (Chile) We consider the energy budgets and radiative history of eight fading active galactic nuclei (AGNs), identified from an energy shortfall between the requirements to ionize very extended (radius > 10 kpc) ionized clouds and the luminosity of the nucleus as we view it directly. All show evidence of significant fading on timescales of ≈50,000 yr. We explore the use of minimum ionizing luminosity Q {sub ion} derived from photoionization balance in the brightest pixels in H α at each projected radius. Tests using presumably constant Palomar–Green QSOs, and one of our targets with detailed photoionization modeling, suggest that we can derive useful histories of individual AGNs, with the caveat that the minimum ionizing luminosity is always an underestimate and subject to uncertainties about fine structure in the ionized material. These consistency tests suggest that the degree of underestimation from the upper envelope of reconstructed Q {sub ion} values is roughly constant for a given object and therefore does not prevent such derivation. The AGNs in our sample show a range of behaviors, with rapid drops and standstills; the common feature is a rapid drop in the last ≈2×10{sup 4} yr before the direct view of the nucleus. The e -folding timescales for ionizing luminosity are mostly in the thousands of years, with a few episodes as short as 400 yr. In the limit of largely obscured AGNs, we find additional evidence for fading from the shortfall between even the lower limits from recombination balance and the maximum luminosities derived from far-infrared fluxes. We compare these long-term light curves, and the occurrence of these fading objects among all optically identified AGNs, to simulations of AGN accretion; the strongest variations over these timespans are seen in models with strong and local (parsec-scale) feedback. We present Gemini integral-field optical spectroscopy, which shows a very limited role for outflows in these ionized structures. While rings and Keel, William C.; Maksym, W. Peter; Lintott, Chris J.; Bennert, Vardha N.; Scott, Bryan; Showley, Charles; Flatland, Kelsi; Chojnowski, S. Drew; Moiseev, Alexei; Smirnova, Aleksandrina; Schawinski, Kevin; Sartori, Lia F.; Urry, C. Megan; Pancoast, Anna; Schirmer, Mischa We consider the energy budgets and radiative history of eight fading active galactic nuclei (AGNs), identified from an energy shortfall between the requirements to ionize very extended (radius > 10 kpc) ionized clouds and the luminosity of the nucleus as we view it directly. All show evidence of significant fading on timescales of ≈50,000 yr. We explore the use of minimum ionizing luminosity Q ion derived from photoionization balance in the brightest pixels in H α at each projected radius. Tests using presumably constant Palomar–Green QSOs, and one of our targets with detailed photoionization modeling, suggest that we can derive useful histories of individual AGNs, with the caveat that the minimum ionizing luminosity is always an underestimate and subject to uncertainties about fine structure in the ionized material. These consistency tests suggest that the degree of underestimation from the upper envelope of reconstructed Q ion values is roughly constant for a given object and therefore does not prevent such derivation. The AGNs in our sample show a range of behaviors, with rapid drops and standstills; the common feature is a rapid drop in the last ≈2×10 4 yr before the direct view of the nucleus. The e -folding timescales for ionizing luminosity are mostly in the thousands of years, with a few episodes as short as 400 yr. In the limit of largely obscured AGNs, we find additional evidence for fading from the shortfall between even the lower limits from recombination balance and the maximum luminosities derived from far-infrared fluxes. We compare these long-term light curves, and the occurrence of these fading objects among all optically identified AGNs, to simulations of AGN accretion; the strongest variations over these timespans are seen in models with strong and local (parsec-scale) feedback. We present Gemini integral-field optical spectroscopy, which shows a very limited role for outflows in these ionized structures. While rings and loops of emission Stochastic Power Control for Time-Varying Long-Term Fading Wireless Networks Charalambous Charalambos D Full Text Available A new time-varying (TV long-term fading (LTF channel model which captures both the space and time variations of wireless systems is developed. The proposed TV LTF model is based on a stochastic differential equation driven by Brownian motion. This model is more realistic than the static models usually encountered in the literature. It allows viewing the wireless channel as a dynamical system, thus enabling well-developed tools of adaptive and nonadaptive estimation and identification techniques to be applied to this class of problems. In contrast with the traditional models, the statistics of the proposed model are shown to be TV, but converge in steady state to their static counterparts. Moreover, optimal power control algorithms (PCAs based on the new model are proposed. A centralized PCA is shown to reduce to a simple linear programming problem if predictable power control strategies (PPCS are used. In addition, an iterative distributed stochastic PCA is used to solve for the optimization problem using stochastic approximations. The latter solely requires each mobile to know its received signal-to-interference ratio. Generalizations of the power control problem based on convex optimization techniques are provided if PPCS are not assumed. Numerical results show that there are potentially large gains to be achieved by using TV stochastic models, and the distributed stochastic PCA provides better power stability and consumption than the distributed deterministic PCA. Efficient collaborative sparse channel estimation in massive MIMO Masood, Mudassir; Afify, Laila H.; Al-Naffouri, Tareq Y. We propose a method for estimation of sparse frequency selective channels within MIMO-OFDM systems. These channels are independently sparse and share a common support. The method estimates the impulse response for each channel observed by the antennas at the receiver. Estimation is performed in a coordinated manner by sharing minimal information among neighboring antennas to achieve results better than many contemporary methods. Simulations demonstrate the superior performance of the proposed method. Masood, Mudassir Fading of LiF and CaF2:Dy Ben-Shachar, B.; German, U.; Weiser, G. The fading of LiF and CaF 2 :Dy was investigated and the results were compared to the literature. The effect of thermal annealing was studied in order to reduce the fading in both phosphors and to minimize the effects of the environment on CaF 2 :Dy. Minimizing the fading and knowing its time dependence make possible the exact personal and environmental dosimetry. (Author) Fade statistics of M-turbulent optical links Jurado-Navas, Antonio; Maria Garrido-Balsells, Jose; Castillo-Vazquez, Miguel A new and generalized statistical model, called Malaga or simply M distribution, has been derived recently to characterize the irradiance fluctuations of an unbounded optical wavefront propagating through a turbulent medium under all irradiance fluctuation conditions. The aforementioned model...... extends and unifies in a simple analytical closed-form expression most of the proposed statistical models for free-space optical (FSO) communications widely employed until now in the scientific literature. Based on that M model, we have studied some important features associated to its fade statistics... Inkjet printed paper based frequency selective surfaces and skin mounted RFID tags : the interrelation between silver nanoparticle ink, paper substrate and low temperature sintering technique Sanchez-Romaquera, V.; Wïnscher, S.; Turki, B.M.; Abbel, R.J.; Barbosa, S.; Tate, D.J.; Oyeka, D.; Batchelor, J.C.; Parker, E.A.; Schubert, U.S.; Yeates, S.G. Inkjet printing of functional frequency selective surfaces (FSS) and radio frequency identification (RFID) tags on commercial paper substrates using silver nanoparticle inks sintered using low temperature thermal, plasma and photonic techniques is reported. Printed and sintered FSS devices The fading of irradiated blue-colored pearls Okamoto, Shinichi The fading of irradiated and natural blue-colored pearls was investigated in this experiment. Thirty natural blue-colored pearls and sixty irradiated blue-colored pearls were used. Some of them were placed at a light position of RT. Another pearls were placed at a dark position of 50 0 C. The irradiated pearls placed at a light position of RT didn't show remarkable fading in their color in 294 days. But the natural blue-colored pearls showed a little recovery from 4% to 8% in reflection factors in 223 days at RT. The irradiated pearls placed at a dark position of 50 0 C showed the recovery from 9% to 14% in 264 days independently of irradiation times. The natural blue-colored pearls also showed the bleaching from 5% to 10% in reflection factor in 86 days at 50 0 C. Both irradiated and natural blue-colored pearls hardly showed their remarkable changes in their chromaticities independently of temperatures. (author) Color transplant for reverse ageing of faded artworks Del Mastio, A.; Piva, A.; Barni, M.; Cappellini, V.; Stefanini, L. Nowadays, photographs are one of the most used media for communication. Images are used for the representation of documents, Cultural goods, and so on: they are used to pass on a wedge of historical memory of the society. Since its origin, the photographic technique has got several improvements; nevertheless, photos are liable to several damages, both concerning the physical support and concerning the colors and figures which are depicted in it: for example, think about scratches or rips happened to a photo, or think about the fading or red (or yellow) toning concerning the colors of a photo. In this paper, we propose a novel method which is able to assess the original beauty of digital reproductions of aged photos, as well as digital reproductions of faded goods. The method is based on the comparison of the degraded image with a not-degraded one showing similar contents; thus, the colors of the not-degraded image can be transplanted in the degraded one. The key idea is a dualism between the analytical mechanics and the color theory: for each of the degraded and not-degraded images we compute first a scatter plot of the x and y normalized coordinates of their colors; these scatter diagrams can be regarded as a system of point masses, thus provided of inertia axes and an inertia ellipsoid. Moving the scatter diagram of the degraded image over the one belonging to the not-degraded image, the colors of the degraded image can be restored. Tackling capacity fading in vanadium flow batteries with amphoteric membranes Oldenburg, Fabio J.; Schmidt, Thomas J.; Gubler, Lorenz Capacity fading and poor electrolyte utilization caused by electrolyte imbalance effects are major drawbacks for the commercialization of vanadium flow batteries (VFB). The influence of membrane type (cationic, anionic, amphoteric) on these effects is studied by determining the excess and net flux of each vanadium ion in an operating VFB assembled with a cation exchange membrane (CEM), Nafion® NR212, an anion exchange membrane (AEM), Fumatech FAP-450, and an amphoteric ion exchange membrane (AIEM) synthesized in-house. It is shown that the net vanadium flux, accompanied by water transport, is directed towards the positive side for the CEM and towards the negative side for the AEM. The content of cation and anion exchange groups in the AIEM is adjusted via radiation grafting to balance the vanadium flux between the two electrolyte sides. With the AIEM the net vanadium flux is significantly reduced and capacity fading due to electrolyte imbalances can be largely eliminated. The membrane's influence on electrolyte imbalance effects is characterized and quantified in one single charge-discharge cycle by analyzing the content of the four different vanadium species in the two electrolytes. The experimental data recorded herewith conclusively explains the electrolyte composition after 80 cycles. Advanced Channel Estimation and Multiuser Detection in GSM Arildsen, Thomas; Blauendahl, Jesper A single-antenna interference cancellation-capable data detector employing the SAGE-algorithm for GSM downlink transmission with co-channel interference has been designed and tested. Two scenarios were considered: First, a frequency-flat Rayleigh fading scenario with synchronously received users... Wide-range frequency selectivity in an acoustic sensor fabricated using a microbeam array with non-uniform thickness Shintaku, Hirofumi; Kotera, Hidetoshi; Kobayashi, Takayuki; Zusho, Kazuki; Kawano, Satoyuki In this study, we have demonstrated the fabrication of a microbeam array (MBA) with various thicknesses and investigated the suitability it for an acoustic sensor with wide-range frequency selectivity. For this, an MBA composed of 64 beams, with thicknesses varying from 2.99–142 µm, was fabricated by using single gray-scale lithography and a thick negative photoresist. The vibration of the beams in air was measured using a laser Doppler vibrometer; the resonant frequencies of the beams were measured to be from 11.5 to 290 kHz. Lastly, the frequency range of the MBA with non-uniform thickness was 10.9 times that of the MBA with uniform thickness. (paper) Design of a Novel Miniaturized Frequency Selective Surface Based on 2.5-Dimensional Jerusalem Cross for 5G Applications Full Text Available A compact frequency selective surface (FSS for 5G applications has been designed based on 2.5-dimensional Jerusalem cross. The proposed element consists of two main parts: the successive segments of the metal traces placed alternately on the two surfaces of the substrate and the vertical vias connecting traces. Compared with previous published two-dimensional miniaturized elements, the transmission curves indicate a significant size reduction (1/26 wavelengths at the resonant frequency and exhibit good angular and polarization stabilities. Furthermore, a general equivalent circuit model is established to provide direct physical insight into the operating principle of this FSS. A prototype of the proposed FSS has been fabricated and measured, and the results validate this design. Design, Fabrication and Characterization of MIM Diodes and Frequency Selective Thermal Emitters for Solar Energy Harvesting and Detection Devices Sharma, Saumya could be achieved in this case. These long chain polymeric molecules exhibit a two-dimensional molecular assembly thereby reducing the tunneling distance between the metal electrodes on either side of the insulating layer. Rectification ratios as high as 450:1 at +/-200mV were obtained for an MIM diode configuration of Ni-LB films of Arachidic Acid films-(Au/Pd). The bandwidth of the incident radiation that can be used by this rectenna assembly is limited to 9.5% of 30THz or +/-1.5THz from the center frequency based on the antenna designs which were proposed for this research. This bandwidth constraint has led to research in the field of frequency selective emitters capable of providing a narrowband emission around 30THz. Several grating structures were fabricated in the form of Ni-Si periodic arrays, in a cleanroom environment using photolithography, sputtering and deep reactive ion etching. These frequency selective samples were characterized with the help of focusing optics, monochromators and HgCdTe detectors. The results obtained from the emission spectra were utilized to calibrate a simulation model with Computer Simulation Technology (CST) which uses numerous robust solving techniques, such as the finite element method, in order to obtain the optical parameters for the model. Thereafter, a thorough analysis of the different dimensional and material parameters was performed, to understand their dependence on the emissivity of the selective emitter. Further research on the frequency selectivity of the periodic nano-disk or nano-hole array led to the temperature dependence of the simulated spectra, because the material parameters, such as refractive index or drude model collision frequency, vary with temperature. Thus, the design of frequency selective absorbers/emitters was found to be significantly affected with temperature range of operation of these structures. An ultrathin wide-band planar metamaterial absorber based on a fractal frequency selective surface and resistive film Fan Yue-Nong; Cheng Yong-Zhi; Nie Yan; Wang Xian; Gong Rong-Zhou We propose an ultrathin wide-band metamaterial absorber (MA) based on a Minkowski (MIK) fractal frequency selective surface and resistive film. This absorber consists of a periodic arrangement of dielectric substrates sandwiched with an MIK fractal loop structure electric resonator and a resistive film. The finite element method is used to simulate and analyze the absorption of the MA. Compared with the MA-backed copper film, the designed MA-backed resistive film exhibits an absorption of 90% at a frequency region of 2 GHz–20 GHz. The power loss density distribution of the MA is further illustrated to explain the mechanism of the proposed MA. Simulated absorptions at different incidence cases indicate that this absorber is polarization-insensitive and wide-angled. Finally, further simulated results indicate that the surface resistance of the resistive film and the dielectric constant of the substrate can affect the absorbing property of the MA. This absorber may be used in many military fields Underwater wireless optical communications: From system-level demonstrations to channel modelling In this paper, we discuss about recent experimental advances in underwater wireless optical communications (UWOC) over various underwater channel water types using different modulation schemes as well as modelling and describing the statistical properties of turbulence-induced fading in underwater wireless optical channels using laser beam intensity fluctuations measurements. Training sequence design for MIMO channels : An application-oriented approach Katselis, D.; Rojas, C.R.; Bengtsson, M.; Bjornson, E.; Bombois, X.; Shariati, N.; Jansson, M.; Hjalmarsson, H. In this paper, the problem of training optimization for estimating a multiple-input multiple-output (MIMO) flat fading channel in the presence of spatially and temporally correlated Gaussian noise is studied in an application-oriented setup. So far, the problem of MIMO channel estimation has mostly The FADE mass-stat: A technique for inserting or deleting particles in molecular dynamics simulations Borg, Matthew K., E-mail: [email protected] [Department of Mechanical and Aerospace Engineering, University of Strathclyde, Glasgow G1 1XJ (United Kingdom); Lockerby, Duncan A., E-mail: [email protected] [School of Engineering, University of Warwick, Coventry CV4 7AL (United Kingdom); Reese, Jason M., E-mail: [email protected] [School of Engineering, University of Edinburgh, Edinburgh EH9 3JL (United Kingdom) The emergence of new applications of molecular dynamics (MD) simulation calls for the development of mass-statting procedures that insert or delete particles on-the-fly. In this paper we present a new mass-stat which we term FADE, because it gradually "fades-in� (inserts) or "fades-out� (deletes) molecules over a short relaxation period within a MD simulation. FADE applies a time-weighted relaxation to the intermolecular pair forces between the inserting/deleting molecule and any neighbouring molecules. The weighting function we propose in this paper is a piece-wise polynomial that can be described entirely by two parameters: the relaxation time scale and the order of the polynomial. FADE inherently conserves overall system momentum independent of the form of the weighting function. We demonstrate various simulations of insertions of atomic argon, polyatomic TIP4P water, polymer strands, and C{sub 60} Buckminsterfullerene molecules. We propose FADE parameters and a maximum density variation per insertion-instance that restricts spurious potential energy changes entering the system within desired tolerances. We also demonstrate in this paper that FADE compares very well to an existing insertion algorithm called USHER, in terms of accuracy, insertion rate (in dense fluids), and computational efficiency. The USHER algorithm is applicable to monatomic and water molecules only, but we demonstrate that FADE can be generally applied to various forms and sizes of molecules, such as polymeric molecules of long aspect ratio, and spherical carbon fullerenes with hollow interiors. Borg, Matthew K.; Lockerby, Duncan A.; Reese, Jason M. The emergence of new applications of molecular dynamics (MD) simulation calls for the development of mass-statting procedures that insert or delete particles on-the-fly. In this paper we present a new mass-stat which we term FADE, because it gradually "fades-in� (inserts) or "fades-out� (deletes) molecules over a short relaxation period within a MD simulation. FADE applies a time-weighted relaxation to the intermolecular pair forces between the inserting/deleting molecule and any neighbouring molecules. The weighting function we propose in this paper is a piece-wise polynomial that can be described entirely by two parameters: the relaxation time scale and the order of the polynomial. FADE inherently conserves overall system momentum independent of the form of the weighting function. We demonstrate various simulations of insertions of atomic argon, polyatomic TIP4P water, polymer strands, and C 60 Buckminsterfullerene molecules. We propose FADE parameters and a maximum density variation per insertion-instance that restricts spurious potential energy changes entering the system within desired tolerances. We also demonstrate in this paper that FADE compares very well to an existing insertion algorithm called USHER, in terms of accuracy, insertion rate (in dense fluids), and computational efficiency. The USHER algorithm is applicable to monatomic and water molecules only, but we demonstrate that FADE can be generally applied to various forms and sizes of molecules, such as polymeric molecules of long aspect ratio, and spherical carbon fullerenes with hollow interiors Computer game-based mathematics education : Embedded faded worked examples facilitate knowledge acquisition ter Vrugte, Judith; de Jong, Anthonius J.M.; Vandercruysse, Sylke; Wouters, Pieter; van Oostendorp, Herre; Elen, Jan This study addresses the added value of faded worked examples in a computer game-based learning environment. The faded worked examples were introduced to encourage active selection and processing of domain content in the game. The content of the game was proportional reasoning and participants were Dependence of (anomalous) fading of infra-red stimulated luminescence on trap occupancy in feldspars Morthekai, P.; Jain, Mayank; Gach, Grzegorz Dose dependency of anomalous fading of infra-red stimulated luminescence (IRSL) from feldspars has been investigated using radiations of different ionisation qualities. The rate of fading of the IRSL signal after proton irradiation (9.4–30%/decade) is on an average almost twice compared... Outage probability of distributed beamforming with co-channel interference In this letter, we consider a distributed beamforming scheme (DBF) in the presence of equal-power co-channel interferers for both amplify-and-forward and decode-and-forward relaying protocols over Rayleigh fading channels. We first derive outage probability expressions for the DBF systems. We then present a performance analysis for a scheme relying on source selection. Numerical results are finally presented to verify our analysis. © 2011 IEEE. Polarimetry and photometry of M87: is the jet fading Warren-Smith, R F; King, D J; Scarrott, S M [Durham Univ. (UK). Dept. of Physics Optical linear polarization mapping and photometry of M87 is presented in B and R wavebands. The results indicate significant polarization in the galactic nucleus. Polarization in the jet in B is consistent with other recent maps obtained for this area in blue light and no significant variation with wavelength between B and R wavebands is found. The spectral index of the jet radiation is measured to be S=-1.65+-0.2 over the wavelength range observed. Comparison of the integrated B magnitude of the jet with previous independent measurements over the period 1934-80 suggests that the jet is variable and has been fading more or less uniformly by about 0.8 mag per decade between 1964 and 1980. Developing a numerical simulation for fading in feldspar Larsen, A.; Greilich, S.; Jain, M.; Murray, A.S. Most models describing anomalous fading in feldspars are based on analytical solutions. As an alternative approach we present an entirely numerical model based on statistical sampling that simulates stepwise the charge creation/trapping and recombination in a given phosphor. We assume the number density of electrons and holes at any time to be equal, although the model is not bound to that assumption. The model is flexible enough to be used on any combination of geological and laboratory timescales and for any defined configuration of defects. Using this approach we observed reorganization of nearest-neighbor distances with time. The best agreement with experimental data is achieved if we assume the crystal to consist of small subvolumes (nanocrystals) only within which charge carriers are allowed to recombine. Outage and BER analysis for ultrawideband-based WPAN in Nakagami-m fading channels This paper presents a performance analysis of multiband orthogonal frequency-division multiplexing (MB-OFDM) in ultra wideband (UWB)-based personal area networks (UPANs). A UPAN consists of devices with different UWB technologies at the physical layer. Approximate expressions for the outage probability and average bit error rate (BER) are derived in closed form for the MB-OFDM target receiver, taking into account multi-user interference (MUI), as well as external interference in the form of time-hopping (TH) and direct-sequence (DS) UWB signals. © 2010 IEEE. Correlation properties of dual polarized antennas with finite pattern orthogonality in mobile fading channels G. Armbrecht Full Text Available Starting from planar broadband log.-per. antenna design, offering the possibility of dual-polarized reception properties, in this article a generalized mathematical approach for rapidly estimating the resulting signal correlation coefficient in a stochastically modeled propagation environment solely based on measured or simulated radiation characteristics of one single antenna element is presented. The obtained results are marking an upper limit and are describing the worst-case scenario according to the signal correlation at the antenna feeding points in terms of line-of-sight (LOS reception in main beam direction. The knowledge of the derived relationship may be helpful especially for antenna designers to combine antenna performance values with the significant communication system performance parameters, as e.g. in case of Multiple-Input Multiple-Output (MIMO and diversity configurations. Nam, Sungsik; Yang, Hongchuan; Alouini, Mohamed-Slim; Kim, Dongin variates. With this motivation in mind, we first provide in this paper some new order statistics results in terms of both moment generating function (MGF) and probability density function (PDF) expressions under an i.n.d. assumption and then derive a new Nam, Sungsik; Alouini, Mohamed-Slim; Hasna, Mazen Omar combined paths power) to that of the total available power. The major difficulty in these problems is to derive some joint statistics of ordered exponential variates. With this motivation in mind, we capitalize in this paper on some new order statistics Ansari, Imran Shafique; Abdallah, Mohamed M.; Alouini, Mohamed-Slim; Qaraqe, Khalid A. In this work, we present the performance analysis of a dual-hop transmission system composed of asymmetric radio frequency (RF) and free-space optical (FSO) links in underlay cognitive networks. For the RF link, we consider an underlay cognitive Optimal throughput for cognitive radio with energy harvesting in fading wireless channel. Vu-Van, Hiep; Koo, Insoo Energy resource management is a crucial problem of a device with a finite capacity battery. In this paper, cognitive radio is considered to be a device with an energy harvester that can harvest energy from a non-RF energy resource while performing other actions of cognitive radio. Harvested energy will be stored in a finite capacity battery. At the start of the time slot of cognitive radio, the radio needs to determine if it should remain silent or carry out spectrum sensing based on the idle probability of the primary user and the remaining energy in order to maximize the throughput of the cognitive radio system. In addition, optimal sensing energy and adaptive transmission power control are also investigated in this paper to effectively utilize the limited energy of cognitive radio. Finding an optimal approach is formulated as a partially observable Markov decision process. The simulation results show that the proposed optimal decision scheme outperforms the myopic scheme in which current throughput is only considered when making a decision. Chelli, Ali; Zedini, Emna; Alouini, Mohamed-Slim; Barry, John R.; Pä tzold, Matthias the performance of HARQ from an information theoretic perspective. Analytical expressions are derived for the \\epsilon-outage capacity, the average number of transmissions, and the average transmission rate of HARQ with incremental redundancy assuming a maximum
CommonCrawl
Asian Pacific Journal of Cancer Prevention Pages.7561-7566 Asian Pacific Organization for Cancer Prevention (아시아태평양암예방학회) Dietary Ziziphus jujuba Fruit Influence on Aberrant Crypt Formation and Blood Cells in Colitis-Associated Colorectal Cancer Mice Periasamy, Srinivasan (Department of Environmental and Occupational Health, College of Medicine, National Cheng Kung University) ; Liu, Chung-Teng (Department of Environmental and Occupational Health, College of Medicine, National Cheng Kung University) ; Wu, Wang-Hung (Department of Environmental and Occupational Health, College of Medicine, National Cheng Kung University) ; Chien, Se-Ping (Department of Food and Beverage Service, Tainan University of Technology) ; Liu, Ming-Yie (Department of Environmental and Occupational Health, College of Medicine, National Cheng Kung University) https://doi.org/10.7314/APJCP.2015.16.17.7561 Ziziphus jujuba (ZJ) fruit is rich in bioactive functional components such as polysaccharides, triterpenoid acid, flavonoids and oleamide. It has been commonly used in the treatment of various diseases including diabetes, digestive disorders, diarrhea, skin infections, liver and urinary diseases. However, its dietary effect on chemoprevention of colon cancer has never been studied. The present study was to evaluate the protective effects of dietary ZJ on colitis-associated colon carcinogenesis in azoxymethane (AOM)-dextran sodium sulphate (DSS)-treated mice. AOM was injected (10 mg/kg b.wt., i.p.) and three cycles of 2% DSS in drinking water for 7 days with 14 days of normal drinking water in-between was administered to induce colitis-associated colon cancer. ZJ fruit was supplemented in feed as 5 and 10%. Dietary ZJ significantly attenuated aberrant crypt foci (ACF) formation thereby decreasing the progression of hyperplasia to dysplasia. In addition, it significantly reduced circulating white blood cells, lymphocytes, neutrophils, monocytes, eosinophils, basophils and platelets compared to colon cancer mice. We conclude that ZJ supplementation delayed the progression of colon cancer from hyperplasia to dysplasia and ultimately adenocarcinoma and cancer. In addition, it decreased circulating tumor-related leucocytes, main regulators of cancer inflammation. Therefore, dietary consumption of ZJ fruit attenuated the formation of ACF and delayed the progression of colon cancer. Ziziphus jujuba;colon cancer;azoxymethane;blood cells;aberrant crypt foci Supported by : Taiwan Ministry of Sciences and Technology Abdull Razis AF, Noor NM (2013). Cruciferous vegetables: dietary phytochemicals for cancer prevention. Asian Pac J Cancer Prev, 14, 1565-70. https://doi.org/10.7314/APJCP.2013.14.3.1565 Absenger G, Szkandera J, Pichler M, et al (2013). A derived neutrophil to lymphocyte ratio predicts clinical outcome in stage II and III colon cancer patients. Br J Cancer, 109, 395-400. https://doi.org/10.1038/bjc.2013.346 Baranyai Z, Josa V, Krzystanek M, et al (2013). Evaluation of thrombocytosis as predictive factor in colorectal cancer. Magy Seb, 66, 331-7. https://doi.org/10.1556/MaSeb.66.2013.6.5 BHP, Bureau of Health Promotion (2007). Department of Health, R.O.C. (Taiwan). http://www.bhp.doh.gov.tw/BHPnet/Portal/StatisticsShow.aspx?No0200911300001 Chen Q, Liu ZC, Cheng LP, et al (2012). Analysis of incidence and mortality of colorectal cancer in china 2003-2007. Chin Cancer, 21, 179-82. Chiang SF, Hung HY, Tang R, et al (2012). Can neutrophil-tolymphocyte ratio predict the survival of colorectal cancer patients who have received curative surgery electively? Int J Colorectal Dis, 27, 1347-57. https://doi.org/10.1007/s00384-012-1459-x Dehghan Esmatabadi MJ, Farhangi B, Safari Z, et al (2015). Dendrosomal curcumin inhibits metastatic potential of human SW480 colon cancer cells through Down-regulation of Claudin1, Zeb1 and Hef1-1 gene expression. Asian Pac J Cancer Prev, 16, 2473-81. https://doi.org/10.7314/APJCP.2015.16.6.2473 Erlinger TP, Muntner P, Helzlsouer KJ (2004). WBC and the risk of cancer mortality in a national sample of U.S. adults: results from the Second National Health and Nutrition Examination Survey mortality study. Cancer Epidemiol Biomarkers Prev, 13, 1052-6 Goh KL, Quek KF, Yeo GT, et al (2005). Colorectal cancer in Asians: a demographic and anatomic survey in Malaysian patients undergoing colonoscopy. Aliment Pharmacol Ther, 22, 859-64. https://doi.org/10.1111/j.1365-2036.2005.02646.x Guizani N, Waly MI, Singh V, Rahman MS, et al (2013). Nabag (Zizyphus spina-christi) extract prevents aberrant crypt foci development in colons of azoxymethane-treated rats by abrogating oxidative stress and inducing apoptosis. Asian Pac J Cancer Prev, 14, 5031-5. https://doi.org/10.7314/APJCP.2013.14.9.5031 Hamiza OO, Rehman MU, Tahir M, et al (2012). Amelioration of 1,2 Dimethylhydrazine (DMH) induced colon oxidative stress, inflammation and tumor promotion response by tannic acid in Wistar rats. Asian Pac J Cancer Prev, 13, 4393-402. https://doi.org/10.7314/APJCP.2012.13.9.4393 Hassan AS, Naicker M, Yusof KH, Wan Ishak WZ (2015). Prognostic factors and the role of adjuvant chemotherapy in post-curative surgery for Dukes B and C colon cancers and survival outcomes: a Malaysian experience. Asian Pac J Cancer Prev, 16, 2237-43. https://doi.org/10.7314/APJCP.2015.16.6.2237 Huang X, Kojima-Yuasa A, Norikura T, et al (2007). Mechanism of the anti-cancer activity of Zizyphus jujuba in HepG2 cells. Am J Chin Med, 35, 517-32. https://doi.org/10.1142/S0192415X0700503X Hung CF, Hsu BY, Chang SC, Chen BH (2012). Antiproliferation of melanoma cells by polysaccharide isolated from Zizyphus jujuba. Nutrition, 28, 98-105. https://doi.org/10.1016/j.nut.2011.05.009 Koushik A, Hunter DJ, Spiegelman D, et al (2007). Fruits, vegetables, and colon cancer risk in a pooled analysis of 14 cohort studies. J Natl Cancer Inst, 99, 1471-83. https://doi.org/10.1093/jnci/djm155 Lee YJ, Lee HR, Nam CM, Hwang UK, Jee SH (2006). White blood cell count and the risk of colon cancer. Yonsei Med J, 47, 646-56. https://doi.org/10.3349/ymj.2006.47.5.646 Legrand F, Driss V, Delbeke M, et al (2010). Human eosinophils exert TNF-$\alpha$ and granzyme A-mediated tumoricidal activity toward colon carcinoma cells. J Immunol, 185, 7443-51. https://doi.org/10.4049/jimmunol.1000446 Li JW, Ding SD, Ding XL (2007a). Optimization of the ultrasonically assisted extraction of polysaccharides from Zizyphus jujuba cv. Jinsixiaozao. J Food Eng, 80, 176-83. https://doi.org/10.1016/j.jfoodeng.2006.05.006 Li JW, Chen YY, Ding SD, Zhang LF (2007b). Isolation and analysis of a novel proteoglycan from Zizyphus jujuba cv. Jinsixiaozao. J Food Drug Anal, 15, 271-7. Li JY, Li Y, Jiang Z, Wang RT, Wang XS (2014). Elevated mean platelet volume is associated with presence of colon cancer. Asian Pac J Cancer Prev, 15, 10501-4. Lin MS, Huang JX, Zhu J, Shen HZ (2012). Elevation of platelet count in patients with colorectal cancer predicts tendency to metastases and poor prognosis. Hepatogastroenterol, 59, 1687-90. Liu SH, Cheng YC (2012). Old formula, new Rx: the journey of PHY906 as cancer adjuvant therapy. J Ethnopharmacol, 140, 614-23. https://doi.org/10.1016/j.jep.2012.01.047 Mantovani A, Allavena P, Sica A, Balkwill F (2008). Cancerrelated inflammation. Nature, 454, 436-44. https://doi.org/10.1038/nature07205 Mytar B, Baj-Krzyworzeka M, Majka M, Stankiewicz D, Zembala M (2008). Human monocytes both enhance and inhibit the growth of human pancreatic cancer in SCID mice. Anticancer Res, 28, 187-92. Pandurangan AK, Esa NM (2014). Luteolin, a bioflavonoid inhibits colorectal cancer through modulation of multiple signaling pathways: a review. Asian Pac J Cancer Prev, 15, 5501-8. https://doi.org/10.7314/APJCP.2014.15.14.5501 Rajamanickam S, Agarwal R (2008). Natural products and colon cancer: current status and future prospects. Drug Devel Res, 69, 460-71. https://doi.org/10.1002/ddr.20276 Siegel R, Naishadham D, Jemal A (2013). Cancer statistics, 2013. CA Cancer J Clin, 63, 11-30. https://doi.org/10.3322/caac.21166 Song Y, Liu M, Yang F, et al (2015). Dietary fibre and the risk of colorectal cancer: a case- control study. Asian Pac J Cancer Prev, 16, 3747-52. https://doi.org/10.7314/APJCP.2015.16.9.3747 Stotz M, M Pichler, G Absenger, et al (2014). The preoperative lymphocyte to monocyte ratio predicts clinical outcome in patients with stage III colon cancer. Brit J Cancer, 110, 435-40. https://doi.org/10.1038/bjc.2013.785 Sun YF, Liang ZS, Shan CJ, Viernstein H, Unger F (2011). Comprehensive evaluation of natural antioxidants and antioxidant potentials in Ziziphus jujuba Mill. var. spinosa (Bunge) Hu ex H. F. Chou fruits based on geographical origin by TOPSIS method. Food Chem, 124, 1612-9. https://doi.org/10.1016/j.foodchem.2010.08.026 Tammasakchai A, Chaiyasut C, Riengrojpitak S, Suwannalert P (2015). Unpolished thai rice prevents ACF formation and dysplastic progression in aom-induced rats and induces apoptosis through redox alteration in CaCo-2 cells. Asian Pac J Cancer Prev, 16, 2827-32. https://doi.org/10.7314/APJCP.2015.16.7.2827 TCR, Taiwan Cancer Registry (2011). http://tcr.cph.ntu.edu.tw/main.php?Page=N2 Van Duijnhoven FJ, Bueno-De-Mesquita HB, Ferrari P, et al (2009). Fruit, vegetables, and colorectal cancer risk: the European Prospective Investigation into Cancer and Nutrition. Am J Clin Nutr, 89, 1441-52. https://doi.org/10.3945/ajcn.2008.27120 Waly MI, Al-Ghafri BR, Guizani N, Rahman MS (2015). Phytonutrient Effects of Date Pit Extract against Azoxymethane- Induced Oxidative Stress in the Rat Colon. Asian Pac J Cancer Prev, 16, 3473-7. https://doi.org/10.7314/APJCP.2015.16.8.3473 Yue Y, Wu S, Zhang H, et al (2014). Characterization and hepatoprotective effect of polysaccharides from Ziziphus jujuba Mill. var. spinosa (Bunge) Hu ex H. F. Chou sarcocarp. Food Chem Toxicol, 74, 76-84. https://doi.org/10.1016/j.fct.2014.09.006 Zhang Z-Y, Dong J-H, Chen Y-W, et al (2012). Galectin-9 acts as a prognostic factor with antimetastatic potential in hepatocellular carcinoma. Asian Pac J Cancer Prev, 13, 2503-9. https://doi.org/10.7314/APJCP.2012.13.6.2503
CommonCrawl
Sum of random variables without normalization approaches gaussian The central limit theorem states that the limiting distribution of a centered and normalized sum of independent random variables with mean $\mu$ and finite variance $\sigma^2$ is Gaussian. $$ \frac{\sum_{i=1}^n(X_i-\mu)}{\sigma\sqrt{n}}\xrightarrow{d}N(0,1) $$ However in practice, we may not be working with sums of centered and normalized random variables. Still, if we run experiments where we sum without normalization, the distribution of the sum can look increasingly Gaussian with increasing mean and variance. The statement $$ \sum_{i=1}^nX_i\xrightarrow{d}N(n\mu,n\sigma^2) $$ would capture this intuition, but doesn't make sense because the "$\xrightarrow{d}$" is a claim in the limit as $n\rightarrow\infty$, and it doesn't make sense to talk about a Gaussian with infinite mean and variance. Is there a theorem capturing the notion that the distribution of an uncentered and unnormalized sum still approaches a Gaussian? Or is this simply a corollary of the CLT? I'm looking for a proof of something like the following statement: For a given $\delta > 0$, there exists an $N>0$ such that $$ \text{distance}\left(\sum_i^nX_i, N(n\mu,n\sigma^2)\right)<\delta $$ for $n>N$ and some distance function of the distributions. That is, if we sum enough random variables, we can get as close to a Gaussian as we like. normal-distribution central-limit-theorem sum fragapanagos fragapanagosfragapanagos $\begingroup$ The un-normalized sum goes to infinity and so as you mention the distribution has mean and variance growing without bound. I also think you mean some distance function been the distribution of the un-normalized sum and the N(n$\mu$, n$\sigma^2$). $\endgroup$ – Michael R. Chernick May 24 '17 at 23:03 $\begingroup$ Yes: the name of this theorem is the CLT. The point is that if you don't standardize, there is no limiting distribution at all; and if you want there to be a limiting distribution, then you have to change the location and scale in a way that's asymptotically equivalent to standardization. These are all part of the content of the CLT. I emphasized these points in my account of the CLT at stats.stackexchange.com/a/3904/919. $\endgroup$ – whuber♦ May 24 '17 at 23:43 $\begingroup$ @MichaelChernick Yes, I mean some distance function and have edited the question to reflect that. I'm open to suggestions if there better way to notate this. $\endgroup$ – fragapanagos May 24 '17 at 23:44 $\begingroup$ If you want to talk about behavior in finite samples, you'd need to go to something like the Berry-Esséen inequality. While the inequality is stated with the variables in standardized form, the bound on the difference in cdf isn't affected by the horizontal scaling factor nor by a shift. This isn't specifically convergence because we're dealing with some particular $n$ but it may do what you need for a particular sense of "close to Gaussian" (N.B. a small bound on the difference in cdf doesn't imply Gaussian-like behavior) $\endgroup$ – Glen_b May 25 '17 at 0:25 $\begingroup$ Thanks @Glen_b. The Berry-Esséen inequality seems to supply what I was searching for. $\endgroup$ – fragapanagos May 25 '17 at 0:48 If you want to talk about behavior in finite samples, you'd need to go to something like the Berry-Esséen inequality. While the inequality is stated with the variables in standardized form, the bound on the difference in cdf isn't affected by the horizontal scaling factor nor by a shift. This isn't specifically convergence because we're dealing with some particular $n$ but it may do what you need for a particular sense of "close to Gaussian" Note, however, that a close-to-Gaussian cdf implied by a small bound on the difference in cdf doesn't imply Gaussian-like behavior of the variable. For example, it's possible for a variable that has at most a miniscule absolute deviation from a Gaussian (e.g.bounded by some fixed but small $\epsilon >0$) to have infinite variance and no mean. Glen_bGlen_b Not the answer you're looking for? Browse other questions tagged normal-distribution central-limit-theorem sum or ask your own question. What intuitive explanation is there for the central limit theorem? Sum of random variables without central limit theorem Central Limit Theorem for square roots of sums of i.i.d. random variables Intuition about Central limit theorem Is applying the CLT to the sum of random variables a good approximation? Does the sum of discrete uniforms coverge to a discrete Gaussian?
CommonCrawl
How to prove that Z operator rotates points on Bloch sphere about Z axis through 180°? My idea was to apply $Z$ operator 𝐭𝐰𝐢𝐜𝐞, which leads us back to the point where we started from, and also show that after applying the $Z$ operator just 𝐨𝐧𝐜𝐞 we are not at the same point where we started (this is for showing that we are not rotating by a multiple of $360^{\circ}$). Is this the correct proof? What about the general case, where we want to find out through how many degrees a given operator rotates the points? quantum-gate bloch-sphere edited Jun 9 '18 at 13:54 Sanchayan Dutta Archil ZhvaniaArchil Zhvania The Pauli-$Z$ gate maps $|0\rangle$ to $|0\rangle$ and $|1\rangle$ to $-|1\rangle$. For Bloch sphere representation, state of a qubit is written like (look at my previous answer for a detailed explanation) $$|\psi\rangle = \cos(\theta/2)|0\rangle + e^{i\phi}\sin(\theta/2)|1\rangle$$ Apply the Pauli-$Z$ gate on this and you get: $$|\psi'\rangle = \cos(\theta/2)|0\rangle + (-1)e^{i\phi}\sin(\theta/2)|1\rangle$$ $$=\cos(\theta/2)|0\rangle + e^{i(\phi+\pi)}\sin(\theta/2)|1\rangle$$ Thus, the angle $\phi$ changes by $\pi$. We can call $\phi + \pi$ as $\phi'$ now. Remind yourself what the Bloch sphere looks like: Clearly, from the diagram, if the angle $\phi$ is changed by $\pi$, it will imply that the state of the qubit has been rotated about the Z-axis by $180$ degrees. Sanchayan DuttaSanchayan Dutta One can more generally show that $R_z(\theta)=e^{-i \theta Z/2}=\cos(\theta/2)-iZ\sin(\theta/2)$ rotates points on the Bloch sphere by an angle $\theta$ around the $z$-axis, and note that $Z=i R_z(\pi)$. Let $|\psi\rangle$ be an arbitrary pure state. The coordinates of the point representing $|\psi(\theta)\rangle\equiv R_z(\theta)|\psi\rangle$ on the Bloch sphere are $$x(\theta)=\langle\psi(\theta)|X|\psi(\theta)\rangle, \\ y(\psi)=\langle\psi(\theta)|Y|\psi(\theta)\rangle, \\ z(\psi)=\langle\psi(\theta)|Z|\psi(\theta)\rangle.$$ That this point follows a circular trajectory around the $z$-axis when $\theta$ goes from $0$ to $2\pi$, can be seen by direct calculation as follows: \begin{align} x(\theta) &=\langle\psi\rvert R_z(-\theta)\,X\,R_z(\theta)\lvert\psi\rangle = \cos\theta\,x(0) + \sin\theta\,y(0), \\ y(\theta) &=\langle\psi\rvert R_z(-\theta)\,Y\,R_z(\theta)\lvert\psi\rangle = -\sin\theta\,x(0) + \cos\theta\,y(0), \\ z(\theta) &= z(0). \end{align} Note that the same is true more generally for mixed states: the point representing $\rho(\theta)\equiv R_z(\theta)\rho R_z(\theta)^\dagger$ in the Bloch sphere is $$\newcommand{\Tr}{\operatorname{Tr}} x(\theta)=\Tr(X\rho(\theta)), \\ y(\theta)=\Tr(Y\rho(\theta)), \\ z(\theta)=\Tr(Z\rho(\theta)),$$ and one can show by direct calculation that this point evolves similarly to the pure case. Not the answer you're looking for? Browse other questions tagged quantum-gate bloch-sphere or ask your own question. Alternative to Bloch sphere to represent a single qubit What applications does the quantum gate [(i,1),(1,i)] have? How to think about the Z gate in a Bloch sphere? Rotating about the y- or z-axis of the Bloch sphere How are measurements on $Z$ and $X$ axes interpreted in the Bloch sphere? How to prove that antipodal points on the Bloch sphere are orthogonal? How is a qubit represented on a bloch sphere? Represent a pure state in terms of 2 antipodal points on the Bloch sphere How does a generic controlled-U operation work? Show that the Hadamard gate is equivalent to a 180 degree rotation of a certain axis How does a rotation about the y-axis on a Bloch sphere affect the state of a qubit? Why is there no angle for the $z$ axis in the Bloch sphere?
CommonCrawl
Quality evaluation of Lycium barbarum (wolfberry) from different regions in China based on polysaccharide structure, yield and bioactivities Ying Wang1,2, Hongyu Jin2, Xiaoxv Dong1, Shuang Yang3, Shuangcheng Ma2 & Jian Ni1 Lycium barbarum (wolfberry) has been widely cultivated in China, particularly in northwest regions. However, the fruit size and taste of L. barbarum from different habitats are quite different. Traditionally, only the fruit of L. barbarum produced in Ningxia province is recorded as an authentic herb, although the detailed mechanism responsible for this remains obscure. Polysaccharides are considered major active ingredients in L. barbarum which is crucial for its quality evaluation. In this study, we assessed the yield, monosaccharide composition, molecular weight, and conformation of L. barbarum polysaccharides (LBPs) collected from different regions of China. The antioxidant and immune activities of LBPs were also determined as its quality indicator. Our results showed that the similarity values of monosaccharide composition were larger than 0.926, and the Mw of the two fractions (peaks 1–2) in LBPs were ranging from 1.36 × 106 to 2.01 × 106 (peak 1), and 6.85 × 104 to 10.30 × 104 (peak 2) which indicated that the structure of LBPs were similar. In addition, results showed that there was no significant difference in antioxidant and immune activities of nine LBPs from different regions. However, the yield of LBPs from Qinghai Province (low atmospheric temperature, high altitude) was significantly lower (p < 0.05) than those collected from Xinjiang and Ningxia province. These data suggested that the L. barbarum produced in Ningxia and Xinjiang maybe more suitable as materials for medicines and functional foods. This study also provides a reference for improving the quality control standard of LBPs. Lycium barbarum L., a well-known Chinese herb, is one of the medicinal and food homologous traditional Chinese medicines. L. barbarum products have been classified as nutraceutical foods or dietary supplements and have become popular in East Asia, Europe, and North America [1]. Generally, the polysaccharide is the main bioactive ingredient in L. barbarum, which exerts various effects, including immunomodulation, antioxidant, anti-hypertension, and anticancer activities [2]. In China, L. barbarum has been widely cultivated, particularly, in northwest regions, such as Qinghai, Xinjiang, Ningxia, and Gansu. However, the fruit size and taste of L. barbarum from different habitats are considerably different. Traditionally, only the fruit of L. barbarum produced in Ningxia is recorded as an authentic (Daodi) herb in China. Whether differences in the structure and activity of L. barbarum polysaccharides (LBPs) from typical regions exist is crucial for its quality evaluation and related product development. Unlike small-molecule compounds, polysaccharides are difficult to evaluate only by their content determination or a specific spectrum. In the 2015 edition of Chinese Pharmacopoeia and a few studies [3], the polysaccharide quality of L. barbarum was evaluated only by determination of the content of total sugars using the phenol–sulfuric acid method with glucose as a reference standard. Indeed, this method is not appropriate for quality evaluation of LBPs due to its poor specificity and inability to reflect the structural characteristics of LBPs [4]. Structural features, including sugar compositions, molecular weight, and conformation are all required for comprehensive description of the polysaccharide products. In recent studies, multiple fingerprints of LBPs have been established [5]. Researcher also compared the LBPs obtained from different locations by molecular weight (Mw) determination [6]. The results in these studies showed that the similarity of those polysaccharides in L. barbarum collected from different regions of China was high. Moreover, antioxidant and immune activity of LBPs have been studied in some researches. But these reports focused on the antioxidant and immune activity of different separated fractions in LBPs [7, 8]. Xie and others evaluated and compared the effects of eight LBPs from different producing areas on macrophage function [9]. In our study, the quality of LBPs from different producing areas was comprehensively evaluated by structure, yield and activity. In this study, nine LBPs samples of the identical varieties from different locations were extracted and analyzed to determine their structural properties, including the monosaccharide composition, Mw and conformation. The similarity of LBPs from different regions was assessed using statistical and similarity analyses with professional software. Polysaccharides are the major bioactive compounds in L. barbarum by report, which have possessed the immunological adjuvant effect and antioxidant activity. To obtain deep insights into the quality of L. barbarum polysaccharides, their antioxidant and immune activity were also detected as a quality indicator. The antioxidant activities of LBPs were evaluated by hydroxyl and ABTS radicals scavenging assays. In addition, immune activity were evaluated by detect the effect of LBPs on NO, TNF-α, IL-6 production of macrophage. The results could serve as a scientific basis to further evaluate the quality of L. barbarum and will provide a reference for improving the quality control standard of LBPs production. In 2018, nine batches of L. barbarum were collected from three main producing regions of China, including Ningxia (NX), Qinghai (QH) and Xinjiang (XJ) (Table 1). The samples of L. barbarum were identified by Professor NanPing Zhang from the National Institute for Food and Drug Control, China. For authentication, their species were confirmed by a survey of their plants in cultivation base, and the macromorphological properties of the fruits. Table 1 Chemical composition of LBPs in nine samples d-Mannose (Man), d-ribose (Rib), l-rhamnose (Rha), d-glucuronic acid (GlcA), d-galacturonic acid (GalA), d-glucose (Glc), d-galactose (Gal), d-xylose (Xyl), and l-arabinose (Ara) were obtained from the National Institute for Food and Drug Control (China). Bovine serum albumin (BSA) was manufactured by Thermo (USA). 1-Phenyl-3-methyl-5-pyrazolone (PMP), 2,2-azino-bis-(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) and 2,2-diphenyl-1-picrylhydroxyl (DPPH) were purchased from Sigma-Aldrich Trading (St. Louis, MO, USA). Other reagents and chemicals were of analytical grade and obtained from Sinopharm Chemical Reagent Co. Ltd. (Shanghai, China). Preparation of LBPs Each of the samples (5.0 g) was defatted with ether for 2 h in a Soxhlet extractor. Then, the residues were immersed twice into an 80% ethanol solution (for 1 h each time) to remove small-molecule materials and pigments. Next, the dry residues were extracted with water at 80 °C two times (for 1 h each time) [10]. After centrifugation (4000×g for 5 min), the supernatant was evaporated to 10.0 mL of solution under vacuum using a rotary evaporator. Next, ethanol (95%, w/v) was added to the final concentration of 80% (v/v) for precipitation and left to react overnight. The precipitate was then collected by centrifugation and successively washed with ethanol and acetone [11]. Then, the precipitate was redissolved in 10.0 mL of hot water. The 85% protein in the crude polysaccharide was removed by the Sevag's method to avoid interference with the activity determination. Impurities in the crude polysaccharide, such as salts and particles with Mw < 3 kDa were further removed using a ultracentrifugal filter with molecular weight cutoff of 3 kDa [6] by centrifugation (4500×g for 20 min). Finally, the crude polysaccharides, LBPs, were obtained by freeze-drying of the retentate of ultrafiltration. The LBPs in nine samples were prepared under the above extraction conditions for further analysis and bioactivity study. The yield of LBPs was calculated using the following equation: $$ {\text{LBPs yield}}\% \;({\text{w}}/{\text{w}}) = \frac{\text{dried LBPs weight}}{{{\text{powder weight}}\,(5{\text{ g}})}} \times 100\% $$ Analytical methods for composition determination Neutral sugar levels were determined by the phenol–sulfuric acid method as d-glucose equivalents. The sulfuric acid–carbazole method was adapted to measure the uronic acid content with d-galacturonic acid used as a standard. The protein content was determined by the Bradford method using bovine serum albumin (BSA) as a standard [5]. Determination of molecular weight and polydispersity index The Mw and polydispersity (Mw/Mn) of LBPs were measured using HPSEC-MALLS-RID methods. Each sample of 20 mg was dissolved in 2 mL of the mobile phase and then filtered through a 0.45-µm membrane. The Mw of LBPs were established by a HPLC system (Shimadzu Company, Japan), equipped with a refractive index detector (RID), combined with a multi-angle laser light scattering detector (MALLS, DAWN HELEOS, Wyatt Technology Co., Santa Barbara, CA, USA). After optimizing the chromatographic conditions, multiple size exclusion columns Shodex SB806 (300 mm × 7.8 mm, i.d.), Shodex SB805 (300 mm × 7.8 mm, i.d.), and Shodex SB804 (300 mm × 7.5 mm, i.d.) were used to obtain a good separation efficiency. The mobile phase was 0.1% NaNO3 aqueous solution applied at a flow rate of 0.5 mL/min. An injection volume of 100 µL was used, and each sample was run for 80 min, and a temperature maintained at 40 °C. Monosaccharide composition analysis The composition analysis of polysaccharide is an important step to control the quality and obtain basic information. Briefly, each of the LBPs (1 mg/mL) was hydrolyzed by trifluoroacetic acid (TFA, 4 mol/L) at 120 °C for 4 h, followed by complexing with PMP (0.5 mol/L) [5]. Sample solution (20 μL) was injected and analyzed by HPLC-PAD using a ZORBAX Eclipse XDB-C18 column (250 × 4.6 mm, 5 μm, Agilent, USA) and UV detection at 250 nm. The flow rate of the mobile phase consisting of acetonitrile and 0.125 mol/L KH2PO4 (v/v = 16:84, pH 6.9) was 1.0 mL/min. Antioxidative activity study Hydroxyl radical scavenging activity The hydroxyl radical scavenging activities of LBPs were determined using the Fenton's reaction [12] with some modifications [13]. The hydroxyl radical was generated in the mixture of 0.8 mL of 2 mmol/L salicylic acid solution, 1 mL of 0.15 mmol/L FeSO4 and 1 mL of H2O2 (0.01%, v/v). After addition of 0.2 mL of the sample, the mixture was incubated at 37 °C for 60 min and the absorbance measured at 510 nm. The absorbance of the mixture (A1) was measured at 510 nm, the absorbance of the blank control (A0, water instead of sample solution) and H2O2 control (A2, water instead of H2O2) were measured by the same method, and VC was used as positive control. The scavenging activity of the hydroxyl radical was calculated according to following formula: $$ {\text{Scavenging activity}}\left( \% \right) = ({\text{A}}_{1} - {\text{A}}_{0} ) \, /({\text{A}}_{2} - {\text{A}}_{0} ) \times 100. $$ Total antioxidant capacity The ABTS radical scavenging activities of LBPs were measured by a previously described method [14] with some modifications. The ABTS radical solution was diluted with phosphate buffer saline (pH 7.0) to an absorbance of 0.70 ± 0.02 at 734 nm. A volume of 0.2 mL of LBP solution was added to ABTS radical solution in a ratio of 1:20, and the mixture solution was incubated for 60 min in the dark. Then, the absorbance of the mixture (Ax) and the blank control (A0, water instead of sample) were measured at 734 nm. VC was utilized as positive control. The scavenging activity of the ABTS radical was calculated by the following equation: $$ {\text{Scavenging activity}}\left( \% \right) = (1 - {\text{A}}_{\text{x}} /{\text{A}}_{0} ) \times 100. $$ RAW264.7 macrophage proliferation RAW264.7 macrophage cells in an RPMI-1640 medium containing 10% FBS were plated in a 96-well microplate (1 × 104 cells/well, 100-µL volume; ATCC). The cells were incubated with 100 µL of the LBPs at different concentrations (50, 100 and 200 µg/mL) in triplicate. The cell cultures were kept in humidified atmosphere containing 5% CO2 at 37 °C for 72 h. Then, the WST-1 solution (20 µL) was added to the wells and the solution was further incubated for 4 h at 37 °C. The optical density was measured at 450 nm using a microplate reader (EL-800; BioTek Instruments, Winooski, VT, USA). The absorbance (A) was translated into a macrophage proliferation ratio (%) = At/Ac 100, where At and Ac are the absorbance of the test group and control group, respectively. Reverse transcription-polymerase chain reaction (RT-PCR) RAW264.7 cells in logarithmic growth phase were seeded in 6-well plates with a cell density of 1 × 105/mL. After 12 h of culture, the culture medium was replaced with the Dulbecco's modified Eagle's medium (DMEM) containing NX1, QH1, XJ1 at the concentrations of 50, 100 and 200 µg/mL, respectively, or 1 μg/mL LPS. After incubation at 37 °C for 18 h. The TRIzol reagent (Invitrogen, Carlsbad, CA, USA) was used to extract the total RNA from RAW264.7 cells according to the manufacturer's protocol. Reverse transcriptase-generated complementary DNA encoding iNOS, IL-6 and TNF-α genes were amplified by polymerase chain reaction using specific primers. The nucleotide sequences of the primers were as follows: iNOS, 5′-TGCCACGGACGAGACGGATAC-3′(forward) and 5′-CCATTGCACAACTCTTTTCTCA-3′(reverse); IL-6,5′-CTCCCAACAGACCTGTCTATAC-3′(forward) and 5′-CCATTGCACAACTCTTTTCTCA-3′(reverse); TNF-α,5′-ATGTCTCAGCCTCTTCTCATTC-3′(forward) and 5′-GCTTCTCACTCGAATTTTGAGA-3′(reverse); GAPDH,5′-TGGCCTTCCGTGTTCCTAC-3′(forward) and 5′-GAGTTGCTGTTGAAGTCGCA-3′(reverse). Each sample was tested three times and the average of the three values was used for calculation. In a parallel experiment, glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was used as an internal control. Each assay was performed in triplicate. The data are expressed as mean ± standard deviation (SD). Statistical differences between the different groups were assessed by Student's t-test. The similarities in the HPLC chromatograms were evaluated by the software "Similarity Evaluation System for Chromatographic Fingerprint of traditional Chinese medicine (TCM)" published by China Pharmacopoeia Committee (Version 2012A). Chemical composition of LBPs The contents of protein, total carbohydrate, and yield of LBPs, collected from different sites are summarized in Table 1. Each assay was performed in triplicate, the data are expressed as mean ± SD. The average contents of total carbohydrates in the samples from Ningxia, Qinghai, and Xinjiang were 67.4%, 61.9%, and 62.5%, respectively. Our results showed that the average contents of LBPs from Xinjiang, Qinghai and Ningxia were similar. However, the yield of LBPs ranged from 1.6 to 4.8%, indicating that significant differences existed in the yield of LBP's obtained from plants collected from different origins. The LBPs yields of the plants samples collected from Ningxia and Xinjiang were significantly higher than those from Qinghai (p < 0.05), which might have been affected by the specific conditions of the different cultivation regions, such as soil factors [15], light and temperature [16]. This difference is also reflected in the appearance characteristics of the samples (Fig. 1). Typical materials of L. barbarum collected in China (NX, QH, and XJ L. barbarum collected from Ningxia, Qinghai, and Xinjiang in China.) The L. barbarum fruits collected from Qinghai (average altitude 2740 m, annual average temperature 5.0 °C) had significantly bigger sizes than those from Ningxia (average altitude 1180 m, annual average temperature 10.2 °C) and Xinjiang (average altitude 750 m, annual average temperature 9.5 °C). The results of the determination of 100-grain weight, one of the evaluation indexes utilized in the commercial grade standard for L. barbarum evaluation in China were also in agreement with observed appearance characteristics, i.e., the size of the fruits. The average 100-grain weight of three bathes L. barbarum fruits from Qinghai (25.59 g) was obviously higher (nearly twice higher) than that of the sample from Ningxia (13.32 g) and Xinjiang (14.29 g). Previous studies revealed that low atmospheric temperature, high altitude, and long sunshine time exerted favorable effects on the appearance characteristics of L. barbarum fruits. This conclusion is consistent with our observations. In addition, the L. barbarum fruits from Qinghai had the biggest grains, but the yield of LBPs from them was lower than others, which indicated that the yield of LBPs did not increase with the increase of the size of the grain. Molecular weight and conformation of LBPs Usually, the Mw of polysaccharides has important effects on their biological activities [17, 18] and is also one of the key indicators to evaluate the quality of polysaccharides. HPSEC-RI-MALLS is an absolute and efficient method to determine the Mw and polydispersity index of polymers without relying on assumptions regarding column calibration standards or elution properties [19]. To further analyze the structure of LBPs from different areas in China, the HPSEC chromatogram and Mw were compared. The HPSEC representative chromatogram of LBPs can be seen in Fig. 2b, whereas. Figure 2b represents the HPSEC chromatogram comparison of LBPs of the nine samples examined. HPSEC chromatograms of LBPs of the samples studied. a Typical HPSEC chromatograms of LBPs; b comparison among the HPSEC chromatograms of LBPs from the nine batches As can be seen in Fig. 2b the HPSEC chromatograms of the nine samples were similar, which implies that the structures of LBPs obtained from different origins are alike. To diminish the possible interference from compounds in our tests, the small-molecular component was partly removed using an ultracentrifugal filter with a molecular weight cutoff value of 3 kDa. The molecular weight (Mw < 3 kDa) was not precisely determined due to the relatively poor resolution of the column and the co-elution of various different small molecules [6]. Therefore, to increase the accuracy of the determination results, the peak from 59.0 to 64.0 min was not used in the Mw calculation. The detailed information of the Mw and polydispersity index (Mw/Mn) in LBPs collected from different regions is summarized in Table 2. Table 2 Molecular weight and polydispersity index of LBPs from nine samples The results showed that the Mw of the two polysaccharide fractions (peaks 1–2) in LBPs collected from Ningxia, Qinghai, and Xinjiang were similar, ranging from 1.36 × 106 to 2.01 × 106 (peak 1), and 6.85 × 104 to 10.30 × 104 (peak 2), respectively. These findings were in accordance with the ones of previous studies showing that the molecular weights of polysaccharides in LBPs were within the range from 1.0 × 104 to 2.3 × 106 Da [20]. Furthermore, the polydispersity index of the two polysaccharide fractions (peaks 1 and 2) in LBPs were from 1.56 to 2.59 and from 1.19 to 1.48, respectively, suggesting that molecular the weight distribution of each polysaccharide fraction in L. barbarum was relatively narrow. The polymer conformation of polysaccharides, another important parameter reflecting structure can be determined by HPSEC-MALLS [21]. The conformation, denoted as α, can be examined by determining both Mw and size (rms radius, rg) at each elution volume, independently, then, plotting log(Rg) as a function of log(Mw). The resulting slopes reveal whether the molecule's conformation is approximated by a sphere (slope of about 0.33), random coil (slope of 0.5–0.6) or rod (slope of 1.0) [22]. Of much greater interest was the comparison in molecular conformation, shown in Table 2. The conformation plot, α ranging from 0.189 to 0.252, suggested that the natural LBPs have a compact, sphere-like structure. It also be observed that the conformation of LBPs from nine samples showed high similarity which further demonstrate the consistency of similarity of polysaccharide structure in L. barbarum. Monosaccharide analysis and HPLC-PAD fingerprints The comparison of HPLC-PAD chromatogram from nine samples are presented in Fig. 3a. The results of the HPLC analysis reveal that the LBPs from the different locations consisted of nine types of monosaccharide. The 1st to the 9th peaks represent Man, Rib, Rha, GlcA, GalA, Glc, Gal, Xyl, and Ara. The molar ratios of the monosaccharides in the LBPs of the nine samples are summarized in Table 3, the results in which indicate that the LBPs from different locations had identical monosaccharide compositions, but the molar ratios were not exactly the same. HPLC-PAD chromatogram of LBPs. a HPLC-PAD chromatogram of nine batches of LBPs. b HPLC-PAD referential fingerprint. c HPLC-PAD chromatogram of nine monosaccharide types Table 3 Monosaccharide composition of LBPs With the development of modern analytical technology, extensive exploratory work has been conducted on the fingerprints of polysaccharides from herbs [23, 24]. These studies on polysaccharide fingerprint profiling have often been performed by HPLC. Similarity analysis (SA) have been usually employed to evaluate chromatographic fingerprinting data [25]. In our study, the standard fingerprints were established and the similarity of fingerprints were calculated by the software "Similarity Evaluation System for Chromatographic Fingerprint of Traditional Chinese Medicine (TCM)". Nine characteristic peaks in fingerprints are displayed in Fig. 3b. The mixed standard sample of nine types of monosaccharide in Fig. 3c was used for qualitative analysis of monosaccharide peaks of samples in the same chromatographic conditions. The results showed that all the similarity values of each of the LBPs with the standard fingerprint were larger than 0.926. The values had highly similar fingerprint characteristics of LBPs although the samples were from different geographical regions. It also implied the standard fingerprint was representative enough that could be used to control and evaluate the quality of L. barbarum. Antioxidant activities of LBPS from different locations Two assays were conducted to assess the antioxidant capacities of the nine polysaccharides. The hydroxyl and ABTS radical has been widely used to test the ability of compounds as free radical scavengers or hydrogen donors [26]. The antioxidant activities of LBPs are illustrated in Fig. 4. Our results indicated that the increase in the concentration of polysaccharides enhanced the scavenging activities towards hydroxyl and ABTS radicals. All samples were effective in scavenging hydroxyl radicals, as can be seen in Fig. 4a. The scavenging effect of Ningxia (60.3%, the average value of NX1, NX2 and NX3), Qinghai (57.3%, the average value of QH1, QH2, and QH3) and Xinjiang (61.3%, the average value of XJ1, XJ2 and XJ3) at high concentration (800 µg/mL) was comparable to that of Vit C (95.0%) at 100 µg/mL. The ABTS radical scavenging activities of each sample are illustrated in Fig. 4b. Comparatively, the LBPs at 400 µg/mL concentration possessed a high ABTS radical-scavenging activity (97.7%, the average value of nine samples) which was close to that of VC at low concentration (100 µg/mL). Scavenging effects of nine samples at different concentrations on hydroxyl radicals (a) and ABTS radicals (b). Each value represents the mean (n = 3) Our findings indicate that all LBPs from different regions had effective scavenging activities, which may be associated with the structure of LBPs. As mentioned in previous reports [27, 28], polysaccharides is characterized by high contents of galacturonic acid, a high degree of branching and favorable linkage usually showed a strong free radical scavenging activity. Interestingly, LBPs just has these structural features which together contribute to their predominant antioxidant activity. Results also showed that there was no significant difference in antioxidant capacity of nine LBPs towards hydroxyl and ABTS radicals determination. Immunomodulatory activities of LBPS from different locations Comparison of RAW264.7 cell viability The cytotoxicity of LBPs from three samples were examined using RAW264.7 macrophage cell lines over the concentration range of 50–200 µg/mL. As presented in Fig. 5a, compared with the untreated cells (control), the viability of macrophage cells incubated with LBPs was more than 100%. When RAW264.7 cells were treated with NX1, significant proliferation up to 150% was observed at 50 µg/mL (p < 0.05). QH1 and XJ1 also stimulate macrophage proliferation up to 160% and 146%. In addition, there was no significant difference in the effect of three LBPs on macrophage proliferation. a Proliferation activity and b mRNA expression of cytokines secretion of NO, IL-6 and TNF-α of RAW264.7 macrophage cells after treatment with polysaccharide from L. barbarum (NX1, QH1, XJ1). The values are presented as mean ± SD (n = 3). Significant differences with control cells were designated as *p < 0.05 Effect of LBPs from different locations on RAW264.7 TNF-α, IL-6 and NO production Regulation of macrophage function by polysaccharides has been reported in diverse ways including promoting macrophage activation, inducing the secretion of TNF-α, IL-6 and NO, and enhancing phagocytosis and antigen presenting ability [29, 30]. Thus, the effects of LBPs from different regions induced the secretion of NO, IL-6 and TNF-α of macrophage were evaluated and compared in this study. The results showed that RAW264.7 cells treated with NX1, QH1, XJ1 or LPS resulted in significantly increased the mRNA transcription levels of iNOS, IL-6 and TNF-α (Fig. 5b). Our results showed that the structural properties including monosaccharide composition, Mw and conformation of LBPs from different areas were similar, which is consistent with the results obtained from their antioxidant and immune activities. It may implied that the species is a more important factor affecting the structure and bioactivities of LBPs than the origin. However, the average yield of LBPs from different producing areas showed great differences: the LBPs from Qinghai were significantly lower (p < 0.05) than those from Xinjiang and Ningxia, which may indicate that the L. barbarum produced in Ningxia and Xinjiang maybe more suitable as materials for medicines and functional foods. This study also provides a reference for improving the quality control standard of LBPs. According to our research results, in consideration of structural similarity of LBPs from different producing areas, the yield of LBPs should be used as a key index to evaluate the quality of L. barbarum. First, the HPLC-PAD typical fingerprint of the monosaccharide composition and Mw of peaks can be determined to identify the LBPs and the related products from polysaccharides of other herbs. Then the yield of LBPs can be used as an important index to evaluate the quality of L. barbarum. On the basis of the existing standards for the determination of LBPs content, the identification tests for the determination of monosaccharide composition and Mw should also be developed. Nevertheless, conducting a large-scale study, including the analysis of larger numbers of samples from Ningxia, Xinjiang, and Qinghai, is required to further confirm the present results. Amagase H, Sun B, Borek C. Lycium barbarum (goji) juice improves in vivo antioxidant biomarkers in serum of healthy adults. Nutr Res. 2009;29:19–25. Gan L, Hua Zhang S, Liang Yang X, Bi XuH. Immunomodulation and antitumor activity by a polysaccharide-protein complex from Lycium barbarum. Int Immunopharmacol. 2004;4:563–9. Yao X, Li-Jia XU, Xiao W, Peng Y, Xiao PG. Analysis of Lycium barbarum polysaccharide from different lycii fructus. Herald Med. 2011;30:426–8. Cuesta G, Suarez N, Bessio MI, Ferreira F, Massaldi H. Quantitative determination of pneumococcal capsular polysaccharide serotype 14 using a modification of phenol–sulfuric acid method. J Microbiol Methods. 2003;52:69–73. Liu W, Xu J, Zhu R, Zhu Y, Zhao Y, Chen P, et al. Fingerprinting profile of polysaccharides from Lycium barbarum using multiplex approaches and chemometrics. Int J Biol Macromol. 2015;78:230–7. Wu DT, Lam SC, Cheong KL, Wei F, Lin PC, Long ZR, et al. Simultaneous determination of molecular weights and contents of water-soluble polysaccharides and their fractions from Lycium barbarum collected in China. J Pharm Biomed Anal. 2016;129:210–8. Lin CL, Wang CC, Chang SC, Inbaraj BS, Chen BH. Antioxidative activity of polysaccharide fractions isolated from Lycium barbarum Linnaeus. Int J Biol Macromol. 2009;45:146–51. Zhao Q, Dong B, Chen J, Zhao B, Wang X, Wang L, et al. Effect of drying methods on physicochemical properties and antioxidant activities of wolfberry (Lycium barbarum) polysaccharide. Carbohyd Polym. 2015;127:176–81. Xie J, Wu DT, Li WZ, Ning CG, Tang YP, Zhao J, et al. Effects of polysaccharides in lycium barbarum berries from different regions of China on macrophages function and their correlation to the glycosidic linkages. J Food Sci. 2017;82:2411–20. Redgwell RJ, Curti D, Wang J, Dobruchowska JM, Gerwig GJ, Kamerling JP, et al. Cell wall polysaccharides of Chinese Wolfberry (Lycium barbarum): Part 1. Characterisation of soluble and insoluble polymer fractions. Carbohydr Polym. 2011;84:1344–9. Zhu J, Liu W, Yu J, Zou S, Wang J, Yao W, et al. Characterization and hypoglycemic effect of a polysaccharide extracted from the fruit of Lycium barbarum L. Carbohydr Polym. 2013;98:8–16. Wu WL, Zhu YT, Zhang L, Yang RW, Zhou YH. Extraction, preliminary structural characterization, and antioxidant activities of polysaccharides from Salvia miltiorrhiza Bunge. Carbohydr Polym. 2012;87:1348–53. Chen Y, Li XH, Zhou LY, Li W, Liu L, Wang DD, et al. Structural elucidation of three antioxidative polysaccharides from Tricholoma lobayense. Carbohydr Polym. 2017;157:484–92. Memarpoor-Yazdi M, Asoodeh A, Chamani J. A novel antioxidant and antimicrobial peptide from hen egg white lysozyme hydrolysates. J Funct Foods. 2012;4:278–86. Aruani MC, Reeb PD, Barnes NE. Influence of soil properties on yield and fruit maturity at harvest of 'williams' pear. Chil J Agric Res. 2014;74:460–7. Medlicott AP, Reynolds SB, Thompson AK. Effects of temperature on the ripening of mango fruit (Mangifera indica L. var. tommy atkins). J Sci Food Agric. 2010;37:469–74. Sun L, Ling W, Yan Z. Immunomodulation and antitumor activities of different-molecular-weight polysaccharides from Porphyridium cruentum. Carbohydr Polym. 2012;87:1206–10. Sheng J, Sun Y. Antioxidant properties of different molecular weight polysaccharides from Athyrium multidentatum (Doll.) Ching. Carbohydr Polym. 2014;108:41–5. Hu DJ, Cheong KL, Zhao J, Li SP. Chromatography in characterization of polysaccharides from medicinal plants and fungi. J Sep Sci. 2013;36:1–19. Liang B, Jin M, Liu H. Water-soluble polysaccharide from dried Lycium barbarum fruits: isolation, structural features and antioxidant activity. Carbohydr Polym. 2011;83:1947–51. Cheong KL, Wu DT, Zhao J, Li SP. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector. J Chromatogr A. 2015;1400:98–106. Liu W, Liu Y, Zhu R, Yu J, Lu W, Pan C, et al. Structure characterization, chemical and enzymatic degradation, and chain conformation of an acidic polysaccharide from Lycium barbarum L. Carbohydr Polym. 2016;147:114–24. Wang Y, Xian J, Xi X, Wei X. Multi-fingerprint and quality control analysis of tea polysaccharides. Carbohydr Polym. 2013;92:583–90. Sun X, Wang H, Han X, Chen S, Zhu S, Dai J. Fingerprint analysis of polysaccharides from different Ganoderma by HPLC combined with chemometrics methods. Carbohydr Polym. 2014;114:432–9. Yang Z, Niu Y, Xie Z, Shi H, Pei C, Yu L. Differentiating leaf and whole-plant samples of di- and tetraploid Gynostemma pentaphyllum (Thunb.) Makino using flow-injection mass spectrometric fingerprinting method. J Funct Foods. 2013;5:1288–97. Jao CL, Wen-Ching KO. 1,1Diphenyl2-picrylhydrazyl (DPPH) radical scavenging by protein hydrolyzates from tuna cooking juice. Fish Sci. 2011;68:430–5. Asker MMS, Mahmoud MG, Ibrahim GS. Structural characterization and biological activity of acidic polysaccharide fractions isolated from Bacillus polymyxa NRC-A. J Appl Sci Res. 2007;3:1170–7. Volman JJ, Helsper JP, Wei S, Baars JJ. Effects of mushroom-derived beta-glucan-rich polysaccharide extracts on nitric oxide production by bone marrow-derived macrophages and nuclear factor-kappaB transactivation in Caco-2 reporter cells: can effects be explained by structure? Mol Nutr Food Res. 2010;54:268–76. Mehdi T, SangGuan Y, Elham HD, Utoomporn S. Water-soluble polysaccharides from Ulva intestinalis: molecular properties, structural elucidation and immunomodulatory activities. J Food Drug Anal. 2018;26:599–608. Zhang X, Li Y, Cheng J, Liu G, Qi C, Zhou W, et al. Immune activities comparison of polysaccharide and polysaccharide-protein complex from Lycium barbarum L. Int J Biol Macromol. 2014;65:441–5. This study was supported by Grants from "12th Five-Year Plan," and "Creation of Major New Drugs," as well as "The Platform of Safety Testing Technology and Standards of Traditional Chinese Medicine" (NO. 2014ZX09304307-002) from the Important Program of Ministry of Science and Technology of the People's Republic of China. School of Chinese Materia Medica, Beijing University of Chinese Medicine, No.11 North 3rd Ring East Road, Chaoyang District, Beijing, 100102, China Ying Wang, Xiaoxv Dong & Jian Ni Institute for Control of Chinese Traditional Medicine and Ethnic Medicine (ICCTMEM), National Institutes for Food and Drug Control (NIFDC), No. 2 Tiantan Xili, Dongcheng District, Beijing, 100050, China Ying Wang, Hongyu Jin & Shuangcheng Ma School of Medicine and Pharmacy, Ocean University of China, Qingdao, 266003, China Shuang Yang Ying Wang Hongyu Jin Xiaoxv Dong Shuangcheng Ma Jian Ni All persons who meet authorship criteria are listed as authors, and all authors certify that they have participated sufficiently in the work to take public responsibility for the content, including participation in the concept, design, analysis, writing, or revision of the manuscript. Furthermore, each author certifies that this material or similar material has not been and will not be submitted to or published in any other publication. The specific contributions made by each author have been listed as follows: YW: conception and design of study, acquisition of data and drafting the manuscript; HJ: acquisition of data, analysis and interpretation of data; XD, SY: revising the manuscript critically for important intellectual content; SM: conception and design of study; JN: conception and design of study, analysis and interpretation of data. All authors read and approved the final manuscript. Correspondence to Shuangcheng Ma or Jian Ni. Wang, Y., Jin, H., Dong, X. et al. Quality evaluation of Lycium barbarum (wolfberry) from different regions in China based on polysaccharide structure, yield and bioactivities. Chin Med 14, 49 (2019). https://doi.org/10.1186/s13020-019-0273-6 Lycium barbarum L. Polysaccharide Quality evaluation Antioxidative activity
CommonCrawl
Genetic programming Evolutionary machine learning Gesture recognition Prototype generation Computer vision Kinect Multi-objective optimization Representation learning Transfer learning Unsupervised image annotation 1NN classification 65D19 68T10 97K80 Academic competitions Mexico [x] 47 (%) United States 12 (%) Spain 8 (%) Colombia 6 (%) France 5 (%) Instituto Nacional de Astrofísica, Óptica y Electrónica 16 (%) Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE) 11 (%) ChaLearn 6 (%) University of Barcelona 6 (%) Escalante, Hugo Jair [x] 47 (%) Montes-y-Gómez, Manuel 11 (%) Morales-Reyes, Alicia 8 (%) Escalera, Sergio 7 (%) Guyon, Isabelle 7 (%) Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications 8 (%) Pattern Recognition 4 (%) Multimedia Tools and Applications 3 (%) Pattern Recognition and Information Forensics 3 (%) Advances in Artificial Intelligence 2 (%) Journal 13 (%) Pattern Recognition 28 (%) Image Processing and Computer Vision 27 (%) Information Systems Applications (incl. Internet) 17 (%) 83 Institutions Distributional Term Representations for Short-Text Categorization Computational Linguistics and Intelligent Text Processing (2013-01-01) 7817: 335-346 , January 01, 2013 By Cabrera, Juan Manuel; Escalante, Hugo Jair; Montes-y-Gómez, Manuel Everyday, millions of short-texts are generated for which effective tools for organization and retrieval are required. Because of the tiny length of these documents and of their extremely sparse representations, the direct application of standard text categorization methods is not effective. In this work we propose using distributional term representations (DTRs) for short-text categorization. DTRs represent terms by means of contextual information, given by document occurrence and term co-occurrence statistics. Therefore, they allow us to develop enriched document representations that help to overcome, to some extent, the small-length and high-sparsity issues. We report experimental results in three challenging collections, using a variety of classification methods. These results show that the use of DTRs is beneficial for improving the classification performance of classifiers in short-text categorization. Genetic Programming of Heterogeneous Ensembles for Classification Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications (2013-01-01) 8258: 9-16 , January 01, 2013 By Escalante, Hugo Jair; Acosta-Mendoza, Niusvel; Morales-Reyes, Alicia; Gago-Alonso, Andrés Show all (4) The ensemble classification paradigm is an effective way to improve the performance and stability of individual predictors. Many ways to build ensembles have been proposed so far, most notably bagging and boosting based techniques. Evolutionary algorithms (EAs) also have been widely used to generate ensembles. In the context of heterogeneous ensembles EAs have been successfully used to adjust weights of base classifiers or to select ensemble members. Usually, a weighted sum is used for combining classifiers outputs in both classical and evolutionary approaches. This study proposes a novel genetic program that learns a fusion function for combining heterogeneous-classifiers outputs. It evolves a population of fusion functions in order to maximize the classification accuracy. Highly non-linear functions are obtained with the proposed method, subsuming the existing weighted-sum formulations. Experimental results show the effectiveness of the proposed approach, which can be used not only with heterogeneous classifiers but also with homogeneous-classifiers and under bagging/boosting based formulations. Evolutionary Multi-Objective Approach for Prototype Generation and Feature Selection Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications (2014-01-01) 8827: 424-431 , January 01, 2014 By Rosales-Pérez, Alejandro; Gonzalez, Jesus A.; Coello-Coello, Carlos A.; Reyes-Garcia, Carlos A.; Escalante, Hugo Jair Show all (5) This paper introduces EMOPG+FS, a novel approach to prototype generation and feature selection that explicitly minimizes the classification error rate, the number of prototypes, and the number of features. Under EMOPG+FS, prototypes are initialized from a subset of training instances, whose positions are adjusted through a multi-objective evolutionary algorithm. The optimization process aims to find a set of suitable solutions that represent the best possible trade-offs among the considered criteria. Besides this, we also propose a strategy for selecting a single solution from the several that are generated during the multi-objective optimization process.We assess the performance of our proposed EMOPG+FS using a suite of benchmark data sets and we compare its results with respect to those obtained by other evolutionary and non-evolutionary techniques. Our experimental results indicate that our proposed approach is able to achieve highly competitive results. Novel Distributional Visual-Feature Representations for image classification Multimedia Tools and Applications (2019-05-01) 78: 11313-11336 , May 01, 2019 By López-Monroy, A. Pastor; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; González, Fabio A. Show all (4) The Bag-of-Visual-Words (BoVW) representation is a well known strategy to approach many computer vision problems. The idea behind BoVW is similar to the Bag-of-Words (BoW) used in text mining tasks: to build word histograms to represent documents. Regarding computer vision, most of the research has been devoted to obtain better visual words, rather than in improving the final representation. This is somewhat surprising, as there are many alternative ways of improving the BoW representation within the text mining community that can be applied in computer vision as well. This paper aims at evaluating the usefulness of Distributional Term Representations (DTRs) for image classification. DTRs represent instances by exploiting statistics of feature occurrences and co-occurrences along the dataset. We focus in the suitability and effectiveness of using well-known DTRs in different image collections. Furthermore, we devise two novel distributional strategies that learn appropriated groups of images to compute better suited distributional features. We report experimental results in several image datasets showing the effectiveness of the proposed DTRs over BoVW and other methods in the literature including deep learning based strategies. In particular we show the effectiveness of the proposed representations on image collections from narrow domains, where target categories are subclasses of a more general class (e.g., subclasses of birds, aircrafts, or dogs). Towards a Generic Ontology for Video Surveillance Applications for Future Internet (2017-01-01) 179: 3-7 , January 01, 2017 By Hernandez-Leal, Pablo; Escalante, Hugo Jair; Sucar, L. Enrique Video surveillance is an important problem that has been studied for several years. Nowadays, in the context of smart cities, intelligent video surveillance is an important topic which has several subproblems which need to be solved and then integrated. For example, on one side there are several algorithms for detection, recognition and tracking of objects and people. On the other side, it is necessary to recognize not only objects and persons but complex behaviors (fights, thefts, attacks). To solve these challenges, the use of ontologies has been proposed as a tool to reduce this gap between low and high level information. In this work, we present the foundations of an ontology to be used in an intelligent video surveillance system. Multimodal indexing based on semantic cohesion for image retrieval Information Retrieval (2012-02-01) 15: 1-32 , February 01, 2012 By Escalante, Hugo Jair; Montes, Manuel; Sucar, Enrique This paper introduces two novel strategies for representing multimodal images with application to multimedia image retrieval. We consider images that are composed of both text and labels: while text describes the image content at a very high semantic level (e.g., making reference to places, dates or events), labels provide a mid-level description of the image (i.e., in terms of the objects that can be seen in the image). Accordingly, the main assumption of this work is that by combining information from text and labels we can develop very effective retrieval methods. We study standard information fusion techniques for combining both sources of information. However, whereas the performance of such techniques is highly competitive, they cannot capture effectively the content of images. Therefore, we propose two novel representations for multimodal images that attempt to exploit the semantic cohesion among terms from different modalities. Such representations are based on distributional term representations widely used in computational linguistics. Under the considered representations the content of an image is modeled by a distribution of co-occurrences over terms or of occurrences over other images, in such a way that the representation can be considered an expansion of the multimodal terms in the image. We report experimental results using the SAIAPR TC12 benchmark on two sets of topics used in ImageCLEF competitions with manually and automatically generated labels. Experimental results show that the proposed representations outperform significantly both, standard multimodal techniques and unimodal methods. Results on manually assigned labels provide an upper bound in the retrieval performance that can be obtained, whereas results with automatically generated labels are encouraging. The novel representations are able to capture more effectively the content of multimodal images. We emphasize that although we have applied our representations to multimedia image retrieval the same formulation can be adopted for modeling other multimodal documents (e.g., videos). Annotation-Based Expansion and Late Fusion of Mixed Methods for Multimedia Image Retrieval By Escalante, Hugo Jair; Gonzalez, Jesús A.; Hernández, Carlos A.; López, Aurelio; Montes, Manuel; Morales, Eduardo; Sucar, Luis E.; Villaseñor-Pineda, Luis Show all (8) This paper describes experimental results of two approaches to multimedia image retrieval: annotation-based expansion and late fusion of mixed methods. The former formulation consists of expanding manual annotations with labels generated by automatic annotation methods. Experimental results show that the performance of text-based methods can be improved with this strategy, specially, for visual topics; motivating further research in several directions. The second approach consists of combining the outputs of diverse image retrieval models based on different information. Experimental results show that competitive performance, in both retrieval and results diversification, can be obtained with this simple strategy. It is interesting that, contrary to previous work, the best results of the fusion were obtained by assigning a high weight to visual methods. Furthermore, a probabilistic modeling approach to result-diversification is proposed; experimental results reveal that some modifications are needed to achieve satisfactory results with this method. Learning When to Classify for Early Text Classification Computer Science – CACIC 2017 (2018-01-01) 790: 24-34 , January 01, 2018 By Loyola, Juan Martín; Errecalde, Marcelo Luis; Escalante, Hugo Jair; Montes y Gomez, Manuel Show all (4) The problem of classification is a widely studied one in supervised learning. Nonetheless, there are scenarios that received little attention despite its applicability. One of such scenarios is early text classification, where one needs to know the category of a document as soon as possible. The importance of this variant of the classification problem is evident in tasks like sexual predator detection, where one wants to identify an offender as early as possible. This paper presents a framework for early text classification which highlights the two main pieces involved in this problem: classification with partial information and deciding the moment of classification. In this context, a novel approach that learns the second component (when to classify) and an adaptation of a temporal measurement for multi-class problems are introduced. Results with a classical text classification corpus in comparison against a model that reads the entire documents confirm the feasibility of our approach. Object Recognition with Näive Bayes-NN via Prototype Generation Pattern Recognition (2014-01-01) 8495: 162-171 , January 01, 2014 By Escalante, Hugo Jair; Sotomayor, Mauricio; Montes, Manuel; Lopez-Monroy, A. Pastor Show all (4) Naïve Bayes nearest neighbors (NBNN) is a variant of the classic KNN classifier that has proved to be very effective for object recognition and image classification tasks. Under NBNN an unseen image is classified by looking at the distance between the sets of visual descriptors of test and training images. Although NBNN is a very competitive pattern classification approach, it presents a major drawback: it requires of large storage and computational resources. NBNN's requirements are even larger than those of the standard KNN because sets of raw descriptors must be stored and compared, therefore, efficiency improvements for NBNN are necessary. Prototype generation (PG) methods have proved to be helpful for reducing the storage and computational requirements of standard KNN. PG methods learn a reduced subset of prototypical instances to be used by KNN for classification. In this contribution we study the suitability of PG methods for enhancing the capabilities of NBNN. Throughout an extensive comparative study we show that PG methods can reduce dramatically the number of descriptors required by NBNN without significantly affecting its discriminative performance. In fact, we show that PG methods can improve the classification performance of NBNN by using much less visual descriptors. We compare the performance of NBNN to other state-of-the-art object recognition approaches and show the combination of PG and NBNN outperforms alternative techniques. A Weighted Profile Intersection Measure for Profile-Based Authorship Attribution Advances in Artificial Intelligence (2011-01-01) 7094: 232-243 , January 01, 2011 By Escalante, Hugo Jair; Montes-y-Gómez, Manuel; Solorio, Thamar This paper introduces a new similarity measure called weighted profile intersection (WPI) for profile-based authorship attribution (PBAA). Authorship attribution (AA) is the task of determining which, from a set of candidate authors, wrote a given document. Under PBAA an author's profile is created by combining information extracted from sample documents written by the author of interest. An unseen document is associated with the author whose profile is most similar to the document. Although competitive performance has been obtained with PBAA, the method is limited in that the most used similarity measure only accounts for the number of overlapping terms among test documents and authors' profiles. We propose a new measure for PBAA, WPI, which takes into account an inter-author term penalization factor, besides the number of overlapping terms. Intuitively, in WPI we rely more on those terms that are (frequently) used by the author of interest and not (frequently) used by other authors when computing the similarity of the author's profile and a test document. We evaluate the proposed method in several AA data sets, including many data subsets from Twitter. Experimental results show that the proposed technique outperforms the standard PBAA method in all of the considered data sets; although the baseline method resulted very effective. Further, the proposed method achieves performance comparable to classifier-based AA methods (e.g., methods based on SVMs), which often obtain better classification results at the expense of limited interpretability and a higher computational cost. Particle Swarm Model Selection for Authorship Verification By Escalante, Hugo Jair; Montes, Manuel; Villaseñor, Luis Authorship verification is the task of determining whether documents were or were not written by a certain author. The problem has been faced by using binary classifiers, one per author, that make individual yes/no decisions about the authorship condition of documents. Traditionally, the same learning algorithm is used when building the classifiers of the considered authors. However, the individual problems that such classifiers face are different for distinct authors, thus using a single algorithm may lead to unsatisfactory results. This paper describes the application of particle swarm model selection (PSMS) to the problem of authorship verification. PSMS selects an ad-hoc classifier for each author in a fully automatic way; additionally, PSMS also chooses preprocessing and feature selection methods. Experimental results on two collections give evidence that classifiers selected with PSMS are advantageous over selecting the same classifier for all of the authors involved. Genetic Programming of Prototypes for Pattern Classification Pattern Recognition and Image Analysis (2013-01-01): 7887 , January 01, 2013 By Escalante, Hugo Jair; Mendoza, Karlo; Graff, Mario; Morales-Reyes, Alicia Show all (4) This paper introduces a genetic programming approach to the generation of classification prototypes. Prototype-based classification is a pattern recognition methodology in which the training set of a classification problem is represented by a small subset of instances. The assignment of labels to test instances is usually done by a 1NN rule. We propose a new prototype generation method, based on genetic programming, in which examples of each class are automatically combined to generate highly effective classification prototypes. The genetic program aims to maximize an estimate of the generalization performance of a 1NN classifier using the prototypes. We report experimental results on a benchmark for the evaluation of prototype generation methods. Experimental results show the validity of our approach: the proposed method outperforms most of the state of the art techniques when using both small and large data sets. Better results are obtained for data sets with numeric attributes only, although the performance of our method on mixed data is very competitive as well. A One-Shot DTW-Based Method for Early Gesture Recognition By Sabinas, Yared; Morales, Eduardo F.; Escalante, Hugo Jair Early gesture recognition consists of recognizing gestures at their beginning, using incomplete information. Among other applications, these methods can be used to compensate for the delay of gesture-based interactive systems. We propose a new approach for early recognition of full-body gestures based on dynamic time warping (DTW) that uses a single example from each category. Our method is based on the comparison between time sequences obtained from known and unknown gestures. The classifier provides a response before the unknown gesture finishes. We performed experiments in the MSR-Actions3D benchmark and another data set we built. Results show that, in average, the classifier is capable of recognizing gestures with 60% of the information, losing only 7.29% of accuracy with respect to using all of the information. Instance Selection in Text Classification Using the Silhouette Coefficient Measure By Dey, Debangana; Solorio, Thamar; Montes y Gómez, Manuel; Escalante, Hugo Jair Show all (4) The paper proposes the use of the Silhouette Coefficient (SC) as a ranking measure to perform instance selection in text classification. Our selection criterion was to keep instances with mid-range SC values while removing the instances with high and low SC values. We evaluated our hypothesis across three well-known datasets and various machine learning algorithms. The results show that our method helps to achieve the best trade-off between classification accuracy and training time. Structurally Layered Representation Learning: Towards Deep Learning Through Genetic Programming Genetic Programming (2018-01-01): 10781 , January 01, 2018 By Rodriguez-Coayahuitl, Lino; Morales-Reyes, Alicia; Escalante, Hugo Jair We introduce a novel method for representation learning based on genetic programming (GP). Inspired into the way that deep neural networks learn descriptive/discriminative representations from raw data, we propose a structurally layered representation that allows GP to learn a feature space from large scale and high dimensional data sets. Previous efforts from the GP community for feature learning have focused on small data sets with a few input variables, also, most approaches rely on domain expert knowledge to produce useful representations. In this paper, we introduce the structurally layered GP formulation, together with an efficient scheme to explore the search space and show that this framework can be used to learn representations from large data sets of high dimensional raw data. As case of study we describe the implementation and experimental evaluation of an autoencoder developed under the proposed framework. Results evidence the benefits of the proposed framework and pave the way for the development of deep geneticprogramming. Bias and Variance Multi-objective Optimization for Support Vector Machines Model Selection By Rosales-Pérez, Alejandro; Escalante, Hugo Jair; Gonzalez, Jesus A.; Reyes-Garcia, Carlos A.; Coello Coello, Carlos A. Show all (5) In this paper, we describe a novel model selection approach for a SVM. Each model can be composed by a feature selection method and a pre-processing method besides the classifier. Our approach is based on a multi-objective evolutionary algorithm and on the bias-variance definition. This strategy allows us to explore the hyperparameters space and to select the solutions with the best bias-variance trade-off. The proposed method is evaluated using a number of benchmark data sets for classification tasks. Experimental results show that it is possible to obtain models with an acceptable generalization performance using the proposed approach. Improved Learning Rule for LVQ Based on Granular Computing Pattern Recognition (2015-01-01): 9116 , January 01, 2015 By Cruz-Vega, Israel; Escalante, Hugo Jair LVQ classifiers are particularly intuitive and simple to understand because they are based on the notion of class representatives (i.e., prototypes). Several approaches for improving the performance of LVQ in batch-learning scenarios are found in the literature. However, all of them assume a fixed number of prototypes in the learning process; we claim that the quantized approximation to the distribution of the input data using a finite number of prototypes, should not be fixed. Thus, in this paper we propose an improved learning algorithm for batch and on-line variants in LVQ. The proposed algorithm is based on a modified LVQ rule and granular computing, a simple and low cost computational process of clustering. All this, increases the dynamics in the learning process, proposing new prototypes which have a better covering of the distribution of classes, rather than using a fixed number of them. Similarly, in order to avoid an exponential growth in the number of prototypes, an automatic pruning step is implemented, respecting the desired reduction rate. Guest editorial: special issue on human abnormal behavioural analysis Machine Vision and Applications (2019-07-01) 30: 807-811 , July 01, 2019 By Anbarjafari, Gholamreza; Escalera, Sergio; Nasrollahi, Kamal; Escalante, Hugo Jair; Baro, Xavier; Wan, Jun; Moeslund, Thomas B. Show all (7) Time series forecasting with genetic programming Natural Computing (2017-03-01) 16: 165-174 , March 01, 2017 By Graff, Mario; Escalante, Hugo Jair; Ornelas-Tellez, Fernando; Tellez, Eric S. Show all (4) Genetic programming (GP) is an evolutionary algorithm that has received a lot of attention lately due to its success in solving hard world problems. There has been a lot of interest in using GP to tackle forecasting problems. Unfortunately, it is not clear whether GP can outperform traditional forecasting techniques such as auto-regressive models. In this contribution, we present a comparison between standard GP systems qand auto-regressive integrated moving average model and exponential smoothing. This comparison points out particular configurations of GP that are competitive against these forecasting techniques. In addition to this, we propose a novel technique to select a forecaster from a collection of predictions made by different GP systems. The result shows that this selection scheme is competitive with traditional forecasting techniques, and, in a number of cases it is statistically better. Enhanced Fuzzy-Relational Neural Network with Alternative Relational Products By Mendoza-Castañeda, Efraín; Reyes-García, Carlos A.; Escalante, Hugo Jair; Moreno, Wilfrido; Rosales-Pérez, Alejandro Show all (5) This paper describes an extension of fuzzy relational neural networks (FRNNs) that aims at improving their classification performance. We consider Pedrycz's FRNN, which is one of the most effective and popular models. This model has traditionally used a single relational product (Circlet). The extension described in this paper consists in allowing applying other relational products in the training phase to the basic FRNN, looking to increase its predictive capabilities. The relational products considered for the extension are the so called BK-Products: SubTriangle, SupTriangle and Square; in addition, we propose the use of more general operators (t-norms and s-norms) in their definitions, which are also applied to the Circlet relational product. We explore the effectiveness of this extension in classification problems, through testing experiments on benchmark data sets with and without noise. Experimental results reveal that the proposed extension improves the classification performance of the basic FRNN, particularly in noisy data sets. Exploiting label semantic relatedness for unsupervised image annotation with large free vocabularies Multimedia Tools and Applications (2019-07-30) 78: 19641-19662 , July 30, 2019 By Pellegrin, Luis; Escalante, Hugo Jair; Montes-y-Gómez, Manuel; González, Fabio A. Show all (4) Automatic Image Annotation (AIA) is the task of assigning keywords to images, with the aim to describe their visual content. Recently, an unsupervised approach has been used to tackle this task. Unsupervised AIA (UAIA) methods use reference collections that consist of the textual documents containing images. The aim of the UAIA methods is to extract words from the reference collection to be assigned to images. In this regard, by using an unsupervised approach it is possible to include large vocabularies because any word could be extracted from the reference collection. However, having a greater diversity of words for labeling entails to deal with a larger number of wrong annotations, due to the increasing difficulty for assigning a correct relevance to the labels. With this problem in mind, this paper presents a general strategy for UAIA methods that reranks assigned labels. The proposed method exploits the semantic-relatedness information among labels in order to assign them an appropriate relevance for describing images. Experimental results in different benchmark datasets show the flexibility of our method to deal with assignments from free-vocabularies, and its effectiveness to improve the initial annotation performance for different UAIA methods. Moreover, we found that (1) when considering the semantic-relatedness information among the assigned labels, the initial ranking provided by a UAIA method is improved in most of the cases; and (2) the robustness of the proposed method to be applied on different UAIA methods, will allow extending capabilities of state-of-the-art UAIA methods. Semi-supervised Online Kernel Semantic Embedding for Multi-label Annotation Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications (2018-01-01): 10657 , January 01, 2018 By Vanegas, Jorge A.; Escalante, Hugo Jair; González, Fabio A. This paper presents a multi-label annotation method that uses a semantic embedding strategy based on kernel matrix factorization. The proposed method called Semi-supervised Online Kernel Semantic Embedding (SS-OKSE) learns to predict the labels of a document by building a semantic representation of the document features that takes into account the labels, when available. A remarkable characteristic of the algorithm is that it is based on a kernel formulation that allows to model non-linear relationships. The SS-OKSE method was evaluated under a semi-supervised learning setup for a multi-label annotation task, over two text document datasets and was compared against several supervised and semi-supervised methods. Experimental results show that SS-OKSE exhibits a significant improvement, showing that a better modeling can be achieved with an adequate selection/construction of a kernel input representation. Overview of the Multimedia Information Processing for Personality & Social Networks Analysis Contest Pattern Recognition and Information Forensics (2019-01-01): 11188 , January 01, 2019 By Ramírez, Gabriela; Villatoro, Esaú; Ionescu, Bogdan; Escalante, Hugo Jair; Escalera, Sergio; Larson, Martha; Müller, Henning; Guyon, Isabelle Show all (8) Progress in the autonomous analysis of human behavior from multimodal information has lead to very effective methods able to deal with problems like action/gesture/activity recognition, pose estimation, opinion mining, user tailored retrieval, etc. However, it is only recently that the community has been starting to look into related problems associated with more complex behavior, including personality analysis, deception detection, among others. We organized an academic contest co-located with ICPR2018 running two tasks in this direction. On the one hand, we organized an information fusion task in the context of multimodal image retrieval in social media. On the other hand, we ran another task in which we aim to infer personality traits from written essays, including textual and handwritten information. This paper describes both tasks, detailing for each of them the associated problem, data sets, evaluation metrics and protocol, as well as an analysis of the performance of simple baselines. A Two-Step Retrieval Method for Image Captioning Experimental IR Meets Multilinguality, Multimodality, and Interaction (2016-01-01): 9822 , January 01, 2016 By Pellegrin, Luis; Vanegas, Jorge A.; Arevalo, John; Beltrán, Viviana; Escalante, Hugo Jair; Montes-y-Gómez, Manuel; González, Fabio A. Show all (7) Image captioning is the task of assigning phrases to images describing their visual content. Two main approaches for image captioning are commonly used. On the one hand, traditional approaches assign the captions from the most similar images to the image query. On the other hand, recent methods generate captions by sentence generation systems that learn a joint distribution of captions-images relying on a training set. The main limitation is that both approaches require a great number of manually labeled captioned images. This paper presents a unsupervised approach for image captioning based in a two steps image-textual retrieval process. First, given a query image, visually related words are retrieved from a multimodal indexing. The multimodal indexing is built by using a large dataset of web pages containing images. A vocabulary of words is extracted from web pages, for each word is used the visual representation of images to learn a feature model, in this way we can match query images with words by simply measuring visual similarity. Second, a query is formed with the retrieved words and candidate captions are retrieved from a reference dataset of sentences. Despite the simplicity of our method, it is able to get rid of the need of manually labeled images and instead takes advantage of the noisy data derived from the Web, e.g. web pages. The proposed approach has been evaluated on Generation of Textual Descriptions of Images Task at ImageCLEF 2015. Experimental results show the competitiveness of the proposed approach. In addition we report preliminary results on the use of our method for the auto-illustration task. ChaLearn LAP 2016: First Round Challenge on First Impressions - Dataset and Results Computer Vision – ECCV 2016 Workshops (2016-01-01): 9915 , January 01, 2016 By Ponce-López, Víctor; Chen, Baiyu; Oliu, Marc; Corneanu, Ciprian; Clapés, Albert; Guyon, Isabelle; Baró, Xavier; Escalante, Hugo Jair; Escalera, Sergio Show all (9) This paper summarizes the ChaLearn Looking at People 2016 First Impressions challenge data and results obtained by the teams in the first round of the competition. The goal of the competition was to automatically evaluate five "apparent" personality traits (the so-called "Big Five") from videos of subjects speaking in front of a camera, by using human judgment. In this edition of the ChaLearn challenge, a novel data set consisting of 10,000 shorts clips from YouTube videos has been made publicly available. The ground truth for personality traits was obtained from workers of Amazon Mechanical Turk (AMT). To alleviate calibration problems between workers, we used pairwise comparisons between videos, and variable levels were reconstructed by fitting a Bradley-Terry-Luce model with maximum likelihood. The CodaLab open source platform was used for submission of predictions and scoring. The competition attracted, over a period of 2 months, 84 participants who are grouped in several teams. Nine teams entered the final phase. Despite the difficulty of the task, the teams made great advances in this round of the challenge. In Defense of Online Kmeans for Prototype Generation and Instance Reduction Advances in Artificial Intelligence - IBERAMIA 2016 (2016-01-01): 10022 , January 01, 2016 By García-Limón, Mauricio; Escalante, Hugo Jair; Morales-Reyes, Alicia The nearest neighbor rule is one of the most popular algorithms for data mining tasks due in part to its simplicity and theoretical/empirical properties. However, with the availability of large volumes of data, this algorithm suffers from two problems: the computational cost to classify a new example, and the need to store the whole training set. To alleviate these problems instance reduction algorithms are often used to obtain a new condensed training set that in addition to reducing the computational burden, in some cases they improve the classification performance. Many instance reduction algorithms have been proposed so far, obtaining outstanding performance in mid size data sets. However, the application of the most competitive instance reduction algorithms becomes prohibitive when dealing with massive data volumes. For this reason, in recent years, it has become crucial the development of large scale instance reduction algorithms. This paper elaborates on the usage of a classic algorithm for clustering: K-means for tackling the instance reduction problem in big data. We show that this traditional algorithm outperforms most state of the art instance reduction methods in mid size data sets. In addition, this algorithm can cope well with massive data sets and still obtain quite competitive performance. Therefore, the main contribution of this work is showing the validity of this often depreciated algorithm for a quite relevant task in a quite relevant scenario. Analysis of the AutoML Challenge Series 2015–2018 Automated Machine Learning (2019-01-01): 177-219 , January 01, 2019 By Guyon, Isabelle; Sun-Hosoya, Lisheng; Boullé, Marc; Escalante, Hugo Jair; Escalera, Sergio; Liu, Zhengying; Jajetic, Damir; Ray, Bisakha; Saeed, Mehreen; Sebag, Michèle; Statnikov, Alexander; Tu, Wei-Wei; Viegas, Evelyne Show all (13) The ChaLearn AutoML Challenge (The authors are in alphabetical order of last name, except the first author who did most of the writing and the second author who produced most of the numerical analyses and plots.) (NIPS 2015 – ICML 2016) consisted of six rounds of a machine learning competition of progressive difficulty, subject to limited computational resources. It was followed bya one-round AutoML challenge (PAKDD 2018). The AutoML setting differs from former model selection/hyper-parameter selection challenges, such as the one we previously organized for NIPS 2006: the participants aim to develop fully automated and computationally efficient systems, capable of being trained and tested without human intervention, with code submission. This chapter analyzes the results of these competitions and provides details about the datasets, which were not revealed to the participants. The solutions of the winners are systematically benchmarked over all datasets of all rounds and compared with canonical machine learning algorithms available in scikit-learn. All materials discussed in this chapter (data and code) have been made publicly available at http://automl.chalearn.org/ . Automated Detection of Hummingbirds in Images: A Deep Learning Approach Pattern Recognition (2018-01-01): 10880 , January 01, 2018 By Serrano, Sergio A.; Benítez-Jimenez, Ricardo; Nuñez-Rosas, Laura; Coro Arizmendi, Ma; Greeney, Harold; Reyes-Meza, Veronica; Morales, Eduardo; Escalante, Hugo Jair Show all (8) The analysis of natural images has been the topic of research in uncountable articles in computer vision and pattern recognition (e.g., natural images has been used as benchmarks for object recognition and image retrieval). However, despite the research progress in such field, there is a gap in the analysis of certain type of natural images, for instance, those in the context of animal behavior. In fact, biologists perform the analysis of natural images manually without the aid of techniques that were supposedly developed for this purpose. In this context, this paper presents a study on automated methods for the analysis of natural images of hummingbirds with the goal to assist biologists in the study of animal behavior. The automated analysis of hummingbird behavior is challenging mainly because of (1) the speed at which these birds move and interact; (2) the unpredictability of their trajectories; and (3) its camouflage skills. We report a comparative study of two deep learning approaches for the detection of hummingbirds in their nest. Two variants of transfer learning from convolutional neural networks (CNNs) are evaluated in real imagery for hummingbird behavior analysis. Transfer learning is adopted because not enough images are available for training a CNN from scratch, besides, transfer learning is less time consuming. Experimental results are encouraging, as acceptable classification performance is achieved with CNN-based features. Interestingly, a pretrained CNN without fine tunning and a standard classifier performed better in the considered data set. Convolutional Genetic Programming In recent years Convolutional Neural Networks (CNN) have come to dominate many machine learning tasks, specially those related to image analysis, such as object recognition. Herein we explore the possibility of developing image denoising filters by stacking multiple Genetic Programming (GP) syntax trees, in a similar fashion to how CNNs are designed. We test the evolved filters performance in removing additive Gaussian noise. Results show that GP is able to generate a diverse set of feature maps at the 'hidden' layers of the proposed architecture. Although more research is required to validate the suitability of GP for image denoising, our work set the basis for bridging the gap between deep learning and evolutionary computation. Recognition of Apparent Personality Traits from Text and Handwritten Images By Costa, Ernesto Pérez; Villaseñor-Pienda, Luis; Morales, Eduardo; Escalante, Hugo Jair Show all (4) Personality has been considered as one of the most difficult human attributes to understand. It is very important as it can be used to define the uniqueness of a person, and it has a direct impact into several aspects of everyone's life. This paper describes our participation in the HWxPI challenge @ ICPR 2018, an academic competition focusing on the development of methods for estimation of apparent personality from handwritten and textual information. The proposed solution combined information extracted from both text and images. From the textual modality, words, and other linguistic features were considered; whereas handwritten information was represented with shape features extracted from segmented characters. Although the task was extremely hard, we show that the considered features indeed convey useful information that can be used to estimate apparent personality. Evaluating Term-Expansion for Unsupervised Image Annotation Human-Inspired Computing and Its Applications (2014-01-01): 8856 , January 01, 2014 By Pellegrin, Luis; Escalante, Hugo Jair; Montes-y-Gómez, Manuel Automatic image annotation (AIA) deals with the problem of automatically providing images with labels/keywords that describe their visual content. Unsupervised AIA methods are often preferred because they can annotate (virtually) any possible concept to images and do not require labeled data as their supervised counterparts. Unsupervised AIA methods use a reference collection of images with associated (unstructured, freeform) text to annotate images. Thus, this type of methods heavily rely on the quality of the text in the reference collection. With the goal of improving the annotation performance of unsupervised AIA methods, we propose in this paper a term expansion strategy that expands the text associated with images from the reference collection. The proposed method is based on term co-occurrence analysis. We evaluate the impact that the proposed expansion has in the annotation performance of a straight unsupervised AIA method using a benchmark for large scale image annotation. Two types of associated text are used and several image descriptors are considered. Experimental results show that, by using the proposed expansion, better annotation performance can be obtained, where the improvements depend on the type of associated text that is considered. Bayesian Chain Classifier with Feature Selection for Multi-label Classification Advances in Soft Computing (2018-01-01): 11288 , January 01, 2018 By Jiménez, Ricardo Benítez; Morales, Eduardo F.; Escalante, Hugo Jair Multi-label classification task has many applications in Text Categorization, Multimedia, Biology, Chemical data analysis and Social Network Mining, among others. Different approaches have been developed: Binary Relevance (BR), Label Power Set (LPS), Random k label sets (RAkEL), some of them consider the interaction between labels in a chain (Chain Classifier) and other alternatives around this method are derived, for instance, Probabilistic Chain Classifier, Monte Carlo Chain Classifier and Bayesian Chain Classifier (BCC). All previous approaches have in common and focus on is in considering different orders or combinations of the way the labels have to be predicted. Given that feature selection has proved to be important in classification tasks, reducing the dimensionality of the problem and even improving classification model's accuracy. In this work a feature selection technique is tested in BCC algorithm with two searching methods, one using Best First (BF-FS-BCC) and another with GreedyStepwise (GS-FS-BCC), these methods are compared, the winner is also compared with BCC, both tests are compared through Wilcoxon Signed Rank test, in addition it is compared with others Chain Classifier and finally it is compared with others approaches (BR, RAkEL, LPS). Towards Annotation-Based Query and Document Expansion for Image Retrieval By Escalante, Hugo Jair; Hernández, Carlos; López, Aurelio; Marín, Heidy; Montes, Manuel; Morales, Eduardo; Sucar, Enrique; Villaseñor, Luis Show all (8) In this paper we report results of experiments conducted with strategies for improving text-based image retrieval. The adopted strategies were evaluated in the photographic retrieval task at ImageCLEF2007. We propose a Web-based method for expanding textual queries with related terms. This technique was the top-ranked query expansion method among those proposed by other ImageCLEF2007 participants. We also consider two methods for combining visual and textual information in the retrieval process: late-fusion and intermedia-feedback. The best results were obtained by combining intermedia-feedback and our expansion technique. The main contribution of this paper, however, is the proposal of "annotation-based expansion"; a novel approach that consists of using labels assigned to images (with image annotation methods) for expanding textual queries and documents. We introduce this idea and report results of initial experiments towards enhancing text-based image retrieval via image annotation. Preliminary results show that this expansion strategy could be useful for image retrieval in the near future. The ChaLearn gesture dataset (CGD 2011) Machine Vision and Applications (2014-11-01) 25: 1929-1951 , November 01, 2014 By Guyon, Isabelle; Athitsos, Vassilis; Jangyodsuk, Pat; Escalante, Hugo Jair Show all (4) This paper describes the data used in the ChaLearn gesture challenges that took place in 2011/2012, whose results were discussed at the CVPR 2012 and ICPR 2012 conferences. The task can be described as: user-dependent, small vocabulary, fixed camera, one-shot-learning. The data include 54,000 hand and arm gestures recorded with an RGB-D $$\hbox {Kinect}^\mathrm{TM}$$ camera. The data are organized into batches of 100 gestures pertaining to a small gesture vocabulary of 8–12 gestures, recorded by the same user. Short continuous sequences of 1–5 randomly selected gestures are recorded. We provide man-made annotations (temporal segmentation into individual gestures, alignment of RGB and depth images, and body part location) and a library of function to preprocess and automatically annotate data. We also provide a subset of batches in which the user's horizontal position is randomly shifted or scaled. We report on the results of the challenge and distribute sample code to facilitate developing new solutions. The data, datacollection software and the gesture vocabularies are downloadable from http://gesture.chalearn.org . We set up a forum for researchers working on these data http://groups.google.com/group/gesturechallenge . A Flexible Framework for the Evaluation of Unsupervised Image Annotation By Pellegrin, Luis; Escalante, Hugo Jair; Montes-y-Gómez, Manuel; Villegas, Mauricio; González, Fabio A. Show all (5) Automatic Image Annotation (AIA) consists in assigning keywords to images describing their visual content. A prevalent way to address the AIA task is based on supervised learning. However, the unsupervised approach is a new alternative that makes a lot of sense when there are not manually labeled images to train supervised techniques. AIA methods are typically evaluated using supervised learning performance measures, however applying these kind of measures to unsupervised methods is difficult and unfair. The main restriction has to do with the fact that unsupervised methods use an unrestricted annotation vocabulary while supervised methods use a restricted one. With the aim to alleviate the unfair evaluation, in this paper we propose a flexible evaluation framework that allows us to compare coverage and relevance of the assigned words by unsupervised automatic image annotation (UAIA) methods. We show the robustness of our framework through a set of experiments where we evaluated the output of both, unsupervised and supervised methods. Local and global approaches for unsupervised image annotation Multimedia Tools and Applications (2017-08-01) 76: 16389-16414 , August 01, 2017 Image annotation is the task of assigning keywords to images with the goal of facilitating their organization and accessibility options (e.g., searching by keywords). Traditional annotation methods are based on supervised learning. Although being very effective, these methods require of large amounts of manually labeled images, and are limited in the sense that images can only be labeled with concepts seen during the training phase. Unsupervised automatic image annotation (UAIA) methods, on the other hand, neglect strongly-labeled images and instead rely on huge collections of unstructured text containing images for the annotation. In addition to not requiring labeled images, unsupervised techniques are advantageous because they can assign (virtually) any concept to an image. Despite these benefits, unsupervised methods have not been widely studied in image annotation, a reason for this is the lack of a reference framework for UAIA. In this line, this paper introduces two effective methods for UAIA in the context of a common framework inspired in the way a query is expanded throughout Automatic Query Expansion (AQE) in information retrieval. On the one hand, we describe a local method that processes text information associated to images retrieved when using the image to annotate as query, several methods from the state of the art can be described under this formulation. On the other hand, we propose a global method that pre-process offline the reference collection to identify visual-textual associations that are later used for annotation. Both methods are extensively evaluated in benchmarks for large-scale UAIA. Experimental results show the competitiveness of both strategies when compared to the state of the art. We foresee the AQE-based framework will pave the way for the development of alternative and effective methods for UAIA. A Knowledge-Based Weighted KNN for Detecting Irony in Twitter Advances in Computational Intelligence (2018-01-01): 11289 , January 01, 2018 By Hernández Farías, Delia Irazú; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; Rosso, Paolo; Patti, Viviana Show all (5) In this work, we propose a variant of a well-known instance-based algorithm: WKNN. Our idea is to exploit task-dependent features in order to calculate the weight of the instances according to a novel paradigm: the Textual Attraction Force, that serves to quantify the degree of relatedness between documents. The proposed method was applied to a challenging text classification task: irony detection. We experimented with corpora in the state of the art. The obtained results show that despite being a simple approach, our method is competitive with respect to more advanced techniques. From Text to Speech: A Multimodal Cross-Domain Approach for Deception Detection By Rill-García, Rodrigo; Villaseñor-Pineda, Luis; Reyes-Meza, Verónica; Escalante, Hugo Jair Show all (4) Deception detection -identifying when someone is trying to cause someone else to believe something that is not true- is a hard task for humans. The task is even harder for automatic approaches, that must deal with additional problems like the lack of enough labeled data. In this context, transfer learning in the form of cross-domain classification is a task that aims to leverage labeled data from certain domains for which labeled data is available to others for which data is scarce. This paper presents a study on the suitability of linguistic features for cross-domain deception detection on multimodal data. Specifically, we aim to learn models for deception detection across different domains of written texts (one modality) and apply the new knowledge to unrelated topics transcribed from spoken statements (another modality). Experimental results reveal that by using LIWC and POS n-grams we reach a in-modality accuracy of 69.42%, as well as an AUC ROC of 0.7153. When doing transfer learning, we achieve an accuracy of 63.64% and get an AUC ROC of 0.6351. Towards Autonomous Flight of Low-Cost MAVs by Using a Probabilistic Visual Odometry Approach Advances in Artificial Intelligence and Its Applications (2015-01-01): 9414 , January 01, 2015 By Martínez-Carranza, José; Garcia, Esteban Omar; Escalante, Hugo Jair; Mayol-Cuevas, Walterio Show all (4) In this paper we present a methodology to localise and control low-budget Micro Aerial Vehicles (MAVs) in GPS-denied environments. The control law is based on a PD controller that controls height, orientation, roll and pitch in order to enable the MAV to fly autonomously towards a specific target. The core of our approach is the implementation of a fast probabilistic approach robust to erratic motion and capable of processing imagery data transmitted from the MAV to the Ground Control Station (GCS). The latter is due to the architecture of our low-budget MAVs which can not carry out any processing on board. However, images captured with the camera mounted on board the MAV can be transmitted via either wireless LAN or through analogue transmission to the GCS, where our fast probabilistic Visual Odometry system is used in order to rapidly obtain position estimates of the vehicle. Such estimates can be used accordingly to communicate back with the vehicle in order to submit control signals to drive its autonomous flight. An iterative genetic programming approach to prototype generation Genetic Programming and Evolvable Machines (2017-06-01) 18: 123-147 , June 01, 2017 By Valencia-Ramírez, José María; Graff, Mario; Escalante, Hugo Jair; Cerda-Jacobo, Jaime Show all (4) In this paper, we propose a genetic programming (GP) approach to the problem of prototype generation for nearest-neighbor (NN) based classification. The problem consists of learning a set of artificial instances that effectively represents the training set of a classification problem, with the goal of reducing the storage requirements and the computational cost inherent in NN classifiers. This work introduces an iterative GP technique to learn such artificial instances based on a non-linear combination of instances available in the training set. Experiments are reported in a benchmark for prototype generation. Experimental results show our approach is very competitive with the state of the art, in terms of accuracy and in its ability to reduce the training set size. Evolving autoencoding structures through genetic programming Genetic Programming and Evolvable Machines (2019-05-25): 1-28 , May 25, 2019 We propose a novel method to evolve autoencoding structures through genetic programming (GP) for representation learning on high dimensional data. It involves a partitioning scheme of high dimensional input representations for distributed processing as well as an on-line form of learning that allows GP to efficiently process training datasets composed of hundreds or thousands of samples. The use of this on-line learning approach has important consequences in computational cost given different evolutionary population dynamics, namely steady state evolution and generational replacement. We perform a complete experimental study to compare the evolution of autoencoders (AEs) under different population dynamics and genetic operators useful to evolve GP based AEs' individuals. Also, we compare the performance of GP based AEs with another representation learning method. Competitive results have been achieved through the proposed method. To the best of the authors' knowledge, this research work is a precursor within the field of evolutionary deep learning. Evolving weighting schemes for the Bag of Visual Words Neural Computing and Applications (2017-05-01) 28: 925-939 , May 01, 2017 By Escalante, Hugo Jair; Ponce-López, Víctor; Escalera, Sergio; Baró, Xavier; Morales-Reyes, Alicia; Martínez-Carranza, José Show all (6) The Bag of Visual Words (BoVW) is an established representation in computer vision. Taking inspiration from text mining, this representation has proved to be very effective in many domains. However, in most cases, standard term-weighting schemes are adopted (e.g., term-frequency or TF-IDF). It remains open the question of whether alternative weighting schemes could boost the performance of methods based on BoVW. More importantly, it is unknown whether it is possible to automatically learn and determine effective weighting schemes from scratch. This paper brings some light into both of these unknowns. On the one hand, we report an evaluation of the most common weighting schemes used in text mining, but rarely used in computer vision tasks. Besides, we propose an evolutionary algorithm capable of automatically learning weighting schemes for computer vision problems. We report empirical results of an extensive study in several computer vision problems. Results show the usefulness of the proposed method. Principal motion components for one-shot gesture recognition Pattern Analysis and Applications (2017-02-01) 20: 167-182 , February 01, 2017 By Escalante, Hugo Jair; Guyon, Isabelle; Athitsos, Vassilis; Jangyodsuk, Pat; Wan, Jun Show all (5) This paper introduces principal motion components (PMC), a new method for one-shot gesture recognition. In the considered scenario a single training video is available for each gesture to be recognized, which limits the application of traditional techniques (e.g., HMMs). In PMC, a 2D map of motion energy is obtained per each pair of consecutive frames in a video. Motion maps associated to a video are processed to obtain a PCA model, which is used for recognition under a reconstruction-error approach. The main benefits of the proposed approach are its simplicity, easiness of implementation, competitive performance and efficiency. We report experimental results in one-shot gesture recognition using the ChaLearn Gesture Dataset; a benchmark comprising more than 50,000 gestures, recorded as both RGB and depth video with a Kinect™camera. Results obtained with PMC are competitive with alternative methods proposed for the same data set. Predicting Academic-Challenge Success By López, Dante; Villaseñor, Luis; Montes-y-Gómez, Manuel; Morales, Eduardo; Escalante, Hugo Jair Show all (5) Academic competitions and challenges comprise an effective mechanism for rapidly advancing the state of the art in diverse research fields and for solving practical problems arising in industry. In fact, academic competitions are increasingly becoming an essential component of academic events, like conferences. With the proliferation of challenges, it is becoming more and more relevant to distinguish potentially successful challenges before they are launched. This in order to better allocate resources, time slots, sponsorship and even to have a better estimate of expected participation. This paper presents a first study in this direction: we collected a data set from Kaggle and aim to predict challenge success by using information that is available before a competition starts. We characterize competition proposals by textual information and meta-features derived from information provided by organizers, and use these features to predict challenge success (estimated by the number of participants and submissions). We show that both, text and meta-features convey predictive information that can be used to estimate the success of an academic challenge. Deep Learning for Action and Gesture Recognition in Image Sequences: A Survey Gesture Recognition (2017-01-01): 539-578 , January 01, 2017 By Asadi-Aghbolaghi, Maryam; Clapés, Albert; Bellantonio, Marco; Escalante, Hugo Jair; Ponce-López, Víctor; Baró, Xavier; Guyon, Isabelle; Kasaei, Shohreh; Escalera, Sergio Show all (9) Interest in automatic action and gesture recognition has grown considerably in the last few years. This is due in part to the large number of application domains for this type of technology. As in many other computer vision areas, deep learning based methods have quickly become a reference methodology for obtaining state-of-the-art performance in both tasks. This chapter is a survey of current deep learning based methodologies for action and gesture recognition in sequences of images. The survey reviews both fundamental and cutting edge methodologies reported in the last few years. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. Details of the proposed architectures, fusion strategies, main datasets, and competitions are reviewed. Also, we summarize and discuss the main works proposed so far with particular interest on how they treat the temporal dimension of data, their highlighting features, and opportunities and challenges for future research. To the best of our knowledge this is the first survey in the topic. We foresee this survey will become a reference in this ever dynamic field of research. Looking at People Special Issue International Journal of Computer Vision (2018-04-01) 126: 141-143 , April 01, 2018 By Escalera, Sergio; Gonzàlez, Jordi; Escalante, Hugo Jair; Baró, Xavier; Guyon, Isabelle Show all (5) MOPG: a multi-objective evolutionary algorithm for prototype generation Pattern Analysis and Applications (2017-02-01) 20: 33-47 , February 01, 2017 By Escalante, Hugo Jair; Marin-Castro, Maribel; Morales-Reyes, Alicia; Graff, Mario; Rosales-Pérez, Alejandro; Montes-y-Gómez, Manuel; Reyes, Carlos A.; Gonzalez, Jesus A. Show all (8) Prototype generation deals with the problem of generating a small set of instances, from a large data set, to be used by KNN for classification. The two key aspects to consider when developing a prototype generation method are: (1) the generalization performance of a KNN classifier when using the prototypes; and (2) the amount of data set reduction, as given by the number of prototypes. Both factors are in conflict because, in general, maximizing data set reduction implies decreasing accuracy and viceversa. Therefore, this problem can be naturally approached with multi-objective optimization techniques. This paper introduces a novel multi-objective evolutionary algorithm for prototype generation where the objectives are precisely the amount of reduction and an estimate of generalization performance achieved by the selected prototypes. Through a comprehensive experimental study we show that the proposed approach outperforms most of the prototype generation methods that have been proposed so far. Specifically, the proposed approach obtains prototypes that offer a better tradeoff between accuracy and reduction than alternative methodologies.
CommonCrawl
How do banks interact with fintech startups? Lars Hornuf ORCID: orcid.org/0000-0002-0576-77591,2,3, Milan F. Klus4, Todor S. Lohwasser4 & Armin Schwienbacher5 Small Business Economics volume 57, pages 1505–1526 (2021)Cite this article The increasing pervasiveness of technology-driven firms that offer financial services has led to growing pressure on traditional banks to modernize their core business activities and services. Many banks tackle the challenges of digitalization by cooperating with startup firms that offer technology-driven financial services and novel service packages (fintechs). In this article, we examine which banks typically collaborate with fintechs, how intensely they do so, and which form of alliance they prefer. Using hand-collected data covering the largest banks from Canada, France, Germany, and the United Kingdom, we provide detailed evidence on the different forms of alliances occurring in practice. We show that banks are significantly more likely to form alliances with fintechs when they pursue a well-defined digital strategy and/or employ a chief digital officer. Moreover, in line with incomplete contract theory, we find that banks more frequently invest in small fintechs but often build product-related collaborations with larger fintechs. In the past two decades, digitalization has influenced many industries, offering new entrepreneurial opportunities and creating new systems of innovation (Barrett et al. 2015; Autio et al. 2018). Most recently, the banking industry, one of the most traditional and conservative sectors in the economy, has been confronted with potentially disruptive technology-driven innovations and Internet-based solutions (Navaretti et al. 2017). By developing new information technology–(IT-) enabled service models, startup firms and multinational technology companies have in many cases created more customer-oriented and user-friendly digital applications in the banking industry, leading to growing digital servitization of financial products.Footnote 1Many of these new banking solutions have also been developed by financial technology (fintech) companies. Some of the new digital innovations have the potential to reshape or even crowd out some of the business activities of more traditional banks. As a result, digitalization and platform-enabled fintechs have forced banks to reconsider their corporate boundaries and make them more permeable to market interactions (Kohtamäki et al. 2019). More permeable organizational forms such as strategic alliances allow banks to confront the threat of technology-driven firms and offer traditional banks new advantages to benefit from innovations developed by fintechs in ways different from the simple "make-or-buy" decision (Borah and Tellis 2014; Jacobides and Billinger 2006). For example, banks have established fintech incubators and accelerators to enable innovations while maintaining control through a minority share in the firms that are built or supervised. A lack of legacy infrastructure and comparatively low level of organizational complexity often enable fintech firms to be more agile, innovate faster, and be more radical in their approach to innovation (Brandl and Hornuf 2020). By contrast, it is more difficult for traditional banks to adapt to some of the new technological developments because they need to comply with more extensive regulatory requirements. Often, a larger number of stakeholders need to be convinced when adopting far-reaching organizational changes in a traditional bank (Klus et al. 2019). Moreover, digital service innovations typically crowd out banks' existing distribution channels (Vendrell-Herrero et al. 2017), thereby reducing banks' incentives to introduce new distribution channels on their own. The sluggishness of traditional banks to adapt to digital challenges not only has implications at the individual bank but also affects the entire financial ecosystem. Given the legacy infrastructure and high level of organizational complexity inherent in many banks, they need to re-organize their ecosystem to improve the digital services offered to retail and business clients (for related work on multinational industries, see Sklyar et al. (2019). In this article, we analyze which characteristics of banks are associated with different forms of alliances with fintech companies. The Financial Stability Board of the Bank for International Settlements defines fintech as "technologically enabled financial innovation that could result in new business models, applications, processes, or products with an associated material effect on financial markets and institutions and the provision of financial services" (European Banking Authority 2017, p. 4). We are particularly interested in the number of bank–fintech alliances that have been established in developed economies and the factors related to different forms of alliances such as investments or product-related collaborations. Finally, we investigate the impact of these alliances on banks' market value.Footnote 2 The literature on financial innovation in general and bank–fintech alliances in particular is scarce. First, our analysis contributes to the empirical literature on financial innovation. Lerner (2002) and Miller (1986) provide empirical evidence that financial innovation, as measured by the filing of financial patents, has been increasing since the late 1970s. Moreover, Scott et al. (2017) find that the financial industry had historically spent a large share of expenses on IT, which reached more than one-third of all expenses in 1992. One reason for the high share of IT expenses was that the financial industry employed computers early on as part of their business model. Historically, innovations (e.g., the automated teller machine) have led to changes in financial organizations and services (Merton 1995). The quality of financial patents and financial innovations was, nevertheless, often low (Lerner et al. 2015). Therefore, the financial industry was perceived as one of the least innovative. Still, scant empirical research has investigated whether fintech startups have pressured traditional banks to innovate or even forced banks to engage in strategic alliances with them. We fill this gap in the literature by analyzing bank characteristics that are associated with different forms of alliances with fintech companies. Second, our analysis contributes to the emerging literature investigating not only individual business models but also the fintech market in its entirety. Haddad and Hornuf (2019) analyze fintechs in 55 countries and provide evidence that markets witness more fintech formations when the economy is well-developed and venture capital is easily accessible. Other relevant factors for the formation of fintechs are access to loans, secure Internet servers, mobile telephone subscriptions, and a large labor force. Cumming and Schwienbacher (2018) find that differences in the enforcement of financial regulations of startups and banks after the financial crisis contributed to venture capital investments in fintech startups. Puschmann (2017) provides a model to categorize the industry. Navaretti et al. (2017, p. 17) conduct a conceptual analysis on the relationship between fintechs and banks and find that the "game is still open" and "a lot of work lies ahead" for the industry. A related article to ours is that of Brandl and Hornuf (2020), who run a bank–fintech network analysis for Germany and find that most relationships are product-related collaborations. They argue that this is because most fintechs develop an algorithm or software solution, the value of which can only be determined over time, when the software has been adapted more thoroughly to customer needs. We add to their findings by investigating the particular bank characteristics associated with a bank–fintech alliance. These alliances occur against the backdrop that the arrival of fintechs modifies the supply chain interdependency of banks and thus also establishes new ecosystems (Kohtamäki et al. 2019; Vendrell-Herrero et al. 2017). More precisely, we consider different forms of alliances, such as product-related cooperation and minority and majority equity stakes, which tend to be classified in the transaction cost literature as "hybrid structures" (Jacobides and Billinger 2006; Williamson 1991), and investigate bank characteristics (e.g., profitability) associated with these alliances. Finally, we also contribute to the "make, buy, or ally" literature (Borah and Tellis 2014; Jacobides and Billinger 2006), which evidences a broad range of interactions that firms can have with other firms in the market, particularly in the context of innovation management. In particular, our results on why certain types of alliances occur (e.g., investments vs. product-related collaborations) are consistent with incomplete contract theory (Aghion and Bolton 1992; Grossman and Hart 1986). In a broad sense, we also contribute to research on servitization, especially the service science stream of the literature that Rabetino et al. (2018) identifies and that focuses on business-oriented approaches to servitization (e.g., Baines et al. 2009), the systematic development of new services (e.g., Bullinger et al. 2003), and the role of organizational, technological, and human factors in the configuration of new services (e.g., Spohrer et al. 2007; Vargo and Lusch 2011). The financial industry is a relevant sector to examine in this context, given the new service strategies banks and fintechs are currently developing, the additional services incumbents and new market participants add to existing financial products, and the novel service packages now being offered by platforms such as Bó, Mettle, and N26. Digital servitization in the banking industry initiated an evolving ecosystem that results from the digitalization of financial products and new IT-enabled service models. Industries affected by digital servitization typically confront upstream and downstream competition (Barrett et al. 2015); the current changes in the financial industry also affect both upstream (through new services and service packages offered) and downstream (through enhanced customer services and novel distribution channels) competition. In the former case, dis-intermediated finance solutions such as crowdfunding give retail investors access to new investment products. In the latter case, the emergence of various platforms that allow customers to directly compare prices of different banks has modified how financial products are offered and distributed. The structure of this article is as follows: In the "Literature review and hypotheses" section, we outline our theory and hypotheses, and in the "Data and methods" section, we describe our data and the methods applied. In the "Empirical results" section, we present the results. The "Discussion" section provides an analytical discussion, and the "Conclusion" section concludes with implications for practice, and outlines avenues for future research. Literature review and hypotheses To increase their profitability, banks have historically developed financial innovations (Scott et al. 2017) and more recently embraced digital services as a new engine of growth (Barrett et al. 2015). Beck et al. (2016) show that financial innovations are positively associated with bank growth. The recently emerging service science literature also suggests that the development of new service models can reduce costs to firms and add value to customers (e.g., Sakao and Shimomura 2007). Similar to the recent transformation of century-old business models in the computer equipment and software industry, new IT-enabled service models and digital servitization are likely to enhance the financial performance of incumbent firms in the banking industry (Kohtamäki et al. 2020; Spohrer and Maglio 2010). Moreover, novel digital infrastructures such as the blockchain technology can facilitate the combinatorial potential for enhanced service innovations (Yoo et al. 2010). In their study on 50 Swedish advanced service providers, Sjödin et al. (2019) examine how relational governance for the provision of advanced services can enhance the financial performance of a firm. They identify a need to apply a set of diverse relational governance strategies to generate superior financial performance. In line with these findings, we derive testable hypotheses about what drives bank–fintech interactions under the premise that alliances are the result of mutually beneficial transactions between banks and fintechs (Coase 1960; Scott et al. 2017). These transactions are meant to enhance the bank's value through the implementation of financial innovations. In other words, bank–fintech alliances aim to improve the market value of both fintechs and banks. While early research on the boundary of firms primarily considered market transactions versus the acquisition of firms, and thus the internalization of externally developed products or services (starting with Coase 1937), recent research on organizations has evidenced various other forms of interactions that could lead to alliances for the joint development of products or services and the exploitation of innovation opportunities (Borah and Tellis 2014; Jacobides and Billinger 2006). Current innovations pose particular challenges to the optimal boundary of banks, for which market transactions could provide more flexible solutions to the increasing digitalization of organizations and the emergence of platform-based business models in the financial industry. If banks cannot develop new digital services themselves to reap the benefit of digitalization, they must adopt a more permeable structure that facilitates interactions with fintechs to better match financial service capabilities with the particular needs of the market. Fintechs might collaborate with banks for several reasons. Through an alliance with an established player in the financial industry, fintechs can obtain access to a broader customer base, gain access to superior knowledge in how to deal with financial regulations, and improve their own digital services. Some fintechs engage in an alliance with a bank to obtain access to a banking license, which in many cases would be too cumbersome and too expensive for a fintech startup to obtain (Klus et al. 2019). By contrast, banks can secure a competitive advantage by collaborating with fintechs that are developing or have already developed a better way to provide financial services. In some cases, investing in a fintech firm can give a bank the exclusive rights to use a specific application or license, enabling it to exclude competitors at its discretion. Similar to industrial firms, banks can thus protect their core businesses (Hagedoorn and Duysters 2002). Moreover, such an investment allows the bank to exercise control and directly influence the product development process and service strategies of the fintech. Given the opportunities and challenges associated with the digital transformation of the financial industry, the majority of banks have by now adopted a digital strategy that outlines how digital transformation should occur. One way to execute this transformation is to assign responsibility for this process to a designated manager, and some banks have thus created the position of a chief digital officer (CDO). While research has examined the role of the chief executive officer and chief financial officer in earnings management (Jiang et al. 2010) and explored whether hiring a chief financial officer changes fraudulent financial reporting (Geiger and North 2006), little is known about the role of the CDO. This lack of research is likely due to the recent creation of this new board position. Given the specific tasks assigned to the CDO and the context in which this position has been created, the CDO may predominantly develop in-house digitalization competencies as well as new service strategies and collaborate with fintechs only if doing so is the most cost-efficient solution. However, ceteris paribus, a bank with a CDO, may also interact more frequently with fintechs than banks without such a position because initiating alliances with fintechs could simply be part of the same corporate change strategy. If CDOs implement strategies to develop new digital services within the bank, they might also be more likely to pursue organizational changes that make banks more permeable to the market to reap the full benefits of the new services, thus making interactions with fintechs more likely. These interactions may take the form of an investment or a product-related collaboration. Alliances enable banks to benefit from innovations without facing the burden of having to develop them in the presence of existing organizational structures and legacy IT systems. A clear mission of a CDO and the reduced burden to innovate in the absence of a legacy system thus lead banks with a CDO to launch more alliances. Similarly, some banks do not hire a CDO but nevertheless develop a clear digital strategy and delegate the development of this strategy to other managers of the bank. As this may lead to the same outcome, we conjecture that banks with a clear digital strategy are more likely to have alliances with fintechs than banks without such a strategy. We summarize these predictions as follows: H1 (CDO or digital strategy): banks with a CDO or that have defined digitalization as a goal in their corporate strategy are more likely to establish alliances with fintechs. Banks have different motives when they engage in an alliance with a fintech. The development of digital services affects how financial products look and how they are distributed to customers. If banks cannot develop new digital services themselves because of their IT legacy and organizational structure, product-related collaborations enable them to broaden their portfolio and use alternative distribution channels to reach new customers. Offering fintech services or applications on their websites helps banks maintain their customer base without having to develop new services or applications themselves. Often, developing these services or applications alone is a cumbersome task because many banks operate software systems that are barely compatible with modern end-user applications and suffer from organizational legacy (Brandl and Hornuf 2020). Moreover, because many fintechs offer software solutions, which must be customized to end-user needs and updated at regular intervals, acquiring a fintech is risky for a bank. Whether a fintech can develop efficient digital services in a timely manner is uncertain, and having the option to choose the software of another provider can be a risk-minimizing strategy for a bank. Waiting until the digital service of a fintech has been customized and is running in the mass market that might therefore be a better strategy. By acquiring a fintech early on in the development or even commercialization phase, banks can easily bet on the wrong horse. In such a situation, taking the route of setting up alliances may allow the bank to reduce technological and market risk. As the make, buy, or ally literature indicates (Borah and Tellis 2014; Jacobides and Billinger 2006), alliances can therefore represent a more flexible solution particularly suitable for innovations. The relative benefits of setting up an alliance with an existing fintech startup rather than acquiring it are factor specific, as not all banks will benefit equally from forming an alliance. Strategic alliances may also fail to generate superior financial performance, notably because banks and/or fintechs lack specialized knowledge or proper decision-making authority to operate successfully within a novel financial ecosystem (Das and Teng 2000; Li et al. 2019). If, however, banks wait too long, given the competitive environment in which they are evolving, they might lose a valuable innovation to a competitor, something banks may be able to shield themselves against by acquiring the startup early on. Large banks often have deeper pockets than small banks and can also bear the risk of acquiring the wrong fintech. An investment, through either a minority or majority acquisition, in a fintech allows banks to internalize the knowledge of the fintech better and obtain sole possession of its knowledge (Teece 1986). We, therefore, expect bank size to be associated with the form of alliances chosen and conjecture the following: H2a (type of alliance): large banks are more likely to invest in fintech firms, while small banks engage in product-related collaborations. In the context of innovation, the theoretical literature on incomplete contracting has developed strong arguments on the choice between building corporate, collaborative relationships governed by contracts and acquiring the innovating firm (Grossman and Hart 1986). Innovation activities are typically difficult to contract because their ultimate outcome is hard to determine ex ante and thus is non-verifiable ex post (Aghion and Bolton 1992). This is especially true for early-stage firms, in which the ultimate outcome of an innovation is still largely unknown. In this case, contracting between the fintech and the bank is not an effective way of generating synergies because the fintech cannot be contractually constrained in creating synergies with the bank. When contract terms about future innovations cannot clearly be written down, ex post enforcement becomes impossible. Consequently, investing in the fintech is superior to a product-related cooperation because it allows the bank to control the decisions made inside the fintech firm more directly. H2b (type of alliance): banks are more likely to invest in small fintechs and engage in product-related collaborations with large fintechs. After analyzing the characteristics of alliances, an important question is whether the alliances between banks and fintechs ultimately create economic value. Because many banks have only recently engaged in alliances with fintechs, it is still too early to investigate the effect of these alliances on long-term performance measures of banks or even their corporate structure. Nevertheless, event studies are an established method to evaluate the market expectations of future cash flows that might result from organizational changes, such as mergers, joint ventures, or strategic alliances (Amici et al. 2013; Gleason et al. 2003; Marciukaityte et al. 2009). Given the increasing importance of digitalization for the financial industry and its impact on the survival of incumbent banks, we expect markets to react to announcements of bank–fintech alliances. If stock prices reflect future earnings of banks and if strategic alliances with fintechs are value enhancing, for example, through superior digital servitization strategies, this should be reflected in the market valuation of the involved bank. H3 (economic value): announcements of bank–fintech alliances have a positive impact on the market value of the bank. To test our hypotheses, we follow a quantitative approach. Although a qualitative approach would have been equally valid, we were interested in the extent of bank–fintech alliances and wanted to compare them across different countries and financial subsectors. We therefore hand-collected detailed information on alliances formed by the hundred largest banks in Canada, France, Germany, and the United Kingdom between 2007 and 2017. In this section, we present our data, describe the methods used, and outline our empirical models. Our initial sample consists of the hundred largest legally independent banks, as measured by their total assets, in each of the following four countries: Canada, France, Germany, and the United Kingdom. The list of banks came from the respective national supervisory authorities and comprises all active banks as of 2017. We chose these four countries because they represent different financial systems: while Canada and the United Kingdom are traditionally considered market-based financial systems, France and Germany are considered bank-based financial systems (Demirguc-Kunt and Levine 1999). Furthermore, the four countries are similar in size, allowing us to provide a comprehensive overview of the respective market. We deliberately decided not to analyze countries with a large fintech market, such as China and the USA, because our results might have been less comprehensive, and our language proficiency (in the case of China) would not have been sufficient for a rigorous analysis. Finally, these two countries would have been outliers that would have skewed the findings of the empirical analysis (Haddad and Hornuf 2019). To assemble a comprehensive overview of existing bank–fintech alliances, we used a broad Internet search encompassing four steps. First, we searched all bank websites to find official press releases about alliances with fintechs. The decision as to whether the respective partner is a fintech was based on the definition provided by the Financial Stability Board of the Bank for International Settlements. Second, we investigated the fintech side and searched the Crunchbase database for alliances with banks. Third, we ran a comprehensive search for news articles about bank–fintech alliances on Factiva, which also helped us obtain more information on the respective forms of alliances. To ensure that no alliance was omitted, we carried out a systematic Google search in the fourth step.Footnote 3For coding purposes, an alliance counted as an investment if the bank made a minority or majority acquisition and a product-related collaboration if it formed a contract-based partnership. To be included in our sample, three additional requirements needed to be met: (1) the alliance must have been announced between January 1, 2007, and January 1, 2018; (2) at least one bank was involved in the alliance together with at least one fintech; and (3) the bank was located in one of the four studied countries; fintech firms, however, could be located anywhere in the world. Our sample consists of 400 banks that formed 500 bank–fintech alliances. Figure 1 presents an overview of the emergence of bank alliances with fintechs by country and year. The figure shows the cumulative number of alliances in each year and evidences that banks from the United Kingdom formed fintech alliances early on. Our data suggest a perceptible increase for all other countries from the year 2013 onward. During an average year, every 10th bank has engaged in an alliance with a fintech. However, there is a strong variation among banks, with some initiating up to 51 alliances during the sample period and others initiating none. Emergence of bank–fintech alliances by country and year. The sample includes 500 fintechs from 27 countries collected from 2007 to 2017. The figure shows the cumulative number of alliances in each year, grouped by the banks' home country To investigate which banks collaborate with fintechs and to what extent they do so, we defined two dependent variables: (1) a binary dependent variable Alliance, which equals 1 if bank i has made at least one alliance with a fintech in year t and 0 otherwise, and (2) the number of new alliances (Number of New Alliances) that bank i has begun in year t. To test Hypothesis 1, our two main explanatory variables are the dummy variables Digital Strategy, which equals 1 if bank i has a digital strategy in year t and 0 otherwise, and CDO, which equals to 1 if bank i employs a CDO in year t and 0 otherwise. We hand-collected both variables through a systematic analysis of the banks' annual reports and their websites.Footnote 4The year in which a CDO joined the board came from the banks' annual reports, their websites, and LinkedIn profiles. As the core task of a CDO is to design and support technology-driven process changes, the time-varying variable CDO provides an indicator of the digital orientation of a bank. We treat the bank as having adopted a digital strategy if it has officially declared a strategy to foster digitalization. More specifically, the time-varying variable Digital Strategy indicates whether or not a bank has a well-defined digital strategy. We obtained the variable by analyzing the annual reports of all banks during our sample period. According to our definition, a bank only follows a digital strategy if it has announced an explicit digitalization-related implementation plan; general statements on the importance of digitalization are not enough. From the year the bank first announced a digital strategy, we presume that it continued with this strategy and coded the variable Digital Strategy. We consider various control variables that have recently been used in the banking literature (Peng et al. 2017). These include firm characteristics, such as whether the bank is publicly listed (Bank Listed) or a universal bank (Universal Bank), and financial indicators, such as the natural logarithm of total assets (ln(Bank Total Assets)), the loan-to-asset ratio (Bank Loan-to-Asset Ratio), and return on average assets (Bank ROAA). General information about the banks, such as balance sheet data, came from the banks' annual reports and the Fitch Connect database. However, we could only collect some information, such as financial data of privately owned banks, if banks were subject to some form of disclosure requirement. Finally, we collapsed the data into a panel dataset for the 2007–2017 period to obtain bank-year observations. To test Hypotheses 2a and 2b, we focus on our sample of 500 bank–fintech alliances and create the dummy variable Investment, which equals 1 if a bank has invested in at least one fintech and 0 if the alliance is characterized by a product-related collaboration. We use ln(Bank Total Assets) and the variable Fintech Employees, indicating the fintech's number of employees, as respective measures of bank and fintech size, which serve as our main explanatory variables. In addition to the control variables from Hypothesis 1, we include other variables to control for fintech characteristics: Fintech Front-End Solution, which is a dummy variable equal to 1 if a fintech offers front-end solutions and 0 if it offers back-end solutions; Fintech Headquarter, which is a dummy variable equal to 1 if the fintech operates in the same country as the partnering bank; Fintech Number of Patents, which counts the number of patents the fintech had previously registered; and Fintech Age, which accounts for the years since the fintech's founding. Table 1 provides detailed definitions of all variables and their sources. Table 1 Definitions of variables included in the regression models To test Hypothesis 1, we estimate probit panel regressions. We follow Peng et al. (2017) and also include bank, country, and year fixed-effects to minimize the potential bias stemming from differences in, for example, national regulation or general technological trends that occur over time. Considering that many fintechs, such as Alipay or PayPal, are established firms and thus might act differently, we run a subsample analysis that excludes fintechs with more than 1000 employees or that were established at least 10 years before the bank–fintech alliance. Furthermore, we include our two main explanatory variables Digital Strategy and CDO interchangeably in the same regressions because they are strongly correlated (ρ = 0.29) and potentially suffer from multicollinearity. Moreover, they constitute two alternative proxies for the same factor—a bank's strategic orientation in terms of digitalization. In our baseline specification, we estimate the following panel probit model, where Pr(Allianceit = 1) is the probability that a bank i has at least one alliance with a fintech in year t: $$ \mathit{\Pr}\left({Alliance}_{it}=1\right)=F\left({Digital\ Strategy}_{it}/{CDO}_{it}+ Bank\ {is\ listed}_i+{Digital\ Bank}_i+{Universal\ Bank}_i+ Bank\ HQ\ Country\ {of\ Interest}_i+\mathit{\ln}\Big( Bank\ {Age}_{it}\right)+{Year}_t+{Country}_i\Big). $$ To test whether the bank's digital strategy or CDO is correlated with the number of bank–fintech alliances, we estimate a count data model using the Number of New Alliances as the dependent variable. Because the dependent variable is a count variable and its unconditional variance suffers from overdispersion, we estimate a negative binomial panel regression. The baseline equation is $$ \mathit{\Pr}\left({y}_{i1},{y}_{i2},\dots, {y}_{iT}\right)=F\left({Digital\ Strategy}_{it}/{CDO}_{it}+ Bank\ {is\ listed}_i+{Digital\ Bank}_i+{Universal\ Bank}_i+ Bank\ HQ\ Country\ {of\ Interest}_i+\mathit{\ln}\Big( Bank\ {Age}_{it}\right)+{Year}_t+{Country}_i\Big), $$ where yit refers to the dependent variable Number of New Alliances. If a Hausman test indicates that a model with random-effects is an inconsistent estimator, we adopt the fixed-effects model. Otherwise, we rely on random-effects in the respective regressions. To test Hypotheses 2a and 2b, we estimate standard probit regressions. The baseline equation is $$ \mathit{\Pr}\left({Financial\ Investment}_i=1\right)=F\left(\mathit{\ln}\left( Bank\ {Total\ Assets}_i\right)+{Fintech\ Employees}_i+{Digital\ Strategy}_i/{CDO}_i+ Bank\ {is\ listed}_i+{Digital\ Bank}_i+{Universal\ Bank}_i+ Bank\ HQ\ Country\ {of\ Interest}_i+\mathit{\ln}\left( Bank\ {Age}_i\right)+{Country}_i\right). $$ In line with prior studies that have investigated how strategic alliances and joint-venture announcements affect stock prices (Amici et al. 2013; Chiou and White 2005), we rely on cumulative abnormal returns (CARs) to assess changes in the market valuation of banks after the announcement to establish an alliance with a fintech. We use the market model to calculate abnormal returns (following Brown and Warner 1980, 1985), which is widely used in event studies. To be included in the sample, we required that the date of the first public announcement about the bank–fintech alliance could be uniquely identified and stock price data were available to calculate the returns for a minimum of 46 days before the first press announcement. We manually searched for the International Securities Identification Number of the banks in our sample on the websites of various retail brokers and financial data providers such as OnVista and Finanztreff. We then extracted stock prices from Thomson Reuters Datastream. After we excluded non-listed firms, 140 alliances with 30 publicly listed banks remained for the period from 2007 to 2017. More specifically, we identified 28 announcements of fintech alliances from Canada (from eight listed banks), 23 from France (from six listed banks), 40 from Germany (from five listed banks), and 49 from the United Kingdom (from 11 listed banks). As a benchmark stock portfolio for the country where the respective bank had its headquarters, we used the MSCI index, which measures the performance of the large and mid-cap segments of each market (MSCI 2018). We estimated the parameters of the market model over a 200-trading-day window, ending 20 days before the event day to avoid bias in the parameter estimations due to incidents surrounding the event date (Brown and Warner 1985). Empirical results Table 2 provides summary statistics for our panel dataset of the 100 largest banks in Canada, France, Germany, and the United Kingdom. During the sample period, 39% of all banks engaged in some form of alliance. The share of banks following a digital strategy is 57%, while just 10% of the banks employ a CDO.Footnote 5 Overall, 15% of the banks in our sample are publicly listed, 7% are digital banks, and 40% are universal banks. Figure 2 shows the distribution of fintech alliances by bank size. The 10 largest banks in each country account for slightly more than half the alliances between banks and fintechs in our sample. Table 2 Summary statistics of panel data for bank-year observations by the 100 largest banks each in Canada, France, Germany, and the United Kingdom from 2007 to 2017 Distribution of alliances by bank size. The figure shows the cumulative number of alliances for the top 10 largest banks in Canada, France, Germany, and the United Kingdom, compared with the remaining 460 banks Table 3 provides summary statistics for the alliance sample covering 500 bank–fintech alliances. We find that 33% of the 154 banks partnering with fintechs made at least a minority investment.Footnote 6 Considering only banks that collaborate with fintechs, we find that 76% have a digital strategy and 15% employ a CDO. In the alliance sample, 23% of the banks are publicly listed, 10% are digital banks, and 44% are universal banks. Table 3 Summary statistics of deal-level data for alliances identified between banks and fintechs in Canada, France, Germany, and the United Kingdom from 2007 to 2017 Figure 3 provides an overview of the segments in which fintechs that formed a bank–fintech alliance operate and shows that many fintechs operate in the payment services sector. While fintechs generally engage most often in financing (Haddad and Hornuf 2019), this is not part of their core business in our sample, which includes only fintechs that formed an alliance with banks. In comparison with Canada, France, and Germany, a relatively large number of UK fintechs provide bank-level software, such as digital tools for customer relationship management. Furthermore, we could not assign many UK fintechs to one of the predefined segments, indicating that they either offer more diversified services or operate in niche segments. Frequency of occurrence of bank alliances with fintechs by segment and country. The sample includes 492 identified fintech segments. The bars represent the number of fintechs in each segment and grouped by the banks' home country Figure 4 shows the most common types of bank–fintech alliances. We classified them into four categories: minority investment, majority investment, product-related collaboration, and other forms of alliances. We find that minority investments and product-related collaborations are the two most common alliance types in all four countries, which suggests that comparatively loose forms of alliances are preferred. We find a relatively high number of majority investments in France, even though they are still smaller than alliances in the form of minority investments and product-related collaborations. Frequency of occurrence of interacting fintechs by form and country. The sample includes 469 interacting fintechs from 28 countries collected from 2007 to 2017. The bars represent the frequency of the different arrangements of interaction with banks in Canada, France, Germany, and the United Kingdom Regression results Establishing alliances In Table 4, we examine whether a Digital Strategy or the employment of a CDO is in line with bank–fintech alliances. We find that the coefficients of Digital Strategy for regressions (1) and (3) are consistently positive and significant, suggesting that having a strategic focus on digitalization increases the probability of forming an alliance with a fintech by 6% to 8%. The probability of finding at least one bank–fintech alliance is 10% higher if a bank employs a CDO. Thus, our results for Digital Strategy and CDO provide support for Hypothesis 1. Moreover, we find significant, positive coefficients for Bank Listed in models (2) and (4) and for ln(Bank Total Assets) in model (3), suggesting that the sheer size of a bank is associated with more bank–fintech alliances. Table 4 Panel data analysis for the dummy variables Digital Strategy, CDO, and Alliance As a robustness check and as a way to exclude large, established fintechs, we ran the same analysis for the subsample of fintechs with fewer than 1000 employees or which were less than 10 years old at the time of forming the alliance. We find that no bank in our sample began an alliance with a large fintech without starting at least another one with a smaller fintech in the same year. Thus, when measuring the dependent variable as a dummy, we obtain exactly the same dataset both with and without the large, established fintechs (this is different for the number of alliances as well as in the cross-sectional analysis). Next, we investigate whether Digital Strategy or CDO is also correlated with the number of fintechs with which a bank collaborates. Table 5 reports the results of the negative binominal regressions. We report incident rate ratios, which can be interpreted as multiplicative effects or semi-elasticities. The coefficients of Digital Strategy in models (1), (3), (5), and (7) are positive and statistically significant at the 0.1% level. In economic terms, banks with a digital strategy have three to four times more alliances with fintechs than banks without a digital strategy. The coefficient of CDO is positive and statistically significant in all models that include this variable. Banks employing a CDO have two to three times more alliances with fintechs than banks without a CDO. These findings are robust to the exclusion of large fintechs in the sample (regressions (5)–(8)). Overall, our findings indicate that a bank's strategic orientation is positively correlated with the number of alliances with fintechs. The coefficients of Bank Listed and ln(Bank Total Assets) are statistically significant and have a positive effect in all the regressions. Accordingly, large and listed banks interact with more fintechs than small and private banks. Table 5 Panel data analysis for the variables Digital Strategy, CDO, and Number of Partnerships Types of alliances To test Hypotheses 2a and 2b, we construct a binary dependent variable that classifies the two broad types of alliances, which we denote as Investment. This allows us to further investigate what motivates the type of alliance while integrating additional explanatory variables measuring various fintech characteristics. Because some banks have not made an investment in a fintech or engaged in a product-related collaboration but formed another type of partnership, the number of observations in the regressions is somewhat smaller than the overall number of alliances in our alliance sample. As the other types of collaborations were few, we did not run separate analyses for them. The results presented in Table 6 show a significant, negative coefficient of Fintech Employees at the 0.1% level in the full sample in regressions (1) and (2), which indicates that a one-unit change in the employee size category decreases the probability of investments by approximately 10%. This is in line with Hypothesis 2b and our conjecture that banks make investments more often in small firms, which allows them to better control the fintech. However, the coefficients turn non-significant when we exclude large fintechs (regressions (3) and (4)), suggesting that the effect diminishes when only considering fintechs with fewer than 1000 employees. The coefficients of ln(Bank Total Assets) are positive and statistically significant at the 1% level in regressions (1) and (2), indicating that, when accounting for both small and large fintechs, large banks are more likely to invest in fintechs, which is in line with Hypothesis 2a. Again, the effect diminishes when we exclude large fintechs, suggesting that the bank's size does not play a role when only small fintech startups are considered. Furthermore, the coefficients of Universal Bank are negative and statistically significant at the 5% level in all regressions. This suggests that specialized banks that focus on particular industries or business segments are more likely to invest in fintechs, while universal banks seem to benefit more from product-related collaborations. Table 6 Cross-sectional regression results for investment versus product-related collaboration To examine whether stock price reactions occurred after an alliance was publicly announced, we calculate cumulative average abnormal returns (CAARs) for different event windows. We find that all short-term windows have a negative impact and that the event windows with (− 1;0) and (− 1;+ 1) are statistically significant at the 5% level. Thus, at least in the short run, investors perceive alliances with fintechs as having, on average, a negative effect on firm value; thus, Hypothesis 3 is rejected. We might interpret this result as indicating that markets believe banks should develop new digital services themselves instead. However, Table 7 reveals that the CAARs are not negative for all banks and that, in some cases, investors value the public announcement of an alliance. We run ordinary least square regressions not only on our financial performance measure, CARs, for the short event windows (− 1;0), (0;+ 1), and (− 1;+ 1), as Amici et al. (2013) and Chiou and White (2005) suggest, but also for the longer event window (0;+ 100) to account for potential momentum effects. Table 7 CAARs for bank–fintech alliances In unreported regressions, we find that Digital Strategy is positive and statistically significant for the (0;+ 100) event windows. Moreover, digital banks benefit the most from a strategic alliance i.e., the variable Digital Bank is positive and statistically significant for most short-term event windows. Depending on the specification considered, the bank value increases by 2.5 to 3.2% after the digital bank announces a new alliance with a fintech. None of the other bank or fintech variables affect bank value when it comes to a fintech alliance. In this article, we examined the impact of digitalization in the banking industry by analyzing the bank characteristics that play a role in the alliances between banks and fintech startups. Moreover, we investigated the factors that are relevant for a bank to invest in a fintech rather than entering into a product-related collaboration. Finally, we tested whether announcing a new alliance affects banks' market value. Using a hand-collected dataset covering the 100 largest banks in Canada, France, Germany, and the United Kingdom, we found that bank–fintech alliances have increased in the past decade and that the types of alliances are rather similar in all four countries. Thus, we provide empirical evidence that digitalization and new market players have indeed forced banks to make their corporate boundaries more open to market interactions (Kohtamäki et al. 2019). However, there is no apparent difference in the way banks interact with fintechs in market-based (Canada and the United Kingdom) and bank-based (France and Germany) financial systems. Alliances across the four countries examined are most often characterized by a product-related collaboration, which is a comparatively less institutionalized form of alliance that offers little or no control in the product and service development process of a fintech. This finding is consistent with the theoretical observation that financial innovations may by particularly difficult for a bank to contract and internalize through an acquisition (Brandl and Hornuf 2020; Scott et al. 2017; Teece 1986). From a managerial perspective, this raises the question whether banks should use this form of alliance to outsource their innovation activities and thereby become increasingly dependent on fintechs and other partners for ensuring digital transformation. While prior research suggests that banks should benefit from voluntary cooperation and innovations developed by fintechs in ways different from the simple make-or-buy decision (Borah and Tellis 2014; Jacobides and Billinger 2006), the results from our event study indicate that at least for short-term event windows, financial markets find alliances with fintech value-reducing. A potential explanation for this is that in the future, banks might be reduced to innovation followers in the new financial ecosystem, with incumbent banks quickly losing their relevance. We also find that fintechs engaging in alliances operate in various segments across the four countries we investigate, with payment services being the most prevalent segment. Given that, overall, most fintechs operate in the financing segment (Haddad and Hornuf 2019), banks seem to benefit most from external technology in the realm of payment services. Our findings confirm that the implementation of a digital strategy and the employment of a CDO by a bank are positively related to both the mere existence and the number of alliances with fintechs. We consider this indication that alliances with fintechs, the employment of a CDO, and the execution of a clear digital strategy are part of the same overall corporate change strategy. All these strategic approaches may enable banks to be more permeable to the outside market, which is necessary to foster different forms of alliances and remain competitive (Borah and Tellis 2014; Jacobides and Billinger 2006; Kohtamäki et al. 2019). We further find that large, listed, and universal banks are more likely to establish alliances with at least one fintech than smaller, unlisted, and specialized banks. The bank's financial situation, as measured by the return on average assets, is a relevant predictor for explaining the number of alliances in which a bank becomes involved. That less profitable banks engage more frequently in alliances with fintechs indicate that these banks try to compensate for their own inefficiency and inability to innovate by engaging in alliances. Whether such a strategy will improve their performance, however, remains unclear. Product-related collaborations can help banks broaden their service portfolio and use alternative distribution channels to reach new customers. Such a strategy appears particularly beneficial for banks that cannot develop new digital services themselves because of their IT legacy or organizational structure. As mentioned previously, regarding the market effect of publicly announced alliances, we find that announcements have a negative effect on a bank's value for short-term windows. While this finding does not indicate much about the ultimate profitability of bank–fintech alliances, it suggests that markets believe that banks should develop new digital services themselves rather than engaging in alliances with fintechs. Our results further suggest that neither a digital strategy nor the employment of a CDO is more strongly connected with an investment than with product-related collaborations. Although this finding contradicts our hypothesis and previous research on board positions (Geiger and North 2006; Jiang and Li 2009), we assume that CDOs do not simply focus on acquiring fintechs but also increasingly work to develop digitalization expertise in-house. We find, however, that large banks are more likely to become financially engaged in fintech firms. Through a minority investment or a full acquisition in a fintech, banks can often obtain representation on the fintech's board of directors and thereby gain complete or partial control over it. Ensuring a strong and stable relationship in strategic alliances, which from the start are inherently instable, is often critical to their success, as otherwise internal organizational tensions may result in conflicts and ultimately lead to the dissolution of an alliance (Das and Teng 2000). Moreover, through an investment, banks cannot only orchestrate specific service developments that fit the overall corporate change strategy but also engineer services in a way that enables them to integrate these services best in their existing organizational structures and IT infrastructure. Large banks often set up incubator and accelerator programs to obtain financial stakes in fintech firms early on. We find that banks are also more likely to financially invest in smaller fintechs. Practical and policy implications Our empirical analysis has implications for the development of theories regarding strategic alliances and digital servitization in the domain of financial services. The new permeability in the financial industry might be the result of a top down-process, in which the corporate board initiated a general corporate change strategy that resulted in a digital strategy and the employment of a CDO. Alternatively, customers might demand more digital services (e.g., mobile payment solutions, robo-advise applications), and if these are implemented in a decentralized manner through, for example, different divisions of a bank, a CDO might be necessary if these services become more widespread over time. Our work also has practical implications for fintech entrepreneurs, banks, and policy makers. Entrepreneurs seeking funds, regulatory advice, or access to customers may find it worthwhile to engage in an alliance with a bank. The form of collaboration, however, may depend on what is most beneficial for both. In particular, our findings show that entrepreneurs in need of capital may be more successful approaching large banks, because they are more likely to invest in fintechs; by contrast, fintech entrepreneurs who want to stay independent but need to reach new customers may favor smaller and specialized banks, which are more likely to engage in product-related collaborations. More generally, banks with a clearly defined digital strategy or a CDO are most likely to be receptive to entrepreneurs' request to collaborate or for investment. It is important for banks to acknowledge that there is an upward trend toward hiring a CDO, which may become increasingly important as digitalization spreads across the different segments of the financial industry. However, according to our analysis, most banks still have not recognized the need for a CDO. Hiring a CDO may become more urgent in the future as financial technologies become more mature and the need to engage in alliances becomes more pressing. Financial institutions and policy makers will need to define which competencies CDOs must have and how banks can successfully hire such professionals. Moreover, an increasing reliance on alliances also raises questions about the existing technological infrastructure of banks. To collaborate effectively with a fintech, banks may rely on traditional information networks such as SWIFT or need to develop new suitable application programming interfaces. This again raises the demand for professionals who have experience not only in financial products and services but also in the respective IT infrastructure. Finally, our work offers practical implications for policy makers who want to foster an acceleration of the usage of digital technologies in the financial sector. Depending on how policy makers want to shape the financial ecosystem, adopting a restrictive granting practice for bank licenses limits growth opportunities of fintechs as independent entities, because many activities (e.g., taking deposits, extending loans) require such a license. The restrictive granting of bank licenses could thus lead to an ecosystem that is increasingly based on alliances. This, in turn, may affect the relative stability, profitability, and, thus, viability of incumbent banks, a development that should be closely monitored by supervisory authorities. Limitations and future research avenues Our analysis also has clear limitations and thus offers avenues for future research. First, we do not examine the duration of bank–fintech alliances. A preliminary analysis of 150 alliances in our sample indicates that 17% ended by 2020. A survey among banks and fintechs, as well as a systematic search on Factiva, showed that alliances were terminated mostly by banks because the banks developed their own technical solutions, the fintechs were sold to a competitor or went insolvent, or the contract or accelerator program came to an end. Future research could investigate what determines the success of bank–fintech alliances and whether the strategic engineering of digital services can be a successful strategy for incumbent players in the financial industry. Although Lerner et al. (2015) find that the quality of financial patents and financial innovations is often low, bank–fintech alliances might be more successful in generating groundbreaking innovations. Moreover, research on the outcome of bank–fintech alliances would contribute to the recent strand of literature on servitization that investigates the stability and ultimate performance of alliances (Das and Teng 2000; Sjödin et al. 2019; Sklyar et al. 2019). The financial industry is particularly fitting to explore in this respect, because national and supranational regulations are likely to affect the scope, organizational form, and, thus, stability of these alliances. Second, our study also provides an avenue for future research on servitization (Rabetino et al. 2018). While many banks are currently experimenting with new services, services packages, and alliances with startup firms from the financial ecosystem, whether and how they can systematically develop new services (Bullinger et al. 2003) and how they should combine organizational, technological, and human factors to develop profitable services (Spohrer et al. 2007; Vargo and Lusch 2011) are not clear. Answering these questions likely requires in-depth case studies on individual bank–fintech alliances. Our analysis provides first hints that organizations' top-level management leads corporate change. However, research is still required to investigate whether top-level management can carry this change process successfully to intermediate- and lower-level managers, who are ultimately in charge of executing the implementation of new digital services and selling them to existing and new clients. A third future research avenue pertains to the point raised previously that while we consider correlations in our analysis, we do not claim causality. Banks could establish a CDO position because they plan to form alliances in the near future, which would suggest that causality can also go in the opposite direction from what we suggest. Future research might uncover an exogenous shock that would help establish a clean identification strategy for empirical work on causality. Fourth, although we relied on various sources of information to identify alliances, we acknowledge that other sources of information remain invisible to the market, preventing us from identifying all of them. As such, our figures on the number of alliances are lower bounds. Finally, our analysis takes the perspective of banks. Complementary research could explore the perspective of fintech startups' incentives to collaborate with banks. This perspective is likely to be quite different, as fintechs' driving force for forming alliances is often to gain access to banks' large customer base, rather than novel technologies that help foster digital services. This stream of research might uncover why certain digital services are more conducive to development by fintechs rather than by large incumbents. For an empirical structuring of the servitization literature, see the study of Rabetino et al. (2018). In a classic sense, digital servitization refers to "the provision of digital services embedded in a physical product" (Vendrell-Herrero et al. 2017, p. 69), a definition that is based on research conducted in the manufacturing industry (Coreynen et al. 2017; Kohtamäki et al. 2019). The financial industry views the term "product" more broadly, as financial products do not physically exist. In the financial industry, servitization refers to the way financial products are now offered and brokered. New services became available for traditional financial products that did not exist a decade ago and were enabled by digitalization. For example, application programming interfaces allow fintechs to screen the number of bank customers and to offer new services, such as account switching or the automated switching or termination of an agreement, based on the information from the account and the algorithms the fintech developed. In this article, we use the term "alliance" to refer to any type of interaction between fintech startups and banks. This term is widely used in the "make, buy, or ally" literature (Borah and Tellis 2014; Jacobides and Billinger 2006). As will become clear, alliances comprise minority and majority investments, product-related collaborations, and some other forms. Thus, alliances cover a broad spectrum of possible interactions with fintechs. While some of the terms used here differ somewhat from those in Hagedoorn and Duysters (2002), overall, we take a similar approach in the context of externally sourcing innovative capacities. We searched for the name of the respective bank in combination with "fintech," "partner," "alliance," "invest," "acqui," and "collabor" and entered wildcard search symbols such as * in combination with these search terms. Annual reports and websites were frequently available in English. If the information about the variable Digital Strategy was only available in a foreign language, we searched for the French and German equivalents (e.g., stratégie de numérisation, Digitalisierungsstrategie). We consulted an independent researcher fluent in English, French, and German, who validated the terminology and confirmed that the variable was correctly coded. As "chief digital officer" and its abbreviation "CDO" are internationally established terms, we did not perform additional search or validation for these terms in the respective foreign language. These figures do not correspond to those in Table 2, as they are based on the bank level and the table reports statistics of our panel dataset, in which banks are represented for several years. These figures do not correspond to those in Table 3, as some banks have engaged in multiple alliances and the table reports statistics of our alliance dataset. Aghion, P., & Bolton, P. (1992). An incomplete contracts approach to financial contracting. The Review of Economic Studies, 59(3), 473–494. https://doi.org/10.2307/2297860. Amici, A., Fiordelisi, F., Masala, F., Ricci, O., & Sist, F. (2013). Value creation in banking through strategic alliances and joint ventures. Journal of Banking and Finance, 37(5), 1386–1396. https://doi.org/10.1016/j.jbankfin.2012.03.028. Autio, E., Nambisan, S., Thomas, L. D. W., & Wright, M. (2018). Digital affordances, spatial affordances, and the genesis of entrepreneurial ecosystems. Strategic Entrepreneurship Journal, 12(1), 72–95. https://doi.org/10.1002/sej.1266. Baines, T. S., Lightfoot, H., Peppard, J., Johnson, M., Tiwari, A., Shehab, E., & Swink, M. (2009). Towards an operations strategy for product-centric servitization. International Journal of Operations & Production Management, 29(5), 494–519. https://doi.org/10.1108/01443570910953603. Barrett, M., Davidson, E., Prabhu, J., & Vargo, S. (2015). Service innovation in the digital age: key contributions and future directions. MIS Quarterly: Management Information Systems, 39(1), 135–154. https://doi.org/10.25300/MISQ/2015/39:1.03. Beck, T., Chen, T., Lin, C., & Song, F. M. (2016). Financial innovation: the bright and the dark sides. Journal of Banking and Finance, 72, 25–51. https://doi.org/10.1016/j.jbankfin.2016.06.012. Borah, A., & Tellis, G. J. (2014). Make, buy, or ally? Choice of and payoff from announcements of alternate strategies for innovations. Marketing Science, 33, 114–133. https://doi.org/10.1287/mksc.2013.0818. Brandl, B., & Hornuf, L. (2020). Where did Fintechs come from, and where do they go? The transformation of the financial industry in Germany after digitalization. Frontiers in Artificial Intelligence, 3(1), 1–12. https://doi.org/10.3389/frai.2020.00008. Brown, S. J., & Warner, J. B. (1980). Measuring security price performance. Journal of Financial Economics, 8(3), 205–258. https://doi.org/10.1016/0304-405X(80)90002-1. Brown, S. J., & Warner, J. B. (1985). Using daily stock returns. Journal of Financial Economics, 14(1), 3–31. https://doi.org/10.1016/0304-405X(85)90042-X. Bullinger, H.-J., Fähnrich, K.-P., & Meiren, T. (2003). Service engineering–methodical development of new service products. International Journal of Production Economics, 85(3), 275–287. https://doi.org/10.1016/S0925-5273(03)00116-6. Chiou, I., & White, L. J. (2005). Measuring the value of strategic alliances in the wake of a financial implosion: evidence from Japan's financial services sector. Journal of Banking & Finance, 29(10), 2455–2473. https://doi.org/10.1016/J.JBANKFIN.2004.09.001. Coase, R. H. (1937). The nature of the firm. Economica, 4(16), 386–405. https://doi.org/10.1111/j.1468-0335.1937.tb00002.x. Coase, R. H. (1960). The problem of social cost. The Journal of Law & Economics, 3, 1–44 Retrieved from http://www.jstor.org/stable/724810. Coreynen, W., Matthyssens, P., & Van Bockhaven, W. (2017). Boosting servitization through digitization: pathways and dynamic resource configurations for manufacturers. Industrial Marketing Management, 60, 42–53. https://doi.org/10.1016/j.indmarman.2016.04.012. Cumming, D. J., & Schwienbacher, A. (2018). Fintech venture capital. Corporate Governance: An International Review, 26(5), 374–389. https://doi.org/10.1111/corg.12256. Das, T. K., & Teng, B.-S. (2000). Instabilities of strategic alliances: an internal tensions perspective. Organization Science, 11(1), 77–101. https://doi.org/10.1287/orsc.11.1.77.12570. Demirguc-Kunt, A., & Levine, R. (1999). Bank-based and market-based financial systems - cross-country comparisons. Policy Research Working Paper No. 2143. Washington, DC: The World Bank Retrieved from http://documents.worldbank.org/curated/en/259341468739463577/pdf/multi-page.pdf. European Banking Authority. (2017). Discussion paper on the EBA's approach to financial technology (FinTech). London: Retrieved from http://www.eba.europa.eu/documents/10180/1919160/EBA+Discussion+Paper+on+Fintech+%28EBA-DP-2017-02%29.pdf. Geiger, M. A., & North, D. S. (2006). Does hiring a new CFO change things? An investigation of changes in discretionary accruals. The Accounting Review, 81(4), 781–809. https://doi.org/10.2308/accr.2006.81.4.781. Gleason, K. C., Mathur, I., & Wiggins III, R. A. (2003). Evidence on value creation in the financial services industries through the use of joint ventures and strategic alliances. The Financial Review, 38(2), 213–234. https://doi.org/10.1111/1540-6288.00043. Grossman, S. J., & Hart, O. D. (1986). The costs and benefits of ownership: a theory of vertical and lateral integration. Journal of Political Economy, 94(4), 691–719. https://doi.org/10.1086/261404. Haddad, C., & Hornuf, L. (2019). The emergence of the global fintech market: economic and technological determinants. Small Business Economics, 53(1), 81–105. https://doi.org/10.1007/s11187-018-9991-x. Hagedoorn, J., & Duysters, G. (2002). External sources of innovative capabilities: the preferences for strategic alliances or mergers and acquisitions. Journal of Management Studies, 39(2), 167–188. https://doi.org/10.1111/1467-6486.00287. Jacobides, M. G., & Billinger, S. (2006). Designing the boundaries of the firm: from "make, buy, or ally" to the dynamic benefits of vertical architecture. Organization Science, 17(2), 249–261. https://doi.org/10.1287/orsc.1050.0167. Jiang, X., & Li, Y. (2009). An empirical investigation of knowledge management and innovative performance: the case of alliances. Research Policy, 38(2), 358–368. https://doi.org/10.1016/j.respol.2008.11.002. Jiang, X. J., Petroni, K. R., & Wang, I. Y. (2010). CFOs and CEOs: who have the most influence on earnings management? Journal of Financial Economics, 96(3), 513–526. https://doi.org/10.1016/j.jfineco.2010.02.007. Klus, M. F., Lohwasser, T. S., Holotiuk, F., & Moormann, J. (2019). Strategic alliances between banks and fintechs for digital innovation: motives to collaborate and types of interaction. Journal of Entrepreneurial Finance, 21(1), 1–23. Kohtamäki, M., Parida, V., Oghazi, P., Gebauer, H., & Baines, T. (2019). Digital servitization business models in ecosystems: a theory of the firm. Journal of Business Research, 104, 380–392. https://doi.org/10.1016/j.jbusres.2019.06.027. Kohtamäki, M., Parida, V., Patel, P. C., & Gebauer, H. (2020). The relationship between digitalization and servitization: the role of servitization in capturing the financial potential of digitalization. Technological Forecasting and Social Change, 151, 119804. https://doi.org/10.1016/j.techfore.2019.119804. Lerner, J. (2002). Where does state street lead? A first look at finance patents, 1971 to 2000. The Journal of Finance, 57(2), 901–930. https://doi.org/10.1111/1540-6261.00446. Lerner, J., Speen, A., Baker, M., & Leamon, A. (2015). Financial patent quality: finance patents after state street. Harvard Business School Working Paper No. 16–068. Cambridge, MA. Retrieved from https://www.hbs.edu/faculty/Pages/item.aspx?num=50238. Li, K., Qiu, J., & Wang, J. (2019). Technology conglomeration, strategic alliances, and corporate innovation. Management Science, 65(11), 5065–5090. https://doi.org/10.1287/mnsc.2018.3085. Marciukaityte, D., Roskelley, K., & Wang, H. (2009). Strategic alliances by financial services firms. Journal of Business Research, 62(11), 1193–1199. https://doi.org/10.1016/J.JBUSRES.2008.07.004. Merton, R. C. (1995). Financial innovation and the management and regulation of financial institutions. Journal of Banking & Finance, 19(3–4), 461–481. https://doi.org/10.1016/0378-4266(94)00133-N. Miller, M. H. (1986). Financial innovation: the last twenty years and the next. The Journal of Financial and Quantitative Analysis, 21(4), 459–471. https://doi.org/10.2307/2330693. MSCI. (2018). MSCI Germany Index (USD). Retrieved January 14, 2019, from https://www.msci.com/documents/10199/d76361cb-d5a5-4185-97ce-ec5e3dd5bf2e. Navaretti, G. B., Calzolari, G., Mansilla-Fernandez, J. M., & Pozzolo, A. F. (2017). Fintech and banking. Friends or foes? European Economy – banks, regulation, and the real sector, 2017.2 (pp. 9–30). https://european-economy.eu/2017-2/fintech-and-banks-friends-or-foes/?did=2045. Peng, J.-L., Jeng, V., Wang, J. L., & Chen, Y.-C. (2017). The impact of bancassurance on efficiency and profitability of banks: evidence from the banking industry in Taiwan. Journal of Banking & Finance, 80, 1–13. https://doi.org/10.1016/J.JBANKFIN.2017.03.013. Puschmann, T. (2017). Fintech. Business & Information Systems Engineering, 59(1), 69–76. https://doi.org/10.1007/s12599-017-0464-6. Rabetino, R., Harmsen, W., Kohtamäki, M., & Sihvonen, J. (2018). Structuring servitization-related research. International Journal of Operations & Production Management, 38(2), 350–371. https://doi.org/10.1108/IJOPM-03-2017-0175. Sakao, T., & Shimomura, Y. (2007). Service engineering: a novel engineering discipline for producers to increase value combining service and product. Journal of Cleaner Production, 15(6), 590–604. https://doi.org/10.1016/j.jclepro.2006.05.015. Scott, S. V., Van Reenen, J., & Zachariadis, M. (2017). The long-term effect of digital innovation on bank performance: an empirical study of SWIFT adoption in financial services. Research Policy, 46(5), 984–1004. https://doi.org/10.1016/j.respol.2017.03.010. Sjödin, D., Parida, V., & Kohtamäki, M. (2019). Relational governance strategies for advanced service provision: multiple paths to superior financial performance in servitization. Journal of Business Research, 101, 906–915. https://doi.org/10.1016/j.jbusres.2019.02.042. Sklyar, A., Kowalowski, C., Tronvoll, B., & Sorhammar, D. (2019). Organizing for digital servitization: a service ecosystem perspective. Journal of Business Research, 104, 450–460. https://doi.org/10.1016/j.jbusres.2019.02.012. Spohrer, J., & Maglio, P. P. (2010). Toward a science of service systems: value and symbols. In P. P. Maglio, J. A. Kieliszewski, & J. C. Spohrer (Eds.), Handbook of service science (pp. 157–194). New York: Springer. https://doi.org/10.1007/978-1-4419-1628-0_9. Spohrer, J., Maglio, P. P., Bailey, J., & Gruhl, D. (2007). Steps toward a science of service systems. Computer, 40(1), 71–77. https://doi.org/10.1109/MC.2007.33. Teece, D. J. (1986). Profiting from technological innovation: implications for integration, collaboration, licensing and public policy. Research Policy, 15(6), 285–305. https://doi.org/10.1016/0048-7333(86)90027-2. Vargo, S. L., & Lusch, R. F. (2011). It's All B2B...and beyond: toward a systems perspective of the market. Industrial Marketing Management, 40(2), 181–187. https://doi.org/10.1016/j.indmarman.2010.06.026. Vendrell-Herrero, F., Bustinza, O. F., Parry, G., & Georgantzis, N. (2017). Servitization, digitization and supply chain interdependency. Industrial Marketing Management, 60, 69–81. https://doi.org/10.1016/j.indmarman.2016.06.013. Williamson, O. E. (1991). Strategizing, economizing, and economic organization. Strategic Management Journal, 12, 75–94. https://doi.org/10.1002/smj.4250121007. Yoo, Y., Henfridsson, O., & Lyytinen, K. (2010). Research commentary—the new organizing logic of digital innovation: an agenda for information systems research. Information Systems Research, 21(4), 724–735. https://doi.org/10.1287/isre.1100.0322. The authors thank Luca Enriques, Alvaro Martin Enriquez, Gerard Hertig, Georg Ringe, Egle Vaznyte, Dirk Zetzsche, Kristin Van Zwieten, and the participants of the 5th Crowdinvesting Symposium (Humboldt University Berlin), the 4th International Conference on the Dynamics of Entrepreneurship (Mannheim University), the International Workshop on Financial System Architecture & Stability (Cass Business School), the 4th Luxembourg FinTech Conference (University of Luxembourg, Banque Internationale à Luxembourg (BIL) and the Luxembourg House of Financial Technology (LHoFT)), the EBI Global Annual Conference on Banking Regulation (Goethe University, European Stability Mechanism (ESM) and Institute for Monetary and Financial Stability (IMFS)), the 26th Annual Meeting of the German Finance Association (DGF) (University of Duisburg-Essen), the 4th Oxford Business Law Blog Annual Conference, and the 1st International FinTech, InsurTech & Blockchain Forum (University of Zurich), where the paper received the best paper award, for their valuable comments and suggestions. They also thank Mareike Staufenbiel and Robert Platow for their research assistance. University of Bremen, Bremen, Germany Lars Hornuf Max Planck Institute for Innovation and Competition, Munich, Germany CESifo, Munich, Germany University of Münster, Münster, Germany Milan F. Klus & Todor S. Lohwasser SKEMA Business School – Université Côte d'Azur, Euralille, France Armin Schwienbacher Milan F. Klus Todor S. Lohwasser Correspondence to Lars Hornuf. Hornuf, L., Klus, M.F., Lohwasser, T.S. et al. How do banks interact with fintech startups?. Small Bus Econ 57, 1505–1526 (2021). https://doi.org/10.1007/s11187-020-00359-3 Issue Date: October 2021 Make, buy, or ally
CommonCrawl
On the relationship between the traceability properties of Reed-Solomon codes AMC Home Syndrome decoding for Hermite codes with a Sugiyama-type algorithm November 2012, 6(4): 443-466. doi: 10.3934/amc.2012.6.443 An algebraic approach for decoding spread codes Elisa Gorla 1, , Felice Manganiello 2, and Joachim Rosenthal 3, Institut de Mathématiques, Université de Neuchâtel, Rue Emile-Argand 11, 2000 Neuchâtel, Switzerland Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, M5S 3G4, Canada Institut für Mathematik, Universität Zürich, 8057 Zürich, Switzerland Received September 2011 Revised June 2012 Published November 2012 In this paper we study spread codes: a family of constant-dimension codes for random linear network coding. In other words, the codewords are full-rank matrices of size $k\times n$ with entries in a finite field $\mathbb F_q$. Spread codes are a family of optimal codes with maximal minimum distance. We give a minimum-distance decoding algorithm which requires $\mathcal{O}((n-k)k^3)$ operations over an extension field $\mathbb F_{q^k}$. Our algorithm is more efficient than the previous ones in the literature, when the dimension $k$ of the codewords is small with respect to $n$. The decoding algorithm takes advantage of the algebraic structure of the code, and it uses original results on minors of a matrix and on the factorization of polynomials over finite fields. Keywords: spread codes, decoding algorithm., Random linear network coding. Mathematics Subject Classification: 11T7. Citation: Elisa Gorla, Felice Manganiello, Joachim Rosenthal. An algebraic approach for decoding spread codes. Advances in Mathematics of Communications, 2012, 6 (4) : 443-466. doi: 10.3934/amc.2012.6.443 R. Ahlswede, N. Cai, S.-Y. R. Li and R. W. Yeung, Network information flow,, IEEE Trans. Inform. Theory, 46 (2000), 1204. doi: 10.1109/18.850663. Google Scholar T. Etzion and N. Silberstein, Error-correcting codes in projective spaces via rank-metric codes and Ferrers diagrams,, IEEE Trans. Inform. Theory, 55 (2009), 2909. doi: 10.1109/TIT.2009.2021376. Google Scholar T. Etzion and A. Vardy, Error-correcting codes in projective space,, in, (2008), 871. Google Scholar È. M. Gabidulin, Theory of codes with maximum rank distance,, Problemy Peredachi Informatsii, 21 (1985), 3. Google Scholar E. Gorla, C. Puttmann and J. Shokrollahi, Explicit formulas for efficient multiplication in $GF(3$6m$)$,, in, (2007), 173. Google Scholar J. W. P. Hirschfeld, "Projective Geometries over Finite Fields,'' 2nd edition,, The Clarendon Press, (1998). Google Scholar A. Kohnert and S. Kurz, Construction of large constant dimension codes with a prescribed minimum distance,, in, (2008), 31. Google Scholar R. Kötter and F. R. Kschischang, Coding for errors and erasures in random network coding,, IEEE Trans. Inform. Theory, 54 (2008), 3579. doi: 10.1109/TIT.2008.926449. Google Scholar S.-Y. R. Li, R. W. Yeung and N. Cai, Linear network coding,, IEEE Trans. Inform. Theory, 49 (2003), 371. doi: 10.1109/TIT.2002.807285. Google Scholar R. Lidl and H. Niederreiter, "Introduction to Finite Fields and their Applications,'' revised edition,, Cambridge University Press, (1994). doi: 10.1017/CBO9781139172769. Google Scholar P. Loidreau, A Welch-Berlekamp like algorithm for decoding Gabidulin codes,, in, (2006), 36. doi: 10.1007/11779360_4. Google Scholar H. Mahdavifar and A. Vardy, Algebraic list-decoding on the operator channel,, in, (2010), 1193. doi: 10.1109/ISIT.2010.5513656. Google Scholar F. Manganiello, E. Gorla and J. Rosenthal, Spread codes and spread decoding in network coding,, in, (2008), 851. doi: 10.1109/ISIT.2008.4595113. Google Scholar G. Richter and S. Plass, Fast decoding of rank-codes with rank errors and column erasures,, in, (2004). Google Scholar D. Silva, F. R. Kschischang and R. Kötter, A rank-metric approach to error control in random network coding,, IEEE Trans. Inform. Theory, 54 (2008), 3951. doi: 10.1109/TIT.2008.928291. Google Scholar V. Skachek, Recursive code construction for random networks,, IEEE Trans. Inform. Theory, 56 (2010), 1378. doi: 10.1109/TIT.2009.2039163. Google Scholar A.-L. Trautmann, F. Manganiello and J. Rosenthal, Orbit codes - a new concept in the area of network coding,, in, (2010), 1. doi: 10.1109/CIG.2010.5592788. Google Scholar A.-L. Trautmann and J. Rosenthal, A complete characterization of irreducible cyclic orbit codes,, in, (2011), 219. Google Scholar Giuseppe Bianchi, Lorenzo Bracciale, Keren Censor-Hillel, Andrea Lincoln, Muriel Médard. The one-out-of-k retrieval problem and linear network coding. Advances in Mathematics of Communications, 2016, 10 (1) : 95-112. doi: 10.3934/amc.2016.10.95 Irene I. Bouw, Sabine Kampf. Syndrome decoding for Hermite codes with a Sugiyama-type algorithm. Advances in Mathematics of Communications, 2012, 6 (4) : 419-442. doi: 10.3934/amc.2012.6.419 Kwankyu Lee. Decoding of differential AG codes. Advances in Mathematics of Communications, 2016, 10 (2) : 307-319. doi: 10.3934/amc.2016007 Artyom Nahapetyan, Panos M. Pardalos. A bilinear relaxation based algorithm for concave piecewise linear network flow problems. Journal of Industrial & Management Optimization, 2007, 3 (1) : 71-85. doi: 10.3934/jimo.2007.3.71 Washiela Fish, Jennifer D. Key, Eric Mwambene. Partial permutation decoding for simplex codes. Advances in Mathematics of Communications, 2012, 6 (4) : 505-516. doi: 10.3934/amc.2012.6.505 Alexander Barg, Arya Mazumdar, Gilles Zémor. Weight distribution and decoding of codes on hypergraphs. Advances in Mathematics of Communications, 2008, 2 (4) : 433-450. doi: 10.3934/amc.2008.2.433 Keisuke Minami, Takahiro Matsuda, Tetsuya Takine, Taku Noguchi. Asynchronous multiple source network coding for wireless broadcasting. Numerical Algebra, Control & Optimization, 2011, 1 (4) : 577-592. doi: 10.3934/naco.2011.1.577 Min Ye, Alexander Barg. Polar codes for distributed hierarchical source coding. Advances in Mathematics of Communications, 2015, 9 (1) : 87-103. doi: 10.3934/amc.2015.9.87 Terasan Niyomsataya, Ali Miri, Monica Nevins. Decoding affine reflection group codes with trellises. Advances in Mathematics of Communications, 2012, 6 (4) : 385-400. doi: 10.3934/amc.2012.6.385 Heide Gluesing-Luerssen, Uwe Helmke, José Ignacio Iglesias Curto. Algebraic decoding for doubly cyclic convolutional codes. Advances in Mathematics of Communications, 2010, 4 (1) : 83-99. doi: 10.3934/amc.2010.4.83 Michael Braun. On lattices, binary codes, and network codes. Advances in Mathematics of Communications, 2011, 5 (2) : 225-232. doi: 10.3934/amc.2011.5.225 Stefan Martignoli, Ruedi Stoop. Phase-locking and Arnold coding in prototypical network topologies. Discrete & Continuous Dynamical Systems - B, 2008, 9 (1) : 145-162. doi: 10.3934/dcdsb.2008.9.145 T. Jäger. Neuronal coding of pacemaker neurons -- A random dynamical systems approach. Communications on Pure & Applied Analysis, 2011, 10 (3) : 995-1009. doi: 10.3934/cpaa.2011.10.995 Qian Guo, Thomas Johansson, Erik Mårtensson, Paul Stankovski Wagner. Some cryptanalytic and coding-theoretic applications of a soft stern algorithm. Advances in Mathematics of Communications, 2019, 13 (4) : 559-578. doi: 10.3934/amc.2019035 Jiangtao Mo, Liqun Qi, Zengxin Wei. A network simplex algorithm for simple manufacturing network model. Journal of Industrial & Management Optimization, 2005, 1 (2) : 251-273. doi: 10.3934/jimo.2005.1.251 Hannes Bartz, Antonia Wachter-Zeh. Efficient decoding of interleaved subspace and Gabidulin codes beyond their unique decoding radius using Gröbner bases. Advances in Mathematics of Communications, 2018, 12 (4) : 773-804. doi: 10.3934/amc.2018046 Joan-Josep Climent, Diego Napp, Raquel Pinto, Rita Simões. Decoding of $2$D convolutional codes over an erasure channel. Advances in Mathematics of Communications, 2016, 10 (1) : 179-193. doi: 10.3934/amc.2016.10.179 Johan Rosenkilde. Power decoding Reed-Solomon codes up to the Johnson radius. Advances in Mathematics of Communications, 2018, 12 (1) : 81-106. doi: 10.3934/amc.2018005 Anas Chaaban, Vladimir Sidorenko, Christian Senger. On multi-trial Forney-Kovalev decoding of concatenated codes. Advances in Mathematics of Communications, 2014, 8 (1) : 1-20. doi: 10.3934/amc.2014.8.1 Vladimir Sidorenko, Christian Senger, Martin Bossert, Victor Zyablov. Single-trial decoding of concatenated codes using fixed or adaptive erasing. Advances in Mathematics of Communications, 2010, 4 (1) : 49-60. doi: 10.3934/amc.2010.4.49 Elisa Gorla Felice Manganiello Joachim Rosenthal
CommonCrawl
The arc length of a parametrized curve An introduction to parametrized curves A vector-valued function of a single variable, $\dllp: \R \to \R^n$ (confused?), can be viewed as parametrizing a curve. Such a function $\dllp(t)$ traces out a curve as you vary $t$. You could think of a curve $\dllp : \R \to \R^3$ as being a wire. For example, $\dllp(t) = (\cos t, \sin t, t)$, for $0 \le t \le 6\pi$, is the parametrization of a helix. You can view it as a slinky or a spring. Applet loading Parametrized helix. The vector-valued function $\dllp(t)=(\cos t, \sin t, t)$ parametrizes a helix, shown in blue. This helix is the image of the interval $[0,6\pi]$ (shown in cyan) under the mapping of $\dllp$. For each value of $t$, the red point represents the vector $\dllp(t)$. As you change $t$ by moving the cyan point along the interval $[0,6\pi]$, the red point traces out the helix. More information about applet. Imagine we wanted to estimate the length of the slinky, which we call the arc length of the parametrized curve. Unfortunately, it's difficult to calculate the length of a curved piece of wire. It's much easier to calculate the length of straight pieces of wire. Probably the easiest way to calculate the length of the slinky would be to stretch it out into one straight line. But, if you ever tried to do that with a slinky (or a strong spring), you'd discover that stretching it into a straight line is virtually impossible. If you can't stretch the slinky into one straight line, what could you do to estimate its length? One thing you could do is pretend that the slinky, rather than being a curved wire, was really composed of a bunch of short straight wires. In other words, you could approximate the curved slinky with line segments. Helix arc length. The vector-valued function $\dllp(t)=(\cos t, \sin t, t)$ parametrizes a helix, shown in blue. The green lines are line segments that approximate the helix. The discretization size of line segments $\Delta t$ can be changed by moving the cyan point on the slider. As $\Delta t \to 0$, the length $L(\Delta t)$ of the line segment approximation approaches the arc length of the helix from below. The length of the line segments is easy to measure. If you add up the lengths of all the line segments, you'll get an estimate of the length of the slinky. Let $\Delta t$ specify the discretization interval of the line segments, and denote the total length of the line segments by $L(\Delta t)$. As the line segments take shortcuts, the length of the line segments underestimate the arc length of the slinky. However, if you increase the number of line segments (decreasing the length of each line segment), the total length of the line segments becomes a better estimate of the slinky arc length. As $\Delta t$ approaches zero, the length of each line segment shrinks toward zero, the number of line segments increases, and the line segments become closer and closer to the slinky. Consequently, the total length $L(\Delta t)$ of the line segments approaches the slinky arc length. What's the length of each line segment? If there are $n$ line segments, we could define $t_0, t_1, \ldots, t_n$ so that the first line segment goes from the point $\dllp(t_0)$ to the point $\dllp(t_1)$, the second line segment goes from the point $\dllp(t_1)$ to the point $\dllp(t_2)$, etc. The vector from $\dllp(t_0)$ to $\dllp(t_1)$ is simply $\dllp(t_1) - \dllp(t_0)$, so the length of the line segment must be $\| \dllp(t_1) - \dllp(t_0) \|$. The length of the second line segment is $\| \dllp(t_2) - \dllp(t_1) \|$, etc. To find the total length of the line segments, we just add up those lengths from all $n$ line segments: \begin{align*} \sum_i^n \| \dllp(t_i) - \dllp(t_{i-1})\|. \label{totallengtha} \end{align*} Now we do some tricks to put this into a different form. First, if $\Delta t_i = t_i - t_{i-1}$, then we can rewrite $t_{i}$ as $t_{i-1} + \Delta t_i$. Next, we can divide each term of the above equation by $\Delta t_i$ and multiply it by $\Delta t_i$ so that our expression for the length becomes \begin{align} \sum_i^n \| \dllp(t_i) - \dllp(t_{i-1})\| &= \sum_i^n \| \dllp(t_{i-1} + \Delta t_i) - \dllp(t_{i-1})\| \notag \\ &= \sum_i^n \left\| \frac{\dllp(t_{i-1} + \Delta t_i) - \dllp(t_{i-1})}{\Delta t_i}\right\| \Delta t_i. \label{total_length} \end{align} Maybe this new equation doesn't look like much of an improvement. But if you were a real math nerd, you might have noticed that the quotient involving $\dllp(t_{i-1})$ is exactly the expression used in the limit definition of the derivative $\dllp'(t)$ of a parametrized curve (if we replace $h$ with $\Delta t_i$). In fact, equation \eqref{total_length} is a Riemann sum for an integral, analogous to the ones used to define integrals such as double integrals. If we let the number of line segments increase (as we take the limit $\Delta t_i \to 0$) the quotient becomes $\dllp'(t)$, and equation \eqref{total_length} approaches the integral \begin{align*} L(\dllp)=\int_a^b \| \dllp'(t) \| dt, \label{totallengthc} \end{align*} which is the true arc length of the slinky. The numbers $a$ and $b$ are the values of $t$ at the ends of the slinky (i.e., the numbers so that the slinky is defined by $\dllp(t)$ for $a \le t \le b$). In our example, the slinky was defined by $\dllp(t)$ for $0 \le t \le 6\pi$, so we would use $a=0$ and $b=6\pi$. The magnitude of the derivative $\| \dllp'(t) \|$ is the speed of a particle that is at position $\dllp(t)$ at time $t$. The above equation simply says that the total length of the curve traced by the particle is the integral of its speed. (This length must, of course, be independent of the particle's speed.) You can see some examples here. Previous: Orienting curves Next: Parametrized curve arc length examples Previous: Triple integral examples Introduction to a line integral of a scalar-valued function Line integrals are independent of parametrization Examples of scalar line integrals Introduction to a line integral of a vector field Length of curves Derivatives of parameterized curves Tangent lines to parametrized curves Tangent line to parametrized curve examples Parametrized curve and derivative as location and velocity Parametrized curve arc length examples Surface area of parametrized surfaces Length, area, and volume factors Nykamp DQ, "The arc length of a parametrized curve." From Math Insight. http://mathinsight.org/parametrized_curve_arc_length Keywords: arc length, line integral, parametrized curve Send us a message about "The arc length of a parametrized curve" The arc length of a parametrized curve by Duane Q. Nykamp is licensed under a Creative Commons Attribution-Noncommercial-ShareAlike 4.0 License. For permissions beyond the scope of this license, please contact us.
CommonCrawl
Smart Learning Environments Reuse of e-learning personalization components Sameh Ghallabi ORCID: orcid.org/0000-0002-7064-65901, Fathi Essalmi2, Mohamed Jemni1 & Kinshuk3 Smart Learning Environments volume 9, Article number: 32 (2022) Cite this article Personalized learning systems use several components in order to create courses adapted to the learners'characteristics. Current emphasis on the reduction of costs of development of new resources has motivated the reuse of the e-learning personalization components in the creation of new components. Several systems have been proposed in the literature. Each system implements a specific approach and includes a set of software components. However, many of these components are not easily reusable. This paper proposes an architecture, which aims to improve the representation of learning components in a reusable and interoperable way. As a result, these components could be integrated easily for the creation of personalized learning courses. This architecture consists of five main packages: learner model, adaptation model, reuse facilities, learner interface and pedagogical knowledge. An experiment is conducted to validate the proposed architecture. The obtained results illustrate the optimal composition of e-learning personalization components through an example. The combination of learning components for providing personalization of learning courses has been an important subject of research in recent years. Several research teams working in the field of e-learning personalization in different parts of the world have focused on this issue. Each team has achieved some results. However, many of these results are not easily reusable. The e-learning environments are characterized by the use of a set of standards. These standards facilitate the reuse of pedagogical resources as well as e-learning software components. For example, ASSA by Aljohany et al. (2018) used the SCORM (Sharable Content Object Reference Model) standard to represent the learning objects in a reusable manner. In Dominic and Francis (2015), the IMS-QTI (IMS Question and Test Interoperability) standard permitted creating and generating adaptive questionnaires. Another example was Heath and Schwerdtfeger (2009) who put forward a new approach that allowed for introducing the e-learning standards for reusing the learning components. Even though there are some components that are represented in an interoperable and reusable way through the e-learning standards, various other components are not supported by e-learning standards. As examples of these components, we cite the personalization parameters and strategies. Essalmi et al. (2010) used the web service technology to represent the mentioned components in reusable and interoperable manner. To sum up, a personalized learning system uses several components in order to create a course adapted to the learners' needs. However, the used components in a given system are not exploited in others systems. Consequently, research is needed to identify how to obtain the personalization of learning courses by the federation of software components in a reusable, interoperable and flexible way. In this context, several systems allows representing the mentioned components in a flexible and accessible way. In addition, it enables teachers to select and choose the components according to students' profiles and the specifics of the courses they want to use. For example, Siddique et al. (2018) presented an approach to reuse the reusable software components adapted to learners' preferences. Another example is Harandi (2015), who proposed a new approach, which allowed authors to compose learning objects. However, very little research is available that focuses on optimizing the composition process. The research in this paper presents an approach that allows for combining e-learning personalization efforts. The proposed approach includes packages representing information about learners, pedagogical knowledge, adaptation mechanisms and reuse facilities. Furthermore, this paper focuses on the package for reuse facilities. This package describes various technologies, methodologies and learning standards, which allows for representing student model and pedagogical knowledge in an interoperable and reusable way. The objective is to optimize the effort investment of the community in developing personalized e-learning systems. This approach uses the A* algorithm, proposed by Mehlhorn et al.(2017), which allows for finding the best solution with the least cost. It utilizes this algorithm to give optimal and shortest path to the given goal node between multiple points using the heuristic function. In Mehlhorn et al. (2017), the A* algorithm was compared with the different types of search algorithms, such as the Breadth first search, Greedy best search, Depth first search and Dijkstra. The obtained results show that the A* algorithm has the best performance by using admissible heuristics to guide its search. It permits finding the optimal and complete solution. At the same time, the mentioned approach provides many advantages for the teachers. For instance, it gives them the possibility to reuse the pedagogical and software components and to provide optimal and performance composition to satisfy their' needs. In addition, it allows teachers to incorporate these components in a relevant and easy way in their courses. The rest of the paper is organized as follows. The following section (Section II) presents related works that focus on e-learning personalization systems. Section III presents the proposed architecture, which allows for combining and assembling the learning components in a reusable way. Section IV describes the research questions and research methodology. In section V, the suggested approach is explained with a pilot study of the simulation. Section VI discusses the proposed architecture. Finally, conclusion and future work are presented in section VII. Personalized learning systems generate courses adapted to the learner's characteristics. In order to ensure personalization in the educational field, these systems use several components such as the student's model and the pedagogical knowledge, but the problem is the implementation of these components in a reusable manner. Several authors have treated this problem by using e-learning standards and technologies. For example, Aljohany et al. (2018) proposed an approach, which considered the adequacy of existing e-learning standards. These standards might support and facilitate the introduction of adaptive techniques in learning systems. The suggested approach aimed to provide personalized learning questions. In Aljohany et al. (2018) learning objects were organized through shareable content objects (SCORM). This latter was a content unit that had a pedagogical sense which could be reused in other learning resources. Their approach highlighted the learning questions that had to be suitable to each learner according to the learner's level of knowledge and the Felder-Silverman learning style. Moreover, Heath and Schwerdtfeger (2009) suggested an approach that described a learning scenario suitable for both the personal preferences and the delivery context. They presented the integration of different standards, working groups and specified organizations that allowed for the accessibility to contents and to the customized interface by certain users in a given context. In Dominic and Francis (2015), the IMS-QTI standard permitted creating and generating adaptive questionnaires. Through this standard, the authors were given the possibility of representing the test and questions in reusable and interoperable way. Even though there were some components that were represented in an interoperable and reusable way through the e-learning standards, other components were not supported by e-learning standards. Therefore, as a solution, Essalmi et al. (2010) put forward an approach that described the reuse of personalization strategies. In particular, the authors used the technology of Web services for the implementation of their approach. In another example, Virvou and Troussas (2011) propounded a personalized learning system that enabled each student to learn two languages ​​(English and French) individually depending on the student model. To do that, the student model and the educational components were represented as Web services. As a result, these components could be reused by other educational applications. Elbeh and Biundo (2012) put forward an approach, using an ontology and a Hierarchical Task Network (HTN) planning technique, that allowed for creation of a course structure and course contents adapted to each student according to the type of personality, learning style, emotional and motivational state and cognitive ability. Yarandi et al. (2013) utilized semantic Web technologies (a semantic Web is an extension of a Web that aims to provide software programs with machine interpretable metadata of the published resources) to enable the reuse of learning contents and to add a semantic layer in charge of customization. The authors used these technologies to represent the domain, student and content models. The aforementioned models were based on the ontological representations, which provided an appropriate solution for each individual learner. Wu et al. (2017) represented an approach that allowed for adapting learning objects by specific users via the adaptation model. The learner model, adaptation model, domain model and learning objects were not represented in a reusable way. Another example of Gutiérrez et al. (2016) introduced an approach that permitted to design and implement the shareable auto-adaptive learning object in online learning environment. The main objective of the work was to define a solution for real-time adaptation in e-learning systems based on the use of dynamic languages. E-learning personalization components can be classified into three categories. First category is the pedagogical knowledge that defines the different learning elements, which are used to create a learning course, as described below. Domain model: It represents learning objects and contents. For example, in Ahmed et al. (2017), the domain model included various contents of lessons. In Wu et al. (2017), the domain model described the concepts of learning objects. Learning materials: It represents theories, examples, practicums and tests, represented by appropriate learning objects. Pedagogical model: It contains the system knowledge and implements the different pedagogical strategies. For instance, in Virvou and Troussas (2011), this model included demonstrations, presentations, modeling, and so on, as a type of pedagogical strategy. Learning resources: They include various information, documents, programs, data banks, and so on. For instance, Yarandi et al. (2013) used documents as a type of learning object. Learning objects: They are digital and non-digital entities related to the concepts of learning material. For example, Essalmi et al. (2010) represented the learning object in reusable way. Contents: It represents the topics, themes, behaviours, concepts and facts, often grouped within each subject or learning area. For instance, in Yarandi et al. (2013), this model described the contents of learning concepts. Learning activities: They include a set of tasks and associated tools, which can be used to perform the task. For example, in Gutiérrez et al. (2016), this component included the description and the type of the learning activity. Test: This component includes various types of test questions: simple choices, multiple choices, fill in the blanks, etc. For instance, Virvou and Troussas (2011) used an exercise as a test of assessment. Lesson structure: It represents the course structure. For instance, Elbeh and Biundo (2012) used a lesson structure in order to prepare the course (Fig. 1). Second category is the learner model that represents learner's information and characteristics. It represents information about a learner. For instance, Aljohany et al. (2018) used the level of knowledge and the Felder-Silverman learning style as information about the learner. In Elbeh, and Biundo (2012), the proposed system defined the personality type, the learning style, the emotional and motivational state, the student's goal, the performance history and the cognitive ability as information about the learner. Personalization parameters: It is a set of learners' characteristics such as learning styles or learners' level of knowledge. Context: It includes two types of constraints: learner models and environment constraints. For instance, in Heath and Schwerdtfeger (2009), this component contained learners' preferences and environment constraints. Last category is the adaptation mechanisms that defines the different techniques and methods of adaptation. These components are presented as follows: Adaptation model: It includes adaptation methods, techniques and technologies, as well as the adaption algorithm. For example, Gutiérrez et al. (2016) used the adaptive presentation and the adaptive navigation support as adaptation technologies. Table 1 summarizes the aforementioned approaches and shows the various components utilized to provide personalized learning scenarios. In particular, it presents the components that are widely used in the personalization of e-learning systems. Table 1 Learning components To sum up, personalized learning systems use several components in order to create a course adapted to learners' needs and characteristics. The need for reduction in the costs of new personalized courses has motivated researchers to think about the reuse of personalization components. In the literature, there are several mechanisms available to help tutors reutilize appropriate software components and results. In particular, one can observe that some components are represented in an interoperable and reusable way through the e-learning standards and/or technologies, while other components are not supported by e-learning standards and technologies. In addition, the used components in a given system are not exploited in other systems. This paper proposes a new solution that makes the e-learning personalization components reusable. The goal of this solution is the federation of them in the creation of a new component. Proposed architecture This section presents the proposed architecture that describes various components to create a personalized learning scenario. Figure 2 presents the proposed architecture. This figure shows the interaction between the different components in order to provide a personalized learning course. When course developers create a personalized learning course, they require an input from modeled components, such as the learner model, the pedagogical strategy, the domain model, the content, the learning activities and the adaptive mechanisms. E-learning personalization components Architecture of the proposed system On the one hand, the learner's profile component can be represented in a reusable way utilizing one of the reuse mechanisms. On the other hand, the pedagogical knowledge component can be represented through one of the ways to reuse in order to provide reusable knowledge. According to the learner's profile, the learning system provides personalized and adapted pedagogical knowledge to the learner. Through the learner's interface, the learning system collects information about the learner and saves it in a package of learner models. The authors can select the necessary learning contents in order to create a learning course. Such architecture includes the potential packages that make these components reusable. These packages are inspired from Table 1 in the related works (see section II). In software engineering, package is a collection of items grouped together. It is based on logical grouping. It contains diagrams, classes and other packages. This research uses these benefits to construct the mentioned packages. The proposed architecture consists of five main packages: learner model, adaptation model, reuse facilities, learner interface and pedagogical knowledge that, as described in the next sub-sections. Package of learner models The package of learner models (Fig. 3) contains important information and requirements about the learners, such as their interests, preference, goal, tasks, background, learning performance, learning style, aptitude and environment, as well as other useful features. This model is used to provide learning contents adapted to learners' needs. Khamis (2015) divided information in the learner model in two groups: domain independent information and domain specific information. The first group describes the learner's characteristics, such as aptitudes, goals, and personal traits. The second group represents various domain related elements (such as the topic, the concept and the subject). Package of learner interfaces The package of learner interfaces (Fig. 4) helps learners to find their paths through the hyperspace by adaptive presentations, a selection of adaptive contents and an adaptive form to students' characteristics. Ahmed et al. (2017) provided a comprehensive summary of the various elements, which enable adapting learning contents according to students' characteristics. Package of adaptation The package of adaptation (Fig. 5) is responsible for creating and generating a personalized and adapted learning course according to learners' characteristics. This package is achieved by providing different contents for each learner. It contains several sub-packages: the package of adaptation methods, the package of adaptation techniques and the package of adaptation model. Gutiérrez et al. (2016) made a distinction between adaptation methods and techniques to obtain a selection of adaptive pedagogical knowledge. Adaptation methods: They are based on a clear adaptation idea, which can be presented at the conceptual level. Adaptation techniques: They are the means used to implement a method. Adaptation model: It is achieved by providing different media representations for each learner. This model describes the adaptation rules to select the content according to learners' features. Brusilovsky (2001) distinguished two main technologies of adaptivity: Adaptive presentation: It is used to adapt the selection of different media depending on user preferences and the adaptation of learning contents based on a learner model. Adaptive navigation support: It is a technique to change the link-structure between pages that together make up a hyper-document. The most popular techniques are direct guidance, sorting, hiding and adaptive annotation. Package of pedagogical knowledge The package of pedagogical knowledge (Fig. 6) represents the system knowledge that allows for managing the learning process. This package consists of three sub-packages: the package of assessment contents, the package of pedagogical contents and the package of pedagogical models. Package of assessment contents: It describes different evaluation methods (tests, issues, etc.) to assess learners' knowledge. Package of pedagogical contents: It contains all the resources necessary to create a learning course. Package of pedagogical model: It defines different pedagogical strategies, lesson structures, and domain models used to represent a learning scenario. Package of reuse facilities The package of reuse facilities (Fig. 7) contains various emerging technologies, methodologies and learning standards, which allows representing the student model and pedagogical knowledge in an interoperable and reusable way. This package contains three main sub-packages: Package of technologies: In this package, we present the used technologies (such as Web services, semantics Web service and semantics Web) which aim to facilitate the access between applications, thereby simplifying data exchange. We present here an example of a learning environment based on Web services. In Virvou and Troussas (2011), the e-learning platform can be seen as a set of autonomous and independent services. These services can be simple or composite. They are the basis of this research, since the objective of the research presented in this paper is that e-learning capabilities are accessible and available via the Internet as reusable and interoperable services. Package of modeling methodology: It represents methodologies (such as ontology, pattern design and model driven architecture) which can build the pedagogical knowledge and the learner profile in a standardized way. Design patterns provide the reuse of various components in a standard way. Some work has used design patterns in game engines; we cite the work of Karavolos et al. (2017). This latter propounded a solution that permitted presenting the various components of a game engine using design patterns to model and standardize the main components of a game engine. Other works has used ontology for representing the learner model in reusable way. For example, Aljohany et al. (2018) suggested the learner model as generic user ontology. Package of learning standards: The e-learning standards enable realizing the interoperability between the different platforms and the reuse of pedagogical resources. For example, The LOM (Learning Object Meta data) standard defines the various elements of description of a learning object (Ahmed et al., 2017; Essalmi et al., 2010). Moreover, the SCORM allows e-learning platforms to find, import, share, reuse and export learning resources in a standardized way (Gutiérrez et al., 2016). Several standards have been used to represent learner profiles. For instance, the IMS LIP (Instructional Management System Language Preference) provides a means to store information about a learner in a database (Ahmed et al., 2017) Research questions and research methodology To facilitate the reuse of e-learning personalization systems, two alternatives (A1 and A2) are presented in (Ghallabi et al., 2013), as described below. Furthermore, A1 is divided into three alternatives: A1.1, A1.2 and A1.3. A1: The components needed for reuse are available. In this case, these components may be reusable and interoperable, reusable and non-interoperable, neither reusable nor interoperable. A1.1: If the components are reusable and interoperable, it can be easily integrated into a learning course. So, as solution, the components could be easily integrated in the system. In the literature, there are some components which are represented in reusable and interoperable way through the e-learning standards or Web services. A1.2: If the component is reusable and non-interoperable. In the literature, there are some components which are represented in a reusable way through ontology. However, there is no guarantee that these components could be integrated in others systems. A1.3: The component is neither reusable nor interoperable. There are several components which are not represented in a reusable way. Thus, to make these components reusable and interoperable, the teacher will create adapters or translators to obtain a standard format for each component. A2: If the components needed for reuse are not available, then to create the needed components in reusable and interoperable way, the teacher can use one of the reuse solutions (Web services, e-learning standards, etc.) to represent the new learning components in a standard way. Then, these components will be available to be reused in other learning systems. The research in this paper is specifically focused on the following alternative A1.1: If the components are reusable and interoperable, thus how to reuse these reusable and interoperable components? To respond to this alternative, several learning systems in the literature have allowed teachers to compose and assemble the learning components by reusing existing ones. For example, Rahadian and Budiningsih (2017) put forward an approach that allowed for the combinations of student learning styles to get the most suitable one. In Taniguchi et al. (2015), the suggested approach permitted combining pedagogical resources through a composition model. The goal of this model was to combine and assemble the existing pedagogical resources in order to create a new component. However, these cited works has focused on optimizing the composition of e-learning components according to authors' needs. Furthermore, such composition has not been based on smart algorithms to generate optimal components. This leads to the following question: How can we obtain the optimal composition of e-learning personalization components adapted to teachers' needs? To answer this question, this paper uses the A* algorithm which allows for finding the best solution with the least cost. It enhances the federation of e-learning components. This algorithm is an optimization process. It allows for inserting the nodes (in this paper, each node is represented the learning component) in the open list according to the following function: $${\mathbf{f}}\left( {\mathbf{n}} \right) = {\mathbf{g}}\left( {\mathbf{n}} \right) + {\mathbf{h}}\left( {\mathbf{n}} \right)$$ where g (n) is the cost of the optimal path from the initial node to node n, h (n) is the estimation of the additional cost of an optimal path to reach the goal from n, and C(n, ni) is the cost to pass from ni to nj. We will use two arrays, namely Open list and Closed list, for the execution of the A* algorithm. Open list is an array that includes the nodes which have been generated, but not examined yet. Closed list is an array that includes the nodes which have been examined. Some work in the literature (e.g., Mehlhorn et al., 2017; and Potdar & Thool, 2014) has compared between the different types of search algorithms. Four comparison criteria are presented as follows: Optimal: It is to find the best and shortest path from a starting node to a goal if one exists. Complete: It must solve the problem and obtain a solution if the goal exists. Heuristic function: It is an estimate of the optimum cost from the current node to a goal. Time complexity: It presents the amount of time it takes to run an algorithm. Here, n presents the number of nodes in a graph, and p defines the graph arcs (Table 2). Table 2 Comparison between A* algorithm and other search algorithms (Mehlhorn et al., 2017; Potdar & Thool, 2014) In summary, this algorithm has some advantages. It is complete and optimal. It achieves better performances by using admissible heuristics to guide its search. For that, this paper utilizes this algorithm in order to enhance the reuse of e-learning personalization components and to find the optimal composition. It is based on heuristics to select the best appropriate component. Besides, this algorithm is used to solve very complex problems. Therefore, it allows for finding the shortest path from a starting node to a goal if one exists. This section validates the suggested approach through a pilot study. In this section, a case study is presented to show how the proposed architecture could be used to create a personalized learning course. When a teacher creates a personalized learning course, he/she requires an input from modeled components (such as the learner model, the pedagogical strategy, the domain model, the content, the learning activities and the adaptive mechanisms) which influence the generation and the creation of an adaptive course. In the literature, there are some components that are represented in reusable and interoperable way through Web services technology. The A* algorithm combines and federates the mentioned components that will satisfy a teacher's request. In particular, Fig. 8 presents the taxonomy of personalization parameters, as presented in Essalmi et al. (2015). In addition, the figure includes a heuristic value for each parameter. These parameters define characteristics and needs of learners such as the learner's level of knowledge and the motivation level. Each parameter is represented as a Web service. The goal of this case study is to reuse these parameters in order to create personalized learning courses and to find an optimal composition (Fig. 9). Example of composition of personalization parameters based on taxonomy of Essalmi et al. (2015) Optimal path of personalization parameters This section presents different results of composition of e-learning personalization parameters. The A* algorithm is applied in order to choose the best composition of services (representing personalization parameters). It is based on heuristics to select the best appropriate service. It uses the following heuristic values to estimate a cost to the destination node less than the real cost. In this case, Table 3 shows the estimated cost of each service to achieve the goal: (personalization parameter: Information seeking task). The heuristic value for each node is assigned randomly. Table 3 The estimated cost of each component After running this algorithm, the nodes are inserted in the open list by applying the following function: For instance: f (how to learn) = g (start) + C (start, how to learn) + h (how to learn). = 0 + 10 + 8 = 18. The optimal composition can be obtained with a minimal cost through the information recorded in the open list: The proposed approach aims to find the optimal composition of personalization by considering the minimal cost of e-learning personalization. It provides the appropriate and optimal components according to authors' requests. A number of approaches are available in the literature that allow for federating the e-learning personalization components. However, none of them focuses on optimizing composition by considering minimal costs. In addition, these compositions are not based on intelligent and optimization algorithms to create a new course. For that, we put forward a new approach for assembling these components by selecting the optimal ones. The proposed approach is based on the A* algorithm which permits finding the optimal path within short time and with a low cost. The findings of the study suggest that this algorithm is more efficient than any other algorithm (see section IV) since it decreases the set of nodes to explore. It enhances the reuse of learning components. A* algorithm is based on heuristics to select the optimal appropriate component. But, the heuristic value is assigned randomly. As result, with a growing these values, the execution time of the SVM algorithm is slow. In addition, it is not used the criterion of Quality of Services (QoS) to choose an optimal composition. The suggested approach aims to represent the e-learning personalization components in reusable and interoperable way using Web services technology. These components are uploaded to the databases. As a consequence, they are centralized and are not mobile. In addition, and users cannot incorporate learning courses from heterogeneous personalized learning systems. In order to achieve the personalization of e-learning courses, learning systems use different components. This paper has proposed a new solution to federate and combine the reusable, interoperable, available and accessible components to build a new course. The goal of this solution is to enhance the composition of e-learning personalization components by selecting the optimal ones. The suggested approach uses the A* algorithm which allows for finding the best and the optimal composition within short time and with a low cost. This algorithm combines and federates the mentioned components that will satisfy a teacher's request. It is based on heuristics to select the best appropriate component. To show the feasibility and effectiveness of this approach, an experimentation has been conducted. The obtained results have enabled the optimal composition of software components by considering the minimal cost of e-learning personalization. The proposed approach provides many advantages for the teachers. For instance, it gives them the possibility to reuse the pedagogical and software components and to provide optimal and performance composition to satisfy their' needs. In addition, it allows teachers to incorporate these components in a relevant and easy way in their courses and to choose the required parameter they want to use. At the same time, the mentioned approach can also help researchers in the e-learning personalization domain to understand federation of personalization efforts, and exploitation and composition of different personalization components according to the specifics of courses while considering the minimal cost and time. However, this paper is needed to present how to integrate and adapt the selected component into a particular course. Therefore, to overcome this limitation, we envision using the adapter interface (e.g. XML format, or ontology) which will represent these components in an understandable and readable way. This adapter interface will allow to realize the correspondence between the author's needs and the different components' formats. The authors do not have ethics approval to make the raw student data or the tool available to anyone outside the organization, in which the experiment was conducted. Ahmed, M. U., Sangi, N. A., & Mahmood, A. (2017). A learner model for adaptable e-learning. International Journal of Advanced Computer Science and Applications, 8, 139–147. Aljohany, D. A., Salama, R. M., & Saleh, M. (2018). ASSA: Adaptive E-learning smart students assessment model. International Journal of Advanced Computer Science and Applications, 9, 128–136. Chookaew, S., Panjaburee, P., & Wanichsan, D. (2014). A personalized elearning environment to promote students' conceptual learning on basic computer programming. Procedia-Social and Behavioral Sciences, 116, 815–819. Dominic, M., & Francis, S. (2015). An adaptable E-learning architecture based on learners' profiling. International Journal of Modern Education and Computer Science, 3, 26–31. Essalmi, F., Jemni Ben Ayed, L., Jemni, M., & Kinshuk,. (2015). Generalized metrics for the analysis of E-learning personalization strategies Science Direct. Computers in Human Behavior, 48, 310–322. Essalmi, F., Jemni Ben Ayed, L., Jemni, M., & Kinshuk Graf S. (2010). A fully personalization strategy of E-learning scenarios. Computers in Human Behavior, 26, 581–591. Gee, B. M., & Strickland, J. (2014). The role of reusable learning objects in occupational therapy entry-level education. Journal of Occupational Therapy, 2, 1–15. Gutiérrez, I., Álvarez, V., Paule, M. P., Pérez-Pérez, J. R., & de Freitas, S. (2016). Adaptation in E-learning content specifications with dynamic sharable objects. International Journal of Adaptive Educational Technology Systems, 4, 1–11. Mehlhorn, K., Näher, S., & Sanders, P. (2017). Engineering DFS-based graph algorithms. Computer Science, Data Structures and Algorithms, 1, 1–6. Paramythis, A., & Loidl-Reisinger, S. (2004). Adaptive learning environments and e-learning standards. Electronic Journal of e-Learning, 2, 181–194. Potdar, G. P., & Thool, R. C. (2014). Comparison of various heuristics search techniques for finding shortest path. International Journal of Artificial Intelligence and Applications, 5, 63–74. Rahadian, R. B., & Budiningsih, C. A. (2017). What are the suitable instructional strategy and media for student learning styles in middle schools? International Journal on Integrating Technology in Education (IJITE), 6, 25–39. Virvou, M., & Troussas, C. (2011). Personalized Teaching of Multiple Languages through the Web. International Journal for e-Learning Security (IJeLS), 1, 52–59. Yarandi, M., Jahankhani, H., & Tawil, A.-R.H. (2013). A personalized adaptive e-learning approach based on semantic web technology. Webology, 10, 1–14. Wu, Ch. H., Chen, YSh., & Chen, TCh. (2017). An adaptive e-learning system for enhancing learning performance: Based on dynamic scaffolding theory. International Journal of Mathematics, Science and Technology Education, 14, 903–913. Brusilovsky, P. (2001). Adaptive hypermedia, user modeling and user adapted interaction Kluwer Academic Publishers. Printed in the Netherlands, 87–110. Elbeh, H., & Biundo, S. (2012). A personalized course generation system based on task-centered instruction strategy. International Conference on Artificial Intelligence (ICAI). Ghallabi, S., Essalmi, F., Jemni, M. & Kinshuk, (2013). Toward the reuse of E-learning personalization systems. International Conference on Information and Communication Technology and Accessibility, 24– 26. Heath, A. & Schwerdtfeger, R. (2009). W3C personalization roadmap: Ubiquitous Web integration of access for all 1.0 W3C Working Group.W3C Working Group Note. Karavolos, D., Liapis, A. & Yannakakis, G. (2017).Learning the patterns of balance in a multi-player shooter game. In Proceedings of the 12th International Conference on the Foundations of Digital Games, 70. Khamis, M. A. (2015). Adaptive e-learning environment systems and technologies. In The First International Conference of the Faculty of Education. Kravcik, M. (2005). Requirements and solutions for personalized adaptive learning. In Network of Excellence Professional Learning PROLEARN. European Sixth Framework Project. Siddique, A., Durrani, Q. S., & Naqvi, H. A. (2018). Developing Adaptive E-Learning Environment Using Cognitive and No cognitive Parameters. Journal of Educational Computing Research. Taniguchi, T., Sakaki, S., Shigenaka, R., Tsuboshita, Y., & Ohkuma, T. (2015). A weighted combination of text and image classifiers for user gender inference. In Proceedings of the Fourth Workshop on Vision and Language (pp. 87–93). This work presents the findings of the proposed approach through an example. The Research Laboratory of Technologies of Information and Communication & Electrical Engineering (LaTICE), Tunis, Tunisia Sameh Ghallabi & Mohamed Jemni Management of Information Systems Department, College of Business, University of Jeddah Saudi Arabia, Jeddah, Saudi Arabia Fathi Essalmi School Computing and Information Systems, Athabasca University, 1 University Drive, Athabasca, ABT9S3A3, Canada Kinshuk Sameh Ghallabi Mohamed Jemni Each author contributed evenly to this paper. All authors read and approved the final manuscript. Correspondence to Sameh Ghallabi. Ghallabi, S., Essalmi, F., Jemni, M. et al. Reuse of e-learning personalization components. Smart Learn. Environ. 9, 32 (2022). https://doi.org/10.1186/s40561-022-00214-w DOI: https://doi.org/10.1186/s40561-022-00214-w E-learning components Optimal composition A* algorithm Towards enhancing learning using open educational resources
CommonCrawl
On the feasibility of a secondary service transmission over an existent satellite infrastructure: design and analysis Luciano Barros Cardoso da Silva ORCID: orcid.org/0000-0003-3521-40361,2, Tarik Benaddi1 & Laurent Franck3 In this paper, we present a realistic use case in order to investigate the feasibility of a secondary service transmission over an existent satellite infrastructure. By introducing the overlay cognitive radio paradigm towards satellite communications, we compute a theoretical achievable data rate greater than 16 kbps for the secondary service, which is suitable for most M2M applications. Using simulation results, we show that this can be achieved while preserving the primary service performance. In addition, a system design framework is discussed in order to dimension such systems. It can be emphatically stated that the access to space is easier, cheaper, and faster than ever before. This is the widespread view among the main satellite players with respect to the unique opportunistic time currently experienced by the space segment. Actually, the favorable projections to this segment could be sustained especially today, since the demand for the rising new services has increased considerably [1] (multicast, broadcast, high mobility, and wide coverage). In this sense, as an actual and typical example, it could be pointed out the use of satellite to support the machine-to-machine (M2M) communications, providing connectivity to the end-users anytime, anywhere, for any media and device [2]. The M2M communications are one of the central use cases in the fifth-generation (5G) mobile network [3] as they play a major role in the Internet of Things (IoT). In fact, it is predicted in [4] the deployment of around 1 million devices per kilometer square like sensors/actuators, vehicles, factory machines, and beacons. In what concerns the technological advances, this satellite era consolidation could be reinforced (but not only) by the recent maturity reached in the manufacturing process (cheaper and faster production, powerful and sophisticated payloads). To keep up with the new challenges of M2M communications, the satellite communication systems need to push the boundaries in the direction to more efficient technical solutions. For this purpose, the search for power and bandwidth efficiency as well as the actual trend to low complexity systems must be a paramount concern for system designers. For a given provided service to the end-users, the design requirements of the communication system, in general, are settled by (i) the service availability, which ultimately specifies the required bit error rate (BER); (ii) the allocated spectral band, which is assigned by the International Telecommunication Union Radio Regulation [5] (ITU-RR), and (iii) the range of the received carrier-to-noise ratio (CNR), measured in the occupied bandwidth, which is limited by the large path loss as well as the nonlinear behavior of the satellite channel. For instance, it could be typically assigned the dynamic range of CNR between − 3 and 20 dB, in accordance to the extension recently presented at the DVB-S2X standard [6]. Basically, concerning these bounds, the system can be designed efficiently in (i) power, by decreasing the received power (or equivalently the CNR) necessary to reach the specified BER, for instance by adding redundancy bits in a digital encoding system, or (ii) spectral efficient, by increasing the number of bits per hertz into the occupied bandwidth, which ultimately increases the transmitted data rate. Last but not least, the (iii) system complexity should be carefully evaluated in order to reduce as much as possible the number of processing operation, especially when on-board systems are considered. Apart from that, since the available radio spectrum is today a scarce resource (cf [7] for example), another challenge faced is to develop techniques which enable a better coordination between legacy and future services, especially considering this new machine-type communication environment and its large-scale implementation. Despite of the spectrum regulation and policy still being considered as a dry subject since the earliest days of radio communication, the need to reconsider the static long-term exclusivity of the spectrum via licensed regulation procedures, as well as the encouragement of the techniques which enable the coexistence of different networks, become a key element to permit a proper expansion of these new services [8]. It is within this framework that the cognitive radio (CR) techniques have also become attractive for space applications [9]. Based on the recent developments in the space qualified software-defined radios (SDR) [10] and also by the acceptability of concepts such as flexible [11, 12] and hosted payloads, these techniques allow a smarter spectrum management. In addition, some valuable research [13] has been acted in spectral awareness and spectral exploitation techniques, which has driven the cognitive satellite communications towards a promising approach. In a nutshell, the cognitive user (CU), in our context, unlicensed or less prioritized to operate in a specific spectrum band, senses the environment around it, and adapts its transmission as a function of the interference, by adjusting the frequencies, waveforms, and protocols in order to access the licensed primary user (PU) spectrum efficiently. Without going into further details, three paradigms classify the CU operation [14]: Interweave, which is based on the idea of opportunistic transmission. In short, the CU observes the white spaces not used by the PU transmission, (in space, time or frequency) and adjusts its operating parameters. It is inferred that, ideally, there is no coexistence between users and, consequently, no power control of the CU transmission is required; Underlay, denominated as gray spectrum space, where the CU, by means of the partial knowledge of the PU signal characteristics and channel, adjusts its parameters in such a way as to respect an acceptable interference threshold. As a greater knowledge of the PU signal is required, more sophisticated spectral sensing techniques must be employed, such as signal-to-noise ratio (SNR) estimation. It is worth noting that the interweave paradigm can be seen as a special case of underlay, where the threshold does not allow any interference. Some examples of techniques found in the literature are: dynamic resource control (power and frequency), beamforming with multiple antennas, and spectral spreading; Overlay, where the CU, from the full and noncausal knowledge about the PU waveform and message and channel, uses advanced coding and modulation strategies to transmit simultaneously while mitigating the interference. The occupied spectrum space in this last paradigm is called black, due to the fact that it is occupied by the interfering signals and noise. The first two schemes were well studied in [13, 15]. In this paper, we investigate the third scheme. The main reason to propose the overlay paradigm for satellite communications lies in the feasibility of transmitting both unlicensed and licensed services simultaneously from the same satellite towards its respective terminals. We emphasize that, due to the priority among users, the superposition coding strategy is required [16] to relay the PU transmission, unlike the technical solutions adopted for the broadcast channel [17]. Furthermore, the dirty paper coding (DPC) [18] is implemented to adapts the cognitive signal to the direction of the PU interference. Concerning the DPC, from a brief historical perspective, the first idea of practical scheme was proposed by Erez, Shamai and Zamir [19]. It pointed out the Tomlinson-Harashima precoding (THP) for intersymbol interference (ISI) canceling, which can be seen as a DPC application for frequency selective channel. In this technique, the modulo operation is used to pre-subtracted the interference with a minimum power increase. Also in this work, the THP losses, i.e., shaping loss at high signal-to-noise ratio (SNR) regime and the combined modulo and power losses at intermediate/low SNR regimes, were well characterized. Moreover, Eyuboglu and Forney in their seminal paper [20] generalized the combination of the trellis shaping (TS) [21], trellis-coded modulation (TCM), and THP for Gaussian ISI channels. The so-called trellis precoding (TP) performs interference pre-subtraction and allows recovery of the shaping loss. Likewise, a little bit more closer of our application, an extension of TP for multiuser interference was proposed to recover the shaping loss with sufficient high constellation expansion in [22] and [23], where the TS technique acts as a vectorial quantization, replacing the modulo operator. In view of the above, the techniques previously exposed in the recent publications [24, 25] concerns the design of the overlay paradigm transmission towards satellite communication systems. This paper is an extended version of the author's work [26]. Additionally, to the link budget evaluation and system dimensioning introduced in [26], we provide here a design framework to implement such cognitive systems as well as a detailed analysis of the effect of different parameters on the overall system performance. Firstly, a DPC encoder is proposed involving TS and TCM concepts along with proper constellation expansion combined with THP. The discussions leads to a trade-off between power efficiency, by the reduction of the modulo loss, and complexity, key levers for both satellite onboard processing and terminals. Secondly, as a general contribution, we focus on the feasibility of a low data rate secondary transmission. In this sense, a practical use case is investigated, which considers commercial off-the-shelf (COTS) parts [27] and assumes realistic link budget parameters in its evaluation. The discussions and results contained herein could be seen as part of a "preliminary phase" of an engineering process plan [28]. Following this introduction, Section 2 presents the overlay model description. Subsequently, in Section 3, the employed methodology is introduced, with reference to the superposition and dirty paper coding (DPC) designs. The paper contributions on these techniques are detailed in Section 4, emphasizing the satellite context. Next, the results and discussion are presented, by investigation of a realistic use case in Section 5. Finally, Section 6 is dedicated to the conclusions, with suggestions for future works. Overlay model description The following scenarios are provided as examples where the overlay CR techniques might be applied to satellite communications. In the first case, presented in the Fig. 1a, an ordinary low Earth orbit (LEO), or medium Earth orbit (MEO), satellite provides two different services towards different terminals. In this context, a single licensed user PU takes priority over the added unlicensed CU. The interference presented at both terminals should be mitigated by properly designed CU encoder, without any changes in the PU transmission chain. Satellite scenarios In the same way, the GEO multibeam satellite is illustrated in the Fig. 1b. In this case, considering the frequency reuse, the CU is able to transmits by using, for instance, the determined blue frequency (or polarization) into the red spot footprint, as far as the interference among adjacent beams is resolved. It is worth noting also that all possible different PU transmissions, represented by several blue spots, should be taken into account in the interference mitigation design. By this way, the total satellite capacity could be increased as well as the spectrum resources better managed. Equally suitable for both scenarios, the interference model with side information, adapted from [16], is presented in Fig. 2. Assuming that the signals are onboard the satellite, the cognitive encoder has full and noncausal knowledge about each PU ith signal and message, which addresses the main overlay paradigm requirement. In this sense, the encoded cognitive signal\(\ X_{c}^{n}\) is function of both primary and cognitive messages mp,i and mc. Overlay model Without loss of generality, considering the ith PU and the added CU, the channel gains ∣hyx,i∣ (from the transmitter x to the receiver y) are defined by the direct paths (∣hcc∣ and ∣hpp,i∣), and the interfering paths (∣hpc,i∣ and ∣hcp,i∣) losses (the Fig. 2 summarizes these notation). In our context, these gains are computed as function of the each transmission link budget. The following equations describes the output of the channel, where n refers to the nth symbol: $$ \ Y_{p,i}^{n}=\mid h_{pp,i}\mid X_{p,i}^{n}+\mid h_{pc,i}\mid X_{c}^{n}+Z_{p,i}^{n} $$ $$ \ Y_{s}^{n}=\mid h_{cc}\mid X_{c}^{n}+\sum_{i=1}^{N}\mid h_{cp,i}\mid X_{p,i}^{n}+Z_{s}^{n}. $$ Based on the fact that the terminals may be located in different geographical sites, the Gaussian noise component \(Z_{p,i}^{n}\) (resp. \(Z_{s}^{n}\)) is assumed to follow the normal law \(\mathcal {N} \left (0,N_{p,i}\right)\) (resp. \(\mathcal {N} (0,N_{s})\)). Also, the power constraints to be satisfied are \(\ E\left [\|{X_{p,i}^{n}}\|^{2}\right ]= P_{p,i}\) and \(\ E\left [\| {X_{c}^{n}}\|^{2}\right ]\leq P_{c}\), respectively. Finally, since each PU has the same transmission priority, we highlight that the interference among them could be solved by precoding techniques as proposed, for instance, in DVB-S2X standard [6]. Under this assumption, this work provides a design method to permit a secondary service transmission without affecting the PU transmission performance. Superposition strategy The purpose of the superposition technique is to ensure that the signal-to-noise ratio (SNR) at each PU receiver is not decreased in the presence of interference. To accomplish this goal, the CU shares part of its power to relay each PU message, and \(\hat {X}_{c}^{n}\) is a modified version of the CU message, as it is going to be detailed in the next section. Based on that operation, the CU transmitted signal is given by $$ X_{c}^{n}=\hat{X}_{c}^{n}+\sum_{i=1}^{N}\sqrt{\alpha_{i} \frac{P_{c}}{P_{p,i}}}X_{p,i}^{n}, $$ where αi∈ [0,1] is the shared power fraction from Pc to relay each PU message. Under the assumption that all signals are statistically independent, the new power constraint can be defined as \(E[\| \hat {X}_{c}^{n}\|^{2} ]\leq \left (1-\sum _{i=1}^{N}\alpha _{i}\right)P_{c}\). The signal-to-interference-plus-noise ratio (SINR) at the kth primary receiver is given by $$ \begin{aligned} \text{SINR}_{p,k} &=\frac{E \left[ \left\|\left(\mid h_{pp,k}\mid+\mid h_{pc,k}\mid \sqrt{\alpha_{k} \frac{P_{c}}{P_{p,k}}}\right) X_{p,k}^{n} \right\|^{2}\right] }{E \left[\left\|\mid h_{pc,k}\mid \left(\hat{X}_{c}^{n}\,+\,\sum_{i=1,i\neq k}^{N-1}\sqrt{\alpha_{i} \frac{P_{c}}{P_{p,i}}}X_{p,i}^{n}\right)\right\|^{2}\right]+E\left[\left\|Z_{p,k}^{n}\right\|^{2}\right]} \\ &=\frac{\bigg(\mid h_{pp,k}\mid \sqrt{P_{p,k}}+\mid h_{pc,k}\mid \sqrt{\alpha_{k}P_{c}}\bigg)^{2}}{\| h_{pc,k}\|^{2} (1-\alpha_{k})P_{c} + N_{p,k}} \end{aligned} $$ In this context, the superposition factor αk∈ [0,1] that maximizes Eq. (4), for the interference condition (∣hpc,k∣>0), which is a generalized form of [16, Eq. 14], is given by $$ \begin{aligned} \alpha_{k}=\Bigg(\frac{\mid h_{pp,k}\mid \sqrt{P_{p,k}}\Big(\sqrt{N_{p,k}^{2}+\|h_{pc,k}\|^{2}P_{c}(N_{p,k}+\|h_{pp,k}\|^{2}P_{p,k})}-N_{p,k}\Big)}{\mid h_{pc,k}\mid \sqrt{P_{c}}(N_{p,k}+\|h_{pp,k}\|^{2}P_{p,k})} \Bigg)^{2} \end{aligned} $$ Hence, by applying Eq. (5) in Eq. (4), we have $$ \text{SINR}_{P,k} = \frac{\|h_{pp,k}\|^{2}P_{p,k}}{N_{p,k}} $$ By inspection of Eq. (3), we emphasize that the CU transmission is feasible only if the condition \(\sum _{i=1}^{N}\alpha _{i}<1\) is satisfied. By this assumption, note that the CU data rate should be decreased when aggressive frequency reuse scenarios are considered. Dirty paper coding Once the superposition factors are computed and the CU partially shares its power to relay each PU signal, the next step is to design \(\hat {X}_{c}^{n}\) efficiently, in order to minimize the interference of PU transmission on the CU receiver. The optimal strategy employes the theoretical results presented by Costa [18]. On the assumption that the interference (PU signal) is noncausally known at the CU transmitter, a transmitter-based interference presubtraction can be implemented, without any power increase, reaching the AWGN capacity. By rearranging the Eq. (2) and considering the superposition, we have $$ {}Y_{s}^{n}=\mid h_{cc}\mid\hat{X}_{c}^{n}+\sum_{i=1}^{N}\bigg(\mid h_{cp,i}\mid+ \mid h_{cc}\mid\sqrt{\alpha_{i} \frac{P_{c}}{P_{p,i}}}\bigg)X_{p,i}^{n} +Z_{s}^{n}. $$ Without loss of generality, given that the signals in Eq. (7) are statistically and mutually independents, the implemented model considers a single Gaussian distributed PU constellation, in respect to the total interfering power received at CU terminal. In addition, in order to simplify the notation through this paper, the Eq. (7) is normalized by the direct path attenuation factor | hcc|. Thus, the signal at CU receiver is given by $$ Y_{s}^{n}=\hat{X}_{c}^{n}+\underbrace{\left(b+\sqrt{\alpha \frac{P_{c}}{P_{p}}} \right) X_{p}^{n}}_{S^{n}} +Z_{s}^{n}, $$ where the factor b represents the normalized interfering path and Sn represents the total channel interference. Figure 3 presents the basic diagram of the DPC encoder, where the THP is used to presubtract the multiuser interference. In this configuration, assuming low and intermediate SNR regime, the partial interference presubtraction (PIP) is implemented [18]. In this way, the signal \(\hat {X}_{c}^{n}\) is designed as $$ \hat{X}_{c}^{n}=\bigg[X_{cc}^{n}-\lambda S^{n}\bigg]\textsc{mod}_{\Delta}\ , $$ DPC communication system by THP for multiuser interference where \(X_{cc}^{n}\) is the coded signal and the factor λ, to be properly chosen, controls the fractioned interference to be presubtracted. Also, MODΔ is the complex-valued modulo operation, implemented to limit the transmitted power. The modulo region is defined by \(\Delta =\sqrt {M}d_{{\text {min}}}\), where M is the number of points of the square QAM constellation and dmin is the minimum intersymbol distance. When compared to QAM signal on AWGN channel, the performance of THP system is degraded by three separated sources of losses: power loss, modulo loss, and shaping loss [22]. From this point, our goal is to design the coded signal \(X_{cc}^{n}\) in order to mitigate these sources of losses. Trellis shaping-based DPC Main encoding blocks Figure 4 presents the implemented encoder for the cognitive user. Three gains can be achieved by this system: (i) coding gain, represented by the upper part of the diagram; (ii) shaping gain, achieved by the trellis shaping code in the lower part, and (iii) prediction gain (term rubricated by [29]), achieved by the modulo operation jointly with shaping code. Proposed DPC encoder The two codes work independently. The input bit sequence is split in two parts. The upper part is formed by the coset select code Cc. This later is described by the generator matrix Gc, which encodes the kc message bits xc into a nc-coded bits yc. In the lower part, the rs-bits syndrome sequence s passes through an inverse syndrome former \(H_{s}^{-T}\) for the shaping code Cs. This initial sequence t jointly with the channel coded sequence yc and the interference λSn are fed into the Viterbi decoder. This later selects, according to a well chosen branch metric, the shaping codeword ys. After that, the shaped sequence zs is obtained by the XOR operation between t and ys. Note that zs and t are within the same coset, according to the trellis shaping on regions strategy, detailed in [21]. Finally, the output shaped sequence \(X_{cc}^{n}\) is obtained by mapping the d symbols as a function of yc and the sign mapped bits z. Similar to the THP operation, the coded symbol sequence \(X_{cc}^{n}\) is presubtracted by the scaled interference sequence λSn and modulo operated in order to limit its power. At the end, the transmitted sequence \(\hat {X}_{c}^{n}\) is obtained and sent through the interference channel. The following branch metric Eq. (10) is implemented, where the precoder selects the proper region sequence with minimum average power to steer the scaled interference sequence λSn: $$ \Vert\big[X_{cc}^{n}-\lambda S^{n}\big]\textsc{mod}_{\Delta} \Vert^{2}. $$ Transmitter design We propose, in this section, a scheme based on the trellis shaped DPC encoder with slight constellation expansion combined with THP. Unlike the previous works published in [22, 23], the modulo operation is always part of our transmission system, assuring the required power limitation for the transmitter, given any interference scenario. The available remaining transmitted power (after the superposition) is computed based on the link budget parameters. In order also to comply with the defined SNR range for satellite communication, it is proposed a typical transmission rate of Rcu=2 bits/symbol. The trellis shaped-based DPC is implemented, with a slight expanded constellation of ns=2, and modulo operation in its output, previously depicted in Fig. 4. For the coding gain, a systematic 64-state, rate 1/2, convolutional code Cc specified in octal notation by the feedforward polynomial h1(D)=54 and the feedback polynomial h0(D)=161 is assumed. Along with this, for the shaping code Cs, the 4-state code, rate 1/2, specified by generators gs,1(D)=7 and gs,2(D)=5 is implemented. In other words, at the DPC transmitter, the 4-QAM-coded constellation is replicated four times by the shaping operation (since ns=2), resulting in an expanded 16-QAM constellation for \(X_{cc}^{n}\). In addition, considering the shaping operation, the proposed mapping for \(X_{cc}^{n}\) is based on the sign bit shaping strategy described by Forney in [21]. Following the same notation as in Fig. 4, consider now the 16-QAM \(X_{cc}^{n}\) constellation represented in Fig. 5. Each symbol is defined by the tupple \(\phantom {\dot {i}\!}(z_{1}z_{2}y_{c_{1}}y_{c_{2}})\), where z1 and z2 are the sign bits while \(y_{c_{1}}\) and \(y_{c_{2}}\) are the coset bits. It should also be noted that, to keep the specified rate, which maintains the SNR range, the uncoded bits are not implemented in this scheme (when compared with the general scheme presented in the previously [22]). Proposed CU mapping Thus, the shaping regions are represented by the four different colors that delimit the constellation regions, while the four different markers ○,X,, or △ represent the convolutional code cosets. The analogy with the Gelfand - Pinsker multicoding scheme [30], which is a key concept utilized in the DPC theoretical proof [18], is intuitive. Basically, we assign the transmitted "sequence of colors" as the subcodebook, determined by the shaping code Cs, which is indexed by the chosen codeword generated by the convolutional code Cc. In this sense, the described encoder can be conceptually considered as a practical implementation of the DPC encoder. Receiver design At the receiver, as depicted in the Fig. 6, the reverse chain is implemented: firstly, \(Y_{s}^{n}\) is multiplied by the factor λ. Before entering the DPC decoder, the signal is modulo operated again, as the following: $$ {\begin{aligned} \hat{Y}_{s}^{n} &= \bigg[(Y_{s}^{n})\lambda \bigg]\textsc{mod}_{\Delta} \\ &=\bigg[\lambda \hat{X}_{c}^{n}+(\hat{X}_{c}^{n}-\hat{X}_{c}^{n})+\lambda S^{n}+\lambda Z^{n} \bigg]\textsc{mod}_{\Delta} \\ &=\bigg[\lambda \hat{X}_{c}^{n}+[X_{cc}^{n}-\lambda S^{n}]\textsc{mod}_{\Delta}-\hat{X}_{c}^{n}+\lambda S^{n}+\lambda Z^{n} \bigg]\textsc{mod}_{\Delta} \end{aligned}} $$ Proposed DPC receiver $$ {\begin{aligned} =\bigg[X_{cc}^{n}-(1-\lambda)\hat{X}_{c}^{n}+\lambda Z_{s}^{n}\bigg]\textsc{mod}_{\Delta}, \end{aligned}} $$ where in Eq. (11), the following property was utilized [(a)MODΔ+b]MODΔ= (a+b)MODΔ. The value of λ that minimizes the effective noise \(N_{eff}=(1-\lambda)\hat {X}_{c}^{n}+\lambda Z_{s}^{n}\) is obtained by [18]: $$ \lambda =\frac{(1-\alpha)P_{c}}{(1-\alpha)P_{c}+E\left[|Z_{s}^{n}|^{2}\right]}. $$ The decoder employs the same strategy for usual TCM schemes. Figure 7 illustrates an example of trellis for the channel code Cc with the parallel transitions. Since each branch of the trellis corresponds to a signal subset (in this case, defined by yc and labeled by the markers ○,X,, or △), the first step in decoding is to determine the best signal point within each subset (equivalently to determine the best shaping region, represented by the different colors). This is performed by selecting the point that is closest in Euclidean distance to the received point \(\hat {Y_{s}}\). After that, the selected signal point and its squared distance metric are applied in the usual Viterbi decoder for Cc in order to select the most likely coded sequence \(\hat {y_{c}}\) [31]. At the end of the decoding, the estimated shaping bits \(\hat {s}\) are obtained by \(\hat {s}=\hat {z} H_{s}^{T}\). Representation of the trellis transition for a generic convolutional code Cc. In this example, the parallel transitions are defined according to the mapping presented in the previous Fig. 5 By observing this simple example, the trellis transition from the state \(\sigma ^{n}_{j}\) to the state \(\sigma ^{n+1}_{j}\) provides \(\phantom {\dot {i}\!}(y_{c_{1}}y_{c_{2}})\) mapped to (0,0), which is represented by the marker " ○" according with the assigned mapping. Thus, the decoder will perform a hard decision among the four constellation symbols (0,4,8,12), representing the shaping regions, defined by the pair of bits (z1z2). In the same way, the transition from the state \(\sigma ^{n}_{j}\) to \(\sigma ^{n+1}_{j+1}\) provides \((y_{c_{1}}y_{c_{2}})\phantom {\dot {i}\!}\) mapped to (0,1), represented by the marker " X," which decides among the parallel transitions 1, 5, 9, or 13. It is worth noting that an error in the first step of the decoding procedure (i.e., error in the parallel transition decision) results in a wrong decoding of the shaping region. This fact increases significantly the degradation caused by the modulo loss in DPC schemes, as will be further discussed in the next section. We also remark that, in order to reduces the system complexity, the state expansion in the Viterbi decoder at the receiver [32–34], which could be an effective strategy for eliminating the parallel transition effect, is not considered in this implementation. CU practical encoder design for dirty paper channel In this section, we describe our contributions based on some changes in the low complexity encoder presented in the last section. Firstly, taking into account that the CU signal is Gaussian distributed by trellis shaping operation, we demonstrate a procedure that provides the appropriate output power for \(\hat {X}_{c}^{n}\). As a consequence, the SINR of the PU is properly evaluated. Subsequently, we deal with the CU link by proposing further expansion in the Xcc constellation, jointly with an optimized mapping design. The results are analyzed as a function of the complexity involved in the transmitter implementation. CU transmitted power control The power reduction of \(\hat {X}_{c}^{n}\), caused by the trellis precoding technique (i.e., \(E[|\hat {X}_{c}^{n}|^{2}]< (1-\alpha)P_{c} \)), impacts directly on both link performances since the value of α, adopted for power sharing at the superposition strategy, is no longer exact. It can be noted, as given by Eq. (6), that the SINR at primary receiver is increased. As a result, the PU presents better bit error rate (BER) performance than when operating in AWGN channel (as further analyzed in [25]). By considering that the exactly same shaping gain generated by a shaping code Cs is obtained for the multiuser precoding [22], we propose a method for controlling the CU output power such as \(\ E\left [|{X_{c}^{n}}|^{2} \right ]= P_{c}\), or equivalently, \(E\left [|\hat {X}_{c}^{n}|^{2}\right ]= (1-\alpha)P_{c} \). This is reached by proper scaling the minimum distance dmin of the coded constellation \(X_{cc}^{n}\). As a reference, we assume the power of the baseline constellation, without considering the shaping operation, given by [35]: $$ \ P_{\oplus}=\frac{2^{R}}{6}d^{\prime 2}_{{\text{min}}}, $$ where R is the data rate in bits per two dimensions, without taking into account the shaping redundant bits. The scaled minimum distance is defined as \(d^{\prime }_{{\text {min}}}\). The shaping gain is then defined as $$ \gamma_{s} =\frac{P_{\oplus}}{E\left[|\hat{X}_{c}^{n}|^{2}\right]}=\frac{P_{\oplus}}{(1-\alpha)P_{c}}. $$ Thus, by combining Eq. (14) and Eq. (15) and rearranging the terms, the \(d^{\prime }_{{\text {min}}}\) such that the available power after the shaping operation is equals to (1−α)Pc, is given by $$ \ d^{\prime}_{{\text{min}}}=\sqrt{\frac{[(1-\alpha)P_{c}]6 \gamma_{s}}{2^{R}}}. $$ All things considered, some remarks are pointed out for this proposed procedure: Both links are properly adjusted. Since the output power of the remaining CU transmission, after superposition, satisfies exactly \(E\left [|\hat {X}_{c}^{n}|^{2} \right ]=(1-\alpha)P_{c} \), the BER performance of PU in a interference channel is the same as AWGN channel. In addition, as the CU power is efficiently employed, the BER of cognitive link is improved or, equivalently, the secondary service data rate can be augmented; CU modulo loss might be reduced. The performance degradation caused by modulo loss augments as a function of: (i) the selection probability of boundaries symbols in the \(X_{cc}^{n}\) constellation and (ii) the reduction of the SNR level. By these perceptions, it is worth noting that, by re-scaling the minimum distance of \(X_{cc}^{n}\) and considering the same Gaussian distributed interference S, this procedure might reduce the occurrence of boundaries selected points for \(X_{cc}^{n}\), since the constellation region is further enlarged. Clearly, because of this modulo loss reduction, this technique improves the cognitive system performance for the same fixed SNR (in the next section, the investigation of a practical use case will further demonstrate this concept). Mapping of \(X_{cc}^{n}\) constellation should be optimized. In order to obtain exactly the specified output power (1−α)Pc, the shaping operation should be performed over a defined continuous region delimited by the modulo amplitude Δ. In this sense, it is of utmost importance that the mapping be designed such that the interference presubtracted signal \(\hat {X}_{c}^{n}\) be confined within this delimited region. Otherwise, the previous procedure does not control the exact power. This specific issue, jointly considered with constellation expansion, is further discussed in the next section. Constellation expansion The THP precoding works as the simplest solution for multiuser interference (MUI) presubtraction. By using the modulo operation, this technique satisfies the power constraint for application of dirty paper encoder and results in almost negligible degradation in high SNR regimes [36–38]. However, for low and intermediate SNR regimes (for instance, below 15 dB), the degradation caused by THP losses becomes more significant, especially due to the modulo loss. Additionally, the modulo loss increases as a function of the probability occurrence of the boundary constellation points in \(X_{cc}^{n}\). In order to recover part of the THP losses, some practical precoding techniques are proposed in [22]. For intermediate SNR regimes, which is our application, [22] proposes the combination of TP and PIP techniques. The recommended shaping code is 5/6 in order to expand the \(X_{cc}^{n}\), in such way to confine the interference. Also, as the trellis shaping is implemented at the precoder, the transmitted presubtracted signal \(\hat {X}_{c}\) has Gaussian distribution. It is interesting to observe that, as the modulo operation is not considered at the precoding, only the power and shaping losses affect the system performance. As a complement of the approach presented in [22], we consider the modulo implemented at the transmitter output in our investigated schemes. The amount of the interference is function of the link budget and, on the contrary of [22], different constellation expansions are considered. This assumption allows the analysis of the modulo loss impact. Also, the results could be evaluated considering the trade-off between complexity and power efficiency. As an example for our proposed constellation expansion, and following the same notations in Fig. 4, let us consider that the original DPC constellation is a 16-QAM. Each symbol is defined by the tupple \(\phantom {\dot {i}\!}(z_{1}z_{2}y_{c_{1}}y_{c_{2}})\), where z1 and z2 are the sign bits while \(y_{c_{1}}\) and \(y_{c_{2}}\) are the coset bits (as proposed in our low complexity design in the Fig. 5). This constellation can be expanded to, for instance, a 64-QAM by considering an additionally two information bits u1 and u2 or two "auxiliary" bits (not information bits) \(\phantom {\dot {i}\!}z_{{\text {aux}}_{1}}\) and \(z_{{\text {aux}}_{2}}\phantom {\dot {i}\!}\). Now, each symbol is defined by the tupple \(\phantom {\dot {i}\!}(z_{1}z_{2}u_{1}u_{2}y_{c_{1}}y_{c_{2}})\) or \(\phantom {\dot {i}\!}(z_{1}z_{2}z_{{\text {aux}}_{1}}z_{{\text {aux}}_{2}}y_{c_{1}}y_{c_{2}})\), respectively. By employing the same procedure, the signal constellation of \(X_{cc}^{n}\) can be expanded as necessary to confine the scaled interference (i.e., | Re(λSn)| and \(|\operatorname {Im}(\lambda S^{n})|<(\sqrt {M}/2).d_{{\text {min}}}\), where M is the constellation order of \(X_{cc}^{n}\)). In this way, the assignment of the boundary constellation points can be avoided and, as consequence, the modulo loss is mitigated. Also, it is highlighted that the original information rate is maintained by expanding with auxiliary bits, since these last ones are not information bits. For the transmitter, this operation could be seen as an extension of the trellis shaping procedure. The Viterbi decoder of the shaping operation acts as an usual TCM decoder, where the auxiliary shaping bits zaux represent the parallel branches transitions. These latter are hard decided during each trellis section for Cs. Furthermore, it is still necessary to design the optimal mapping of shaping regions, according to the information rate. The next section discusses this topic. Consider the implementation of the mapping I, depicted in the Fig. 8. We assume a shaping code Cs of rate 1/2, which provides four shaping regions, represented by the different colors. The convolutional code Cc of rate 1/2 is defined, which also provides four TCM cosets. In addition, the modulo regions are represented by dashed lines, according to the Xcc constellation expansion adopted in 16, 64, 256, and 1024-QAM. Shaping region mapping I The interference point is represented by the red " X", which is presubtracted by the Xcc constellation point, selected by Cs, to form the transmitted \(\hat {X}_{c}\) symbol. As we have discussed in the Section 4.1, by a proper mapping design, we confine the \(\hat {X}_{c}\) signal in a determined continuous region of the constellation. In this way, the control of the transmitted power is achieved. Following, two cases are analyzed. In the first case, depicted in the Fig. 8a, the uncoded information bits in the trellis precoding are considered. We define the mapping, according to the sign bit shaping strategy, proposed in [21], as \(\phantom {\dot {i}\!}(z_{1}z_{2}u_{1}u_{2}u_{3}u_{4}u_{5}u_{6}y_{c_{1}}y_{c_{2}})\). In this way, we have 8 information bits transmitted by symbol, which require high SNR (unlike the operating range targeted in this article). Thus, there is no shaping subregions in this scheme. Each 256-QAM constellation, divided in 4 cosets with 64 constellation points inside each coset, is repeated 4 times to form the outer shaping 4-QAM. Regardless of whether the interference S is confined within the 1024-QAM constellation of Xcc, the presubtracted signal \(\hat {X}_{c}\), due to the modulo, is always restrict in its region. Based on this assumption, we observe the following: The shaping loss is partially recovered. This is reached according to the shaping gain γs, provided by the code Cs, and based on the fact that \(\hat {X}_{c}\) is continuous and shaped bounded by the squared region of Δ=32 (i.e., \(|\operatorname {Re}(\hat {X}_{c})|\) and \(|\operatorname {Im}(\hat {X}_{c})|<\Delta /2\), where Δ is the modulo boundary); The modulo loss is assumed negligible. In fact, even if there are some points selected at the boundaries of Xcc, since the system operates in high SNR regime, the modulo loss only cause degradation in very low BER; The power loss is close to 0. By the continuous approximation [35], we compute the power loss as γp=10.log(1024/1023)≈0.0042 dB for this 1024-QAM constellation. In the second case, depicted in Fig. 8b, we transmit our rate of 2 information bits per \(\hat {X}_{c}\) symbol. The uncoded bits are replaced by the auxiliary shaping bits and the mapping is defined by \((z_{aux_1}z_{aux_2}z_{aux_3}z_{aux_4}z_{aux_5}z_{aux_6}(z_{aux_1}z_{aux_2}z_{aux_3}z_{aux_4}z_{aux_5}z_{aux_6} z_1z_2y_{c_1}y_{c_2})\). The shaping operation selects the closest one of the 26 subregions (or equivalent, points inside each shaping cosets) of each one of the 4 region defined by Cs. Considering this rate, as discussed in our proposed power control technique, the \(\hat {X}_{c}\) signal must be continuous, Gaussian shaped, and bounded by the squared region of Δ=4 (i.e., \(|\operatorname {Re}(\hat {X}_{c})|\) and \(|\operatorname {Im}(\hat {X}_{c})|<\Delta /2\), where Δ is the modulo boundary). For the same interference S, represented also by the red " X," the only case that the presubctracted signal \(\hat {X}_{c}\) is confined in our region of interest occurs when the "yellow" region is selected by Cs. Intuitively, as the other shaping regions (i.e., colors) can be selected by Cs, it is not guaranteed that the shaping gain provided by this code is reached. As a result, the power control strategy, discussed in the last section, would not be effective in this case. The following mapping, represented in Fig. 9, is optimized for scenarios where auxiliary bits are employed. In this case, the mapping is defined by \((z_{aux_1}z_{aux_2}z_{aux_3}z_{aux_4}z_{aux_5}z_{aux_6} (z_{aux_1}z_{aux_2}z_{aux_3}z_{aux_4}z_{aux_5}z_{aux_6} z_1z_2y_{c_1}y_{c_2})\). The same four shaping cosets are represented by the four different colors and the 26 subregions (points inside each shaping coset) are spread over the expanded constellation. Shaping regions mapping II The interference point is also represented by the red " X", which is presubtracted by the closest Xcc to provide the transmitted signal \(\hat {X}_{c}\). It is worth noting that, considering the hard decision of the parallel branches for the Cs trellis, \(\hat {X}_{c}\) is always inside the smallest dashed square, which is our continuous region of interest (Δ=4). By this design, we conclude that The shaping loss is partially recovered. the shaping gain γs is reached according to the code Cs. As a consequence, the power control technique is properly designed; The modulo loss can be totally mitigated. If the Xcc constellation is expanded enough such that the interference is confined and the boundary selected symbols are avoided, the modulo loss can be mitigated. However, we emphasizes that, in some cases, a low modulo loss might be tolerated in order reduces the transmitter complexity (this will be discussed in the next section, by analyzing the realistic scenario); The power loss. Considering this rate, this loss is evaluated by the approximation γp=16/15≈0.28 dB. As discussed in [22], this loss represents the relation between continuous, which is the DPC case, and discrete constellation transmission; In summary, the previous discussions showed, by considering proper expansion of the DPC constellation, that the degradation can be made within 0.3 dB of the corresponding reference trellis shaping in AWGN channel. On other hand, we have seen that the system complexity increased, which is a design drawback. It is important to emphasize that the PU user maintains the same performance as in absence of the CU operation. In this work, we investigate the constellation expansion in 4, 16, 64, and 256 regions, which represents ns=2,4,6,8 (where, in case of ns>2, the remaining ns−2 are auxiliary bits). It is important to point out that the minimum distance \(d^{\prime }_{{\text {min}}}\), evaluated by scaling method, previously exposed in Section 4.1, was implemented in all simulated schemes. Additionally, the impact of modulo operation in the system is also considered. As we are employing Rcu=2 bits/symbol, the transmitted signal should be confined within the region defined by the 16-QAM constellation (i.e., \(|\operatorname {Re}(\hat {X}_{c})|\) and \(|\operatorname {Im}(\hat {X}_{c})|<2d_{{\text {min}}}\prime \)). Having this in mind, we are assuming the previously discussed mapping II (see Fig. 9) for all considered schemes. The following example, depicted in Fig. 10, clarifies the mapping strategy. In this case, the constellation is expanded from 16 to 64 regions, where the Xcc symbol is defined by \((z_{{\mathrm {aux1}}},z_{{\mathrm {aux2}}},z_{1},z_{2},y_{c_{1}},y_{c_{2}})\phantom {\dot {i}\!}\). Consequently, we have the middle bits z1 and z2 as the shaping coset bits, which assign the shaping regions (represented by colors) and zaux1 and zaux2 as a points within each shaping coset, which defines the shaping subregions. Finally, the convolutional coded bits \(y_{c_{1}}\) and \(y_{c_{2}}\) selects a point (represented by the markers ○,X,, and △) inside each subregion. Shaping regions mapping II for 64-QAM Consider that the pair of bits \((y_{c_{1}},y_{c_{2}})\) are mapped to (0,0) by the channel trellis convolutional code Cc, represented in the figure by the marker ○. In this case, Fig. 11 elucidates the Viterbi decoder of the shaping operation located at the transmitter. By observing this example, the trellis transitions from the state \(\sigma ^{n}_{j}\) to the state \(\sigma ^{n+1}_{j}\) corresponds to constellation points where z1=0 and z2=0 (yellow regions). Thus, the decoder will perform a hard decision among the four constellation symbols (0,16,32,48), representing the points within the assigned shaping region coset, each defined by the pair of bits (zaux1,zaux2). In the same way, the transition from the state \(\sigma ^{n}_{j}\) to \(\sigma ^{n+1}_{j+1}\) corresponds to constellation points where z1=0 and z2=1 (green regions), which decides among the parallel transitions (4,20,36,52). Example of parallel transitions for Viterbi decoder of trellis shaping at the transmitter, according to mapping of the Fig. 10 We can noticed that, thanks to this mapping strategy, regardless of the interference Sn, the presubtracted signal \(\hat {X}_{c}^{n}\) is confined within the 16-QAM region, assuring that the power control is established. In our analyzed scenarios, when required to mitigate the modulo loss, further expansions are implemented following the same presented procedure. Practical system analysis This section intends to point out some practical issues concerning the techniques exposed and applied in the satellite scenario. As a matter of system engineering, the design for the CU payload could either be a standalone system (implemented by a dedicated transmission chain and antenna) or a shared transmitter (by using the same transponder and antenna as the PU). In this latter configuration, notice that more caution should be taken into account when the transmission of both signals inputs the same high power amplifier (HPA). In fact, this practice should be avoided since this implementation may induce higher non-linear distortions, particularly in terms of AM/AM and AM/PM conversions [39]. Moreover, at the receiver side (PU and CU), two scenarios could be encountered: (i) by the deployment of geographically separated receiving sites for each user, and thus reducing the interference due to the attenuation at the interfering paths, or (ii) by using the same receiving station with two dedicated demodulators and decoders. In this last case, the attenuation of the interfering and direct paths are the same (i.e., ∣h∣ = ∣hpp,i∣ = ∣hcc∣ = ∣hpc,i∣ = ∣hcp,i∣). In fact, it increases the interference of both links and, as a consequence, requires higher value of the superposition factor α (consequently, reducing the secondary service data rate). Deepening the vision on the techniques described, we point out that, due to the superposition, the bit rate of the secondary service might be very low with respect to the primary. However, this practice generates two implementation problems: (i) in the DPC presubtraction technique, the same symbol rate for both signals is considered in order to be able to compute the Eq. (9) and (ii) in the superposition technique, the interference generated by the CU signal would appear as spikes in the PU bandwidth, which makes the usual interference model unrealistic in this case. In order to avoid both constraints, we can think of the implementation of the chirp spread spectrum technique [31] at CU transmission. In this sense, the DPC encoder can correctly perform its operation and the CU receiver can demodulate at a more flexible transmitted data rate. To improve the whole system performance, the channel estimation techniques could be realized at the terminal's end through a link feedback, for instance, according to the DVB-S2X standard [6]. By these features, the superposition factor α, which depends directly on the channels conditions, as well as the λ, which depends on SNR, can be periodically updated, changing the achievable secondary service data rate and, as consequence, optimizing CU performance. In order to investigate the system feasibility, we adopted a scenario where a Cubesat at a altitude of 600 km with same orbital parameters as [40], using COTS parts, transmits both signals (primary and cognitive) from the same satellite antenna towards a single earth station, which is equipped with two dedicated demodulators. In this sense, the channel attenuations are the same and defined as ∣h∣. In this study, just the downlink is considered. Primary service transmission analysis The main specification for the PU signal are output power of 1W [27], operating frequency of 2200 MHz (downlink band assigned for Earth exploration satellite service), bit rate of 3.4 Mbps, BER specified to 10−5 and coded QPSK modulation with FEC (R=1/2). Table 1 presents the link budget of PU without secondary service addition. Table 1 Primary user link budget (QPSK coded FEC R = 1/2) It is worth noting that a conservative margin for demodulation losses of 6 dB is assumed in order to cover the impairments of the communication chain. The overall link margin is about 3.5 dB, as required by the targeted BER. The principle behind this design strategy was to use part of the power remaining in this margin to transmit the CU signal. Therefore, we defined that 900 mW were allocated for PU transmission (which still maintains the recommended link margin of 3 dB) and 100 mW were used for CU. Figure 12 presents the overlay model considering this use case. Overlay model considering realistic satellite link budget The powers transmitted and received are provided considering an example of a realistic link budget parameters. By computations according the Table 1, the channel attenuations equal \(\|h\|^{2}_{{\text {dB}}}=-~125.7\) dB. In this condition, the interference-to-noise ratio (INR) and the link degradation D by interference are given by $$ \ \text{INR}=\frac{I_{pc}}{N_{p}}=4.47; $$ $$ \ \mathrm{{\it{D}}}({\text{dB}})~=~10.\text{log}(1+\text{INR})~=~7.38 \ \text{dB}. $$ Considering all parameters, the CU performs the superposition strategy and, by the Eq. (5), the factor α = 0.85 is evaluated. This value guarantees a SINR of 16 dB at PU. We highlight that, thanks to the superposition strategy and the power controlling design of the DPC encoder (see results in [25]), the PU maintains the same performance as in the absence of the CU interference. Secondary service transmission analysis From this point, a simulation for CU link is performed considering the whole CU channel interference (see Eq. (8)), which is composed by the PU signal and the CU shared power in the superposition. CU results and analysis The system performance is analyzed for the scenario depicted in Fig. 12 in both qualitative and quantitative approaches. The main objective is to provide a comparative study regarding the different expansions of \(X_{cc}^{n}\), emphasizing the trade-off between power efficiency vs. complexity. Figure 13 presents the main results. Results and analysis for CU, considering 10.log(Pp/Pc)=9.5 dB and α = 0.85, according to different constellation expansions Qualitative analysis: To support the analysis, the scatter plot of the signal constellations, that are part of the partial interference pre-subtraction (PIP) processing are interpreted (represented in Fig. 13a, b). The expanded constellation signal \(X_{cc}^{n}\) is shown in green "x." The Gaussian distributed version of the scaled interference λSn is superposed in red points and the transmitted signal \(\hat {X}_{c}^{n}\) is shown in blue dots. Additionally, we depict the histogram of \(X_{cc}^{n}\) (resp. Fig. 13c, d) in order to study the probability of the boundaries symbols, since that it is determinant for the degradation caused by modulo loss. As our previous definition and in contrast to [22] and [23], we are using the modulo operation at the transmitter output and the shaping metric is implemented taking that into account Eq. (10). This design assumptions assures that the transmitted signal \(\hat {X}_{c}^{n}\) will be confined within the expanded constellation, independently of the channel interference power, guaranteeing the condition \(E[|\hat {X}_{c}^{n}|^{2} ] \leq (1-\alpha)P_{c}\), which is a DPC requirement. By this design, in a clearly reference to the THP scheme, three important consequences are highlighted concerning its discriminated losses: Shaping loss: When the shaping metric takes into account the modulo operation (see Eq. (10)), the shaping gain corresponding to Cs is also reached within the continuous region delimited by Δ. It is worth noting that the same shaping gain is achieved, independently of the channel interference power. This is directly given by the fact that $$ \big[X_{cc}^{n}-\lambda S^{n}\big]\textsc{mod}_{\Delta}=\big[X_{cc}^{n}-(\lambda S^{n})\textsc{mod}_{\Delta}\big]\textsc{mod}_{\Delta}, $$ where we can conclude that the shaping minimization will be the same whether or not λSn is confined within the modulo region. This could be observed by the blue dots illustrated in the scatter plot, which provides the shaping gain equals to 0.97 dB for all scenarios, according to the implemented shaping code Cs. Another interesting point concerning the shaping operation refers to the fact that, when the transmitted signal \(\hat {X}_{c}^{n}\) is Gaussian distributed, thanks to the shaping operation, the usual decoding method by minimum euclidean distance is optimum [22]. It could be inferred directly due to the Gaussian distribution of the effective noise Neff, given by Eq. (11). Modulo loss: Based on the previous observation that the boundary points should be avoided. We observe in Fig. 13c that for 16-QAM the \(X_{cc}^{n}\) constellation is uniformly distributed (i.e., the so-called n-cube distribution). We also notice that the histogram of 1024-QAM, presented in Fig. 13d, shows the occurrence of boundary constellation points decreasing as further expansion is considered. Power loss: The same degradation value evaluated for 16-QAM (in this case, equals to 0.28 dB), as discussed in Section 4.3. Quantitative analysis: Connected with this qualitative investigation, the BER CU is shown for quantitative performance evaluation. Figure13e presents the BER curves for this scenario. We observe that the modulo loss is significantly high for the 16-QAM, 64-QAM, and 256-QAM schemes. However, when 1024-QAM is considered, the degradation just becomes more significant after 5 dB of SNR. This could be inferred by the increase of λ as a function of SNR (see Eq. (13)), which results in more modulo loss degradation. When comparing minimum and maximum \(X_{cc}^{n}\) constellation expansion (resp. 16-QAM and 1024-QAM), we detected a gain around 4 dB considering a BER of 10−3 and 1.5 dB considering a BER of 10−5. This defines the minimum (by 16-QAM scheme implementation) and maximum (by 1024-QAM scheme implementation) supported bit rate for the secondary service. As a result, we point out the tradeoff between power efficiency (or alternatively the supported bit rate) and complexity. The implementation of 1024-QAM at the transmitter side could be seen as prohibitive in reference to satellite communications. However, we emphasize that the impact is only on the increase of the system complexity, since more comparisons should be performed by the shaping operation. As we have discussed in Section 4.3, the \(\hat {X}_{c}^{n}\) signal is continuous within the squared region bounded by Δ=4dmin (i.e., \(|\operatorname {Re}(\hat {X}_{c})|\) and \(|\operatorname {Im}(\hat {X}_{c})|<\Delta /2\), where Δ is the modulo boundary). Thus, the transmitted power (i.e., \(E[|\hat {X}_{c}^{n}|^{2} ]\)) as well as the power peak to average power ratio (PAPR) remain in the same order, regardless the constellation expansion. Finally, the maintenance of the PU service is evaluated by attesting the exact specified SINRP (see Eq. (4)) at the primary receiver, thanks to the superposition strategy. CU link budget By taking the link parameters and the CU BER curve presented in Fig. 13e, we can now evaluate the link budget, presented at Table 2. It is important to note that, by reason of superposition, just 15% of the power originally allocated for CU is used for its own transmission. However, despite of the received low power, the sensitivity of the receiver is in line with the specifications usually attributed for small satellites links (i.e., sensivity threshold of − 118 dBm [41]). We emphasize that all conservative margins are still being considered in order to guarantee the performance. Table 2 Cognitive user link budget It is worth noting that different M-QAM schemes present the peak to average power ratio (PAPR) in the same order of magnitude (as presented in Table 2). Also, the further expansion impacts directly in the BER performance, even when the total interference is not confined inside the expanded constellation (as in this case). As pointed in [24], by practical effects, the SNR here does not consider the defined effective noise in Section 11. As an important result of this feasibility analysis, we assure the minimum supported bit rate of 16 and 28 kbps (w.r.t. BER of 10−5 and 10−3, respectively) and the maximum of 22 and 74 kbps (w.r.t. BER of 10−5 and 10−3, respectively) for the secondary service. This range is suitable for most of M2M applications. The motivation behind this paper concerns the general trend to reconsider the regulation procedure for spectrum allocation, as well as to stimulate the development of techniques which enable the coexistence of different networks. In this view, the terrestrial cognitive radio techniques have also been employed in the satellite communication context. A step forward is presented here, where the coexistence among primaries and cognitive users is investigated by using overlay paradigm. In this specific paradigm, the cognitive user (CU) has noncausal knowledge about the message and encoding strategy of the primary user (PU). By this assumption, the optimum CU encoding strategy consists in: the superposition and dirty paper coding (DPC) techniques. In this line, given that the PU should operate as in the absence of interference, with no performance degradation, we first propose a scheme based on the trellis shaped DPC encoder for CU. In this sense, some design improvements are discussed in order to overcome the so-called precoding losses. Techniques are detailed for both the encoder and decoder, taking into account the satellite scenario and the trade-off between power efficiency and complexity. Thereafter, this paper investigates the feasibility of a low data rate secondary service transmission over a primary user infrastructure. A realistic scenario is presented and the previously discussed techniques are implemented to resolve the interference of both links. As a result, we obtained the same performance for the PU as in absence of the CU operation (AWGN channel). Also, concerning the feasibility analysis, we fulfill the minimum supported bit rate of 16 and 28 kbps (w.r.t. BER of 10−5 and 10−3, respectively) and the maximum of 22 and 74 kbps (w.r.t. BER of 10−5 and 10−3, respectively) for the secondary service, where this range is variable according to the implemented scheme. We emphasize that this achievable bit rate is suitable for most of M2M applications. As a drawback, we point out an increase of the output power of the satellite due to the intrinsic signal correlation presented at the superposition technique. In the described scenario, the total power transmitted is about 32 dBm instead of the previously specified 30 dBm. In this case, the transmitted antenna should be properly designed in order to consider this output power. Finally, we would recommend further research concerning the following topics: (i) an investigation of the feasibility of a secondary service transmission for GEO multibeam satellite scenario; (ii) the development of a proof of concept by means of SDR implementation; (iii) the analysis of the satellite impairments effects on DPC schemes; (iv) the control of PAPR by shaping operation, typically on satellite communication. AWGN: Additive white Gaussian noise Bit error rate CNR: Carrier-to-noise ratio COTS: Commercial off-the-shelf parts CU: Cognitive user DPC: DVB-S2X: Digital Video Broadcasting - Satellite - Second-Generation Extension EESS: Earth exploration satellite service FEC: Forward error correction code Geosynchronous equatorial orbit HPA: IoT: Inter-symbol interference ITU-RR: International Telecommunication Union Radio Regulation Low Earth orbit M2M: MEO: medium Earth orbit PAPR: peak-to-average power ratio PIP: Partial interference presubtraction PU: QAM: Quadrature amplitude modulation QPSK: Quadrature phase-shift keying SDR: SINR: Signal-to-interference-plus-noise ratio Trellis-coded modulation THP: Tomlinson-Harashima precoding TP: Trellis precoding TS: Trellis shaping D. Minoli, Innovations in Satellite Communications and Satellite Technology: The Industry Implications of DVB-S2X, High Throughput Satellites, Ultra HD, M2M, and IP (Wiley, 2015). Z. Qu, G. Zhang, H. Cao, J. Xie, LEO satellite constellation for internet of things. IEEE Access. 5:, 18391–18401 (2017). ITU-R, Emerging trends in 5G/IMT2020. Geneva Mission Brief. Ser. (2016). https://www.itu.int/en/membership/documents/missions/gva-mission-briefing-5g-28sept2016.pdf. Accessed 27 Jan 2020. Huawei, 55G Network Architecture: A High-Level Perspective. White Paper. Huawei Technologies Co, Shenzhen (2016). https://www.huawei.com/minisite/5g/img/5G_Network_Architecture_A_High-Level_Perspective_en.pdf. Accessed 27 Jan 2020. Regulations, ITU Radio, and CHAPTER. II-Frequencies, ARTICLE 5 Frequency allocations. Section IV-Table of Frequency Allocations (2016). DVB, DVB-Doc A-172" White Paper on the use of DVB-S2X for DTH applications DSNG and Professional Services Broadband Interactive Services and VL-SNR applications (ESA, Paris, 2015). United States Department of Commerce, United States radio spectrum frequency allocations chart (2016). https://www.ntia.doc.gov/files/ntia/publications/january_2016_spectrum_wall_chart.pdf. Accessed 27 Jan 2020. Y. Han, E. Ekici, H. Kremo, O. Altintas, Spectrum sharing methods for the coexistence of multiple rf systems: A survey. Ad Hoc Netw. 53:, 53–78 (2016). K. Patil, K. Skouby, A. Chandra, R. Prasad, in 2011 The 14th International Symposium on Wireless Personal Multimedia Communications (WPMC). Spectrum occupancy statistics in the context of cognitive radio (IEEE, 2011), pp. 1–5. M. R. Maheshwarappa, M. Bowyer, C. P. Bridges, in IEEE Aerospace Conference. Software defined radio (SDR) architecture to support multi-satellite communications (IEEE, 2015). https://doi.org/10.1109/aero.2015.7119186. G. Cocco, T. De Cola, M. Angelone, Z. Katona, S. Erl, Radio resource management optimization of flexible satellite payloads for DVB-S2 systems. IEEE Trans. Broadcast.64(2), 266–280 (2017). https://doi.org/10.1109/tbc.2017.2755263. N. Porecki, G. Thomas, A. Warburton, N. Wheatley, N. Metzger, in Proc. 19th Ka Broadband Commun., Navigat. Earth Observat. Conf.Flexible payload technologies for optimising Ka-band payloads to meet future business needs, (2013), pp. 1–7. S. K. Sharma, S. Chatzinotas, B. Ottersten, Cognitive radio techniques for satellite communication systems (IEEE, 2013). https://doi.org/10.1109/vtcfall.2013.6692139. E. Biglieri, in 2012 IEEE First AESS European Conference on Satellite Telecommunications (ESTEL). An overview of cognitive radio for satellite communications (IEEE, 2012). https://doi.org/10.1109/estel.2012.6400078. M. Álvarez-Díaz, M. Neri, C. Mosquera, G. Corazza, in 2006 International Workshop on Satellite and Space Communications. Trellis shaping techniques for satellite telecommunication systems (IEEE, 2006). https://doi.org/10.1109/iwssc.2006.256012. A. Jovicic, P. Viswanath, Cognitive radio: an information-theoretic perspective. IEEE Trans. Inf. Theory. 55(9), 3945–3958 (2009). A. J. Goldsmith, S. A. Jafar, I. Maric, S. Srinivasa, Breaking spectrum gridlock with cognitive radios: An information theoretic perspective. Proc. IEEE. 97(5), 894–914 (2009). M. Costa, Writing on dirty paper (corresp.)IEEE Trans. Inf. Theory. 29(3), 439–441 (1983). U. Erez, S. Shamai, R. Zamir, Capacity and lattice strategies for canceling known interference. IEEE Trans. Inf. Theory. 51(11), 3820–3833 (2005). M. V. Eyuboglu, G. D. Forney, Trellis precoding: Combined coding, precoding and shaping for intersymbol interference channels. IEEE Trans. Inf. Theory. 38(2), 301–314 (1992). G. Forney, Trellis shaping. IEEE Trans. Inf. Theory. 38(2), 281–300 (1992). W. Yu, D. P. Varodayan, J. M. Cioffi, Trellis and convolutional precoding for transmitter-based interference presubtraction. IEEE Trans. Commun.53(7), 1220–1230. Y. Sun, W. Xu, J. Lin, Trellis shaping based dirty paper coding scheme for the overlay cognitive radio channel (IEEE, 2014). https://doi.org/10.1109/pimrc.2014.7136456. L. B. C. da Silva, T. Benaddi, L. Franck, in 2018 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom). Cognitive radio overlay paradigm towards satellite communications (IEEE, 2018). https://doi.org/10.1109/blackseacom.2018.8433717. L. B. C. da Silva, T. Benaddi, L. Franck, in 2018 9th Advanced Satellite Multimedia Systems Conference and the 15th Signal Processing for Space Communications Workshop (ASMS/SPSC). A design method of cognitive overlay links for satellite communications (IEEE, 2018). https://doi.org/10.1109/asms-spsc.2018.8510739. L. B. C. da Silva, T. Benaddi, L. Franck, in Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. On the feasibility of a secondary service transmission over an existent satellite infrastructure (Springer, 2019), pp. 154–167. https://doi.org/10.1007/978-3-030-25748-4_12. ISIS - Innovative Solutions In Space B.V., ISIS high data rate S-band transmitter specifications. https://www.isispace.nl/wp-content/uploads/2016/02/ISIS-Communicationsystems-Brochure-v2-compressed.pdf. Accessed 27 Jan 2020. Secretariat, E. C. S. S.ECSS-M-ST-10C Space Project Management-Project Planning and Implementation (ESA-ESTEC Requirements & Standards DivisionNoordwijk, 2009). R. F. Fischer. Precoding and signal shaping for digital transmission (Wiley, 2005). S. Gelfand, Coding for channel with random parameters. Probl. Contr. and Inf. Theory. 9(1), 19–31 (1980). MathSciNet Google Scholar J. G. Proakis, M. Salehi, Digital communications, vol. 4 (2001). G. Ungerboeck, Channel coding with multilevel/phase signals. IEEE Trans. Inf. Theor.28(1), 55–67 (1982). G. Ungerboeck, Trellis-coded modulation with redundant signal sets part I: Introduction. IEEE Commun. Mag.25(2), 5–11 (1987). G. Ungerboeck, Trellis-coded modulation with redundant signal sets part II: State of the art. IEEE Commun. Mag.25(2), 12–21 (1987). G. D. Forney, L. -F. Wei, Multidimensional constellations. I, introduction, figures of merit, and generalized cross constellations. IEEE J. Sel. Areas Commun.7(6), 877–892 (1989). M. Tomlinson, New automatic equaliser employing modulo arithmetic. Electron. Lett.7(5), 138–139 (1971). H. Harashima, H. Miyakawa, Matched-transmission technique for channels with intersymbol interference. IEEE Trans. Commun.20(4), 774–780 (1972). G. D. Forney, M. V. Eyuboglu, Combined equalization and coding using precoding. IEEE Commun. Mag.29(12), 25–34 (1991). G. Maral, M. Bousquet, Satellite communications systems: systems, techniques and technology (Wiley, 2011). D. Barbarić, J. Vuković, D. Babic, in 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). Link budget analysis for a proposed cubesat earth observation mission (IEEE, 2018). https://doi.org/10.23919/mipro.2018.8400026. M. Arias, F. Aguado, in ITU Symposium and Workshop on small satellite regulation and communication systems. Small satellite link budget calculation, (2016). https://www.itu.int/en/ITU-R/space/workshops/2016-smallsat/Documents/Link_budget_uvigo.pdf. Accessed 27 Jan 2020. This work was supported by National Council for Scientific and Technological Development (CNPq/Brazil) and by National Institute for Space Research (INPE/Brazil). IMT Atlantique, LabSTICC, DEOS, 10 Avenue Edouard Belin, Toulouse, BP 44004, 31028 TOULOUSE CEDEX 4, France Luciano Barros Cardoso da Silva & Tarik Benaddi National Institute for Space Research, INPE, Av dos Astronautas, 1758 -. Jardim da Granja, São José dos Campos, Sao Paulo, 12227-010, Brazil Luciano Barros Cardoso da Silva Airbus Defence and Space, Toulouse, France Laurent Franck Tarik Benaddi All authors contributed to the development of the ideas, design of the study, the theory, result analysis, and article writing. Correspondence to Luciano Barros Cardoso da Silva. da Silva, L., Benaddi, T. & Franck, L. On the feasibility of a secondary service transmission over an existent satellite infrastructure: design and analysis. J Wireless Com Network 2020, 61 (2020). https://doi.org/10.1186/s13638-020-01674-9 Overlay paradigm Machine-to-machine application
CommonCrawl
Markov chain Monte Carlo and expectation maximization approaches for estimation of haplotype frequencies for multiply infected human blood samples Gie Ken-Dror1 & Ian M. Hastings ORCID: orcid.org/0000-0002-1332-742X1 Haplotypes are important in anti-malarial drug resistance because genes encoding drug resistance may accumulate mutations at several codons in the same gene, each mutation increasing the level of drug resistance and, possibly, reducing the metabolic costs of previous mutation. Patients often have two or more haplotypes in their blood sample which may make it impossible to identify exactly which haplotypes they carry, and hence to measure the type and frequency of resistant haplotypes in the malaria population. This study presents two novel statistical methods expectation–maximization (EM) and Markov chain Monte Carlo (MCMC) algorithms to investigate this issue. The performance of the algorithms is evaluated on simulated datasets consisting of patient blood characterized by their multiplicity of infection (MOI) and malaria genotype. The datasets are generated using different resistance allele frequencies (RAF) at each single nucleotide polymorphisms (SNPs) and different limit of detection (LoD) of the SNPs and the MOI. The EM and the MCMC algorithm are validated and appear more accurate, faster and slightly less affected by LoD of the SNPs and the MOI compared to previous related statistical approaches. The EM and the MCMC algorithms perform well when analysing malaria genetic data obtained from infected human blood samples. The results are robust to genotyping errors caused by LoDs and function well even in the absence of MOI data on individual patients. Malaria infections in human blood often consist of several genetically-distinct infections, each of which is called a clone. Humans in endemic areas may receive up to 1000 infective bites per year. Polyclonal infections are common, the number of clones within a human blood sample called the multiplicity of infection (MOI). The average number of MOI is around three in humans who lives in areas of intense transmission, and rarely exceeds to 12 in any individual patient [1]. The presence of multiple clones (each of which is haploid) in a blood sample often makes it impossible to identify what multiple SNPs haplotypes are present in each patient. This makes estimating the frequencies of haplotypes in the malaria population from human blood samples a challenging computational task. Haplotypes are important in tracking anti-malarial drug resistance because genes encoding drug resistance may accumulate mutations at several codons in the same gene, each mutation increasing the level of drug resistance and possibly, reducing the metabolic costs of previous mutation. Drug resistance mutations threaten malaria control and treatment policies and the mutations and haplotypes that encode resistance will be the subject of this paper. The prevalence of mutations (i.e. their presence/absence in a blood sample) can be directly observed so the information available for each human blood sample is (a) an estimate of the MOI and (b) the presence/absence of an allele at a SNP. In effect a blood sample provides a genotype whose ploidy level equals its MOI; the problem is to use these observed polyploid blood genotypes to infer haplotype frequencies. This inference is further complicated by genetic ambiguity that arises from three sources: The MOI is estimated using hyper-variable genetic loci, such as msp1, msp2, glurp and ta109, which typically have an expected homozygosity of around 0.05–0.08 [2]. Simple counting of the number of different alleles at each loci provides a minimum MOI. However, this may underestimate the population MOI if clones share alleles at hyper-variable loci purely by chance, or if they are low density clones missed during genotyping [3]. Alleles at single nucleotide polymorphisms (SNPs) can only be scored as present/absent and not directly counted unless MOI ≤ 2. For example if MOI = 4 and both wildtype and mutant alleles are present in the sample, it is impossible to tell whether the ratio of mutant:wildtype clones is 1:3, 2:2 or 3:1. Differing assay sensitivity means that some alleles are not detected. Malaria clones in humans are not present at the same density. Differences arise because they are recognized differently by host immunity, and because of sequestration in their 48-h cycle in red blood cells. Alleles present in the numerically-smaller minor clones will provide fewer PCR amplification templates and hence a lower detection signal. The situation is further complicated by each allele's signal strength being affected by other factors such as the size of the region amplified during PCR. Different laboratories set different cut-off levels to distinguish smaller true signals from background assay noise. The Swiss TPH attributed signals less than 30 % intensity of the main genotyping signals as 'noise', while other labs use lower cut-offs and some apparently rely on user subjectivity to distinguish minor peaks from technical noise. The cut-off defines as the assay's limit of detection (LoD). If the LoD of PCR reactions differ between SNP and hyper-variable loci then it is possible to get the situation where an allele from a clone is detected at the hyper-variable locus used to determine MOI, but may be missed when genotyping the SNP at the resistance locus. Clones (usually at very low density) that are undetected at all loci can be ignored as they do not enter the analysis [1]. These three factors have a large impact when attempting to impute genetic data for the malaria population. In particular, it precludes estimating haplotype frequency by simple gene counting of unambiguous genotypes because low frequency alleles and haplotypes are systematically missed, typically leading to twofold errors in frequency estimates [1]. The impact of detection limits when genotyping blood samples are under-studied. A simulation developed to create artificial datasets that incorporate the three sources of genetic ambiguity described above. This allows for non-detection of clones and allows the user to know both the "true" underlying genetic data in the simulated dataset and the "observed" data that would be seen in the blood samples. This problem does not arise in conventional diploid species. They have equal copies of each chromosome so the genetic signal from each SNP allele is equal, hence the need to develop new ways of inferring haplotype frequency in malaria patients. Several statistical approaches to estimate haplotype frequencies from multiclonal infections have been proposed including: maximum-likelihood (ML) estimation using a hill climbing algorithm (MalHapFreq) [4], expectation–maximization (EM) using an efficient iterative maximum likelihood approach (malaria.em) [5] and a Metropolis–Hastings Markov Chain Monte Carlo implementation of a model constructed within a Bayesian framework, which we hereafter referred to as Bayesian [6]. The aim of the present study is to present two novel approaches i.e. MCMC and EM algorithms for haplotype reconstruction with known or unknown MOI, and to compare the results to those obtained from the related statistical approaches described above; In addition, quantify the impact of misclassified observed genotype and examine the accuracy of the various method in estimating the population haplotype frequency. The simulated datasets, estimation algorithms and statistical analysis described below have been implemented in the R statistical software system version 3.1.1 [7], on a 64-bit computer with 32.0 GB of random access memory and an Intel(R) Core(TM) i7-4770K central processing unit (CPU) @ 3.50 GHz processor. Simulation of genotype and haplotype datasets Simulation of population (haplotype) data The simulation starts by generating a user-defined number of human blood samples, N, (1, 2,…,N) in the dataset. The multiplicity of infection (MOI) in each blood sample is generated randomly by the default frequency distributions given by Jaki et al. [8] i.e. with "population" MOI frequencies as follows: = 1–4 %, 2–40 %, 3–10 %, 4–10 %, 5–20 %, 6–5 %, 7–6 %, 8–5 % [8]; this reflects a distribution of MOI observed in a relative intense area of malaria transmission. Separate infections in the MOI are assumed to be genetically distinct and unrelated, haploid, asexual clones that are presumed to have been inoculated by separate mosquito bites into the same person. Each clone within the blood sample is then randomly assigned an allele from each of three hyper-variable genetic markers used to estimate its MOI. Here assume the loci msp1, msp2 and ta109 whose allele frequency distributions are given by Jaki et al. [8]. Each clone is then assigned a biomass randomly selected from the interval 109 to 1011; the "biomass" is the total number of parasites in the human and this sampling interval is typical for symptomatic malaria infections. The relative biomass (i.e. its proportion of total biomass) of each clone is then calculated as that clone's biomass divided by the total biomass in the patient. Importantly, the genotyping signal from a SNP or MOI allele will be assumed to be proportional to the relative biomass of parasites containing that allele. Each clone is then assigned a resistance haplotype defined at a user-defined number of SNPs. This may be achieved using user-defined resistance allele frequencies (RAF) at each SNP in the haplotype and assuming linkage equilibrium (LE) between the codons. Alternately the haplotype frequencies can be input directly from user-defined haplotype frequencies if alleles at the SNPs are in linkage disequilibrium. This approach was used to generate genetic datasets for subsequent analysis. Unless stated otherwise assumed: 100 blood samples per dataset, diallelic SNPs (i.e. either resistant or sensitive) RAF at each codon ranging from 1 to 50 %, and linkage equilibrium (LE) between all SNPs and MOI markers. 1000 datasets were generated and analysed assuming differing LoD i.e. 0.0/0.0, 0.1/0.05, 0.2/0.1, 0.3/0.15 where the first number is LoDSNP and the second is LoDMOI. Simulation of observed (genotype) data Genotypes are the observable data obtained on human blood samples and are subjected to the sources of genetic ambiguity described above i.e. genotyping errors arising from LoD and the fact that different combination of haplotypes may give rise to the same observed genotype. The "true" genetic data are therefore processed as follows to simulate what would actually be observed in the blood samples. Observed MOI The strength of each genotyping 'signal' is calculated from their relative biomasses. The cut-off for distinguishing true signals from 'noise' may differ slightly from that used for SNPs which is why having different detection limits for LoDSNP and LoDMOI. The novel algorithm assumes a signal less than a certain proportion of the major signal, this proportion being denoted LoDMOI, is regarded as 'noise'. So if LoDMOI = 0.1, signals <10 % of the maximum would be regarded as noise and would not contribute to the 'observed' blood sample genotype. Alternative algorithms for distinguishing noise in MOI genotyping suggested (Additional file 1), and can be integrated into the code if required. The observed MOI is then calculated as the maximum number of the alleles observed at three hyper-variable genetic markers msp1, msp2 and ta109. Observed genotypes These are calculated in an analogous manner to MOI i.e. by assuming that a clone's biomass determines its contribution to the genotyping signal. The total 'signal' for each allele at each SNP is then calculated and compared to the user-defined LoDSNP to find which alleles are detectable and contribute to the observed blood genotype. Finally run a "reality check" on the simulated blood dataset as would be done for real data. In particular, search for samples with observed MOI = 1 and one of the SNPs is heterozygous. These observations are incompatible and generally occur when MOI ≥ 2 but appears to have MOI = 1 for one of two main reasons. Firstly, the ≥2 clones are identical at all three MOI loci purely by chance such that the observed MOI = 1. Secondly, the clones do differ at one or more MOI loci, but difference in genotyping sensitivity (LoD) between MOI and SNPs means only a single MOI allele is detected at each hypervariable locus but a heterozygote is detected at one of the SNPs. In both cases, the MOI is reset to have a value of two as would likely occur when processing clinical samples. Novel haplotype reconstruction methods The expectation–maximization (EM)-algorithm Here after called the "EM" algorithm. This is a natural approach to estimating population parameters where the model depends on unknown latent variables [9, 10]. The EM-algorithm was first implemented for haplotype reconstruction by Excoffier and Slatkin [11], Hawley and Kidd [12], and Long et al. [13]. The EM method implemented here is a variation that incorporates MOI. It consists of several distinct steps and is explained in detail in the Additional file 1. The Markov chain Monte Carlo (MCMC)-algorithm Here after called the "MCMC" algorithm. This approach drawn iteratively samples in a way that each step the process should be drawing from a distribution that is becoming closer and closer to the target distribution [14–16]. The MCMC algorithm was first implemented for haplotype reconstruction by Stephens et al. [17], Stephens and Donnelly [18] and is implemented by us as described in the Additional file 1. The confidence interval around haplotype frequency estimates Once haplotype frequencies have been estimated, by either the EM or MCMC methods, the confidence interval (CI) around these estimates are calculated from the exact binomial tail areas [19] that are usually considered as the gold standard. The lower and upper bound of the interval are defined via quantiles of the F distribution: $$\begin{aligned} &\frac{x}{{x + (n - x + 1)F_{2x,1 - \alpha /2}^{2n - 2x + 2} }} \\ & \quad \le \theta_{i} \le \frac{{(x + 1)F_{2n - 2x,1 - \alpha /2}^{2x + 2} }}{{x + (n - x + 1)F_{2n - 2x,1 - \alpha /2}^{2x + 2} }}\end{aligned} $$ where x = θ i n, θ is the haplotype frequency, n is number of blood sample (sample size) and α is the required width of the CI (so α = 0.95 for 95 % confidence intervals). Existing statistical methods of haplotype reconstruction There are three other published methods that are available to use: malaria.em, Taylor et al. as R packages and MalHaploFreq software to infer haplotype frequencies that compared against the two novel methods described above. Maximum likelihood (ML) estimation using a hill climbing algorithm described in [4]. The approach was called MalHaploFreq and hereafter will be called the "MHF" algorithm. This algorithm uses a hill climbing as an iterative optimization method where the function to be maximized is evaluated at each step. The functions parameters are systematically varied each step with the goal to find a better solution than the previous one. Another expectation–maximization (EM) algorithm as described in [5]. Hereafter this will be called the "R-EM" approach. This is as efficient iterative maximum likelihood approach. The algorithm alternates between two steps expectation (E-step) the posterior probabilities of all haplotype combinations and maximization (M-step) the expectation of the log likelihood of the frequency estimates is maximized. The MOI values for each sample are used in the analysis if they are known. If unknown, MOI were assumed to follow a Poisson distribution with mean = 2. It differs from the EM method implemented here as the latter does not use the posterior probabilities of all haplotype combinations in the expectation. A Bayesian approach as described in [6]. Here after it will be called the "Bayesian" algorithm. It uses a Metropolis–Hastings Markov chain Monte Carlo (MH-MCMC), The MH-MCMC is used to draw samples of genotype frequencies conditional on the observed data. Each time a new genotype was sampled within the recursive re-sampling scheme. The genotype frequency samples drawn using the MCMC algorithm were then used to infer the relevant haplotype frequencies. The average of the frequency sample set was used as a point estimate of the haplotype frequencies. The algorithm starts with an initial estimate of haplotype frequencies, a vector of MOI in each patient, and a matrix of genotype counts. It proposes an update for the MOI vector and genotypes counts. The proposed MOI vector and genotype counts are accepted for rejected based on the Metropolis–Hastings ratio, which includes both the proposal densities and posterior densities. The MOI is based on prior distribution of four possible distributions (Uniform, Poisson, negative Binomial, and Geometric). The parameter of the distribution is set equal to the reported mean MOI. It differs to the MCMC method implemented here because, in its simplest form, the latter proposes an update only for the set of haplotypes because the MOI is known for each patient and the proposed set of haplotypes are accepted or rejected based on maximizing the conditional probability of observing the complete data. Evaluation of different statistical methods There are several published methods and programmes for inferring malaria haplotype frequency [4–6], plus the MCMC and EM algorithm developed here, so objective metrics are required to quantify their relative performances. Simulate 1000 datasets as described above, assuming, for simplicity, that resistance is encoded at two loci (so there are four resistance haplotypes). Each dataset is obtained by a process of five sequential steps: The population frequencies of haplotypes are defined by selecting a RAF for each locus at random and the four population haplotype frequencies obtained assuming linkage equilibrium between the alleles. A field survey of malaria blood samples is simulated. Each patient in the dataset has an MOI assigned at random according to the frequencies given above. The malaria clones (a number equal to the MOI) are then sampled at random according to the "true" population frequencies of resistance haplotypes and polymorphic markers (msp1, msp2, ta109); note that the "sampled" resistance haplotype frequencies in the dataset will differ from the "true" frequency due to this sampling process. The MOI polymorphic markers and resistance SNPs in each patient are then processed to obtain the 'genotypes' observed in the blood samples taken from patients (an example present in Table 1) depending on the LoD. The "estimated" resistance haplotype frequencies are obtained from each of the statistical programmes described above. Randomly select one "estimated" haplotype frequency from that dataset to evaluate the performance of the methods. One haplotype used in each dataset because the haplotype frequency estimates within each datasets are non-independent; for example a large deviation in estimating one frequency must be matched by a large error in another because the estimates must sum to unity. Table 1 How malaria datasets are simulated Each of these five steps is repeated for each of the 1000 datasets. The datasets and selected haplotype in each dataset are kept the same for each of the five analysis method; this allows a direct comparison between the different methodologies used to infer haplotype frequencies. The performance of the different methods is then measured as follows. Note that "population/sample" means that only one of these definitions should be used not that a division should be applied. These metrics are fairly standard ones used in haplotype reconstruction in conventional (i.e. diploid) organisms and details can be found elsewhere [11, 20–23]. Note that 'P' used below is a vector whose number of elements equal h, the number of potential haplotypes in the malaria population (in the sample case 2 resistance SNPs so h = 22 = 4, but this may vary; for example if there are five SNPs then h = 25 = 32 and so on). The elements of the vector are indicated by the superscript i. The accuracy of the estimates The 'population' and "sampled" values are compared with the "estimated" value reported as: The correlation coefficient (R 2 ) between population/sample, and estimated haplotype frequency value is recorded for each of the 1000 selected haplotypes. A similarity index (I F ) [11] was calculated to examine how close the computationally estimated haplotype frequencies are to the 'population' and "sampled" haplotype frequencies as: $$\begin{aligned} I_{F} &= \sum\limits_{i = 1}^{h} {\hbox{min} \left(Pi_{estimated} ,Pi_{population/sample}\right )} \\ &= 1 - \frac{1}{2}\sum\limits_{i = 1}^{h} {\left|Pi_{estimated} - Pi_{population/sample} \right|} \end{aligned}$$ P estimated and P population/sample denote, respectively, the estimated and the population/sample haplotype frequency of i haplotype. This measure incorporates all h haplotype frequencies and thus captures the overall difference between estimated and population/sample frequencies. It varies between one, when population/sample and estimated haplotypes frequencies are identical, and zero, when estimated haplotypes frequencies tending to zero. The mean squared error (MSE) [20] was calculated as: $$ MSE = \frac{{\left[ {\sum\nolimits_{i = 1}^{h} {\left(Pi_{estimated} - Pi_{population/sample} \right)^{2} } } \right]}}{h} $$ P estimated and P population/sample denote, respectively, the estimated and the population/sample haplotype frequency of i haplotype, h is the number of haplotype frequencies in the population. Since these indexes (I F , and R 2) gives more weight to high frequency haplotypes the change coefficient C [21] assess the scaled change in haplotype frequencies and was calculated as: $$ C_{i} = \frac{{\left(Pi_{estimated} - Pi_{population/sample} \right)}}{{Max\left[Pi_{estimated},\,Pi_{population/sample}\right ]}} $$ The coefficients were computed for each possible haplotype across statistical methods and presented as plot difference of estimation (%) on Y-axis against the haplotype frequency for each estimate on the X-axis. This metric is useful as it indicate that the haplotype frequency estimated and the haplotype frequency population/sample is the same. The value of the coefficient C ranges from 1 to −1, the value 0 indicating that the haplotype frequency estimated and the haplotype frequency population/sample are identical. Positive values indicate that haplotype frequency estimates tend to be larger than the population/sample frequency. The validity of the methods measure how often the "population" and "sampled" frequencies fall within the 95 % confidence intervals (CI) of the estimated frequency. It would expect ~5 % of "population" values to fall outside the CI and ≤5 % of "sampled" values to fall outside the CI. The speed of the analyses which is self explanatory recorded and presented as a line charts. Three hyper-variable genetic markers msp1, msp2 and ta109 are used to estimate MOI. The observed MOI misclassifies (underestimates) population MOI by ~5 % even when the LoDSNP and LoDMOI are zero; this occurs when genetic profiles match purely by chance. Increasing LoDMOI misclassifies the population MOI by increasing amounts i.e. by 8, 11 and 14 % when assuming LoD(SNP/MOI) 10 %/5 %, 20 %/10 %, 30 %/15 %, respectively. Similarly errors arise when, increasing LoDSNP as the observed genotypes are not necessary the true ones. Samples that are pure mutant or pure wildtype at SNPs will always be correctly classified (there are no minor genotyping signals at these SNPs) but genotypes at SNPs that are mixed mutant/wildtype may be misclassified as pure mutant or pure wildtype if the minor signal is lost. The mixed mutant/wildtype genotype was misclassified as pure mutant or pure wildtype by 5, 11 and 17 % among LoD(SNP/MOI) 10 %/5 %, 20 %/10 %, 30 %/15 %, respectively (as expected the true and observed SNP genotypes are identical when the LoDSNP are zero). These underestimated MOI values and misclassified genotypes caused by LoD potentially affect many of the subsequent estimates of haplotype frequencies as will be described below. The estimated haplotype frequency between the five statistical methods i.e. MHF (MalHaploFreq), R-EM (malaria EM), Bayesian (Bayesian statistic), EM (EM-algorithm), MCMC (Markov Chain Monte Carlo) and the population/sample haplotype frequencies among four combinations of LoD(SNP/MOI) showed high concordance. Figure 1 shows the absolute deviation of the estimated haplotype frequency from population/sample haplotype frequency. The correlation coefficient (R 2) is slightly higher by 0.49–0.85 % in the sample haplotype compared to the population haplotype among all statistical methods. Increasing both LoDMOI and LoDSNP decreases the correlation coefficient by 0.20–0.36 % among MHF, Bayesian and EM methods. Conversely, increasing both LoDMOI and LoDSNP increased the correlation coefficient by 3.06–3.29 % among R-EM method and by 0.20–0.15 % among MCMC method (Additional file 1: Figures S1–S3). The difference between correlation coefficients among statistical methods is less than 3.7 %. The data points lie close to the force line the diagonal through [0, 1]. There was also a tendency for the estimates to cluster more closely around the force line at high frequencies, showing that there is a tendency for high-frequency haplotypes to be more accurately estimated. Figure 1 reveals the presence and extent of bias as systematic deviations from the force line. The R-EM and MCMC methods both show some bias with high frequencies being slightly underestimated and low frequencies being slightly overestimated. However the bias is slight and changes with limits of detection. Additional file 1: Figures S1–S3 are analogous to Fig. 1 but illustrate the effect of increasing LoD. At higher limits (Additional file 1: Figure S3) the MHF and Bayesian methods appear to overestimate high frequency haplotype and under-estimate low frequency haplotypes. The correlation (R 2) between population/sample and estimated haplotype frequency across statistical methods among LoDSNP/MOI = 0.00/0.00 The data on Fig. 1 can be processed to show the change coefficient C which incorporates both the direction and the percentage change between the estimated and population/sample frequencies (this deviation is known as "change" in the haplotype literature [21] although it would elsewhere generally be called "error"). In this literature a "small change" is conventionally denoted as C ≤ 0.15 [21]. Figure 2 demonstrates that dramatic values of C i.e. ≥90 % occur at the lowest haplotype frequencies. The worst estimates (C ≈ 1) occur when haplotype frequencies are less than around 7 % using MHF. In addition, poor estimates occurs at haplotype frequency <6 % among R-EM and Bayesian statistical methods. A single example of C ≈ 1 occurred at haplotype frequency 2.7 % among MCMC statistical methods. No such examples of C ≈ 1 occur at the EM statistical methods. Increasing both LoDSNP and LoDMOI (Additional file 1: Figures S4–S6) slightly decreases the lowest haplotype boundary at which C ≈ 1 occurs i.e. to <5 % among MHF, <4 % among R-EM and Bayesian statistical method, <2 % among EM statistical method and <1 % among MCMC statistical method. Approximately two-thirds (68 %) of the haplotype frequency estimates show either no change or small change (defined as C < 15 %) at LoDSNP of 0.00 and LoDMOI of 0.00 among MHF, EM statistical method and MCMC statistical method. On the other hand only 47 % among R-EM and 64 % among Bayesian statistical method of the haplotype frequency estimates show either no change or small change. Increasing both LoDSNP and LoDMOI to 0.30 and 0.15, respectively decreased the haplotype frequency estimates that show either no change or small change to 48 % among MHF, 49 % among Bayesian statistical method and 64 % among EM and MCMC statistical methods. On the other hand, increase the haplotype frequency estimates that show either no change or small change to 68 % among R-EM statistical method (Additional file 1: Figure S6). Figure 2 confirms the bias shown on Fig. 1, i.e. that R-EM and MCMC tend to underestimate high frequency haplotypes and over-estimate the low frequency ones. The change coefficient (C) between population/sample and estimated haplotype frequency across statistical methods among LoDSNP/MOI = 0.00/0.00 Figure 3 show the similarity index (I F ) of the estimates haplotype frequency compared population/sample haplotype frequency. The five statistical methods provided similarity index (I F ) values very close to each other. The similarity index is higher by 0.05–1.0 % in the sample haplotype compared to the population haplotype among all statistical methods and decreased with increasing LoDSNP (30, 20, 10, 0 %) and LoDMOI (15, 10, 5, 0 %) show in the Fig. 3. The difference between similarity indexes among statistical methods is less than 4 %. Increasing both the LoDSNP and LoDMOI decreases the similarity index between 2 and 3 % in MHF, and Bayesian methods, by 1 % in EM method; conversely, R-EM shows increasing values of I F while the MCMC analyses is more complex, I F increasing slightly then decreasing slightly. The similarity index gives more weight to common haplotypes whose frequencies are the most accurately estimated. This tendency is reflected in the MSE statistics shown. Figure 4 shows the mean squared error (MSE) of the estimated haplotype frequencies around the population/sample haplotype frequency. The MSE is lower by 0.0002–0.0004 in the sample haplotype compared to the population haplotype among all statistical methods. The difference between MSE between statistical methods is less than 0.002. Increasing both the LoDSNP and LoDMOI increased the MSE by 0.001 in MHF, and Bayesian methods, and slightly in EM methods (0.0005); conversely R-EM and MCMC methods decrease the MSE values by 0.002 and 0.0002, respectively. The similarity index of the estimates haplotype frequency compared population/sample haplotype frequency across statistical methods The MSE of the estimated haplotype frequencies compared to population/sample haplotype frequency across statistical methods The validity of the methods can be quantifies as how often the "population" and "sampled" frequencies fall out of the 95 % confidence intervals (CI) generated by the analyses. It would expect ~5 % of "population" values to fall outside the CI and <5 % of "sampled" values to fall outside the CI. Figure 5 shows that, when LoD = 0 then the EM and MCMC methods all produce very narrow CI, while MHF is about correct (containing 95 % of the values) while the R-EM and Bayesian methods produces CI that are too wide with only about 85 and 90 % respectively of true values being contained within the CI. One explanation for the differences in performance is the way the CI was calculated. The MHF statistical methods calculate 95 % CI boundaries as occurring when the likelihood is less than 2 log units below the maximum-likelihood. The R-EM methods calculate 95 % CI from the standard error of the estimated haplotype frequencies. The Bayesian methods calculate 95 % CI as quintiles from haplotype frequency matrix. The EM and MCMC statistical methods calculate 95 % CI base on exact binomial tail areas (Eq. 1). A more fundamental difference between the methods is that Bayesian and R-EM did not incorporate MOI into the calculations; worryingly, both methods produced haplotype estimates that often lay outside the CI even when molecular detection was perfect. The percentage of results falling outside of the 95 % CI is slightly lower, by 2–5 %, in the sample haplotype frequency compared to the population haplotype frequency among MHF, R-EM, and Bayesian methods. Increasing both the LoDSNP and LoDMOI increased dramatically the error rates produced by Bayesian and MHF approaches with 27.4–26.3 %, and 15.6–17.4 % of estimates lying outside the 95 % CI among. EM methods were robust to changes in LoD with variation in the percentage falling outside the 95 % CI being <1.6 % across the four LoD assumptions. The R-EM and MCMC methods were similarly stable with variation being <2.1 and <2.8 % respectively. In summary, the difference between the missed CI between statistical methods is almost 30 %. The validity of the methods, calculated "population/sampled" frequencies fall out of the 95 % CI Figure 6 shows the computational time for the statistical methods. There is a big difference between the statistical methods of almost 39 s. Increasing LoDMOI and LoDSNP decreased the time of the analysis by 74 % among MHF, 50 % among R-EM, 92 % among Bayesian, 77 % among EM, and 75 % among MCMC. The most likely explanation for the reduction in time taken to run the analyses is that as LoD increases, the observed MOI and genetic diversity within patients tends to decrease; consequently the datasets become slightly simpler and their analysis faster. Computational time for five methods The haplotype frequency estimations for real data among five statistical methods are shown in Table 2. This was anonymized data kindly provided by colleagues at the Swiss Tropical and Public Health Institute; it has not yet been published so details cannot be provided here except to note that the data came from a sub-Saharan country in an area of intense transmission. The data set containing two biallelic SNPs for 82 individual with high MOI was used to check the applications of five statistical methods (MHF, R-EM, Bayesian, EM and MCMC). The results obtained from different methods were very similar with the mean difference of estimated haplotype frequencies between the statistical methods is about 3 %. The lowest difference of estimated probability is 1.5 % present between R-EM and MCMC methods and the highest difference of estimated probability is 6 % present between Bayesian and EM methods. These small differences were observed for all the data sets so only one set of results is shown. Table 2 The haplotype frequency estimations for real data set (Swiss TPH), n = 82 individual This study proposed two statistical methods EM-algorithm and MCMC for haplotype reconstruction for multiply infected human blood samples. These methods have three major advantages over existing statistical methods: increased accuracy, validity (i.e. the percentage of true frequencies falling within the 95 % credibility limits) and their ability to return the probabilities of possible haplotype combination in each individual in the EM-algorithm or the uncertainly probability of the haplotype frequencies in the MCMC. Unfortunately, real data sets do not exist with known haplotypes to allow statistical comparisons of different methods. This study used simulation data to test the accuracy of the estimated haplotype frequencies, assuming different levels of LoDSNP and LoDMOI. These reflect realistic conditions of molecular detection in blood samples obtained from malaria patients. Since the simulation data represent the true situation among malaria patients, simulation results provide evidence that there is high confidence in the haplotype frequency estimation produced by any one of the statistical methods. However, there are differences between the statistical methods especially with increasing LoDMOI and LoDSNP that occur in real data. Even if the LoDMOI are zero there is an impact on the observed MOI caused purely by chance when clones share the same MOI allele: the population MOI in misclassified (under-estimated) in around 5 % of patients. This directly effects the estimation of the haplotype frequency. That is the main limitations for the efficient use of haplotype frequency estimation in current, large scale, genetic epidemiology data among malaria patient. The main difference between the statistical methods is how they deal with the number of infections, number of copies of haplotype (haplotype combination). The MHF (MalHaploFreq) cycles through all the combinations of haplotypes that can occur within that multiplicity of infection. The EM-algorithm implemented as R-EM (malaria.em) is based on the assumption that each patient has a fixed MOI. The Bayesian algorithm (Bayesian statistic) is based on model specification the selects a prior distribution of MOI based on one of four possible distributions (Uniform, Poisson, negative Binomial, and Geometric). The novel methods (EM and MCMC) used MOI estimates that were assumed to be provided for each patient and that clones within the MOI were acquired at random, so that the distribution of infections within patients is multinomial with the sample size of the MOI and frequencies of haplotype provided by their current estimates within the algorithm. The differences between the methods directly affect their results. The Hill climbing algorithm (MalHaploFreq) is accurate and valid when the LoDMOI and LoDSNP are zero, but increased LoD dramatically decreased the accuracy and the validity of the result (Fig. 5). The EM-algorithm implemented in R-EM (malaria.em) with the fixed MOI presents the opposite i.e. its accuracy and the validity increases when the LoDSNP and LoDMOI increases. The Bayesian (Bayesian statistic) with the prior distribution of MOI appears exhibits accurate and valid when the LoDSNP and LoDMOI are zero, but increasing the LoDs dramatically decreases the accuracy and the validity of the results. The novel EM-algorithm method obtains highly accurate and valid results irrespective of the LoDSNP and LoDMOI values. The MCMC method obtains results that are sensitive to LoD levels: its accuracy and validity both decrease as the LoDSNP and LoDMOI increase. Adding the constant k (1/MOI) to the algorithm preferentially weights the low-frequency haplotypes which should be reflected in slight overestimates of their frequency. Additional file 1: Figure S7 demonstrates the impact of several values of k i.e. 0, 0.01, 0.05, 0.1, 0.2, and 0.5 among the EM-algorithm. The correlation coefficient (R 2) is slightly higher by the k = 1/MOI. Additional file 1: Figure S8 limits the X-axis scale to show the deviation of the estimated haplotype frequency among RAF <0.15 from population or sample haplotype frequency. The correlation coefficient (R 2) is higher in the EM estimated haplotype frequency with correction k = 1/MOI compared to using EM without correction i.e. 6.24 % compared to 10.71 %. The same basic approach is used in this EM-algorithm and that used in MalHaploFreq i.e. all possible combinations of haplotypes within a genotype are examined to obtain a likelihood of observing the dataset given current estimates of haplotype frequencies. Consequently, the results are very similar: correlation coefficient of the 1000 estimates obtained by the two methods is R 2 = 0.98. However the EM-algorithm is much faster because the combinations are only generated once prior to the estimation steps, plus the 95 % CI are calculated algebraically in the EM-algorithm (Eq. 1) whereas MalHaploFreq uses a rather crude (and slow) hill-descending algorithm to define the CI as −2 LL units less than the maximum LL. The speed of the EM-algorithm is considerably faster than MHF (although MHF could be made considerably faster if it estimated only haplotype frequencies then calculated 95 % CI using Eq. 1, rather than its hill-descending algorithm). Speed becomes important because both methods suffer from one potential problem: they seek to identify the haplotype frequencies that provide best prediction of observed data using an interactive process that gradually increases the likelihood of observing the data. The problem is that this "hill climbing" process may converge onto a local "peak" of likelihood and miss a peak of higher likelihood located some distance away in parameter space. The consequence is that both methods need to be started from a large number of different parameter values to check that a single peak of likelihood is always identified and, if not, to ensure the analysis returns the haplotype frequency estimates obtained at the maximum peak. This potential problem was investigated by Hastings and colleagues [1, 4] who analysed datasets and reported this problem of multiple peaks existed in the analyses. The problem of multiple peaks is unpredictable so users are urged to analyze their dataset using a large number (1000 seems reasonable) of initial haplotype frequency estimates. This problem not exist in the EM-algorithm presented here, it can start from any set of initial starting frequencies, use random starting frequencies to check the algorithm converged on the same final estimates and runs quickly so the time penalty should be negligible. The EM-algorithm examines each possible haplotype combination in an individual that could plausibly give rise to his/her observed genotype. This mean that once the haplotype estimates are obtained it would be possible to use them to obtain the probability that any given patient harbours a "drug resistant" haplotype and make clinical decisions on this basis. Whether data could be collected and analysed in a sufficiently timely manner for this to occur is debatable but consider it a point worth making. More plausibly, the presence of putative "resistant" haplotype can be inferred in individual patients and the probability of their presence used as a weighting in a logistic regression predicting the therapeutic outcome (cure/fail) of drug treatment. A positive impact of the putative-resistant haplotype on therapeutic outcome would be indicative that it truly does affect resistance levels. Given the potential importance of MOI estimates, it is unfortunate that some surveys do not collect it [24–26]. The R-EM, Bayesian, EM and MCMC can calculate the haplotype frequency when MOI information on a patient was unknown (i.e. unmeasured or missing). Every one of the methods makes a prior assumption on the probability distribution on the number of infections per individual. The R-EM algorithm assumes MOI follows a Poisson distribution with mean = 2, the Bayesian algorithm can assume one of four possible distributions (Uniform, Poisson, negative Binomial, and Geometric). The EM and the MCMC algorithm described here, analyses datasets that were simulated assuming the frequency distributions that given by Jaki et al. [8]. The EM and the MCMC algorithm (Additional file 1: Figures S9, S10) obtain more accurate frequency estimates, and are slightly less affected by LoD of the SNPs and the MOI than compared to the related statistical approaches. Another option of the MCMC algorithm includes update to the MOI could be proposed when patient MOI is unknown and accepted/rejected during the updating stage. Four different algorithms runs with changing the MOI, present in the Additional file 1 MCMC algorithm 2, 3 and 4. Initial analyses suggested updating the MOI during MCMC made little or no difference to the resulting haplotype frequency estimates (Additional file 1: Figures S11–S14). A potential additional advantage is that the MOI distribution would be updated each iteration and hence the MCMC algorithm naturally provides an estimated distribution of MOI frequencies. However, the estimated MOI distribution did not reliable recover the population MOI and the analyses suggested more weight was given to MOI = 2 (60–80 % of the MOI distribution) in those algorithm which did update MOI (i.e. algorithms 2, 3 and 4; Additional file 1: Figures S15–S18). In retrospect this is not surprising: all the methods seek to provide the simplest explanation of the data and any heterozygous genotype can always be explained most parsimoniously by two haplotypes in the blood sample; the fact that some estimates of MOI are greater than 2 presumably reflects the situation a "2-haplotype" explanation is untenable due to the low frequency of the two plausible haplotypes. One consequence of this is that it appears impossible to analyse SNP data to obtain a MOI distribution using the MCMC Algorithms described here. This is unfortunate as MOI distributions are useful epidemiological indicators, high MOI values tending to reflect higher transmission intensities, and may be useful to asses, for example, the impact of malaria control measures on malaria transmission rate. It may be possible to recover more accurate MOI distribution by forcing a distribution onto the data, typically a Poisson or Negative Binomial. This was done by previous authors but was not attempted here as the intention was to avoid having to pre-specify a MOI distribution and simply let the MCMC algorithm find the best explanation for the data. It did this with considerable success, the haplotype frequency estimates showing excellent correlation with the population/sample values even in the absence of MOI information. The study was conducted within a number of constraints imposed to ensure they appropriately address key questions in malaria research. The simulations were limited to two and three SNPs to simplify the comparison (results from haplotypes defined at three SNPs are presented in Additional file 1: Figures S19–S23, the same pattern irrespective of whether haplotypes are defined at two or three loci). This is consistent with previous analyses: MHF (MalHaploFreq) is limited to analysing up to three SNPs, the R-EM (malaria.em) can analyse more than three SNPs but required a considerable amount of computational time, and the Bayesian (Bayesian statistic) method, as implemented by Taylor et al. [6] is limited to handling up to seven SNPs. The novels methods described above did not have to limit the number of SNPs analysed but the examples were limited to three SNPs because the complexity of calculations rises exponentially with the number of SNPs and it is rarely necessary in practice to analyse more than three SNPs simultaneously [4] when investigating drug resistance haplotypes. However, calculating the frequencies of haplotypes that are defined at a large number of SNPs increases the computational time, the magnitude of this increase depending on the computer memory. It was considered important to recognize the technical limitations of genotyping so three values for levels of detection (LoDSNP and LoDMOI) were investigated. The MOI distribution reflected the default frequency distributions given by Jaki et al. [8] and represents an area of relatively intense malaria transmission where MOI tends to be high and where the statistical problems of correctly estimating haplotype frequencies are most severe. In summary, the two novel methods proposed here have advantages over previous methods of inferring haplotype frequency. If MOI is known the EM algorithm appears the most natural way to analyse the data. It is explicitly set up to incorporate MOI data on individual patients (in contrast to EM-R and Bayesian) and is much faster than MHF. It also appear robust to chance misclassification of MOI and to genotyping detection limits (e.g. Figure 5) If MOI information is absent, the MCMC algorithm seems a more natural way of analysing data as it allows the algorithm to fit individual MOI to each patient rather than, as in R-EM and Bayesian, forcing a pre-determined distribution (Poisson or negative Binomial) onto the MOI; it appears that, at least in the simulations analysed here, that MOI is underestimated (Additional file 1: Figure S15) but the accuracy of haplotype estimates is maintained and is comparable to the other methods (Additional file 1: Figures S11, S12). In addition, isolating MCMC from its usual Bayesian context means that decisions on prior distributions of haplotype frequency and MOI distribution can be avoided as the MCMC converges on accurate estimates of haplotype frequencies irrespective of initial assumptions of haplotype and MOI frequencies. The R code used for these simulations and analyses are freely available on request to GKD. EM: expectation–maximization MCMC: Markov chain Monte Carlo MOI: multiplicity of infection RAF: resistance allele frequencies LoD: limit of detection MHF: MalHapFreq R-EM: malaria.em MH-MCMC: Metropolis–Hastings Markov chain Monte Carlo ML: LE: linkage equilibrium SNPs: single nucleotide polymorphisms log-likelihood I F : similarity index Hastings IM, Nsanzabana C, Smith TA. A comparison of methods to detect and quantify the markers of antimalarial drug resistance. Am J Trop Med Hyg. 2010;83:489–95. Greenhouse B, Dokomajilar C, Hubbard A, Rosenthal PJ, Dorsey G. Impact of transmission intensity on the accuracy of genotyping to distinguish recrudescence from new infection in antimalarial clinical trials. Antimicrob Agents Chemother. 2007;51:3096–103. Wigger L, Vogt JE, Roth V. Malaria haplotype frequency estimation. Stat Med. 2013;32:3737–51. Hastings IM, Smith TA. MalHaploFreq: a computer programme for estimating malaria haplotype frequencies from blood samples. Malar J. 2008;7:130. Li X, Foulkes AS, Yucel RM, Rich SM. An expectation maximization approach to estimate malaria haplotype frequencies in multiply infected children. Stat Appl Genet Mol Biol. 2007;6:33. Taylor AR, Flegg JA, Nsobya SL, Yeka A, Kamya MR, Rosenthal PJ, et al. Estimation of malaria haplotype and genotype frequencies: a statistical approach to overcome the challenge associated with multiclonal infections. Malar J. 2014;13:102. R Development Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2014. Jaki T, Parry A, Winter K, Hastings I. Analysing malaria drug trials on a per-individual or per-clone basis: a comparison of methods. Stat Med. 2013;32:3020–38. Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete data via em algorithm. J R Stat Soc Ser B Methodol. 1977;39:1–38. McLachlan GJ, Krishnan T. The EM algorithm and extensions. New York: Wiley; 1997. Excoffier L, Slatkin M. Maximum-likelihood estimation of molecular haplotype frequencies in a diploid population. Mol Biol Evol. 1995;12:921–7. Hawley ME, Kidd KK. HAPLO: a program using the EM algorithm to estimate the frequencies of multi-site haplotypes. J Hered. 1995;86:409–11. Long JC, Williams RC, Urbanek M. An E–M algorithm and testing strategy for multiple-locus haplotypes. Am J Hum Genet. 1995;56:799–810. Gilks WR, Richardson S, Spiegelhalter DJ. Markov chain Monte Carlo in practice. London: Chapman & Hall; 1996. Roberts GO, Sahu SK. Updating schemes, correlation structure, blocking and parameterization for the Gibbs sampler. J R Stat Soc Ser B. 1997;59:291–317. Gelman A. Bayesian data analysis. 2nd ed. Boca Raton: Chapman & Hall/CRC; 2004. Stephens M, Smith NJ, Donnelly P. A new statistical method for haplotype reconstruction from population data. Am J Hum Genet. 2001;68:978–89. Stephens M, Donnelly P. A comparison of bayesian methods for haplotype reconstruction from population genotype data. Am J Hum Genet. 2003;73:1162–9. Casella G, Berger RL. Statistical inference. Pacific Grove: Brooks/Cole Pub. Co.; 1990. Fallin D, Schork NJ. Accuracy of haplotype frequency estimation for biallelic loci, via the expectation-maximization algorithm for unphased diploid genotype data. Am J Hum Genet. 2000;67:947–59. Tishkoff SA, Pakstis AJ, Ruano G, Kidd KK. The accuracy of statistical methods for estimation of haplotype frequencies: an example from the CD4 locus. Am J Hum Genet. 2000;67:518–22. Adkins RM. Comparison of the accuracy of methods of computational haplotype inference using a large empirical dataset. BMC Genet. 2004;5:22. Istrail S, Waterman MS, Clark AG. Computational methods for SNPs and haplotype inference: DIMACS/RECOMB satellite workshop, Piscataway, NJ, USA. In: Istrail S, Waterman MS, Clark A, editors. November 2002 revised papers. Berlin: Springer-Verlag; 2004. Jennison C, Arnott A, Tessier N, Tavul L, Koepfli C, Felger I, et al. Plasmodium vivax populations are more genetically diverse and less structured than sympatric Plasmodium falciparum populations. PLoS Negl Trop Dis. 2015;9:e0003634. Escalante AA, Ferreira MU, Vinetz JM, Volkman SK, Cui L, Gamboa D, et al. Malaria molecular epidemiology: lessons from the international centers of excellence for malaria research network. Am J Trop Med Hyg. 2015;93:79–86. Mara SE, Silue KD, Raso G, N'Guetta SP, N'Goran EK, Tanner M, et al. Genetic diversity of Plasmodium falciparum among school-aged children from the Man region, western Cote d'Ivoire. Malar J. 2013;12:419. Conceived the study, interpreted the results and wrote the manuscript: GKD, IMH. Wrote the computer code and ran the experiments: GKD. Both authors read and approved the final manuscript. We thank Prof. Tom Smith and colleagues at the Swiss Tropical and Public Health Institute for access to data. Field data will not be shared; anonymized datasets were shared, in confidence, with us by colleagues at the Swiss Tropical and Public Health Institute. We can put any interested researchers in contact with these colleagues. We are prepared to share compute code and our simulated datasets on request. Medical Research Council, UK; Grant Ref. MR/K014676/1. Liverpool School of Tropical Medicine, Pembroke Place, Liverpool, L5 3QA, UK Gie Ken-Dror & Ian M. Hastings Gie Ken-Dror Ian M. Hastings Correspondence to Gie Ken-Dror. Data that support and expand some of the interpretations and conclusions drawn in the main text, but whose inclusion would detract from the main argument. Ken-Dror, G., Hastings, I.M. Markov chain Monte Carlo and expectation maximization approaches for estimation of haplotype frequencies for multiply infected human blood samples. Malar J 15, 430 (2016). https://doi.org/10.1186/s12936-016-1473-5 Haplotype reconstruction Expectation–maximization algorithm
CommonCrawl
Leveraging discrete modulation and liquid metal antennas for interference reduction Mirza Uzair Baig ORCID: orcid.org/0000-0003-0000-504X1, Kareem S. Elassy ORCID: orcid.org/0000-0001-7368-62671, Anders Høst-Madsen ORCID: orcid.org/0000-0002-1472-49531, Aaron T. Ohta ORCID: orcid.org/0000-0003-3789-897X1, Wayne A. Shiroma ORCID: orcid.org/0000-0002-1294-27031 & Aria Nosratinia ORCID: orcid.org/0000-0002-3751-01652 EURASIP Journal on Wireless Communications and Networking volume 2021, Article number: 158 (2021) Cite this article Recent progress in the understanding of the behavior of the interference channel has led to valuable insights: first, discrete signaling has been discovered to have tangible benefits in the presence of interference, especially when one does not wish to decode the interfering signal, i.e., the interference is treated as noise, and second, the capacity of the interference channel as a function of the interference link gains is now understood to be highly irregular, i.e., non-monotonic and discontinuous. This work addresses these two issues in an integrated and interdisciplinary manner: it utilizes discrete signaling to approach the capacity of the interference channel by developing lower bounds on the mutual information under discrete modulation and treating interference as noise, subject to an outage set, and addresses the issue of sensitivity to link gains with a liquid metal reconfigurable antenna to avoid the aforementioned outage sets. Simulations illustrate the effectiveness of our approach. One of the main difficulties in communication over wireless networks is interference, many of whose characteristics can be modeled and understood via the Gaussian interference channel [1] (see Fig. 1). This model, which can be used as a building block for other interference networks, has a capacity that is not known in the most general case, but in recent years much progress has been made in understanding its complicated behavior, e.g., [2]. It is known that capacity of this channel model is non-monotonic in the strength of interference links. In particular, if the interference is strong enough, the capacity of each link is equivalent to interference-free capacity [3]. More recent developments have shown that the capacity of the interference channel is very sensitive to channel coefficients; the operational significance of this fact is in the proliferation of sets or "pockets" in the vector space of channel gains, called "outage sets," that are unable to support a desired set of transmission rates. Furthermore, evidence has been accumulating [4] that discrete signaling imparts tangible benefits in managing interference, because if applied at the right scale, it can push interference to locations in the signal space that are less damaging to the desired signal. This is especially important when the decoder does not know the structure of the interference, or for complexity reasons does not wish to decode the interference. Motivated by the observations made above, this paper makes contributions to interference mitigation that are of an interdisciplinary nature, via advances in signaling and coding as well as antenna design. The two key components of this paper mirror the issues highlighted above: (1) to get as close to the capacity of the interference channel as possible via discrete signaling and to efficiently and accurately characterize these achievable rates and (2) to address the issue of the sensitivity of the interference channel capacity with respect to link gains. The former is addressed by developing new bounds on the mutual information of the two-user Gaussian Interference channel under discrete modulations, subject to an outage set. The latter is addressed by a reconfigurable antenna technology to avoid as much as possible the aforementioned outage sets. For a discrete modulation transmission strategy at the transmitter, and a receiver that does not decode or peel off the interference (also known as treating interference as noise, specifically TINnoTS—Treating interference as noise with no time sharing [4]), we calculate a bound on mutual information with a constant gap \(O\left( \log \gamma \right)\) to capacity (excepting an outage set) in the strong interference channel (where capacity is known). Our technique builds on Ozarow and Wyner [5], and the gap is an improvement over the \(O\left( \log \left( \gamma ^{-1}\ln {\mathrm{min}} \left\{ h_1^2P,h_2^2P\right\} \right) \right)\) gap reported in [4] with mixed inputs. For avoiding the channel gains leading to outage, we propose to use liquid metal reconfigurable antennas. Liquid metal devices and circuits have been implemented using various reported techniques [6,7,8,9,10,11]. Liquid metal antennas have the capability of changing the radiation [12, 13], polarization [14, 15], and resonant frequency of individual antennas [16, 17], thereby modifying the gains between different nodes in a manner beyond what is possible with signal processing alone. The feasibility of avoiding problematic or singular channel gains with liquid metal antennas is demonstrated via experimental extraction of the operating parameters of the proposed liquid metal antennas and then using these parameters in extensive simulations of the channel states produced by the liquid metal antennas and matching it with the low minimum-distance channel gains suggested by analysis. An early version of this work appeared in [18]; the present paper goes beyond [18] in the following aspects: (a) Theorem 2 is reformulated and its expression is distinct from the results of [18], (b) Sect. 5 on successive interference cancellation is novel, (c) Sect. 6.1 involving analytical and simulation modeling based on a dipole model is new, (d) several insightful simulations, including those represented in Figs. 9, 11, 12, are novel with respect to [18]. Definitions and notation Throughout the paper, we use pulse amplitude modulation (PAM) for transmitted signals. Specifically, by n-PAM modulation we mean a uniform distribution over the points $$\begin{aligned} \pm \sqrt{\frac{3}{n^{2}-1}},\pm 3\sqrt{\frac{3}{n^{2}-1}},\ldots \pm \left( n-1\right) \sqrt{\frac{3}{n^{2}-1}},&\quad n \,\,{\text{even}} \\ 0,\pm 2\sqrt{\frac{3}{n^{2}-1}},\pm 4\sqrt{\frac{3}{n^{2}-1}},\ldots \pm \left( n-1\right) \sqrt{\frac{3}{n^{2}-1}},&\quad n\,\,{\text{odd}} \end{aligned}$$ which results in zero mean and unit variance. The support of a real-valued random variable, denoted with \({{\text{supp}}}[\cdot ]\), is the set of points \(x\in {\mathbb{R}}\) that have nonzero probability under that random variable. The cardinality of a set (including that of a support) is denoted with \(|\cdot |\). The symbol \(\sim\) is used as an operator which means a random variable being drawn according to a probability law, e.g., \({\mathcal{N}}\left( \beta \right)\) denotes a Gaussian probability law with zero mean and variance \(\beta \in {\mathbb{R}}^{+}\). With a constant on the right hand side, e.g., \(a\sim b\), this notation indicates that a, b are of the same order. The aim of the paper is to show that a combination of discrete modulation and liquid metal antennas can be used to achieve close to capacity in interference channels while treating interference as noise. The methodology used is fairly unique. The paper combines information theory calculations, specifically novel bounds on discrete modulation, with realistic simulations of liquid metal antennas. The simulations were conducted with ANSYS High-Frequency Structure Simulator (HFSS) software. The simulated antenna patterns are fed into the information theory bounds, while transmit powers are set at realistic values, as shown in results section. Channel model and discrete modulation As argued in Introduction, to investigate the effect of interference, we consider the interferencer channel in Fig. 1. Channel model The received signals are $$\begin{aligned} Y_{1}&=\sqrt{P}h_{1}X_{1}+\sqrt{P}h_{2}X_{2}+Z_{1} \\ Y_{2}&=\sqrt{P}h_{1}X_{2}+\sqrt{P}h_{2}X_{1}+Z_{2}, \end{aligned}$$ where the random variables \(X_{i},X_{j},Z_{i},Z_{j}\) are mutually independent, \(Z_{i},Z_{j}\sim {\mathcal{N}}(N)\) are independent Gaussian random variable with variance N, and \({\mathbb{E}}\left[ X_{1}^{2}\right] ,{\mathbb{E}}\left[ X_{2}^{2}\right] \le 1\), and P is the power constraint on the transmitters. The coefficients \(h_{1}\) and \(h_{2}\) are the channel gains. As shown in Fig. 1 and Eq. (2), we only consider the symmetric interference channel. In principle, results can be generalized to non-symmetric channels, but results become quite messy; in many expositions on the interference channel [1, 2], the authors start with the symmetric channel, which we will follow. In line with the symmetry of the channel, we will consider symmetric capacity, i.e., where the users share the capacity equally. There are a few cases where the capacity of the interference channel is known. If the interference is (very) weak, \(|h_{2}|\ll |h_{1}|\), specifically \(\frac{|h_{2}|}{|h_{1}|}\left( 1+h_{2}^{2}\frac{P}{N}\right) \le \frac{1}{2}\), the sum capacity is achieved by using Gaussian signaling and treating interference as noise (TIN) [19, 20], $$\begin{aligned} C_{\mathrm{sum}}=\log \left( \frac{h_{1}^{2}\frac{P}{N}}{1+h_{2}^{2}\frac{P}{N}}\right) , \end{aligned}$$ i.e., a symmetric capacity of \(C=\frac{1}{2} \log \left( \frac{h_{1}^{2}{P/N}}{1+h_{2}^{2}{P/N}}\right)\) On the other hand, if the interference is strong \(|h_{2}|>|h_{1}|\), capacity can be achieved with Gaussian signaling and joint decoding at the receivers [21, 22], achieving the following symmetric capacity $$\begin{aligned} C={\mathrm{min}} \left\{ \frac{1}{2}\log \left( 1+h_{1}^{2}\frac{P}{N}\right) ,\frac{1}{4}\log \left( 1+(h_{1}^{2}+h_{2}^{2})\frac{P}{N}\right) \right\} . \end{aligned}$$ While treating interference as noise might be optimal for weak interference, the rate (3) is far from capacity (4) for strong interference. However, [4] showed that when discrete modulation is used, it is often possible to get close to the capacity (4) with TIN (specifically TINnoTS—treating interference as noise with no time sharing). This is explained in Fig. 2. To decode the desired signal, we can simply take the modulus with respect to the interfering signal [4, Sect. VIII] and then decode as though there was no interference. Specifically, we "fold" the signal into the interval \(\left[ -|h_2|\sqrt{\frac{3}{n^2-1}},|h_2|\sqrt{\frac{3}{n^2-1}}\right]\) for n odd (see (1)). This only requires knowledge of the modulation of the interfering signal, not the encoding. Thus, discrete modulation is advantageous for treating interference as noise. Discrete interference. The red curve is the pdf of the received signal. The blue curve is the pdf of the received signal after modulus with the interference constellation; this is the same as if there had been no interference In this paper, we extend the results in [4] to show that TINnoTS can always get close to capacity, except in a few outage regions. We will then show that these outage regions can be avoided through liquid metal antennas. A major issue with discrete modulation is that it is difficult to calculate mutual information and therefore achievable rate. An elegant lower bound was derived by Ozarow and Wyner in [5] and improved in [4]. Let \(X_d\) be any discrete random variable; then $$\begin{aligned} I\left( X_{d};\sqrt{P}h_{1}X_{d}+Z\right) \ge H\left( X_{d}\right) -\frac{1}{2}\log \left( 1+\frac{12}{\frac{P}{N}d_{{\mathrm{min}} }^{2}\left( h_{1}X_{d}\right) }\right) -c_{0}, \end{aligned}$$ where \(c_{0}=\frac{1}{2}\log \left( \frac{2\pi e}{12}\right)\), \(H(\cdot )\) is discrete entropy; the key parameter in this bound is \(d_{{\mathrm{min}} }(h_{1}X_{d})\), which is the smallest distance between constellation points in the received signal. This bound, for the point-to-point channel, can also be used to bind the achievable rate in the interference channel. Let $$\begin{aligned} {\widetilde{X}}_{d}=Y_{1}-Z_{1}=\sqrt{h_1^2P}X_{1}+\sqrt{h_2^2P}X_{2} \end{aligned}$$ be the received signal at receiver 1 before noise is added. Notice that except on a set of probability zero of \(h_1, h_2\) there is a one-to-one map \((X_1,X_2)\leftrightarrows {\widetilde{X}}_d\) as \(X_1\) and \(X_2\) are discrete. We can then bound the achievable rate by $$\begin{aligned} & I\left( X_{1};Y_{1}\right) \,{\mathop{=}\limits^{a}}\,I\left( {\widetilde{X}}_{d};Y_{1}\right) -I\left( X_{2};\sqrt{h_{2}^{2P}}X_{2}+Z_{1}\right) \\ & \quad\, {\mathop{\ge }\limits^{b}}\, I\left( {\widetilde{X}}_{d};Y_{1}\right) -H\left( X_{2}\right) . \\ & \quad \,{\mathop{\ge }\limits^{c}} \,H\left( {\widetilde{X}}_{d}\right) -\frac{1}{2}\log \left( 1+\frac{12}{\frac{P}{N}d_{\mathrm{min}}^{2}\left( {\widetilde{X}}_{d}\right) }\right) -c_{0}-H\left( X_{2}\right) . \end{aligned}$$ Here (a) is from the chain rule of entropy [23]: \(I({{\widetilde{X}}}_d;Y_1)=I(X_1,X_2;Y_1)=I(X_1;Y_1)+I(X_2;Y_1|X_1)\), where in the first step we have used the one-to-one map \((X_1,X_2)\leftrightarrows {\widetilde{X}}_d\). The inequality (b) is because mutual information is upper bounded by entropy, and (c) is from (5). Minimum distance. Minimum distance of sum of two PAM signals with \(n=10\) as a function of \(h_2\) with \(h_{1}=1\). \(d_{{\mathrm{min}} }\left( {\widetilde{X}}_{d}\right)\) is plotted by calculating the smallest distance in the sum constellation \(\sqrt{h_{1}^{2}P}X_{1}+\sqrt{h_{2}^{2}P}X_{2}\) with \(X_{1},X_{2}\) generated as n-PAM inputs The bound depends critically on \(d_{{\mathrm{min}} }\left( {\widetilde{X}}_{d}\right)\). The minimum distance of a sum of discrete distributions is a complicated function. Even if each distribution has a large minimum distance, the sum can have a small minimum distance, and the minimum distance is very sensitive to the values of \(h_{1}\) and \(h_{2}\) (Fig. 3). Small changes can lead to large changes in \(d_{{\mathrm{min}} }\). For some values of \(h_{1}\) and \(h_{2}\) the rate can be very small. This leads to the second thrust in this paper: the only way to overcome this problem is to change \(h_{1}\) and \(h_{2}\), perhaps by small amounts. For this purpose, we analyze liquid metal antennas. We first present an improved lower bound. Let \(p_{{\mathrm{max}} }={\mathrm{max}} \{P(X_{d})\}\) be the maximum probability assumed by the discrete random variable \(X_{d}\) and \(c_{2}=\frac{1}{2}\log \left( \frac{2}{e}\right)\). Then, $$\begin{aligned} I\left( X_{d};\sqrt{P}X_{d}+Z\right)&\ge c_{2}-\log \left( p_{{\mathrm{max}} }\right) -\log \Biggl [1+2e^{-\frac{1}{4}\frac{P}{N}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) } \\ &\quad +\sqrt{\frac{\pi }{\frac{{P/N}}{4}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }}{\text{Erfc}}\left( \sqrt{\frac{{P/N}}{4}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }\right) \Biggr ] \end{aligned}$$ $$\begin{aligned} &\ge c_{2}-\log \left( p_{{\mathrm{max}} }\right) -\log \Biggl [1+\sqrt{\frac{\pi }{\frac{{P/N}}{4}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }}\Biggr ]. \end{aligned}$$ The proof is in "Proof of Proposition 1" Appendix. The bound (8) does not utilize special functions (e.g., error function) and offers analytical convenience similar to the Ozarow–Wyner bound. The benefit of (7) in Proposition 1 is that it is both analytical while close to the numerical bound of [4, Eq. (18a)]. Treating interference as noise (TIN) As discussed above, we consider the strong interference regime with capacity given by (4). For large P, the second term in (4) dominates, which expresses that the the receiver has to jointly be able to decode the two messages. We will show that by instead treating interference as noise, we can achieve essentially the same rate, specificallyFootnote 1 $$\begin{aligned} R>C-c=\frac{1}{4}\log \left( 1+(h_{1}^{2}+h_{2}^{2})\frac{P}{N}\right) -c, \end{aligned}$$ where c is a constant independent of P, but potentially dependent on \(h_{1},h_{2}\). This is called a constant gap approximation [2]. When P is large, this is close to capacity as the first term increases while the second stays constant. The following Theorem formalizes this: Theorem 2 In the strong interference regime \(\left| h_{2}\right| >\left| h_{1}\right|\), discrete modulation achieves a constant gap to capacity, $$\begin{aligned} R>\frac{1}{4}\log \left( 1+(h_{1}^{2}+h_{2}^{2})\frac{P}{N}\right) -c, \end{aligned}$$ as follows \(h_{2}^{2}\ge h_{1}^{4}\frac{P}{N}\): (10) is true for all values of \(h_{1},h_{2}\) and c is independent of \(h_{1},h_{2}\). \(h_{1}^{2}<h_{2}^{2}<h_{1}^{4}\frac{P}{N}\): Let $$\begin{aligned} {\tilde{h}}_{i}\triangleq 2^{\mathrm{frac}(\frac{1}{2}\log (h_{i}^{2}{P/N}))}\in [1,2), \end{aligned}$$ where \(\mathrm{frac}(x)\triangleq x-\lfloor x\rfloor\). Then for every \(0<\gamma <1\) there exists a set \(B\in [1,2)^{2}\) with area less than \(\gamma <1\) so for \({\tilde{h}}_{1},{\tilde{h}}_{2}\notin B\), (10) is true and where now \(c=O(\log \gamma )\). The proof is in "Proof of Theorem 2" Appendix. For Case (1) of Theorem 2, from the proof it can be seen that the rate can be achieved with symmetrical PAM modulation with $$\begin{aligned} n=\left\lceil \sqrt{\frac{3}{4}}\frac{|h_{2}|}{|h_{1}|}\right\rceil . \end{aligned}$$ On the other hand, in Case (2), the rate is achieved with $$\begin{aligned} n&=\left\lfloor \left( 1+h_1^2\frac{P}{N}+h_2^2\frac{P}{N}\right) ^{\frac{1}{4}}\right\rfloor -1. \end{aligned}$$ In Fig. 4, we use the bound developed in Sect. 3 and the input specified in (11) to illustrate the constant gap to capacity. The constant gap in Case (2) was obtained by a probabilistic argument, where the channel coefficients \(\left( h_{1},h_{2}\right)\) were varied while keeping other parameters fixed. This is because the minimum distance of the received signals varies (see Fig. 5), thus affecting the achievable rate using PAM inputs. Gap to capacity using PAM input specified in (11). Here \(h_{2}=\sqrt{h_{1}^{4}\frac{P}{N}+\frac{h_{1}^{2}}{2}}\). The top curve 'blue dashed line' is the analytic lower bound of Eq. (5). The curve in the middle plotted with 'green line' is the analytic lower bound of Prop. 1, and the marker 'red square' is the numerical lower bound using Eq. (26). Analytical bounds only need the value of \(d_{{\mathrm{min}} }\left( Y_{i}-Z_{i}\right)\), whereas the numerical bound (26) requires the entire \({\text{supp}}\left[ \sqrt{h_1^2P}X_{i}+\sqrt{h_2^2P}X_{j}\right]\) Minimum distance. Minimum distance at the receiver \(Y_{1}\) for the discrete constellation \({\widetilde{X}}_d\) as the channel coefficients vary. Here \(P/N=27.3\) dB, and we use n from (12) Theorem 2 is significant as it achieves constant gap to capacity for the complete strong interference channel. It contradicts the claim in [4, Sect. IV B, Remark 6] that an improvement from the log–log gap might be impossible. Case 2), as opposed to Case 1), relies on certain good channel coefficients \(\left( h_{1},h_{2}\right)\) to achieve a constant gap independent of P, from capacity using discrete inputs. The set of channel coefficients where this is not possible are termed as bad channel coefficients as they lead to outages whose probability can be controlled. Outages is of course a problem in a fixed channel. The outages are permanent, i.e., little information can be transmitted. They are an unfortunate downside to use of discrete modulation and can only be avoided by changing \(h_{1}\) or \(h_{2}\). In Sect. 6, we therefore consider liquid metal antennas to deal with the outages. Successive interference cancellation (SIC) In successive interference cancellation, the receiver decodes the interfering signal first, subtracts it from the received signal, and then decodes the desired signal. We show that using discrete modulation can also increase the rate of SIC. In [24], the authors showed that by dividing each user's transmission into multiple streams and alternatingly decoding the streams, higher rates can be achieved. Specifically, User i transmits \(X_{i}=\sum _{k=1}^{L}X_{i}^{\left\{ k\right\} }\), and the decoding order is $$\begin{aligned} X_{2}^{\left\{ 1\right\} },X_{1}^{\left\{ 1\right\} },X_{2}^{\left\{ 2\right\} },X_{1}^{\left\{ 2\right\} },\ldots ,X_{2}^{\left\{ L\right\} },X_{1}^{\left\{ L\right\} },&\quad \text{for}\,\,\,\,Y_{1} \\ X_{1}^{\left\{ 1\right\} },X_{2}^{\left\{ 1\right\} },X_{1}^{\left\{ 2\right\} },X_{2}^{\left\{ 2\right\} },\ldots ,X_{1}^{\left\{ L\right\} },X_{2}^{\left\{ L\right\} },&\quad \text{for}\,\,\,\,Y_{2}. \end{aligned}$$ It turns out that using discrete modulation for the streams can further increase rate. Now, instead of treating interference as Gaussian noise when decoding higher layers, it is treated as discrete noise, and the techniques for treating interference as noise from Sect. 4 can be used. Unfortunately, it is difficult to calculate exact rate expressions, so we instead show this with an example. When using discrete modulation, one has to optimize over the number of streams L, the power allocation for each stream \(\mathtt{P}_{k}\) and the number of levels \(n_{k}\). For the former two, we heuristically use the same parameters as for the Gaussian case, and we then numerically optimize over \(n_{k}\). Figure 6 shows the performance in one case. In this case \(L=2\) (which is optimum for the Gaussian case). We calculate rate two ways: using [4, Eq. (18a)] (the lower bound) and using numerical integration for calculating entropies of mixture Gaussians. We plot the performance of two schemes SIC and TIN: 2 Layers in Fig. 6. The sum rate for both users in the SIC case is found to be $$\begin{aligned}&2\Biggl ({\mathrm{min}} \left\{ I\left( X_{2}^{\left\{ 1\right\} };Y_{1}\right) ,I\left( X_{2}^{\left\{ 1\right\} };Y_{2}\mid X_{1}^{\left\{ 1\right\} }\right) \right\} \\ &\quad +{\mathrm{min}} \left\{ I\left( X_{2}^{\left\{ 2\right\} };Y_{1}\mid X_{1}^{\left\{ 1\right\} },X_{2}^{\left\{ 1\right\} }\right) ,I\left( X_{2}^{\left\{ 2\right\} };X_{2}^{\left\{ 2\right\} }+Z_{2}\right) \right\} \Biggr ), \end{aligned}$$ whereas for the TIN: 2 Layers case is $$\begin{aligned} 2\left( I\left( X_{1}^{\left\{ 1\right\} };Y_{1}\right) +I\left( X_{1}^{\left\{ 2\right\} };Y_{1}\mid X_{1}^{\left\{ 1\right\} }\right) \right) . \end{aligned}$$ We briefly discuss rate expressions for the SIC scheme. Consider the receiver at \(Y_{2}\) interested in layers \(X_{2}^{\left\{ 1\right\} }\) and \(X_{2}^{\left\{ 2\right\} }\) (of User 2). The rate for Layer 1 is the minimum of two terms. The first term \(I\left( X_{2}^{\left\{ 1\right\} };Y_{1}\right)\) exists as \(X_{2}^{\left\{ 1\right\} }\) is interference to User 1 and needs to be decoded and peeled off. The second term \(I\left( X_{2}^{\left\{ 1\right\} };Y_{2}\mid X_{1}^{\left\{ 1\right\} }\right)\) ensures User 2 can successfully decode Layer 1 after subtracting the interference of Layer 1 from User 1. Similar is the case for Layer 2 which is decoded first at User 1 (after decoding \(X_{2}^{\left\{ 1\right\} },X_{1}^{\left\{ 1\right\} }\)) and then at User 2. The lower bounds for each scheme are also plotted. For the SIC scheme, we use the following lower bound. $$\begin{aligned}&2\left({\mathrm{min}} \left\{ I_{\mathrm{DTD}}\left( Y_{1};Y_{1}-Z_{1}\right) -\log \left( n_{1}n_{2}n_{2}\right) ,\right.\right. \\ &\quad \left. I_{\mathrm{DTD}}\left( Y_{1};Y_{1}-Z_{1}\mid X_{2}^{\left\{ 1\right\} }\right) -\log \left( n_{2}n_{2}\right) \right\} \\ &\quad +{\mathrm{min}} \left\{ I_{\mathrm{DTD}}\left( Y_{1};Y_{1}-Z_{1}\mid X_{2}^{\left\{ 1\right\} },X_{1}^{\left\{ 1\right\} }\right) -\log \left( n_{2}\right) ,\right. \\ &\quad \left.\left. I_{\mathrm{DTD}}\left( Y_{1};Y_{1}-Z_{1}\mid X_{2}^{\left\{ 1\right\} },X_{1}^{\left\{ 1\right\} },X_{2}^{\left\{ 2\right\} }\right) \right\} \right). \end{aligned}$$ In the TIN: 2 Layers case, we lower bound the expression in (14) with $$\begin{aligned} 2\left( I_{\mathrm{DTD}}\left( Y_{1};Y_{1}-Z_{1}\right) -\log \left( n_{1}n_{2}n_{2}\right) +I_{\mathrm{DTD}}\left( Y_{1};Y_{1}-Z_{1}\mid X_{1}^{\left\{ 1\right\} }\right) -\log \left( n_{1}n_{2}\right) \right) . \end{aligned}$$ Here, the mutual information terms with a subscript 'DTD,' e.g., \(I_{\mathrm{DTD}}\) \(\left( Y_{1};Y_{1}-Z_{1}\right)\), are lower bounds to the usual mutual information expression \(I\left( Y_{1};Y_{1}-Z_{1}\right)\). These are calculated via the numerical lower bound provided in [4, Eq. (18a)], or (26). Performance of successive interference cancellation. Here \(P/N=37\) dB and \(h_{1}=1\). The top curve 'red line' is the upper bound by evaluating \(0.5\log \left( 1+h_2^2\frac{P}{N}+h_1^2\frac{P}{N}\right)\). The markers 'green asterisk' and 'blue plus' are lower bounds for Discrete Modulated inputs with SIC. Similarly, The markers 'green square box' and 'violet circle' are lower bounds for Discrete Modulated inputs with 2 layers and TIN. In each scheme, the first marker is evaluated using a numerical bound and the latter is plotted with the analytical lower bound. The 'black triangle' marker is the rate with Gaussian Modulation as proposed in [24] The important conclusions for this figure are Discrete modulation outperforms Gaussian modulation. This is proved by Proposition 1 and further by numerical integration. The discrete modulation uses PAM and therefore has a shaping loss. Using binomial modulation or quantized Gaussian would give a further gain. Treating interference as noise with discrete modulation is mostly better than successive interference cancellation with Gaussian modulation. Successive interference cancellation with discrete modulation gives a modest gain over treating interference as noise with discrete modulation. The fluctuations in the curves are not due to inaccurate numerical calculation. They are due to the outage behavior explained in Sect. 3, which also affects successive interference cancellation. Minimizing outages with liquid metal antennas Reconfigurable antennas with adjustable directive gain have the potential to minimize network outages by varying the channel coefficients. One approach to realizing reconfigurable antennas is using liquid metal as the radiating element [6, 7, 9,10,11,12,13,14,15,16,17]. To quantify the effects of liquid metal reconfigurable antennas, an interference channel is simulated assuming standard microstrip patch antennas as the transmitters (\({\mathrm{Tx}}_1\) and \({\mathrm{Tx}}_2\)) and two reconfigurable liquid metal dipole antennas as the receivers (\({\mathrm{Rx}}_1\) and \({\mathrm{Rx}}_2\)) in a square configuration (Fig. 7). The microstrip patch antenna is designed to radiate at 1.58 GHz, with a maximum gain of 7.1 dBi at 0° with respect to receiver \({\mathrm{Rx}}_1\) and a gain of 2.9 dBi at 45° with respect to \({\mathrm{Rx}}_2\). The distance between each transmitter–receiver pair is varied between 100 and 500 m. Figure 8a shows the architecture of a liquid metal dipole antenna which is capable of reconfigurable lengths and orientations, resulting in variable radiation patterns with lobes and nulls in varying direction. Antenna model. Gaussian interference channel model with a static transmit (Tx) antennas and reconfigurable receive (Rx) antennas, used for simulating various states Adapted from [14] Liquid metal antennas. The reconfigurable liquid metal dipole antenna (a) device structure (b) simulated radiation patterns. This structure allows the flexibility to vary independently both the length and orientation in each arm. Analytical and simulation modeling for finite-length dipoles To obtain the channel performance discussed in Sect. 4, the channel coefficients \(h_1\) and \(h_2\) are calculated from the ratio of the transmitted power and received power using the Friis equation [25] $$\begin{aligned} h_1^2&= \frac{P_{Rx1}}{P_{Tx1}}={{G_{Tx}(\theta _1)}}\ {{G_{Rx1}(\theta _1+r_1)}}\left( \frac{\lambda }{4\pi d_1}\right) ^2 \end{aligned}$$ $$\begin{aligned} h_2^2&= \frac{P_{Rx2}}{P_{Tx1}}={{G_{Tx}(\theta _2)}}\ {{G_{Rx2}(\theta _2+r_2)}}\left( \frac{\lambda }{4\pi d_2}\right) ^2, \end{aligned}$$ where \(d_1\) and \(d_2\) are the distances in [m] between \(Tx_1-Rx_1\) and \(Tx_1-Rx_2\), respectively; \(r_1\) and \(r_2\) are the angle rotations for receiving antennas as illustrated in Fig. 8; \(\theta _1=0^\circ\) and \(\theta _2=45^\circ\) are the angles between \(Tx_1-Rx_1\) and \(Tx_1-Rx_2\), respectively; \(G_{T_x} (0)=7.1\) dBi and \(G_{T_x} (\pi /4 )= 2.9\) dBi are the gains of \(T_x\) at angle \(\theta _1\) and \(\theta _2\), respectively. The gain of an antenna can be calculated from [25]: $$\begin{aligned} G(\theta ) = D_m F_n(\theta ), \end{aligned}$$ where \(D_m\) is the maximum directivity and \(F_n(\theta )\) is the normalized power density function. The receiving antennas in Fig. 7 are assumed to be dipole antennas, with \(F_n(\theta )\) and \(D_m\) for a finite-length dipole antenna calculated from [25]: $$\begin{aligned} F_n(\theta ) = \frac{F(\theta )}{F(\theta )|_{{\mathrm{max}}}} \end{aligned}$$ $$\begin{aligned} F(\theta ) = \left[ \frac{{\mathrm{cos}} \left[ k\frac{L}{2} {\mathrm{cos}} \theta \right] -{\mathrm{cos}} \left[ k\frac{L}{2}\right] }{\sin \theta }\right] ^2, \qquad k = \frac{2\pi }{\lambda } \end{aligned}$$ $$\begin{aligned} D_m = \frac{4\pi }{\Omega _A} \end{aligned}$$ $$\begin{aligned} \Omega _A = 2\pi \int _0^\pi F_n(\theta )\sin \theta \; d\theta , \end{aligned}$$ where \(\Omega _A\) is defined as the beam solid angle and L is the dipole length \([n\times \lambda ]\) Radiation pattern. Simulated radiation pattern for a dipole antenna at different lengths Gain. Unnormalized gain of a dipole antenna with reconfigurable length. For a 2\(\lambda\) antenna, there is a 27.4-dB difference in gain between the − 45° and − 90° directions Although a dipole antenna is typically operated at the frequency where its physical length corresponds to 0.5\(\lambda\), Fig. 9 shows the change in radiation pattern when the physical length is increased to 2\(\lambda\) and 3\(\lambda\). The simulations were conducted with ANSYS High-Frequency Structure Simulator (HFSS) software, with the dipoles modeled as 500-μm-diameter wires separated by a 1-mm gap. Changing the length of the dipole while keeping the operating frequency constant results in a variety of radiation patterns, with lobes and nulls in varying directions. Figure 10 illustrates the contrast in directive gain at − 90° and − 45°, corresponding to the directions of the two transmitters in Fig. 7. The contrast in gain results in a corresponding contrast in the channel coefficients. A dipole antenna with a variable length can be realized by using liquid metal as the radiating elements, enabling adaptive reconfiguration of the radiation pattern. Liquid metal antennas Gallium-based liquid metal alloys have been used to implement reconfigurable antennas that change operating frequency [17], polarization [15], and radiation pattern [12, 13]. Galinstan is a non-toxic eutectic liquid metal that is made of gallium, indium, and tin, with an electrical conductivity of \(3.5\times 10^{5}\) S/m and a surface tension of 0.535 N/m [26]. Actuating Galinstan to realize a pattern-reconfigurable liquid metal antenna can alter the receivers' gain, providing a wide range of antenna states. Here, liquid metal is used to realize a pattern-reconfigurable antenna that can alter its gain and radiation pattern to increase achievable capacity. Simulated radiation pattern 1. Simulated radiation patterns for a liquid metal dipole antenna with varying angles between the radiating elements Simulated radiation pattern 2. Simulated radiation patterns for a liquid metal dipole antenna with superpositions of dipole elements In [14], we reported the experimental results of a liquid metal polarization-pattern-reconfigurable dipole antenna. The liquid metal was electrically actuated using a low-voltage (5-VDC) electrocapillary actuation [27] signal to create five discrete states. Each state was active at a time when the liquid metal is actuated to fill the arms of the desired state, corresponding to the angle of the dipole antenna arms (0°, ± 45°, and ± 90° configurations). These states varied the polarizations and the radiation patterns. The device architecture (Fig. 8a) can be modified to enable more states by adjusting the length of each arm, thus enhancing the tunability of the channel coefficients (\(h_{1}\) and \(h_{2}\)). To implement dipole elements of any length, a gallium-based liquid metal such as Galinstan is used in the dipole arms. The lengths of the arms can be continuously changed using hybrid electrocapillary actuation [27]. Figure 8b shows that this gives flexibility in changing the channel coefficients, which is of crucial importance in the interference channel because of the sensitivity to channel coefficients. This would be difficult to achieve with other technologies. The total number of simulated states for this architecture is 87, obtained by varying both the dipole arm lengths (from 0.1\(\lambda\) to 3\(\lambda\) with a step of 0.1\(\lambda\)), the angle between the arms (Fig. 11), and creating superpositions of the previously mentioned states (Fig. 12). We now show the results of using discrete modulation together with liquid metal antennas in the setup in Fig. 7. As mentioned above, for each distance d we simulate 87 channel states which we denote \({\mathcal{S}}_{d}=\left\{ \left( h_{1,1},h_{2,1}\right) ,\left( h_{1,2},h_{2,2}\right) ,\ldots \left( h_{1,87},h_{2,87}\right) \right\}\). Our goal is specifically to demonstrate how liquid metal antennas can be used to avoid outages; outages happen for channel coefficients in states that satisfy the condition of Case 2) in Theorem 2, and we therefore only consider channel coefficients in the following set $$\begin{aligned} {\mathcal{S}}_{d}^{\prime}:=\left\{ \left( h_{1,k},h_{2,k}\right) \Bigl |\left( h_{1,k},h_{2,k}\right) \in {\mathcal{S}}_{d},\ h_{1,k}^{2}<h_{2,k}^{2}<h_{1,k}^{4}\frac{P}{N}\right\} \end{aligned}$$ The transmit power is \(4\times 10^{-10}\) W/Hz, which corresponds to a received SNR of around 25 dB at a distance of 250 m. Capacity for antenna states. For different liquid metal states that satisfy \(h_{1}^{2}<h_{2}^{2}<h_{1}^{4}\frac{P}{N}\), we plot the capacity with Gaussian inputs '\(-\square -\) and the achievable sum rate (via the lower bound in Proposition 1) using discrete PAM inputs '\(-\circ\)' To visualize the variability in the performance across different antenna states, in Fig. 13 we plot both the achievable rate with discrete modulated n-PAM signals and the capacity. The achievable rate for PAM inputs is plotted using the lower bound developed in Proposition 1. The number of levels n is based on Theorem 2, specifically (12). For reference, at 25 dB the interference-free capacity per user is 4 bits, i.e., a sum capacity of 8 bits. We notice from the figure that the capacity varies only moderately with the state: most states give a good rate. On the other hand, the achievable rate varies widely and many states give poor performance, which is exactly the outage behavior. The figure illustrates how liquid metal antennas can avoid the outages: The capacity is maximized in state 9, but the rate with discrete modulation is poor; moving to state 12 gives a rate close to maximum capacity. Gap to capacity. The plot shows the ratio of achievable rate to capacity for different channel coefficients (i.e., due to a change in the geometry of the setup in Fig. 7). From top to bottom, the curves represent the maximum, average, and minimum performance for all states in \({\mathcal{S}}_d^{\prime}\) of the liquid metal antenna of Fig. 8 In Fig. 14, we illustrate the variability of rate with discrete modulation for different distances, d. The y-axis is the ratio of the sum rate achieved using discrete modulation to the capacity. Outage avoidance. Demonstration of how liquid metal antennas avoid outages. See text for explanation We further investigate the behavior in Fig. 13 by plotting the metrics $$\begin{aligned} \rho _{1}\left( d\right)&=\frac{{\mathrm{max}} _{{\mathcal{S}}_{d}^{\prime}} R_{\mathrm{discrete}}}{{\mathrm{max}} _{{\mathcal{S}}_{d}^{\prime}} C} \end{aligned}$$ $$\begin{aligned} \rho _{2}\left( d\right)&=\frac{R_{\mathrm{discrete}}(s^*)}{{\mathrm{max}} _{{\mathcal{S}}_{d}^{\prime}} C}. \end{aligned}$$ Here \(R_{\mathrm{discrete}}(s^*)\) denotes the rate with discrete modulation in the state \(s^*\) that maximizes capacity. For example, in Fig. 13\(s^*\) is state 9, so (24) is the ratio of discrete rate to capacity in state 9. On the other hand, (23) is the ratio of the maximum discrete rate, achieved in state 12, to the maximum capacity achieved in state 9. The metrics are plotted in Fig. 15. The result can be seen as evidence that the liquid metal antennas are not only maximizing capacity, but also fine-tuning the channel to avoid outages. Namely, \(\rho _1(d)>\rho _2(d)\) means that the capacity-maximizing state is in outage. The liquid metal antenna can then move to a similar state which gives a much higher rate. In the figures, this happens when the receivers are close to the transmitters. This paper investigated interference reduction via discrete modulation and liquid metal antennas. Discrete modulation can be used to reduce the effect of interference when treated as noise. The tradeoff is that there are certain configurations where the rate is very bad, leading to outage regions. A key insight of this paper is to utilize discrete modulation for good performance excepting some few weak spots and using liquid metal antennas to mitigate any weaknesses of discrete modulation. We use C for capacity and R for achievable rate. TIN: Treating interference as noise TINnoTS: Treating interference as noise with no time sharing PAM: Pulse amplitude modulation Successive interference cancellation GHz: Giga hertz Tx: Rx: dBi: Decibel for isotropic antenna HFSS: High-Frequency Structure Simulator VDC: Voltage direct current A.E. Gamal, Y.H. Kim, Network Information Theory (Cambridge University Press, Boston, 2011) R.H. Etkin, D.N.C. Tse, H. Wang, Gaussian interference channel capacity to within one bit. IEEE Trans. Inf. Theory 54(12), 5534–5562 (2008). https://doi.org/10.1109/tit.2008.2006447 A. Carleial, A case where interference does not reduce capacity. IEEE Trans. Inf. Theory (Corresp.) 21(5), 569–570 (1975) A. Dytso, D. Tuninetti, N. Devroye, Interference as noise: friend or foe? IEEE Trans. Inf. Theory 62(6), 3561–3596 (2016). https://doi.org/10.1109/tit.2016.2553098 L.H. Ozarow, A.D. Wyner, On the capacity of the Gaussian channel with a finite number of input levels. IEEE Trans. Inf. Theory 36(6), 1426–1428 (1990). https://doi.org/10.1109/18.59937 D.J. Hartl, G.J. Frank, G.H. Huff, J.W. Baur, A liquid metal-based structurally embedded vascular antenna: I. Concept and multiphysical modeling. Smart Mater. Struct. 26(2), 025001 (2016). https://doi.org/10.1088/1361-665X/aa5142 K.S. Elassy, T.K. Akau, W.A. Shiroma, S. Seo, A.T. Ohta, Low-cost rapid fabrication of conformal liquid–metal patterns. Appl. Sci. 9(8), 1565–1578 (2019). https://doi.org/10.3390/app9081565 A.M. Watson, K. Elassy, T. Leary, M.A. Rahman, A. Ohta, W. Shiroma, C.E. Tabor, Enabling reconfigurable all-liquid microcircuits via Laplace barriers to control liquid metal. In: IEEE MTT-S International Microwave Symposium, pp. 188–191. IEEE, Boston, MA, USA (2019). 978-1-7281-1310-4 M. Wang, C. Trlica, M.R. Khan, M.D. Dickey, J.J. Adams, A reconfigurable liquid metal antenna driven by electrochemically controlled capillarity. J. Appl. Phys. 117(19), 194901 (2015). https://doi.org/10.1063/1.4919605 V. Bharambe, D.P. Parekh, C. Ladd, K. Moussa, M.D. Dickey, J.J. Adams, Liquid–metal-filled 3-D antenna array structure with an integrated feeding network. IEEE Antennas Wirel. Propag. Lett. 17(5), 739–742 (2018). https://doi.org/10.1109/LAWP.2018.2813309 M.U. Memon, K. Ling, Y. Seo, S. Lim, Frequency-switchable half-mode substrate-integrated waveguide antenna injecting eutectic gallium indium (EGaIn) liquid metal alloy. J. Electromagn. Waves Appl. 16, 2207–2215 (2015). https://doi.org/10.1080/09205071.2015.1087347 G.B. Zhang, R.C. Gough, M.R. Moorefield, K.S. Elassy, A.T. Ohta, W.A. Shiroma, An electrically actuated liquid–metal gain-reconfigurable antenna. Int. J. Antennas Propag. 2018, 1–7 (2018). https://doi.org/10.1155/2018/7595363 A.M. Morishita, C.K.Y. Kitamura, A.T. Ohta, W.A. Shiroma, A liquid–metal monopole array with tunable frequency, gain, and beam steering. IEEE Antennas Wirel. Propag. Lett. 12, 1388–1391 (2013). https://doi.org/10.1109/LAWP.2013.2286544 G.B. Zhang, R.C. Gough, M.R. Moorefield, K.J. Cho, A.T. Ohta, W.A. Shiroma, A liquid–metal polarization-pattern-reconfigurable dipole antenna. IEEE Antennas Wirel. Propag. Lett. 17(1), 50–53 (2018) M. Wang, M.R. Khan, M.D. Dickey, J.J. Adams, A compound frequency-and polarization-reconfigurable crossed dipole using multidirectional spreading of liquid metal. IEEE Antennas Wirel. Propag. Lett. 16, 79–82 (2017) M.R. Khan, G.J. Hayes, J.-H. So, G. Lazzi, M.D. Dickey, A frequency shifting liquid metal antenna with pressure responsiveness. Appl. Phys. Lett. 99(117), 013501 (2011). https://doi.org/10.1063/1.4919605 J.H. Dang, R.C. Gough, A.M. Morishita, A.T. Ohta, W.A. Shiroma, Liquid–metal frequency-reconfigurable slot antenna using air-bubble actuation. Electron. Lett. 51(21), 1630–1632 (2015). https://doi.org/10.1049/el.2015.2782 M.U. Baig, K.S. Elassy, A. Høst-Madsen, A. Ohta, W. Shiroma, A. Nosratinia, Managing interference through discrete modulation and liquid metal antennas. In: IEEE 88th Vehicular Technology Conference (VTC-Fall), pp. 1–5 (2018) V.S. Annapureddy, V.V. Veeravalli, Gaussian interference networks: sum capacity in the low-interference regime and new outer bounds on the capacity region. IEEE Trans. Inf. Theory 55(7), 3032–3050 (2009). https://doi.org/10.1109/TIT.2009.2021380 X. Shang, G. Kramer, B. Chen, Throughput optimization for multi-user interference channels. In: MILCOM 2008—IEEE Military Communications Conference, pp. 1–7 (2008). https://doi.org/10.1109/MILCOM.2008.4753283 H. Sato, The capacity of the Gaussian interference channel under strong interference. IEEE Trans. Inf. Theory IT 27(6), 786–788 (1981) M.H.M. Costa, On the Gaussian interference channel. IEEE Trans. Inf. Theory 31(5), 607–615 (1985) T.M. Cover, J.A. Thomas, Elements of Information Theory, 2nd edn. (Wiley, New York, 2006) Y. Zhao, C.W. Tan, A.S. Avestimehr, S.N. Diggavi, G.J. Pottie, On the maximum achievable sum-rate with successive decoding in interference channels. IEEE Trans. Inf. Theory 58(6), 3798–3820 (2012). https://doi.org/10.1109/tit.2012.2190040 C.A. Balanis, Antenna Theory: Analysis and Design, 4th edn. (Wiley, Hoboken, NJ, 2016), p. 1104 T. Liu, P. Sen, C.-J. Kim, Characterization of nontoxic liquid–metal alloy Galinstan for applications in microdevices. J. Microelectromech. Syst. 21(2), 443–450 (2012) R.C. Gough, A.M. Morishita, J.H. Dang, M.R. Moorefield, W.A. Shiroma, A.T. Ohta, Rapid electrocapillary deformation of liquid metal with reversible shape retention. Micro Nano Syst. Lett. 3(4), 1–9 (2015) T.M. Apostol, Mathematical Analysis (Addison-Wesley, Boston, 1974) M.U. Baig, A. Høst-Madsen, A. Nosratinia, Discrete modulation for interference mitigation. IEEE Trans. Inf. Theory 66(5), 3026–3039 (2020) M.V. Berry, Z.V. Lewis, On the Weierstrass–Mandelbrot fractal function. Proc. R. Soc. Lond. A 370(1743), 459–484 (1980) E. Guariglia, Entropy and fractal antennas. Entropy 18, 3 (2016). https://doi.org/10.3390/e18030084 E. Guariglia, S. Silvestrov, Fractional-wavelet analysis of positive definite distributions and wavelets on \(\mathscr{D}^{\prime }({\mathbb{C}})\), in Engineering Mathematics II. ed. by S. Silvestrov, M. Rančić (Springer, Cham, 2016), pp. 337–353 E. Guariglia, Harmonic Sierpinski gasket and applications. Entropy 20, 9 (2018). https://doi.org/10.3390/e20090714 E. Guariglia, Primality, fractality, and image analysis. Entropy 21(3), 304 (2019). https://doi.org/10.3390/e21030304 L. Yang, H. Su, C. Zhong, Z. Meng, H. Luo, X. Li, Y.Y. Tang, Y. Lu, Hyperspectral image classification using wavelet transform-based smooth ordering. Int. J. Wavelets Multiresolut. Inf. Process. 17(06), 1950050 (2019). https://doi.org/10.1142/S0219691319500504 X. Zheng, Y.Y. Tang, J. Zhou, A framework of adaptive multiscale wavelet decomposition for signals on undirected graphs. IEEE Trans. Signal Process. 67(7), 1696–1711 (2019). https://doi.org/10.1109/TSP.2019.2896246 This work was supported in part by the NSF Grants EECS-1546980, EECS-1546969, CCSS-1711689, and EECS-1923751. Department of Electrical and Computer Engineering, University of Hawaii at Manoa, Honolulu, HI, 96822, USA Mirza Uzair Baig, Kareem S. Elassy, Anders Høst-Madsen, Aaron T. Ohta & Wayne A. Shiroma Department of Electrical Engineering, University of Texas at Dallas, Richardson, TX, 75080, USA Aria Nosratinia Mirza Uzair Baig Kareem S. Elassy Anders Høst-Madsen Aaron T. Ohta Wayne A. Shiroma M.U.B. developed the information theory in the article and generated numerical results; K.S.E. developed the electromagnetic theory and provided channel simulations; A.H.M. and A.N. supervised the information theory development; A.T.O. and W.A.S. supervised the development of the electromagnetic theory. All authors read and approved the final manuscript. Correspondence to Anders Høst-Madsen. Proof of Proposition 1 We first need the following lemma Lemma 3 Let \(\alpha \ge 0\), \(s_{j},s_{i}\in {\text{supp}}\left\{ X_{d}\right\}\) and \(g\left( n,\alpha \right) =\sum _{j=1,j\ne i}^{n}e^{-\alpha \left( s_{i}-s_{j}\right) ^{2}}\). For \(\alpha \ge 0\) , we have $$\begin{aligned} g\left( n,\alpha \right)&\le 2e^{-\alpha d_{{\mathrm{min}} }^{2}\left( X_{d}\right) } \\ &\quad +\sqrt{\frac{\pi }{\alpha d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }}{\text{Erfc}}\left( \sqrt{\alpha d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }\right) \\ & \le \sqrt{\frac{\pi }{\alpha d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }}. \end{aligned}$$ Note that we can always re-label the \(\left\{ s_{j}\right\} _{j=1,j\ne i}^{n}\) such that \(s_{1}\) is the closest point (in distance) to \(s_{i}\), \(s_{2}\) is the next closest, and so on. Clearly, \(\left| s_{i}-s_{j}\right| \ge d_{{\mathrm{min}} }\left( X_{d}\right)\) by definition. Now for any j we have $$\begin{aligned} \left( s_{j}-s_{1}\right) ^{2}\ge \left( s_{i}-\left[ s_{i}\pm jd_{{\mathrm{min}} }\left( X_{d}\right) \right] \right) ^{2} \end{aligned}$$ and this allows us to upper bound \(g\left( n,\alpha \right)\) with \(2\sum _{j=1}^{\frac{n-1}{2}}e^{-\alpha j^{2}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }\). Making use of [28, Sect. 8.12], we get (25). Proving the last inequality in the lemma is simple and omitted. \(\square\) We start with a numerical bound reported in [4, Eq. (18a)] as follows $$\begin{aligned} I\left( X_{d};\sqrt{P}X_{d}+Z\right) \ge -\log \left[ \sum _{i,j\in \left[ 1:n\right] ^{2}}{\tilde{p}}_{i}{\tilde{p}}_{j}\frac{1}{\sqrt{4\pi }}e^{-\frac{\frac{P}{N}\left( s_{i}-s_{j}\right) ^{2}}{4}}\right] -c_{1}, \end{aligned}$$ where \(s_{i}\in {\text{supp}}\left\{ X_{d}\right\}\), \(\tilde{p_{i}}=\Pr \left\{ X_{d}=s_{i}\right\}\), and \(c_{1}=\frac{1}{2}\log \left( 2\pi e\right)\). This result is leveraged below to produce an analytical bound that improves the Ozarow–Wyner bound in Eq. (5). We can now prove the proposition. The main idea is to bound the summation in (26) using a staircase approximation. $$\begin{aligned} \frac{1}{\sqrt{4\pi }}\sum _{i,j\in \left[ 1:n\right] ^{2}}{\tilde{p}}_{i}{\tilde{p}}_{j}e^{-\frac{1}{4}\frac{P}{N}\left( s_{i}-s_{j}\right) ^{2}} \le&\frac{p_{{\mathrm{max}} }}{\sqrt{4\pi }}\sum _{i\left[ 1:n\right] }p_{i}\left[ 1+\sum _{j\in \left[ 1:n\right] \backslash i}e^{-\frac{1}{4}\frac{P}{N}\left( s_{i}-s_{j}\right) ^{2}}\right] \\ &\overset{\left( a\right) }{\le }\frac{p_{{\mathrm{max}} }}{\sqrt{4\pi }}\Biggl [1+2e^{-\frac{1}{4}\frac{P}{N}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }\\&+\sqrt{\frac{\pi }{\frac{1}{4}\frac{P}{N}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }}{\text{Erfc}}\left( \sqrt{\frac{1}{4}\frac{P}{N}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }\right) \Biggr ]\\ \le&\frac{p_{{\mathrm{max}} }}{\sqrt{4\pi }}\left[ 1+\sqrt{\frac{\pi }{\frac{1}{4}\frac{P}{N}d_{{\mathrm{min}} }^{2}\left( X_{d}\right) }}\right] . \end{aligned}$$ where in (a) we use Lemma 3 with \(\alpha =\frac{1}{4}\frac{P}{N}\). Proof of Theorem 2 A constant gap approximation to capacity is relevant for large P, so throughout this section we assume that P is sufficiently large. The transmissions \(X_{i}\) are symmetrical n-PAM modulations where n is the number of discrete levels \(X_{i}\) may take, i.e., \(n=\left| {\text{supp}}\left[ X_{i}\right] \right|\). Furthermore, for a PAM input it should be easy to verify (see (1)) that \(d_{{\mathrm{min}} }\left( X_{i}\right) =\sqrt{\frac{12}{n_{i}^{2}-1}}.\) The two cases require distinct proofs and are treated individually. Case 1: \(h_{2}^{2}\ge h_{1}^{4}\frac{P}{N}\) For the very strong interference regime, \(h_{2}^{2}\ge h_{1}^{2}\left( 1+h_{1}^{2}\frac{P}{N}\right)\), the result was proved in [4, Th. 7]. We therefore only have to prove it for \(h_{2}^{2}<h_{1}^{2}\left( 1+h_{1}^{2}\frac{P}{N}\right)\). Choose \(n=\left\lceil \sqrt{\frac{3}{4}}\frac{|h_{2}|}{|h_{1}|}\right\rceil\), and we may bound $$\begin{aligned} n^{2}<\left( 1+\sqrt{\frac{3}{4}}\frac{|h_{2}|}{|h_{1}|}\right) ^{2}\le \frac{3}{4}\frac{h_{2}^{2}}{h_{1}^{2}}. \end{aligned}$$ Using the result in [4, Prop. 2], we know the minimum distance of the discrete constellation \(\sqrt{P}\left( h_{1}X_{1}+h_{2}X_{2}\right)\) as $$\begin{aligned} d_{{\mathrm{min}} }\left( Y_{i}-Z_{i}\right) =\sqrt{h_1^2P}d_{{\mathrm{min}} }\left( X_{i}\right) . \end{aligned}$$ For a fixed constant \(c>0\) and Proposition 1 or Eq. (5), we get $$\begin{aligned} I\left( Y_{i}-Z_{i};Y_{i}\right) +c\ge \log \left( n^{2}\right) -\frac{1}{2}\log \left( 1+\frac{12}{d_{{\mathrm{min}} }^{2}\left( Y_{i}-Z_{i}\right) /N}\right) . \end{aligned}$$ We now lower bound the $$\begin{aligned} d_{{\mathrm{min}} }^{2}\left( Y_{i}-Z_{i}\right) /N&\ge16\frac{h_{1}^{4}{P/N}}{h_{2}^{2}}\\ &\ge 16\frac{h_{1}^{2}{P/N}}{1+h_{1}^{2}{P/N}}\\ &\ge 8, \end{aligned}$$ where the last inequality follows since the P is large. The above estimate allows us to state that \(R\ge \frac{1}{2}\log \left( \frac{3}{4}\frac{h_{2}^{2}}{h_{1}^{2}|}\right) -c_{1}\), for some constant \(c_{1}>0\). Finally, it is easy to verify that $$\begin{aligned} \left| 2R-\frac{1}{2}\log \left( 1+h_2^2\frac{P}{N}+h_1^2\frac{P}{N}\right) \right| &<c_{3} \end{aligned}$$ for some constant \(c_{3}>0\). Case 2: \(h_{1}^{2}<h_{2}^{2}<h_{1}^{4}\frac{P}{N}\) For large P, we can write the signal at receiver \(i,j\in \left\{ 1,2\right\}\) as $$\begin{aligned} Y_{i}=2^{\ell _{i}}\tilde{h_{i}}X_{i}+2^{\ell _{j}}\tilde{h_{j}}X_{j}+Z_{i},\ i\ne j. \end{aligned}$$ Recall \({\tilde{h}}_{i}\in [1,2)\) was defined as the fractional part of \(h_i\) in the log domain, and \(\ell _i \triangleq \log _2 \frac{h_i}{{\tilde{h}}_i}\); therefore, \(\ell _i\in {\mathbb{Z}}^{+}.\) Now set \(n=\left\lfloor \left( 1+h_{1}^{2}\frac{P}{N}+h_{2}^{2}\frac{P}{N}\right) ^{\frac{1}{4}}\right\rfloor -1\) and \(m=\frac{n-1}{2}\). It is easy to show $$\begin{aligned} m\,d_{{\mathrm{min}} }\left( X_{i}\right) &\le\sqrt{3}, \\ n&\le\left( 1+h_{1}^{2}\frac{P}{N}+h_{2}^{2}\frac{P}{N}\right) ^{\frac{1}{4}}. \end{aligned}$$ In this case, we have the following bound for \(d_{{\mathrm{min}} }\left( Y_{i}-Z_{i}\right)\), using [29, Prop. 2] for certain channel gains \(h_{1},h_{2}\). $$\begin{aligned} d_{{\mathrm{min}} }\left( Y_{i}-Z_{i}\right) & \ge\frac{\gamma }{8\sqrt{3}}\frac{2^{n_{1}+n_{2}}d_{{\mathrm{min}} }^{2}\left( X_{i}\right) }{2^{n_{1}}+2^{n_{2}}}\\ &=\frac{\gamma }{8\sqrt{3}}\frac{P\left| h_{2}h_{1}\right| d_{{\mathrm{min}} }^{2}\left( X_{i}\right) }{\sqrt{P}\left( \left| h_{1}\right| /{\tilde{h}}_{1}+\left| h_{2}\right| /{\tilde{h}}_{2}\right) }\\ & \overset{\left( a\right) }{\ge } \frac{\gamma \cdot 2}{3\sqrt{3}}\frac{\sqrt{P}\left| h_{2}h_{1}\right| }{\left| h_{1}\right| /{\tilde{h}}_{1}+\left| h_{2}\right| /{\tilde{h}}_{2}}\left( 1+h_{1}^{2}\frac{P}{N}+h_{2}^{2}\frac{P}{N}\right) ^{-\frac{1}{2}}\\ & \overset{\left( b\right) }{\ge }\frac{\gamma \cdot 2}{3\sqrt{3}}\frac{\left| h_{2}h_{1}\right| \left( 1+2h_{2}^{2}\right) ^{-\frac{1}{2}}}{\left| h_{1}\right| +\left| h_{2}\right| }=C_{\gamma ,h_{1},h_{2}}. \end{aligned}$$ To arrive at (a), we first lower bound the term \(d_{{\mathrm{min}} }\left( X_{i}\right)\) with \(\frac{\sqrt{12}}{n}\) and then use the upper bound on n using Eq. (27). At \(\left( b\right)\), we make use of the following relation $$\begin{aligned} 1+h_{1}^{2}\frac{P}{N}+h_{2}^{2}\frac{P}{N}\le \frac{P}{N}+2h_{2}^{2}\frac{P}{N}\end{aligned}$$ and lower bound the constants \(\tilde{h_{i}}\) to arrive at a constant that depends on \(\gamma ,h_{1}\) and \(h_{2}.\) Finally, note that for the channel realizations \(h_{1},h_{2}\) where the above bound on \(d_{{\mathrm{min}} }\left( Y_{i}-Z_{i}\right)\) holds we have $$\begin{aligned} H\left( \sqrt{P}h_{1}X_{i}+\sqrt{P}h_{2}X_{j}\right) =H\left( X_{i}\right) +H\left( X_{j}\right) , \end{aligned}$$ which allows us to use Eq. (6) and Proposition 1 to bound \(R\ge \log \left( n\right) -O\left( \log \gamma \right) .\) Baig, M.U., Elassy, K.S., Høst-Madsen, A. et al. Leveraging discrete modulation and liquid metal antennas for interference reduction. J Wireless Com Network 2021, 158 (2021). https://doi.org/10.1186/s13638-021-02019-w Interference channel Discrete modulation Liquid metal antenna
CommonCrawl
Mat. Sb.: Mat. Sb. (N.S.), 1976, Volume 99(141), Number 1, Pages 121–133 (Mi msb2742) This article is cited in 15 scientific papers (total in 15 papers) Asymptotic normality of some variables connected with the cyclic structure of random permutations Yu. V. Bolotnikov, V. N. Sachkov, V. E. Tarakanov Abstract: Asymptotic normality as $n\to\infty$ is proved for certain random variables connected with the cyclic structure of a random permutation of degree $n$, under a nonuniform probability distribution on the set of all permutations, in particular, the asymptotic normality of the number of all cycles. Bibliography: 7 titles. References: PDF file HTML file Mathematics of the USSR-Sbornik, 1976, 28:1, 107–117 MSC: Primary 60E05, 60F05; Secondary 62E20 Citation: Yu. V. Bolotnikov, V. N. Sachkov, V. E. Tarakanov, "Asymptotic normality of some variables connected with the cyclic structure of random permutations", Mat. Sb. (N.S.), 99(141):1 (1976), 121–133; Math. USSR-Sb., 28:1 (1976), 107–117 \Bibitem{BolSacTar76} \by Yu.~V.~Bolotnikov, V.~N.~Sachkov, V.~E.~Tarakanov \paper Asymptotic normality of some variables connected with the cyclic structure of random permutations \jour Mat. Sb. (N.S.) \vol 99(141) \mathnet{http://mi.mathnet.ru/msb2742} \mathscinet{http://www.ams.org/mathscinet-getitem?mr=415713} \jour Math. USSR-Sb. \vol 28 \crossref{https://doi.org/10.1070/SM1976v028n01ABEH001642} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=A1976EM69000006} http://mi.mathnet.ru/eng/msb2742 http://mi.mathnet.ru/eng/msb/v141/i1/p121 This publication is cited in the following articles: Yu. V. Bolotnikov, V. N. Sachkov, V. E. Tarakanov, "On some classes of random variables on cycles of permutations", Math. USSR-Sb., 36:1 (1980), 87–99 Lyuben R. Mutafchiev, "Local Limit Theorems for Sums of Power Series Distributed Random Variables and for the Number of Components in Labelled Relational Structures", Random Struct Alg, 3:4 (1992), 403 A. N. Timashev, "On the distribution of the number of cycles of a given length in the class of permutations with known number of cycles", Discrete Math. Appl., 11:5 (2001), 471–483 A. L. Yakymiv, "On the distribution of the $m$th maximal cycle lengths of random $A$-permutations", Discrete Math. Appl., 15:5 (2005), 527–546 A. L. Yakymiv, "Limit theorem for the general number of cycles in a random $A$-permutation", Theory Probab. Appl., 52:1 (2008), 133–146 A. L. Yakymiv, "Random $A$-Permutations: Convergence to a Poisson Process", Math. Notes, 81:6 (2007), 840–846 A. L. Yakymiv, "On the Number of $A$-Mappings", Math. Notes, 86:1 (2009), 132–139 A. L. Yakymiv, "Limit Theorem for the Middle Members of Ordered Cycle Lengths in Random $A$-Permutations", Theory Probab. Appl., 54:1 (2010), 114–128 A. L. Yakymiv, "A limit theorem for the logarithm of the order of a random $A$-permutation", Discrete Math. Appl., 20:3 (2010), 247–275 A. L. Yakymiv, "Asymptotics of the Moments of the Number of Cycles of a Random $A$-Permutation", Math. Notes, 88:5 (2010), 759–766 A. L. Yakymiv, "Random $A$-permutations and Brownian motion", Proc. Steklov Inst. Math., 282 (2013), 298–318 A. L. Yakymiv, "On the number of cyclic points of random $A$-mapping", Discrete Math. Appl., 23:5-6 (2013), 503–515 A. L. Yakymiv, "On a number of components in a random $A$-mapping", Theory Probab. Appl., 59:1 (2015), 114–127 A. L. Yakymiv, "On the Number of Components of Fixed Size in a Random $A$-Mapping", Math. Notes, 97:3 (2015), 468–475 A. L. Yakymiv, "Limit theorems for the logarithm of the order of a random $A$-mapping", Discrete Math. Appl., 27:5 (2017), 325–338 Full text: 67 First page: 2
CommonCrawl
Nice construction Thomson problem AVI video (768×576, 3.1 MB) Zip archive (AVI, 2.2 MB) Electrons behavior Best arrangement General case Karl Gustav Jacob Jacobi lived in the first half of the XIX'th century. For his scientific research he developed a class of orthogonal polynomials, that were later named after him. Given some fixed values of the parameters $\alpha$ and $\beta$ (greater than $-1$) the Jacobi polynomial $P_{k}^{(\alpha, \beta)}$ is of degree $k$ and has thesame number of zeros lying in the segment $[-1,1]$. The notion of orthogonality (i.e. perpendicularity) came from geometry to otherbranches of mathematics. If two vectors are perpendicular, their scalar productequals zero. By analogy two polynomials are called orthogonal if their scalarproduct equals zero. In this case by the scalar product we mean the integral overthe segment $[-1,1]$ of the product of two polynomials multiplied by a specialfunction that called weight. Classes of orthogonal polynomials play a great role both in pure and appliedmathematics. Functions that arise during research, properties of which you need tostudy, can be approximated with linear combinations of concerned polynomials. Thus,one may deduce properties of the approximation which is often much easier. The study of orthogonal polynomials and their properties is a large and interestingbranch of mathematics with great and important applications. As it often happens in science, a nice construction can be useful in many questions.It turned out that the Jacobi polynomials, or being precise their zeros, gavesolution to a problem that appeared much later that they were invented. Consider two electric charges of positive quantity q and p fixed along the edgesof the segment $[-1,1]$ and $k$ unit charges randomly placed inside. Unit charges areallowed to move, but not to leave the segment. As all the charges are positive, theytry their best to run away one from another as far as possible. How will they bearranged to minimize potential energy of the system? The problem is to find such aconstruction when all the forces are balanced. Lets consider first some particular cases. Let the left fixed charge be of quantity 3, and the right of quantity 5. Lets placerandomly three unit charges that can move freely inside the segment and watch themfor a while. When they stop moving, we draw the graph of the Jacobi polynomial $P_{3}^{(9, 5)}$ onthe same segment. It turns out that the charges stopped exactly inthe zeros of this polynomial! Lets experiment once again. Fix charges of quantity 3 and 2 on the left and the rightedge respectively. We place four unit charges and watch the system. When the stopmoving they will be exactly in the same positions where the zeros of the Jacobipolynomial $P_{4}^{(3, 5)}$ are. This effect holds in general too. Given electric charges of positive quantity q and p fixed at points -1 and 1 respectively and $k$ unit charges between them, theminimum of potential energy is reached when the «internal» charges are placed in the zeros of the Jacobi polynomial $P_{k}^{(2p-1, 2q-1)}$. That is how once invented class of orthogonal polynomials appeared while solving aproblem from a completely different scientific area. The Jacobi polynomials also showtheir hidden properties in many other problems as any other «nice construction». G. Sege. Orthogonal polynomials. — M.: Fizmatlit, 1962. See also in section "Points' best position"2 In reading this text or in downloading a film you may have used the solution of the balls contact number problem in 8-dimensional space. Are you surprised? This movie tells about applications of this well-known mathematical problem. Thomson problem (with V. A. Yudin) How N equal charges will arrange on a sphere trying to minimize the potential energy of the system? The problem was posed by Thomson at the turn of the XIXth and XXth centuries, but still has been solved in a few special cases.
CommonCrawl
A search for the exotic meson $X(5568)$ with the Collider Detector at Fermilab (1712.09620) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, L. Marchese, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfmeister, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli Dec. 27, 2017 hep-ex A search for the exotic meson $X(5568)$ decaying into the $B^0_s \pi^{\pm}$ final state is performed using data corresponding to $9.6 \textrm{fb}^{-1}$ from $p{\bar p}$ collisions at $\sqrt{s} = 1960$ GeV recorded by the Collider Detector at Fermilab. No evidence for this state is found and an upper limit of 6.7\% at the 95\% confidence level is set on the fraction of $B^0_s$ produced through the $X(5568) \rightarrow B^0_s \, \pi^{\pm}$ process. Measurement of the inclusive-isolated prompt-photon cross section in $p\bar{p}$ collisions using the full CDF data set (1703.00599) CDF Collaboration: T. Aaltonen, M.G. Albrow, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, L. Marchese, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, P. Sinervo, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli March 2, 2017 hep-ex A measurement of the inclusive production cross section of isolated prompt photons in proton-antiproton collisions at center-of-mass energy $\sqrt{s}$=1.96TeV is presented. The results are obtained using the full Run II data sample collected with the Collider Detector at the Fermilab Tevatron, which corresponds to an integrated luminosity of 9.5fb$^{-1}$. The cross section is measured as a function of photon transverse energy, $E_T^{\gamma}$, in the range 30$ < E_T^{\gamma} <$500GeV and in the pseudorapidity region $|\eta^{\gamma}|<$1.0. The results are compared with predictions from parton-shower Monte Carlo models at leading order in quantum chromodynamics (QCD) and from next-to-leading order perturbative QCD calculations. The latter show good agreement with the measured cross section. Measurement of the $D^+$-meson production cross section at low transverse momentum in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV (1610.08989) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, L. Marchese, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli Oct. 27, 2016 hep-ex We report on a measurement of the $D^{+}$-meson production cross section as a function of transverse momentum ($p_T$) in proton-antiproton ($p\bar{p}$) collisions at 1.96 TeV center-of-mass energy, using the full data set collected by the Collider Detector at Fermilab in Tevatron Run II and corresponding to 10 fb$^{-1}$ of integrated luminosity. We use $D^{+} \to K^-\pi^+\pi^+$ decays fully reconstructed in the central rapidity region $|y|<1$ with transverse momentum down to 1.5 GeV/$c$, a range previously unexplored in $p\bar{p}$ collisions. Inelastic $p\bar{p}$-scattering events are selected online using minimally-biasing requirements followed by an optimized offline selection. The $K^-\pi^+\pi^+$ mass distribution is used to identify the $D^+$ signal, and the $D^+$ transverse impact-parameter distribution is used to separate prompt production, occurring directly in the hard scattering process, from secondary production from $b$-hadron decays. We obtain a prompt $D^+$ signal of 2950 candidates corresponding to a total cross section $\sigma(D^+, 1.5 < p_T < 14.5~\mbox{GeV/}c, |y|<1) = 71.9 \pm 6.8 (\mbox{stat}) \pm 9.3 (\mbox{syst})~\mu$b. While the measured cross sections are consistent with theoretical estimates in each $p_T$ bin, the shape of the observed $p_T$ spectrum is softer than the expectation from quantum chromodynamics. The results are unique in $p\bar{p}$ collisions and can improve the shape and uncertainties of future predictions. Measurement of the $WW$ and $WZ$ production cross section using final states with a charged lepton and heavy-flavor jets in the full CDF Run II data set (1606.06823) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R. C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli July 31, 2016 hep-ex We present a measurement of the total {\it WW} and {\it WZ} production cross sections in $p\bar{p}$ collision at $\sqrt{s}=1.96$ TeV, in a final state consistent with leptonic $W$ boson decay and jets originating from heavy-flavor quarks from either a $W$ or a $Z$ boson decay. This analysis uses the full data set collected with the CDF II detector during Run II of the Tevatron collider, corresponding to an integrated luminosity of 9.4 fb$^{-1}$. An analysis of the dijet mass spectrum provides $3.7\sigma$ evidence of the summed production processes of either {\it WW} or {\it WZ} bosons with a measured total cross section of $\sigma_{WW+WZ} = 13.7\pm 3.9$~pb. Independent measurements of the {\it WW} and {\it WZ} production cross sections are allowed by the different heavy-flavor decay-patterns of the $W$ and $Z$ bosons and by the analysis of secondary-decay vertices reconstructed within heavy-flavor jets. The productions of {\it WW} and of {\it WZ} dibosons are independently seen with significances of $2.9\sigma$ and $2.1\sigma$, respectively, with total cross sections of $\sigma_{WW}= 9.4\pm 4.2$~pb and $\sigma_{WZ}=3.7^{+2.5}_{-2.2}$~pb. The measurements are consistent with standard-model predictions. Measurement of $\sin^2\theta^{\rm lept}_{\rm eff}$ using $e^+e^-$ pairs from $\gamma^*/Z$ bosons produced in $p\bar{p}$ collisions at a center-of-momentum energy of 1.96 TeV (1605.02719) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli June 10, 2016 hep-ex At the Fermilab Tevatron proton-antiproton ($p\bar{p}$) collider, Drell-Yan lepton pairs are produced in the process $p \bar{p} \rightarrow e^+e^- + X$ through an intermediate $\gamma^*/Z$ boson. The forward-backward asymmetry in the polar-angle distribution of the $e^-$ as a function of the $e^+e^-$-pair mass is used to obtain $\sin^2\theta^{\rm lept}_{\rm eff}$, the effective leptonic determination of the electroweak-mixing parameter $\sin^2\theta_W$. The measurement sample, recorded by the Collider Detector at Fermilab (CDF), corresponds to 9.4~fb$^{-1}$ of integrated luminosity from $p\bar{p}$ collisions at a center-of-momentum energy of 1.96 TeV, and is the full CDF Run II data set. The value of $\sin^2\theta^{\rm lept}_{\rm eff}$ is found to be $0.23248 \pm 0.00053$. The combination with the previous CDF measurement based on $\mu^+\mu^-$ pairs yields $\sin^2\theta^{\rm lept}_{\rm eff} = 0.23221 \pm 0.00046$. This result, when interpreted within the specified context of the standard model assuming $\sin^2 \theta_W = 1 - M_W^2/M_Z^2$ and that the $W$- and $Z$-boson masses are on-shell, yields $\sin^2\theta_W = 0.22400 \pm 0.00045$, or equivalently a $W$-boson mass of $80.328 \pm 0.024 \;{\rm GeV}/c^2$. Measurement of the $B_c^{\pm}$ production cross section in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV (1601.03819) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, M. Hartz, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli March 26, 2016 hep-ex We describe a measurement of the ratio of the cross sections times branching fractions of the $B_c^+$ meson in the decay mode $B_c^+ \rightarrow J/\psi \mu\nu$ to the $B^+$ meson in the decay mode $B^+ \rightarrow J/\psi K^+$ in proton-antiproton collisions at center-of-mass energy $\sqrt{s}=1.96$ TeV. The measurement is based on the complete CDF Run II data set, which comes from an integrated luminosity of $8.7\,{\rm fb}^{-1}$. The ratio of the production cross sections times branching fractions for $B_c^+$ and $B_c^+$ mesons with momentum transverse to the beam greater than $6~\textrm{GeV}/c$ and rapidity magnitude smaller than 0.6 is $0.211\pm 0.012~\mbox{(stat)}^{+0.021}_{-0.020}~\mbox{(syst)}$. Using the known $B^+ \rightarrow J/\psi K^+$ branching fraction, the known $B^+$ production cross section, and a selection of the predicted $B_c^+ \rightarrow J/\psi \mu\nu$ branching fractions, the range for the total $B_c^+$ production cross section is estimated. Measurement of vector boson plus $D^{*}(2010)^+$ meson production in $\bar{p}p$ collisions at $\sqrt{s}=1.96\, {\rm TeV}$ (1508.06980) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli A measurement of vector boson ($V$) production in conjunction with a $D^{*}(2010)^+$ meson is presented. Using a data sample corresponding to $9.7\, {\rm fb}^{-1}$ of ^Mproton-antiproton collisions at center-of-mass energy $\sqrt{s}=1.96\rm~ TeV$ produced by the Fermilab Tevatron, we reconstruct $V+D^{*+}$ samples with the CDF~II detector. The $D^{*+}$ is fully reconstructed in the $D^{*}(2010)^+ \rightarrow D^{0}(\to K^-\pi^+)\pi^+$ decay mode. This technique is sensitive to the associated production of vector boson plus charm or bottom mesons. We measure the ratio of production cross sections $\sigma(W+D^{*})/\sigma(W)$ = $[1.75\pm 0.13 {\rm (stat)}\pm 0.09 {\rm (syst)}]\% $ and $\sigma(Z+D^{*})/\sigma(Z)$ = $[1.5\pm 0.4 {\rm (stat)} \pm 0.2 {\rm (syst)}]\% $ and perform a differential measurement of $d\sigma(W+D^{*})/dp_T(D^{*})$. Event properties are utilized to determine the fraction of $V+D^{*}(2010)^+$ events originating from different production processes. The results are in agreement with the predictions obtained with the {\sc pythia} program, limiting possible contribution from non-standard-model physics processes. Measurement of the forward-backward asymmetry of top-quark and antiquark pairs using the full CDF Run II data set (1602.09015) Feb. 29, 2016 hep-ex We measure the forward--backward asymmetry of the production of top quark and antiquark pairs in proton-antiproton collisions at center-of-mass energy $\sqrt{s} = 1.96~\mathrm{TeV}$ using the full data set collected by the Collider Detector at Fermilab (CDF) in Tevatron Run II corresponding to an integrated luminosity of $9.1~\rm{fb}^{-1}$. The asymmetry is characterized by the rapidity difference between top quarks and antiquarks ($\Delta y$), and measured in the final state with two charged leptons (electrons and muons). The inclusive asymmetry, corrected to the entire phase space at parton level, is measured to be $A_{\text{FB}}^{t\bar{t}} = 0.12 \pm 0.13$, consistent with the expectations from the standard-model (SM) and previous CDF results in the final state with a single charged lepton. The combination of the CDF measurements of the inclusive $A_{\text{FB}}^{t\bar{t}}$ in both final states yields $A_{\text{FB}}^{t\bar{t}}=0.160\pm0.045$, which is consistent with the SM predictions. We also measure the differential asymmetry as a function of $\Delta y$. A linear fit to $A_{\text{FB}}^{t\bar{t}}(|\Delta y|)$, assuming zero asymmetry at $\Delta y=0$, yields a slope of $\alpha=0.14\pm0.15$, consistent with the SM prediction and the previous CDF determination in the final state with a single charged lepton. The combined slope of $A_{\text{FB}}^{t\bar{t}}(|\Delta y|)$ in the two final states is $\alpha=0.227\pm0.057$, which is $2.0\sigma$ larger than the SM prediction. Measurement of the forward-backward asymmetry in low-mass bottom-quark pairs produced in proton-antiproton collisions (1601.06526) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, O. Majersky, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli Jan. 25, 2016 hep-ex We report a measurement of the forward-backward asymmetry, $A_{FB}$, in $b\bar{b}$ pairs produced in proton-antiproton collisions and identified by muons from semileptonic $b$-hadron decays. The event sample was collected at a center-of-mass energy of $\sqrt{s}=1.96$ TeV with the CDF II detector and corresponds to 6.9 fb$^{-1}$ of integrated luminosity. We obtain an integrated asymmetry of $A_{FB}(b\bar{b})=(1.2 \pm 0.7)$\% at the particle level for $b$-quark pairs with invariant mass, $m_{b\bar{b}}$, down to $40$ GeV/$c^2$ and measure the dependence of $A_{FB}(b\bar{b})$ on $m_{b\bar{b}}$. The results are compatible with expectations from the standard model. Search for a Low-Mass Neutral Higgs Boson with Suppressed Couplings to Fermions Using Events with Multiphoton Final States (1601.00401) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli Jan. 4, 2016 hep-ex A search for a Higgs boson with suppressed couplings to fermions, $h_f$, assumed to be the neutral, lower-mass partner of the Higgs boson discovered at the Large Hadron Collider, is reported. Such a Higgs boson could exist in extensions of the standard model with two Higgs doublets, and could be produced via $p\bar{p} \to H^\pm h_f \to W^* h_f h_f \to 4\gamma + X$, where $H^\pm$ is a charged Higgs boson. This analysis uses all events with at least three photons in the final state from proton-antiproton collisions at a center-of-mass energy of 1.96~TeV collected by the Collider Detector at Fermilab, corresponding to an integrated luminosity of 9.2~${\rm fb}^{-1}$. No evidence of a signal is observed in the data. Values of Higgs-boson masses between 10 and 100 GeV/$c^2$ are excluded at 95\% Bayesian credibility. A Study of the Energy Dependence of the Underlying Event in Proton-Antiproton Collisions (1508.05340) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, M. Albrow, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli Aug. 27, 2015 hep-ex We study charged particle production in proton-antiproton collisions at 300 GeV, 900 GeV, and 1.96 TeV. We use the direction of the charged particle with the largest transverse momentum in each event to define three regions of eta-phi space; toward, away, and transverse. The average number and the average scalar pT sum of charged particles in the transverse region are sensitive to the modeling of the underlying event. The transverse region is divided into a MAX and MIN transverse region, which helps separate the hard component (initial and final-state radiation) from the beam-beam remnant and multiple parton interaction components of the scattering. The center-of-mass energy dependence of the various components of the event are studied in detail. The data presented here can be used to constrain and improve QCD Monte Carlo models, resulting in more precise predictions at the LHC energies of 13 and 14 TeV. Measurement of the production and differential cross sections of $W^{+}W^{-}$ bosons in association with jets in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV (1505.00801) We present a measurement of the $W$-boson-pair production cross section in $p\bar{p}$ collisions at 1.96 TeV center-of-mass energy and the first measurement of the differential cross section as a function of jet multiplicity and leading-jet energy. The $W^{+}W^{-}$ cross section is measured in the final state comprising two charged leptons and neutrinos, where either charged lepton can be an electron or a muon. Using data collected by the CDF experiment corresponding to $9.7~\rm{fb}^{-1}$ of integrated luminosity, a total of $3027$ collision events consistent with $W^{+}W^{-}$ production are observed with an estimated background contribution of $1790\pm190$ events. The measured total cross section is $\sigma(p\bar{p} \rightarrow W^{+}W^{-}) = 14.0 \pm 0.6~(\rm{stat})^{+1.2}_{-1.0}~(\rm{syst})\pm0.8~(\rm{lumi})$ pb, consistent with the standard model prediction. Measurement of the top-quark mass in the ${t\bar{t}}$ dilepton channel using the full CDF Run II data set (1505.00500) We present a measurement of the top-quark mass in events containing two leptons (electrons or muons) with a large transverse momentum, two or more energetic jets, and a transverse-momentum imbalance. We use the full proton-antiproton collision data set collected by the CDF experiment during the Fermilab Tevatron Run~II at center-of-mass energy $\sqrt{s} = 1.96$ TeV, corresponding to an integrated luminosity of 9.1 fb$^{-1}$. A special observable is exploited for an optimal reduction of the dominant systematic uncertainty, associated with the knowledge of the absolute energy of the hadronic jets. The distribution of this observable in the selected events is compared to simulated distributions of ${t\bar{t}}$ dilepton signal and background.We measure a value for the top-quark mass of $171.5\pm 1.9~{\rm (stat)}\pm 2.5~{\rm (syst)}$ GeV/$c^2$. Measurement of central exclusive pi+pi- production in p-pbar collisions at sqrt(s) = 0.9 and 1.96 TeV at CDF (1502.01391) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, D. Lontkovskyi, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, I. Makarenko, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli, M. Zurek We measure exclusive $\pi^+\pi^-$ production in proton-antiproton collisions at center-of-mass energies $\sqrt{s}$ = 0.9 and 1.96 TeV in the Collider Detector at Fermilab. We select events with two oppositely charged particles, assumed to be pions, with pseudorapidity $|\eta| < 1.3$ and with no other particles detected in $|\eta| < 5.9$. We require the $\pi^+\pi^-$ system to have rapidity $|y|<$ 1.0. The production mechanism of these events is expected to be dominated by double pomeron exchange, which constrains the quantum numbers of the central state. The data are potentially valuable for isoscalar meson spectroscopy and for understanding the pomeron in a region of transition between nonperturbative and perturbative quantum chromodynamics. The data extend up to dipion mass $M(\pi^+\pi^-)$ = 5000 MeV/$c^2$ and show resonance structures attributed to $f_0$ and $f_2(1270)$ mesons. From the $\pi^+\pi^-$ and $K^+K^-$ spectra, we place upper limits on exclusive $\chi_{c0}(3415)$ production. First measurement of the forward-backward asymmetry in bottom-quark pair production at high mass (1504.06888) April 26, 2015 hep-ex We measure the particle-level forward-backward production asymmetry in $b\bar{b}$ pairs with masses $m(b\bar{b})$ larger than 150 GeV/$c^2$, using events with hadronic jets and employing jet charge to distinguish $b$ from $\bar{b}$. The measurement uses 9.5/fb of ppbar collisions at a center of mass energy of 1.96 TeV recorded by the CDF II detector. The asymmetry as a function of $m(b\bar{b})$ is consistent with zero, as well as with the predictions of the standard model. The measurement disfavors a simple model including an axigluon with a mass of 200 GeV/$c^2$ whereas a model containing a heavier 345 GeV/$c^2$ axigluon is not excluded. Search for Resonances Decaying to Top and Bottom Quarks with the CDF Experiment (1504.01536) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, F. Anza', G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, L. Bianchi, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli April 7, 2015 hep-ph, hep-ex We report on a search for charged massive resonances decaying to top ($t$) and bottom ($b$) quarks in the full data set of proton-antiproton collisions at center-of-mass energy of $\sqrt{s} = 1.96$ TeV collected by the CDF~II detector at the Tevatron, corresponding to an integrated luminosity of 9.5 $fb^{-1}$. No significant excess above the standard model (SM) background prediction is observed. We set 95% Bayesian credibility mass-dependent upper limits on the heavy charged particle production cross section times branching ratio to $t b$. Using a SM extension with a $W^{\prime}$ and left-right-symmetric couplings as a benchmark model, we constrain the $W^{\prime}$ mass and couplings in the 300 to 900 GeV/$c^2$ range. The limits presented here are the most stringent for a charged resonance with mass in the range 300 -- 600 GeV/$c^2$ decaying to top and bottom quarks. Measurement of the Single Top Quark Production Cross Section and |Vtb| in Events with One Charged Lepton, Large Missing Transverse Energy, and Jets at CDF (1407.4031) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, D. Hirschbuehl, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli We report a measurement of single top quark production in proton-antiproton collisions at a center-of-mass energy of \sqrt{s} = 1.96 TeV using a data set corresponding to 7.5 fb-1 of integrated luminosity collected by the Collider Detector at Fermilab. We select events consistent with the single top quark decay process t \to Wb \to l{\nu}b by requiring the presence of an electron or muon, a large imbalance of transverse momentum indicating the presence of a neutrino, and two or three jets including at least one originating from a bottom quark. An artificial neural network is used to discriminate the signal from backgrounds. We measure a single top quark production cross section of 3.04+0.57-0.53 pb and set a lower limit on the magnitude of the coupling between the top quark and bottom quark |Vtb| > 0.78 at the 95% credibility level. Measurement of indirect CP-violating asymmetries in $D^0\to K^+K^-$ and $D^0\to \pi^+\pi^-$ decays at CDF (1410.5435) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli Jan. 6, 2015 hep-ph, hep-ex We report a measurement of the indirect CP-violating asymmetries ($A_{\Gamma}$) between effective lifetimes of anticharm and charm mesons reconstructed in $D^0\to K^+ K^-$ and $D^0\to \pi^+\pi^-$ decays. We use the full data set of proton-antiproton collisions collected by the Collider Detector at Fermilab experiment and corresponding to $9.7$~fb$^{-1}$ of integrated luminosity. The strong-interaction decay $D^{*+}\to D^0\pi^+$ is used to identify the meson at production as $D^0$ or $\overline{D}^0$. We statistically subtract $D^0$ and $\overline{D}^0$ mesons originating from $b$-hadron decays and measure the yield asymmetry between anticharm and charm decays as a function of decay time. We measure $A_\Gamma (K^+K^-) = (-0.19 \pm 0.15 (stat) \pm 0.04 (syst))\%$ and $A_\Gamma (\pi^+\pi^-)= (-0.01 \pm 0.18 (stat) \pm 0.03 (syst))\%$. The results are consistent with the hypothesis of CP symmetry and their combination yields $A_\Gamma = (-0.12 \pm 0.12)\%$. Measurements of Direct CP-Violating Asymmetries in Charmless Decays of Bottom Baryons (1403.5586) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, M. Cavalli-Sforza, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, L. Marchese, M. Deninno, F. Devoto, M. D'Errico, A. Di Canto, B. Di Ruzza, J.R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, G. Giurgiu, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, S. Grinstein, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.B. Kim, S.H. Kim, Y.K. Kim, Y.J. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, M. Martínez, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, F. Prokoshin, A. Pranko, F. Ptohos, G. Punzi, N. Ranjan, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, K. Sliwa, J.R. Smith, F.D. Snider, V. Sorin, H. Song, M. Stancari, R. St. Denis, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli We report final measurements of direct $\mathit{CP}$--violating asymmetries in charmless decays of neutral bottom hadrons to pairs of charged hadrons with the upgraded Collider Detector at the Fermilab Tevatron. Using the complete $\sqrt{s}=1.96$ TeV proton-antiproton collisions data set, corresponding to 9.3 fb$^{-1}$ of integrated luminosity, we measure $\mathcal{A}(\Lambda^0_b \rightarrow p\pi^{-}) = +0.06 \pm 0.07\mathrm{(stat)} \pm 0.03\mathrm{(syst)}$ and $\mathcal{A}(\Lambda^0_b \rightarrow pK^{-}) = -0.10 \pm 0.08\mathrm{(stat)} \pm 0.04\mathrm{(syst)}$, compatible with no asymmetry. In addition we measure the $\mathit{CP}$--violating asymmetries in $B^0_s \rightarrow K^{-}\pi^{+}$ and $B^0 \rightarrow K^{+}\pi^{-}$ decays to be $\mathcal{A}(B^0_s \rightarrow K^{-}\pi^{+}) = +0.22 \pm 0.07\mathrm{stat)} \pm 0.02\mathrm{(syst)}$ and $\mathcal{A}(B^0 \rightarrow K^{+}\pi^{-}) = -0.083\pm 0.013 \mathrm{(stat)} \pm 0.004\mathrm{(syst)}$, respectively, which are significantly different from zero and consistent with current world averages. Updated Measurement of the Single Top Quark Production Cross Section and $V{tb}$ in the Missing Transverse Energy Plus Jets Topology in $p\bar{p}$ Collisions at $\sqrt{s} = 1.96$ TeV (1410.4909) An updated measurement of the single top quark production cross section is presented using the full data set collected by the Collider Detector at Fermilab (CDF) and corresponding to 9.5 fb${}^{-1}$ of integrated luminosity from proton-antiproton collisions at 1.96 TeV center-of-mass energy. The events selected contain an imbalance in the total transverse energy, jets identified as originating from $b$ quarks, and no identified leptons. The sum of the $s$- and $t$-channel single top quark cross sections is measured to be $3.53_{-1.16}^{+1.25}$ pb and a lower limit on $V_{tb}$ of 0.63 is obtained at the 95% credibility level. These measurements are combined with previously reported CDF results obtained from events with an imbalance in total transverse energy, jets identified as originating from $b$ quarks, and exactly one identified lepton. The combined cross section is measured to be $3.02_{-0.48}^{+0.49}$ pb and a lower limit on $V{tb}$ of 0.84 is obtained at the 95% credibility level. Exclusion of exotic top-like quarks with -4/3 electric charge using jet-charge tagging in single-lepton ttbar events at CDF (1304.4141) CDF Collaboration: T. Aaltonen, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J. A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V. E. Barnes, B. A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K. R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, V. Boisvert, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H. S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, M. Cavalli-Sforza, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, M. A. Ciocci, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C. A. Cox, D. J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. De Barbaro, L. Demortier, M. Deninno, M. d'Errico, F. Devoto, A. Di Canto, B. Di Ruzza, J. R. Dittmann, M. D'Onofrio, S. Donati, M. Dorigo, A. Driutti, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, R. Eusebi, S. Farrington, J. P. Fernández Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, G. Giurgiu, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. González López, I. Gorelov, A. T. Goshaw, K. Goulianos, E. Gramellini, S. Grinstein, C. Grosso-Pilcher, R. C. Group, J. Guimaraes da Costa, S. R. Hahn, J. Y. Han, F. Happacher, K. Hara, M. Hare, R. F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E. J. Jeon, S. Jindariani, M. Jones, K. K. Joo, S. Y. Jun, T. R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D. H. Kim, H. S. Kim, J. E. Kim, M. J. Kim, S. B. Kim, S. H. Kim, Y. J. Kim, Y. K. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D. J. Kong, J. Konigsberg, A. V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhr, M. Kurata, A. T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H. S. Lee, J. S. Lee, S. Leo, S. Leone, J. D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, A. Lucà, D. Lucchesi, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, F. Margaroli, P. Marino, M. Martínez, K. Matera, M. E. Mattson, A. Mazzacane, P. Mazzanti, K. S. McFarland, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M. J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S. Y. Noh, O. Norniella, L. Oakes, S. H. Oh, Y. D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, A. Pranko, F. Prokoshin, F. Ptohos, G. Punzi, N. Ranjan, I. Redondo Fernández, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J. L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W. K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E. E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S. Z. Shalhout, T. Shears, P. F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, P. Sinervo, K. Sliwa, J. R. Smith, F. D. Snider, H. Song, V. Sorin, M. Stancari, R. St. Denis, B. Stelzer, O. Stelzer-Chilton, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vázquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizán, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, A. Warburton, D. Waters, W. C. Wester III, D. Whiteson, A. B. Wicklund, S. Wilbur, H. H. Williams, J. S. Wilson, P. Wilson, B. L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U. K. Yang, Y. C. Yang, W.-M. Yao, G. P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G. B. Yu, I. Yu, A. M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli Oct. 9, 2014 hep-ex We report on a measurement of the top-quark electric charge in ttbar events in which one W boson originating from the top-quark pair decays into leptons and the other into hadrons. The event sample was collected by the CDF II detector in sqrt(s)=1.96 TeV proton-antiproton collisions and corresponds to 5.6 fb^(-1). We find the data to be consistent with the standard model and exclude the existence of an exotic quark with -4/3 electric charge and mass of the conventional top quark at the 99% confidence level. Measurement of the Top-Quark Mass in the All-Hadronic Channel using the full CDF data set (1409.4906) The top-quark mass M_top is measured using top quark-antiquark pairs produced in proton-antiproton collisions at a center-of-mass energy of 1.96 TeV and decaying into a fully hadronic final state. The full data set collected with the CDFII detector at the Fermilab Tevatron Collider, corresponding to an integrated luminosity of 9.3 fb-1, is used. Events are selected that have six to eight jets, at least one of which is identified as having originated from a b quark. In addition, a multivariate algorithm, containing multiple kinematic variables as inputs, is used to discriminate signal events from background events due to QCD multijet production. Templates for the reconstructed top-quark mass are combined in a likelihood fit to measure M_top with a simultaneous calibration of the jet-energy scale. A value of M_top = 175.07+- 1.19(stat)+1.55-1.58(syst) GeV/c^2 is obtained for the top-quark mass. Measurement of differential production cross section for $Z/\gamma^*$ bosons in association with jets in $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV (1409.4359) Sept. 15, 2014 hep-ex Differential cross sections for the production of $Z$ bosons or off-shell photons $\gamma^*$ in association with jets are measured in proton-antiproton collisions at center-of-mass energy $\sqrt{s}=1.96$ TeV using the full data set collected with the Collider Detector at Fermilab in Tevatron Run II, and corresponding to 9.6 fb$^{-1}$ of integrated luminosity. Results include first measurements at CDF of differential cross sections in events with a $Z/\gamma^*$ boson and three or more jets, the inclusive cross section for production of $Z/\gamma^*$ and four or more jets, and cross sections as functions of various angular observables in lower jet-multiplicity final states. Measured cross sections are compared to several theoretical predictions. Studies of high-transverse momentum jet substructure and top quarks produced in 1.96 TeV proton-antiproton collisions (1407.3484) T. Aaltonen, R. Alon, S. Amerio, D. Amidei, A. Anastassov, A. Annovi, J. Antos, G. Apollinari, J.A. Appel, T. Arisawa, A. Artikov, J. Asaadi, W. Ashmanskas, B. Auerbach, A. Aurisano, F. Azfar, W. Badgett, T. Bae, A. Barbaro-Galtieri, V.E. Barnes, B.A. Barnett, P. Barria, P. Bartos, M. Bauce, F. Bedeschi, S. Behari, G. Bellettini, J. Bellinger, D. Benjamin, A. Beretvas, A. Bhatti, K.R. Bland, B. Blumenfeld, A. Bocci, A. Bodek, D. Bortoletto, J. Boudreau, A. Boveia, L. Brigliadori, C. Bromberg, E. Brucken, J. Budagov, H.S. Budd, K. Burkett, G. Busetto, P. Bussey, P. Butti, A. Buzatu, A. Calamba, S. Camarda, M. Campanelli, F. Canelli, B. Carls, D. Carlsmith, R. Carosi, S. Carrillo, B. Casal, M. Casarsa, A. Castro, P. Catastini, D. Cauz, V. Cavaliere, A. Cerri, L. Cerrito, Y.C. Chen, M. Chertok, G. Chiarelli, G. Chlachidze, K. Cho, D. Chokheli, A. Clark, C. Clarke, M.E. Convery, J. Conway, M. Corbo, M. Cordelli, C.A. Cox, D.J. Cox, M. Cremonesi, D. Cruz, J. Cuevas, R. Culbertson, N. d'Ascenzo, M. Datta, P. de Barbaro, L. Demortier, M. Deninno, M. D'Errico, F. Devoto, A. Di Canto, B. Di Ruzza, J.R. Dittmann, S. Donati, M. D'Onofrio, M. Dorigo, A. Driutti, E. Duchovni, K. Ebina, R. Edgar, A. Elagin, R. Erbacher, S. Errede, B. Esham, S. Farrington, J.P. Fernandez Ramos, R. Field, G. Flanagan, R. Forrest, M. Franklin, J.C. Freeman, H. Frisch, Y. Funakoshi, C. Galloni, A.F. Garfinkel, P. Garosi, H. Gerberich, E. Gerchtein, S. Giagu, V. Giakoumopoulou, K. Gibson, C.M. Ginsburg, N. Giokaris, P. Giromini, V. Glagolev, D. Glenzinski, M. Gold, D. Goldin, A. Golossanov, G. Gomez, G. Gomez-Ceballos, M. Goncharov, O. Gonzalez Lopez, I. Gorelov, A.T. Goshaw, K. Goulianos, E. Gramellini, C. Grosso-Pilcher, R.C. Group, J. Guimaraes da Costa, S.R. Hahn, J.Y. Han, F. Happacher, K. Hara, M. Hare, R.F. Harr, T. Harrington-Taber, K. Hatakeyama, C. Hays, J. Heinrich, M. Herndon, A. Hocker, Z. Hong, W. Hopkins, S. Hou, R.E. Hughes, U. Husemann, M. Hussein, J. Huston, G. Introzzi, M. Iori, A. Ivanov, E. James, D. Jang, B. Jayatilaka, E.J. Jeon, S. Jindariani, M. Jones, K.K. Joo, S.Y. Jun, T.R. Junk, M. Kambeitz, T. Kamon, P.E. Karchin, A. Kasmi, Y. Kato, W. Ketchum, J. Keung, B. Kilminster, D.H. Kim, H.S. Kim, J.E. Kim, M.J. Kim, S.H. Kim, S.B. Kim, Y.J. Kim, Y.K. Kim, N. Kimura, M. Kirby, K. Knoepfel, K. Kondo, D.J. Kong, J. Konigsberg, A.V. Kotwal, M. Kreps, J. Kroll, M. Kruse, T. Kuhn, M. Kurata, A.T. Laasanen, S. Lammel, M. Lancaster, K. Lannon, G. Latino, H.S. Lee, J.S. Lee, S. Leo, S. Leone, J.D. Lewis, A. Limosani, E. Lipeles, A. Lister, H. Liu, Q. Liu, T. Liu, S. Lockwitz, A. Loginov, D. Lucchesi, A. Luc, J. Lueck, P. Lujan, P. Lukens, G. Lungu, J. Lys, R. Lysak, R. Madrak, P. Maestro, S. Malik, G. Manca, A. Manousakis-Katsikakis, L. Marchese, F. Margaroli, P. Marino, K. Matera, M.E. Mattson, A. Mazzacane, P. Mazzanti, R. McNulty, A. Mehta, P. Mehtala, C. Mesropian, T. Miao, D. Mietlicki, A. Mitra, H. Miyake, S. Moed, N. Moggi, C.S. Moon, R. Moore, M.J. Morello, A. Mukherjee, Th. Muller, P. Murat, M. Mussini, J. Nachtman, Y. Nagai, J. Naganoma, I. Nakano, A. Napier, J. Nett, C. Neu, T. Nigmanov, L. Nodulman, S.Y. Noh, O. Norniella, L. Oakes, S.H. Oh, Y.D. Oh, I. Oksuzian, T. Okusawa, R. Orava, L. Ortolan, C. Pagliarone, E. Palencia, P. Palni, V. Papadimitriou, W. Parker, G. Pauletta, M. Paulini, C. Paus, G. Perez, T.J. Phillips, G. Piacentino, E. Pianori, J. Pilot, K. Pitts, C. Plager, L. Pondrom, S. Poprocki, K. Potamianos, A. Pranko, F. Prokoshin, F. Ptohos, G. Punzi, I. Redondo Fernandez, P. Renton, M. Rescigno, F. Rimondi, L. Ristori, A. Robson, T. Rodriguez, S. Rolli, M. Ronzani, R. Roser, J.L. Rosner, F. Ruffini, A. Ruiz, J. Russ, V. Rusu, W.K. Sakumoto, Y. Sakurai, L. Santi, K. Sato, V. Saveliev, A. Savoy-Navarro, P. Schlabach, E.E. Schmidt, T. Schwarz, L. Scodellaro, F. Scuri, S. Seidel, Y. Seiya, A. Semenov, F. Sforza, S.Z. Shalhout, T. Shears, P.F. Shepard, M. Shimojima, M. Shochet, I. Shreyber-Tecker, A. Simonenko, P. Sinervo, K. Sliwa, J.R. Smith, F.D. Snider, H. Song, V. Sorin, R. St. Denis, M. Stancari, D. Stentz, J. Strologas, Y. Sudo, A. Sukhanov, I. Suslov, K. Takemasa, Y. Takeuchi, J. Tang, M. Tecchio, P.K. Teng, J. Thom, E. Thomson, V. Thukral, D. Toback, S. Tokar, K. Tollefson, T. Tomura, D. Tonelli, S. Torre, D. Torretta, P. Totaro, M. Trovato, F. Ukegawa, S. Uozumi, F. Vazquez, G. Velev, C. Vellidis, C. Vernieri, M. Vidal, R. Vilar, J. Vizan, M. Vogel, G. Volpi, P. Wagner, R. Wallny, S.M. Wang, D. Waters, W.C. Wester III, D. Whiteson, A.B. Wicklund, S. Wilbur, H.H. Williams, J.S. Wilson, P. Wilson, B.L. Winer, P. Wittich, S. Wolbers, H. Wolfe, T. Wright, X. Wu, Z. Wu, K. Yamamoto, D. Yamato, T. Yang, U.K. Yang, Y.C. Yang, W.-M. Yao, G.P. Yeh, K. Yi, J. Yoh, K. Yorita, T. Yoshida, G.B. Yu, I. Yu, A.M. Zanetti, Y. Zeng, C. Zhou, S. Zucchelli (The CDF Collaboration) Results of a study of the substructure of the highest transverse momentum (pT) jets observed by the CDF collaboration are presented. Events containing at least one jet with pT > 400 GeV/c in a sample corresponding to an integrated luminosity of 5.95 inverse fb, collected in 1.96 TeV proton-antiproton collisions at the Fermilab Tevatron collider, are selected. A study of the jet mass, angularity, and planar-flow distributions is presented, and the measurements are compared with predictions of perturbative quantum chromodynamics. A search for boosted top-quark production is also described, leading to a 95% confidence level upper limit of 38 fb on the production cross section of top quarks with pT > 400 GeV/c.
CommonCrawl
Why does the "Phase Kickback" mechanism work in the Quantum phase estimation algorithm? I've probably read the chapter The quantum Fourier transform and its applications from Nielsen and Chuang (10 th anniversary edition) a couple of times before and this took this thing for granted, but today, when I looked at it again, it doesn't seem obvious to me at all! Here's the circuit diagram for the Phase estimation algorithm: The first register having $t$ qubits is supposedly the "control register". If any of the qubit in the first register is in state $|1\rangle$ the corresponding controlled unitary gate gets applied to the second register. If it is in a state $|0\rangle$ then it doesn't get applied to the second register. If it is in a superposition of the two states $|0\rangle$ and $|1\rangle$ the action of the corresponding unitary on the second register can be determined by "linearity". Notice, that all the gates are acting only on the second register and none on the first register. The first register is supposed to be only a control. However, they show that the final state of the first register as: $$\frac{1}{2^{t/2}}\left(|0\rangle+\text{exp}(2\pi i 2^{t-1}\varphi)|1\rangle)(|0\rangle+\text{exp}(2\pi i 2^{t-2}\varphi)|1\rangle)...(|0\rangle+\text{exp}(2\pi i 2^{0}\varphi)|1\rangle\right)$$ I'm surprised as to why we consider there to be a change in the state of the first register of qubits at all, after the action of the Hadamard gates. The final state of the first register should just have been $$\left(\frac{|0\rangle+|1\rangle}{\sqrt 2}\right)^{\otimes t}$$ isn't it? I say this because the first register is supposed to be a control only. I don't understand how or why the state of the first register should change when acting as a control. I initially thought that considering the exponential factors to be part of the first register qubit states was only a mathematical convenience, but then it didn't make sense. State of a qubit or a system of qubits shouldn't depend upon what is mathematically convenient to us! So, could someone please explain why exactly the state of the first register of qubits changes, even when it simply acts as a "control" for the second register? Is it just a mathematical convenience or is there something deeper? quantum-state quantum-fourier-transform phase-estimation phase-kickback $\begingroup$ Not an answer, but: What would it mean for it to be a 'mathematical convenience', if it didn't represent an actual change in the state? Either the maths accurately describes how quantum states change, or it doesn't. If it doesn't, you have bigger problems than this one example. If you do suppose that the mathematics accurately describes the physics, then the mathematical representation is not just convenient: the states of the (scare quotes ahead) "control" wires do actually change in this subroutine. It's okay to be confused as to why, but first you have to accept that they do change. $\endgroup$ – Niel de Beaudrap Jul 1 '18 at 23:12 $\begingroup$ The maths is exactly that explained in this answer: quantumcomputing.stackexchange.com/a/1791/1837 but that situation is simpler, and perhaps easier to understand $\endgroup$ – DaftWullie Jul 2 '18 at 5:23 $\begingroup$ @NieldeBeaudrap Well, my question is precisely "why" it changes $\endgroup$ – Sanchayan Dutta Jul 2 '18 at 5:40 $\begingroup$ @DaftWullie The math doesn't look hard. Let's just take a simple example of a controlled-$U^{2^0}$ gate. If the control register is in state $|1\rangle$ then it gets applied to $|u\rangle$ to give $\exp(2\pi i 2^0 \phi)|u\rangle$. But, they are considering that exponential factor of $\exp(2\pi i 2^0 \phi)$ to be a factor of the control qubit in the first register i.e. $\exp(2\pi i 2^0 \phi)$ and not of the second register. My question is: why so? $\endgroup$ – Sanchayan Dutta Jul 2 '18 at 6:14 $\begingroup$ cc @NieldeBeaudrap ^ $\endgroup$ – Sanchayan Dutta Jul 2 '18 at 6:15 Imagine you have an eigenvector $|u\rangle$ of $U$. If you have a state such as $|1\rangle|u\rangle$ and you apply controlled-$U$ to it, you get out $e^{i\phi}|1\rangle|u\rangle$. The phase isn't attached to a specific register, it's just an overall multiplicative factor. Now let's use a superposition on the first register: $$ (|0\rangle+|1\rangle)|u\rangle\mapsto |0\rangle|u\rangle+e^{i\phi}|1\rangle|u\rangle $$ You can rewrite this as $$ (|0\rangle+e^{i\phi}|1\rangle)|u\rangle $$ so it appears on the first register, even though it was sort-of created on the second register. (Of course that interpretation isn't entirely true because it was created by a two-qubit gate acting on both qubits). This step is at the heart of many quantum algorithms. Why don't we write $|\Psi\rangle=|0\rangle|u\rangle+|1\rangle(e^{i\phi}|u\rangle)$ and just claim that it is not separable? One can't just claim it, but must show it mathematically. For example, we can take the partial trace over the second qubit, $$ \text{Tr}_B(|\Psi\rangle\langle\Psi|_{AB})=\text{Tr}_B(|0\rangle\langle 0|\otimes |u\rangle\langle u|+|1\rangle\langle 0|\otimes e^{i\phi}|u\rangle\langle u|+|0\rangle\langle 1|\otimes |u\rangle\langle u|e^{-i\phi}+|1\rangle\langle 1|\otimes e^{i\phi}|u\rangle\langle u|e^{-i\phi}) $$ To take the partial trace, we pick a basis to sum over. For simplicity, let's pick $\{|u\rangle,|u^\perp\rangle\}$ where $\langle u|u^\perp\rangle=0$ and $\langle u|(e^{i\phi}|u\rangle=e^{i\phi}$. Then you get $$ \text{Tr}_B(|\Psi\rangle\langle\Psi|_{AB})=|0\rangle\langle 0|+e^{i\phi}|1\rangle\langle 1|+e^{-i\phi}|0\rangle\langle 1|+|1\rangle\langle 1| $$ This is rank 1 (and you can see the phase has appeared on the first register), so the state is not entangled. It is separable. $\begingroup$ My main issue is with the "re-writing" part. Mathematically it is simply a re-arrangement but physically that re-writing can have deep implications. Say, why don't I write it instead as $|0\rangle(|u\rangle) + |1\rangle (e^{i\phi}|u\rangle)$ and just claim that it is not separable into tensor products due to entanglement? Why should that $e^{i\phi}$ factor belong to a state of a qubit in the first register rather than the state of a qubit in the second register? $\endgroup$ – Sanchayan Dutta Jul 2 '18 at 6:35 $\begingroup$ How do you define "entangled"? By any definition, this is not entangled. Try taking the partial trace, for example. Moreover, I guess you don't generally have a problem with removing a global phase from an entire expression, as compared to holding that phase on different components? $\endgroup$ – DaftWullie Jul 2 '18 at 6:40 $\begingroup$ I'm probably having some elementary misconception. Say, I have two qubits, where the first one (qubit $A$) is in state $(|0\rangle)_A$ and the second one (qubit B) is in state $(e^{i\theta }|0\rangle)_B$. Then the composite state is $(|0\rangle)_A\otimes (e^{i\theta}|0\rangle)_B$. Now I indeed have seen it being written as $e^{i\theta}(|0\rangle)_A\otimes (|0\rangle)_B$, but I'm not sure why that should be possible. What's the actual physical state of qubit A and qubit B in this case? Is it $(e^{i\theta}|0\rangle)_A$ & $|0\rangle_B$ or is it $(|0\rangle)_A$ & $(e^{i\theta}|0\rangle)_B$? $\endgroup$ – Sanchayan Dutta Jul 2 '18 at 6:50 $\begingroup$ I guess I do have a problem with shifting around "global phase" like that. I never thought about it before. $\endgroup$ – Sanchayan Dutta Jul 2 '18 at 6:50 $\begingroup$ There is no physical difference. Think about it this way: what experiment would you do to distinguish the two? If there's a physical difference, there must be a way to distinguish them. $\endgroup$ – DaftWullie Jul 2 '18 at 6:55 A first remark This same phenomenon of 'control' qubits changing states in some circumstances also occurs with controlled-NOT gates; in fact, this is the entire basis of eigenvalue estimation. So not only is it possible, it is an important fact about quantum computation that it is possible. It even has a name: a "phase kick", in which the control qubits (or more generally, a control register) incurs relative phases as a result of acting through some operation on some target register.$\def\ket#1{\lvert#1\rangle}$ The reason why this happens Why should this be the case? Basically it comes down to the fact that the standard basis is not actually as important as we sometimes describe it as being. Short version. Only the standard basis states on the control qubits are unaffected. If the control qubit is in a state which is not a standard basis state, it can in principle be changed. Longer version — Consider the Bloch sphere. It is, in the end, a sphere — perfectly symmetric, with no one point being more special than any other, and no one axis more special than any other. In particular, the standard basis is not particularly special. The CNOT operation is in principle a physical operation. To describe it, we often express it in terms of how it affects the standard basis, using the vector representations $$ \ket{00} \to {\scriptstyle \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}}\,, \quad \ket{01} \to {\scriptstyle \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}}\,, \quad \ket{10} \to {\scriptstyle \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}}\,, \quad \ket{11} \to {\scriptstyle \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}}$$ — but this is just a representation. This leads to a specific representation of the CNOT transformation: $$ \mathrm{CNOT} \to {\scriptstyle \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}}\,.$$ and for the sake of brevity we say that those column vectors are the standard basis states on two qubits, and that this matrix is a CNOT matrix. Did you ever do an early university mathematics class, or read a textbook, where it started to emphasise the difference between a linear transformation and matrices — where it was said, for example, that a matrix could represent a linear transformation, but was not the same as a linear transformation? The situation with CNOT in quantum computation is one example of how this distinction is meaningful. The CNOT is a transformation of a physical system, not of column vectors; the standard basis states are just one basis of a physical system, which we conventionally represent by $\{0,1\}$ column vectors. What if we were to choose to represent a different basis — say, the X eigenbasis — by $\{0,1\}$ column vectors, instead? Suppose that we wish to represent $$ \begin{aligned} \ket{++} \to{}& [\, 1 \;\; 0 \;\; 0 \;\; 0 \,]^\dagger\,, \\ \ket{+-} \to{}& [\, 0 \;\; 1 \;\; 0 \;\; 0 \,]^\dagger\,, \\ \ket{-+} \to{}& [\, 0 \;\; 0 \;\; 1 \;\; 0 \,]^\dagger\,, \\ \ket{--} \to{}& [\, 0 \;\; 0 \;\; 0 \;\; 1 \,]^\dagger \,. \end{aligned}$$ This is a perfectly legitimate choice mathematically, and because it is only a notational choice, it doesn't affect the physics — it only affects the way that we would write the physics. It is not uncommon in the literature to do analysis in a way equivalent to this (though it is rare to explicitly write a different convention for column vectors as I have done here). We would have to represent the standard basis vectors by: $$ \ket{00} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}}\,, \quad \ket{01} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \end{bmatrix}}\,, \quad \ket{10} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ 1 \\ -1 \\ -1 \end{bmatrix}}\,, \quad \ket{11} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ -1 \\ -1 \\ 1 \end{bmatrix}}\,.$$ Again, we're using the column vectors on the right only to represent the states on the left. But this change in representation will affect how we want to represent the CNOT gate. A sharp-eyed reader may notice that the vectors which I have written on the right just above are the columns of the usual matrix representation of $H \otimes H$. There is a good reason for this: what this change of representation amounts to is a change of reference frame in which to describe the states of the two qubits. In order to describe $\ket{++} = [\, 1 \;\; 0 \;\; 0 \;\; 0 \,]^\dagger$, $\ket{+-} = [\, 0 \;\; 1 \;\; 0 \;\; 0 \,]^\dagger$, and so forth, we have changed our frame of reference for each qubit by a rotation which is the same as the usual matrix representation of the Hadamard operator — because that same operator interchanges the $X$ and $Z$ observables, by conjugation. This same frame of reference will apply to how we represent the CNOT operation, so in this shifted representation, we would have $$ \begin{aligned} \mathrm{CNOT} \to \tfrac{1}{4}{}\,{\scriptstyle \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} \,\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}\, \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} }\, = \,{\scriptstyle \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}} \end{aligned}$$ which — remembering that the columns now represent $X$ eigenstates — means that the CNOT performs the transformation $$ \begin{aligned} \mathrm{CNOT}\,\ket{++} &= \ket{++} , \\ \mathrm{CNOT}\,\ket{+-} &= \ket{--}, \\ \mathrm{CNOT}\,\ket{-+} &= \ket{-+} , \\ \mathrm{CNOT}\,\ket{--} &= \ket{+-} . \end{aligned} $$ Notice here that it is only the first, 'control' qubits whose state changes; the target is left unchanged. Now, I could have shown this same fact a lot more quickly without all of this talk about changes in reference frame. In introductory courses in quantum computation in computer science, a similar phenomenon might be described without ever mentioning the words 'reference frame'. But I wanted to give you more than a mere calculation. I wanted to draw attention to the fact that a CNOT is in principle not just a matrix; that the standard basis is not a special basis; and that when you strip these things away, it becomes clear that the operation realised by the CNOT clearly has the potential to affects the state of the control qubit, even if the CNOT is the only thing you are doing to your qubits. The very idea that there is a 'control' qubit is one centered on the standard basis, and embeds a prejudice about the states of the qubits that invites us to think of the operation as one-sided. But as a physicist, you should be deeply suspicious of one-sided operations. For every action there is an equal and opposite reaction; and here the apparent one-sidedness of the CNOT on standard basis states is belied by the fact that, for X eigenbasis states, it is the 'target' which unilaterally determines a possible change of state of the 'control'. You wondered whether there was something at play which was only a mathematical convenience, involving a choice of notation. In fact, there is: the way in which we write our states with an emphasis on the standard basis, which may lead you to develop a non-mathematical intuition of the operation only in terms of the standard basis. But change the representation, and that non-mathematical intuition goes away. The same thing which I have sketched for the effect of CNOT on X-eigenbasis states, is also going on in phase estimation, only with a different transformation than CNOT. The 'phase' stored in the 'target' qubit is kicked up to the 'control' qubit, because the target is in an eigenstate of an operation which is being coherently controlled by the first qubit. On the computer science side of quantum computation, it is one of the most celebrated phenomena in the field. It forces us to confront the fact that the standard basis is only special in that it is the one we prefer to describe our data with — but not in how the physics itself behaves. Niel de BeaudrapNiel de Beaudrap I once asked this too, but it is not just a matter of mathematical convenience. The controlled-U is an "entangling" gate. Once there's entanglement, you cannot separate the state into "first register" and "second register". Only think of these registers separately at the beginning, or when there's no entanglement. After there's entanglement, your best bet is to work through the mathematics (matrix multiplications) thoroughly, and you will indeed get the state given by Nielsen and Chuang. $\begingroup$ Trying to upvote the question but need to wait until I have 15 reputation. $\endgroup$ – user2898 Jul 2 '18 at 5:27 $\begingroup$ I can't see any entanglement. The output seems to be separable between the two registers. $\frac{1}{2^{t/2}}\left(|0\rangle+\text{exp}(2\pi i 2^{t-1}\varphi)|1\rangle)(|0\rangle+\text{exp}(2\pi i 2^{t-2}\varphi)|1\rangle)...(|0\rangle+\text{exp}(2\pi i 2^{0}\varphi)|1\rangle\right)$ is state of the first register whereas $|u\rangle$ is the state of the second register. $\endgroup$ – Sanchayan Dutta Jul 2 '18 at 6:10 $\begingroup$ @Blue I do not write it as a full answer because I myself find it difficult to internalize the concept in my mind, anyway this is due to the "Phase Kick-Back" phenomenon, and it is actually also due to the fact that control and target are somewhat entangled. Try and give a read to section 2.2 of Mosca's PhD thesis, it's the best explanation I have found so far. $\endgroup$ – FSic Jul 2 '18 at 8:38 $\begingroup$ @F.Siciliano Okay, thank you. I'll give it a read $\endgroup$ – Sanchayan Dutta Jul 2 '18 at 8:48 Not the answer you're looking for? Browse other questions tagged quantum-state quantum-fourier-transform phase-estimation phase-kickback or ask your own question. How can a CNOT gate change the control qbit (e.g. in the Deutsch Oracle problem)? Bernstein–Vazirani problem in book as exercise How does a single-qubit gate affect other qubits? How to compute the average value $\langle X_1 Z_2\rangle$ for a two-qubit system? Why can the Discrete Fourier Transform be implemented efficiently as a quantum circuit? Quantum algorithm for linear systems of equations (HHL09): Step 1 - Confusion regarding the usage of phase estimation algorithm SWAP gate(s) in the $R(\lambda^{-1})$ step of the HHL circuit for $4\times 4$ systems What is the meaning of the state $|1\rangle-|1\rangle$? How to complete this teleportation circuit? How to create a copy of $|\psi〉$? The control phase gate in Quantum fourier transform and the question it brings up regarding control gates in general Input and output qubit notation in quantum gates How and why does swap test works?
CommonCrawl
International Journal for Equity in Health December 2019 , 18:42 | Cite as Trends and predictors of inequality in childhood stunting in Nepal from 1996 to 2016 Mirak Raj Angdembe Bishnu Prasad Dulal Kreepa Bhattarai Sumit Karn Although decreasing in trend, one-in-three children remain stunted in Nepal and its distribution is unequal among different socioeconomic and geographical subgroups. Thus, it is crucial to assess inequalities in stunting for designing equity focused interventions that target vulnerable groups with higher burden of stunting. This study measures trends and predictors of socioeconomic inequalities in childhood stunting in Nepal. Data from five rounds (1996–2016) of Nepal Demographic and Health Survey, nationally representative cross-sectional surveys, were used. Levels and trends of absolute and relative disparity in stunting between the poorest and the richest wealth quintiles, and among all quintiles were assessed by calculating absolute and relative difference, concentration curve and index. Average marginal effects of predictors on stunting were calculated using probit regression. The concentration index was subsequently decomposed into contributing factors. Even though stunting consistently declined in all wealth quintiles between 1996 and 2016, reduction was relatively higher among the richer quintiles compared to poorer ones. The absolute difference between the poorest and the richest quintile increased from 24.7 in 1996 (64.5% in poorest – 39.8% in richest) to 32.7 percentage points in 2016 (49.2–16.5%). The relative disparity also increased; the ratio of stunting in the poorest to the richest quintile was 1.6 in 1996 and 3.0 in 2016. The concentration index increased (in absolute value) from − 0.078 in 1996 to − 0.147 in 2016 indicating that stunting was disproportionately concentrated in poorer households and socioeconomic inequalities worsened from 1996 to 2016. Decomposition analysis revealed that in 1996, wealth (61%), caste/ethnicity (12%), mother's education (12%) and birth order (9%) were the major contributors to observed socioeconomic inequalities in stunting; while in 2016, wealth (72%), mother's BMI (12%) and birth order (9%) were the major contributors. Despite remarkable improvements in average stunting over the last two decades, substantial socioeconomic inequalities in stunting exists and is determined not only by immediate factors but also by underlying and contextual factors which emphasize the need for coherent actions across different sectors. In addition to reducing inequalities in wealth, nutrition programming should be focused on most disadvantaged subgroups which are prone to both stunting and relative poverty. Concentration index Decomposition Predictors Nepal Nepal demographic and health survey Socioeconomic inequalities Stunting Concentration Index NDHS Nepal Demographic and Health Survey Nepal Family Health Survey Bishnu Prasad Dulal, Kreepa Bhattarai and Sumit Karn contributed equally to this work. Under-nutrition is still a major barrier for child growth and development in developing countries. Low height-for-age or stunting is an important indicator for assessing undernutrition among children. It represents the devastated result of poor nutrition over a long period, in children under-five years [1]. Stunting can happen in the first 1000 days of child's life after conception and is worsened by recurrent and chronic illnesses [2, 3]. Coupled with micronutrient deficiencies, affected children may suffer from irreversible brain damage, impeding their complete developmental potential. Even survivors are more likely to lead diminished lives, have compromised cognitive abilities, reduced school performance, lowered economic productivity and are at a greater risk of nutrition-related chronic diseases later in life [2, 4]. Globally, 150 million children under-five were stunted in 2017, of which, more than half (55%) were Asian [5]. Although undernutrition is decreasing globally, stunting is declining relatively at a slower pace, threatening the achievement of international commitments [6]. At 36% in 2016, stunting is still unacceptability high in Nepal with a slow rate of decline (57% in 1996 and 2001, 49% in 2006, 41% in 2011) [7, 8, 9, 10, 11]. Masked by national average, stunting and severe stunting in some parts of the country are even higher and wide variation between different socioeconomic and geographical subgroups is worrisome. In 2016, stunting was higher in the mountain region1 (47%) where access to health care and other services is particularly poor, Karnali province2 (55%), children born to mothers with no education (46%) and children belonging to households in the poorest wealth quintile (49%) [10]. Childhood stunting is considered to be the best overall indicator of children's well-being that reflects social inequalities [12]. It is associated with a multitude of demographic, socioeconomic and nutritional factors such as child's age, gender, dietary intake, household economic status, mother's education, mother's age and household food insecurity [2, 13]. Nevertheless, food insecurity, mother's education and household income were considered as the most important predictors [2, 14, 15, 16]. Whereas the major contributors to socioeconomic disparities in stunting and their changes over time were household economic status and sanitation, parental education, utilization of health services (antenatal care, delivery at health facility), maternal short stature, child's age, birth order, duration of breastfeeding and ethnicity [17, 18, 19]. The Government of Nepal is committed to achieving the Sustainable Development Goals (SDG) 2030 principled on 'leaving no one behind'. These goals necessitate disaggregation of data by multiple dimensions including wealth [20]. With equity and access as one of the strategic pillars; inclusion and disaggregated data needs are recurring motifs in Nepal's current health sector strategy 2015–2020 [21]. To provide momentum towards universal health coverage as envisioned by the sectoral strategy, an enriched understanding of who and where the disadvantaged and vulnerable children are, is thus important. To this end, efforts have been underway. Nepal joined the 'scaling up nutrition' movement in 2011 to strengthen political commitment and accountability for ending malnutrition. Efforts have also been seen in the form of nutrition specific and sensitive interventions through different sectors under the common framework of Multi-Sector Nutrition Plan [22]. However, these are still relatively recent and may not be widespread enough to reach the most vulnerable and impoverished population [15]. Given that one-in-three children remain stunted and its distribution is unequal among different population subgroups, understanding inequalities in stunting is crucial for designing equity focused interventions. In this context, this study aims to assess the levels and trends in childhood stunting by wealth quintile (a measure of household economic status) in Nepal; use absolute and relative measures including concentration index to capture inequality across all quintiles; and then 'decompose' these inequalities by quantifying the contributions attributable to each predictors and examine their changes over time. To our knowledge this is the first study to investigate drivers of disparities in stunting and their trends in Nepal. We used data from five rounds of Nepal Demographic and Health Surveys (NDHS) conducted in Nepal in 1996 (originally called Nepal Family Health Survey, NFHS), 2001, 2006, 2011 and 2016. The datasets were downloaded with permission from the DHS program. NDHS are nationally representative cross-sectional household surveys that provide data for a wide range of indicators in the areas of population, health and nutrition. They use two-stage or three-stage, systematic cluster random sampling design. Sample size and response rates are presented in Table 1. The NDHS reports can be referred to for further detail [7, 8, 9, 10, 11]. Number of households, response rate, number of children and time of field work by survey year Total households (N) Response rate (%) Total childrena (N) Total childrenb (weighted N) Total childrenc (weighted N) Time of field work Jan-Jun 1996 Feb-Aug 2006 Jun 2016 – Jan 2017 aIn 1996 anthropometric data were collected for under-three years old children; in 2001, 2006, 2011 and 2016 anthropometric data were collected for under-five years old children bSample size used for calculating the quintile specific trends, concentration curves, levels and trends in concentration indices and absolute and relative differences cSample size used for decomposition analysis to explain between-year changes in inequalities comparing 1996 and 2016 NA: Not applicable The 1996 and 2001 surveys did not calculate the wealth index factor scores in original datasets. These are available as separate files. Similarly, the updated anthropometric z-scores based on 2006 World Health Organization's (WHO) child growth standards [23] are also available as separate files for these surveys.. Data from these separate files were thus merged with the datasets of 1996 and 2001. The 1996 survey collected anthropometric data for all children under-three years of age born in the three years before the survey to women interviewed. We used children's recode dataset for all calculations of 1996. Other surveys collected anthropometric measurements for under-five years' de facto children – those who stayed in the household the night before the interview. So except for 1996 survey, we used household member recode datasets which contain information for all children under-five in the household for calculating the quintile specific trends, concentration curves, levels and trends in concentration indices and absolute and relative differences (Weighted N: 1996–3703; 2001–6442; 2006–5258; 2011–2485; 2016–2421). For decomposition analysis of 1996 and 2016, we used children recode dataset for characteristics of children and their mother (Weighted N: 1996–2967; 2016–1588). Outcome variable Compared to underweight and wasting, both of which reflect recent nutritional distress, stunting is a result of chronic nutritional deprivation [24]. Although all three indicators are equally important to measure nutritional imbalance resulting in undernutrition, we analysed stunting among children as outcome to better measure inequalities in long term nutritional progress in Nepal. Stunting was measured using height-for-age z-scores. The WHO child growth standard [23] expresses a child's height in standard deviation units (z-scores) above or below the median height of healthy children in the same age group or in a reference group. Using this standard, we classified children whose height-for-age z-score was below minus two standard deviations (<− 2 SD) from the median of the reference population as short for their age (stunted) or chronically malnourished. The WHO's conceptual framework on childhood stunting [25] explains the context, causes and consequences of stunting and underpins our analysis. The process of stunting begins even before birth. Maternal factors such as short maternal stature, intrauterine growth retardation, short birth spacing and poor nutrition during pre-conception, pregnancy and lactation contributes to stunted growth and development of the child. After birth, breastfeeding practices become important and household factors such as inadequate sanitation affects the risk of infection and morbidities that interfere with growth. Wider contextual factors, among many others, include wealth, education, socio-cultural factors such as caste/ethnicity, and access to health care. Together with wealth and education, caste/ethnicity determines the socioeconomic position of populations placing certain groups at an advantage in terms of access and use of resources while marginalizing others. Differences in exposure and vulnerability to poor health outcomes are linked to people's respective social status [26]. In Nepal, significant disparities in access to health care among people of different caste/ethnic groups exist [27]. Food consumption patterns [28] including complementary feeding practices [29] also vary between different sociocultural groups affecting their nutritional status. In health care access, antenatal care (ANC) visits and delivery in health facility are usually the first points of contact with the health system for most pregnant women. These are critical windows of opportunity for health programs to provide evidence-based interventions likely to prevent stunting in utero and later. Predictors of socioeconomic inequalities We used the conceptual frame work and previously published literatures [13, 19, 30, 31, 32] that have shown strong association between either stunting or height-for-age z-scores and socioeconomic, maternal, child and Water, Sanitation and Hygiene (WASH) factors to guide the selection of our predictor variables. Our selections were also restricted by what was available in the 1996 NDHS dataset. In socioeconomic factors, first we included household wealth index. Income and wealth enables access to better quality foods, health care and other factors that can raise the nutritional status of children [33]. The NDHS calculates wealth index (also called the asset index) composed of a set of variables asked in household questionnaires that describe household assets and utilities. It is a composite measure of a household's cumulative living standard and is used as a proxy for household welfare. Households are given scores based on the number and kinds of consumer goods they own, ranging from a television to a bicycle or car, and housing characteristics such as source of drinking water, toilet facilities, and flooring materials [10]. Constructed using principal component analysis, scores for the first principal component gives the index and individual households are placed on a continuous scale of relative wealth. The quintiles are then constructed with each quintile containing 20% of the population. Second we included caste/ethnicity. The caste variable recoded3 by NDHS was adapted for analysis into following groups: Brahmin/Chhetri, Dalit, Janajati, Muslim, Newar, other terai caste and others. The category 'others' was later omitted during model fitting to minimize errors. Maternal and parental factors included education level of mother and her husband/partner, height (cm), body mass index (BMI), ANC visits and delivery in a health facility. Child characteristics consisted of age (months), birth order, status of breastfeeding and perceived size of baby at birth. WASH factors comprised availability of toilet facilities in household. We used these predictor variables in decomposition analysis to explain between-year changes in inequalities comparing 1996 and 2016. Measurement of socioeconomic inequalities We started with comparison of stunting across different wealth quintiles. We then used both absolute and relative measures of inequality for comprehensive assessment of household economic disparities in stunting, as using either one only can lead to dissimilar inferences about the magnitude and changes [34]. We calculated one absolute (the difference between stunting in the poorest/first and richest/fifth quintiles) and two relative indicators of inequality (the ratio of stunting in the poorest quintile to the richest quintile and the concentration curve/index). The concentration curve and index captures inequality across all wealth quintiles. The concentration curve plots the cumulative proportion of stunting against the cumulative proportion of children, ranked by wealth index, beginning with the poorest, and ending with the richest (x-axis). We constructed concentration curves for 1996 and 2016 to illustrate changes in inequality in stunting between these survey periods and also applied statistical test of dominance [35] between the concentration curves to assess whether differences between curves are significant. To quantify the degree of socioeconomic inequality in stunting we calculated the concentration index which is defined as twice the area between the stunting concentration curve and the line of equality (the diagonal or 45-degree line; see Fig. 2) [35]. The index is expressed in a scale ranging from − 1 to 1; a value of zero represents perfect equality, whereas a value of 1 to − 1 indicates that only the richest or the poorest household bear the burden of stunting. It is negative (positive) when the curve lies above (below) the line of equality, indicating a higher relative burden of stunting among the poor (rich). When we multiply the value of concentration index by 75 we get an estimation of the percentage of stunting to be redistributed from the richer half to the poorer half, in order to reach a distribution of perfect equality and to obtain an index value of zero [36]. In our analysis, since the outcome variable is dichotomous, the bounds of the concentration index are not − 1 and 1 but depend on the mean of the variable. To ensure robustness, we further normalized the standard index estimates by dividing through by 1 minus the mean [37] and presented the results. The concentration index can be written in terms of the covariance between the outcome variable (stunting) and the fractional rank in the socioeconomic distribution (wealth index) as follows: $$ C=\frac{2}{\mu } CO{V}_w\left({y}_i,{R}_i\right) $$ Here, yi refers to the outcome of the ith individual, Ri is the fractional rank of the ith ranked individual in the socioeconomic distribution, while μ is the weighted mean of y, and COVw denotes the weighted covariance. Decomposition of inequalities To identify the contribution of each of the predictors to the observed socioeconomic inequality in stunting we decomposed the concentration index of stunting to reflect the proportional contributions of predictors, together with an unexplained residual component (ε). For any linear additive regression model explaining outcome (y), with a set of predictors (k), their regression coefficients (βk), the intercept (α), the relative contributions of Xk predictors and error term (ε) [35, 38] such as $$ y=\alpha +{\sum}_k{\beta}_k{X}_k+\varepsilon $$ the concentration index for y (i.e. C) can be written as follows: $$ C={\sum}_k\left(\frac{\beta_k{\overline{X}}_k}{\mu}\right){C}_k+\frac{GC_{\varepsilon }}{\mu } $$ Equation (3) shows that the overall inequality in outcome has two components, a deterministic or 'explained' component and an 'unexplained' component; one which cannot be explained by systematic variation in the predictors across wealth groups. In the deterministic component βk is the coefficient from a regression of outcome on predictor k, \( {\overline{X}}_k \) is the mean of predictor Xk, μ is the mean of y and Ck is the concentration index for predictor Xk (defined analogously to C). In the unexplained component, GCε is the generalized concentration index for the error term (ε). Using the explained component, we can calculate the contribution of each predictor to inequality by multiplying the outcome elasticity (impact each predictor has on the outcome) with respect to that predictor and its concentration index (degree of unequal distribution) i.e. \( \left(\frac{\beta_k{\overline{X}}_k}{\mu}\right){C}_k \). Even if the contribution of a predictor is large, it will not have a large contribution to inequality, if it is equally distributed between the rich and the poor. Predictors that are more concentrated among the poor and associated with a higher probability of stunting, or those that are more prevalent among the rich and associated with a lower probability of stunting would lead to inequality. Thus, both the impact of the predictor on the outcome, as well as its distribution by economic status (given by concentration index) determines the contribution of predictors to total inequality. Next, the percentage contribution of each predictor can be estimated by dividing its absolute contribution by the concentration index of the outcome i.e. \( \left(\frac{\beta_k{\overline{X}}_k}{\mu}\right){C}_k/C \). The decomposition method was first introduced for use with linear prediction models [38]. However, in our study the outcome variable is binary and thus requires non-linear statistical techniques. Of the two popular choices – the logit and the probit model that yield probabilities in the range (0,1) and are fitted by maximum likelihood; we used the probit model which has also been used in another study [19]. Because the normal and logistic distributions are similar, the choice between a probit or a logit specification is not important in most cases [35]. $$ {h}_i={\alpha}^m+{\sum}_k{\beta}_k^m{X}_{ki}+{u}_i $$ In a probit model, using marginal or partial effects (dh/dx), which gives the change in predicted probability associated with unit change in predictor variable, allows for dealing with discrete changes from 0 to 1. Thus, this approximation of non-linear relationship using marginal effects restores the mechanism of the decomposition framework in eqs. (2) through (4) [39]. Equation (4) gives the linear approximation of the non-linear estimations, where ui indicates the error generated by the linear approximation used to obtain the marginal effects. This method has been used previously in analysis of health sector inequalities [40, 41]. We calculated average marginal effects from predictions of probit model and a two-tailed P value < 0.05 was considered statistically significant. All the estimates take sampling weights into consideration. We used Stata 15.1® for statistical analyses. Socioeconomic inequalities Stunting has consistently declined in all wealth quintiles between 1996 and 2016. However, the reduction was relatively higher among the richer quintiles compared to the poorer ones. In 1996, stunting was 64.5% (95% CI: 61.4 to 67.7) in the poorest (first) quintile and 39.8% (35.5 to 44.0) in the richest (fifth) quintile (Table 2, Fig. 1). By 2016, stunting dropped in Nepal to 49.2% (44.8 to 53.6) in the poorest quintile, 16.5% (11.7 to 21.3) in the richest quintile, and in between these outcomes in the middle quintiles. The average decline per year during this period was 0.8 percentage points in the poorest quintile compared to 1.2 percentage points in the richest quintile which was the highest rate of reduction among all wealth groups. Trends and estimates for quintile-specific stunting in 1996, 2001, 2006, 2011 and 2016 Stunting (95% confidence intervals) Absolute decline in stunting (percent points) Percentage decline in stunting (%) Average decline per year Total decline First quintile (poorest) 64.5 (61.4 to 67.7) 56 (51.7 to 60.2) Second quintile Third quintile Fourth quintile Fifth quintile (richest) Total stunting Ratio of first to fifth quintile stunting Difference in first and fifth quintile stunting Concentration index (95% confidence intervals) −0.078 (− 0.094 to − 0.061) − 0.083(− 0.095 to − 0.070) − 0.164 (− 0.194 to − 0.134) Standard error of concentration index Normalized concentration indexa − 0.178 Weighted N aNormalization of concentration index involves dividing the concentration index by (1 – proportion stunted) Quintile-specific trends in stunting from 1996 to 2016. (a) Stunting prevalence (b) Absolute difference (Q1-Q5) (c) Relative difference (Q1/Q5). The first quintile (Q1) is the 20% poorest quintile and the fifth quintile (Q5) is the 20% richest. (Weighted N: - 1996: 3703; 2001: 6442; 2006: 5258; 2011: 2485; 2016: 2421) The largest absolute decline in stunting prevalence between 1996 and 2016 occurred in the richest quintile with 23.3 percentage points. The corresponding percentage decline between 1996 and 2016 was 58.5% which took place at the rate of 4.3% per year (Table 2). As a result of the greater absolute decline in stunting in the richest quintile than in the poorest quintile, the difference between these quintiles increased from 24.7 in 1996 to 32.7 in 2016 (Table 2, Fig. 1). The relative disparity between the poorest and the richest groups has also increased; the ratio of stunting in the first quintile to fifth quintile was 1.6 in 1996 and 3.0 in 2016. Figure 2 presents concentration curves for stunting in 1996 and 2016. In 1996, the concentration curve was consistently above the line of equality, which means stunting was disproportionately concentrated in poorer households. In 2016, the curve has shifted even further away from the line of equality, showing increase in degree of inequality over the years. In order to obtain zero inequality, it is necessary to redistribute 5.9% stunting (not stunted) in 1996 and 11% in 2016 from the richer to the poorer half of the population. Here, we can reject the null of non-dominance at the five percentage level of significance using the less strict option within the test described in O'Donnell et al. [35]. This means the 2016 concentration curve dominates (lies above) that of 1996 but the two curves overlap toward the bottom of the wealth distribution. Concentration curve for stunting, 1996 and 2016 (Weighted N: - 1996: 3703; 2016: 2421) In addition, the concentration index increased (in absolute value) from − 0.078 (95% CI: -0.094 to − 0.061) in 1996 to − 0.164 (− 0.194 to − 0.134) in 2011 (Table 2, Fig. 3). It decreased slightly from 2011 to 2016 (− 0.147, 95% CI: -0.179 to − 0.114). The negative values indicate that stunting was disproportionately concentrated in poorer households. We find that socioeconomic inequalities in stunting, as measured by concentration index, worsened from 1996 to 2011, however improved somewhat in 2016. The 'normalized' concentration indices show even stronger evidence of the burden of stunting being more concentrated among the poor (Table 2). Trend in concentration index of stunting with 95% confidence intervals. Curve illustrates point estimates. Shaded areas are 95% confidence intervals. (Weighted N: - 1996: 3703; 2001: 6442; 2006: 5258; 2011: 2485; 2016: 2421) Marginal effects of predictors In mothers having a higher BMI the chances of stunting were significantly reduced in 2016 (Normal, P = 0.042 and Overweight/Obese, P = 0.003). Results were similar in 1996, but statistically non-significant (Table 3). Mothers with 'secondary and above' education had significantly lower probability (P = 0.020) of stunting in 1996. Although results were similar in 2016 i.e. higher education decreasing the chances of stunting, it was not statistically significant. Mothers with short height (< 145 cm) were significantly more likely (P < 0.001) to have stunted children both in 1996 and 2016 with higher effect in 2016. Decomposition of the concentration index for stunting (1996 & 2016) Marginal effect (1996: Meanstunted = 0.553; 2016: Meanstunted = 0.337) Concentration Index (C) (1996: Cstunted = − 0.082;2016: Cstunted = − 0.138) Absolute contribution Percentage contribution (%) Mother's education −0.030 − 0.0010 Secondary and above −0.085* −0.0087 − 11.1 Husband/Partner's education Maternal short stature (< 145 cm) < 0.001 Mother's BMI Normal (18.5 to < 25) Overweight/Obese (≥25) Antenatal care visits Delivery in health facility Children aged 25–59 months Birth order 2nd - 4th − 0.6 Breast fed children Perceived size of baby at birth Caste/Ethnicity Janajati Other terai caste Brahmin/Chhetri Wealth quintile Fifth (Richest) Total sum contribution Residual (Unexplained) Weighted N: - 1996: 2967; 2016: 1588. Average marginal effects and P values were calculated from predictions of probit model. The reference groups used were mothers with no education, husband/partner with no education, mothers who are not short, mothers who are underweight (BMI < 18.5), mothers who did not visit health facility for ANC, mothers who delivered in places others than a health facility, children aged 0–24 months, first born children, children who are not breastfed, perceived size of baby at birth large, caste/ethnicity Dalit, first wealth quintile (poorest), households with toilet facilities. The contributions of predictors to the observed socioeconomic inequality in stunting is calculated according to eq. (3) by using means, concentration indices of predictors and marginal effects. The total contribution is the sum of absolute contributions or the overall concentration index minus the residual. *p < 0.05 Children aged 25–59 months had significantly higher chances (P < 0.001) of stunting and it did not change over the study period (Table 3). Likewise, in 1996 (P < 0.001) as well as 2016 (P = 0.007), higher birth order was significantly associated with increased probability of stunting; effects were similar and were largest for children born fifth or later. Small (perceived) size of baby at birth had significantly higher probability of stunting in 1996 (P < 0.001) as well as 2016 (P = 0.039). Those of Janajati (P < 0.001), Newar (P < 0.001) and Brahmin/Chhetri (P = 0.002) origin were significantly less likely to be stunted in 1996, the effect being largest in Newar. In contrast, such effects were not significant in 2016. The upper two wealth quintiles had significant negative associations with probability of stunting in 1996 (Fourth, P < 0.001 and Fifth, P = 0.001); additionally, in 2016, the middle/third quintile also showed significant negative association (P = 0.002). In both years, largest effects were seen in the richest quintile. In Table 3 which presents decomposition analysis, the concentration index for stunting was − 0.082 in 1996 and − 0.138 in 2016 (Weighted N: 1996–2967; 2016–1588), indicating that stunting was concentrated amongst the poor, more in 2016 than in 1996. The absolute contribution of each predictor was obtained by multiplying its marginal effect by its mean and concentration index, then dividing by the mean of stunting (0.553 in 1996 and 0.337 in 2016). For example, the contribution of being in the richest wealth quintile in 1996 can be computed as: Marginal effect (− 0.145) *Mean (0.148) *Concentration Index (0.852) divided by weighted mean of stunting (0.553) = − 0.0330. Likewise, for 2016, (− 0.200) *(0.144) *(0.856)/ (0.337) = − 0.0732. Positive (negative) contributions of predictors indicate that the total inequality would, ceteris paribus, be lower (higher) if that predictor had no impact on stunting (instead of that reflected in marginal effects) or was equally distributed across the socioeconomic spectrum (instead of concentrated, as mirrored in the concentration indices of predictors) [41]. The corresponding percentage contributions can be calculated by dividing the contribution of each predictor by the total (sum contribution) explained portion of the concentration index (− 0.0797 in 1996 and − 0.1305 in 2016), which is = − 0.0330/− 0.0797 = 41.4% in 1996 and − 0.0732/− 0.1305 = 56.1% in 2016. Therefore, being in the richest wealth quintile contributed to 41.4% of the inequalities in stunting in 1996, which is lower than the contribution of 56.1% in 2016. Similarly, in Table 3, where overall stunting is concentrated amongst the poor (negative concentration index) we can interpret the contributions of individual predictors to the overall inequality as follows. Children born fifth or later (birth order) had an above average probability of stunting (positive marginal effect), were disproportionately concentrated in lower income groups (negative concentration index), and thus contributed − 0.0077 or 9.7% in 1996 and − 0.0101 or 7.7% in 2016 to the total observed inequality in stunting; a decrease in contribution of two percentage points. Since these contributions have the same sign as the overall concentration index, which indicates that stunting was concentrated amongst the poor, these indicate that children born fifth or later were a major pool of poor people with stunting. In perceived size of baby at birth, the 3.6% contribution of babies born small in 2016 was lower than the 7.3% contribution estimated in 1996; a decrease in contribution of 3.7 percentage points. In the same way we can compare contributions across categories of a predictor. For example, in 2016 the contribution of babies born 2nd – 4th in order was only 0.8%; lower than that contributed by those born 5th or later (7.7%). We can interpret a contribution with opposite sign to that on the overall concentration index in the following way. In 1996, children aged 25–59 months had a higher probability of stunting than children aged 0–24 months (the reference group) – a significant positive marginal effect of 0.172. However, since they were disproportionately concentrated in the higher income group (positive concentration index of 0.005), their contribution of 0.0004 was in the opposite direction to the overall inequality observed. Analysing caste/ethnicity in 1996 we find that there is negative association between being a Newar and stunting (negative marginal effect); and with Newar being a rich group (positive concentration index) results in being of Newar origin contributing − 0.0089, or 11.1%, to the total inequality; which is higher than the contribution of just 0.6% in 2016; a decrease of 10.5 percentage points. In mother's education, a negative association between those with secondary level and above education and stunting (negative marginal effect) and their concentration among the richer households (positive concentration index), has led to higher education level of mothers contributing − 0.0087, or 11.0%, to the total inequality. In 2016, this contribution decreased by 6.3 percentage points and was just 4.7%. Maternal short stature was positively associated with stunting (positive marginal effect), and as those with short height were disproportionately poor (negative concentration index) they contributed − 0.0002, or 0.3%, in 1996 and − 0.0077, or 5.9%, in 2016; an increase in contribution of 5.6 percentage points to the total inequality. A higher mother's BMI was negatively associated with stunting (negative marginal effect), and since those who were overweight/obese were disproportionately rich (positive concentration index) they contributed − 0.0007, or 0.9%, in 1996 and − 0.0221, or 16.9%, in 2016; an increase in contribution of 16 percentage points to the total inequality. In the same way we can interpret other decomposition results. We can also compare the overall contribution of a particular predictor by summing up contributions across its categories (reflected in sub-total, Table 3 and Fig. 4). In 1996, the major contributors to the observed socioeconomic inequalities in stunting were: wealth (60.7%), mother's education (12.2%), caste/ethnicity (12.1%) and birth order (9.1%). ANC visits, maternal short stature and mother's BMI played a less important role in terms of inequalities. In 2016, wealth (72.3%), mother's BMI (12.3%) and birth order (8.6%) were the major contributors while mother's education, delivery in health facility and ANC visits did not contribute considerably. Wealth was the biggest contributor to inequality in stunting and its contribution has increased by 11.6 percentage points between 1996 and 2016. The contribution of caste/ethnicity and mother's education has decreased, while that of mother's BMI has increased. The contribution of birth order to the total inequality in stunting has more or less remained constant. The percentage contribution of predictors to the total inequality in stunting in Nepal in 1996 compared to 2016. (Weighted N: - 1996: 2967; 2016: 1588) From 1996 to 2016, Nepal's political and economic climate was marred by an armed conflict, unstable governments, a massive earthquake with subsequent economic blockade by its neighbour, and transition to a new federal governance structure. In spite of such adversities, on an average, Nepal made remarkable ground in maternal and child nutrition indicators [30, 32]. However, with national averages we cannot identify those who are being left behind. Thus, using data from periodic surveys conducted during the 20-year time period, we analysed trends and predictors of inequalities in chronic malnutrition, stunting. Our analysis revealed widening disparity in stunting between the poorest and richest households both in absolute and relative terms. The better off have experienced larger and faster decline in stunting compared to the least affluent. Household economic status was the biggest contributor to inequality in stunting and its contribution has increased. Although stunting has decreased in all wealth quintiles, the absolute gap between the poorest and richest households has increased, with consistently lower (higher) prevalence in richest (poorest) group. The absolute decline in stunting in the richest households was nearly 50% higher than those in the poorest households. Usually, as coverage of health interventions increase, the rich selectively benefit first, then only the poorest will lag behind all other groups [42, 43]. This was evident in our study where absolute disparities were smaller when stunting prevalence was high – possibly because most of the population, including the richest households, had inadequate access to nutrition interventions. But as stunting declined over time, largely driven by the faster decline among the richest households, stunting persisted in the poor with relatively lower rates of decline. Likewise, the relative difference between the poorest and richest stunting prevalence has also increased. By 2016 three times more children were stunted in the poorest quintile as compared with those in the richest quintile. This is expected when prevalence at the national level is decreasing. As stunting decline among the poor is outpaced by that in the rich, the absolute differences become higher which leads to higher ratios or relative inequalities. The concentration curve and index analyses also revealed pro-rich improvements which have become more pronounced over time. Other studies from the region show similar patterns where stunting is declining on one hand while socioeconomic inequality is increasing on the other [18, 19, 44]. A lack of association between the rate of decline in stunting and improvement in equity was also reported in a multi-country analysis of survey data [45]. This unwanted trend in light of the government's explicit commitment to enhance equity [21, 22, 46] questions the strategies and also the effectiveness of nutrition interventions. While government policies and programs are pro-poor and pro-inclusive, these results provide a strong case that stunting is distributed unequally across different socioeconomic subgroups in the population. From decomposition analysis it is clear that most of the inequality was due to household economic status both in 1996 and 2016. In Nepal, improved access to healthcare, improvements in sanitation mainly use of toilets, improvements in (mother's) education and particularly wealth accumulation have been identified as the key drivers of reduction in undernutrition [30, 32]. It is fair to say that nutrition sensitive interventions that address the underlying determinants of nutrition, including poverty and that draw on complementary sectors such as agriculture, health, social protection, early child development, education, and water and sanitation [47, 48] are more important for addressing inequities in stunting. Social safety nets that raise income among vulnerable groups and make them resilient to economic shocks are needed [48]. By redistributing income to the poorest and most vulnerable, they not only have an immediate impact on poverty and inequality [49] but also increase use of health and education services [50]. In this regard, there is much to learn from Bangladesh where pro-poor and women-focused investments in health and social development have contributed to equity gains in child survival [51]. One intervention that stands out is women's microcredit that aimed to reduce poverty by providing poor families with access to small collateral-free loans. They targeted the ultra-poor women, encouraging economic and social empowerment and has had various positive effect on health outcomes [51]. Nevertheless, given the limited health system capacity, we need to first better understand the cost effectiveness and desirability of social safety net programs in Nepal, as in the past they have been found to have unintended negative consequences [52]. Similar studies in the region have noted mother's education to be a prime contributor to inequalities in stunting [18, 19, 53]. Better education will not only contribute to higher household income but may also result in improved health knowledge, greater access to and use of health services, healthier feeding habits and enhanced decision making power within the household [54, 55]. In Nepal, more girls are getting education than ever before. The ratio of girls to boys in primary education increased from 0.56 in 1990 to 1.09 in 2015 [56]. This is a result of efforts to increase public expenditure in education. It started in the early 1990s and was provided further momentum in 2001 when the education for all national plan of action was adopted that brought gender equality and social inclusion to the forefront [30]. As a result, not only rich but poor women may also be getting educated, thus reducing the contribution of mother's education to overall inequality. With 125 caste/ethnic groups [57], Nepal's population is diverse in caste/ethnicity. The contribution of caste/ethnicity to inequality in stunting has decreased, largely due to reduction in contribution of Newar. Stunting in Newar has become lesser pro-rich in 2016 compared to 1996, which means inequality has decreased. In general, Newar have a better health status compared to others. In 2016, they had the highest levels of institutional delivery, demand satisfied for family planning and children fully immunized; and lowest prevalence of anaemia and thinness (BMI < 18.5) among women. They also had the lowest rates of under-five and neonatal mortality rate and total fertility rate. In contrast, Muslim, other Terai caste and Dalit had relatively lower levels of service utilization and poorer health outcomes [58]. During the period of notable poverty reduction in Nepal between 1995 and 2010 when national poverty rates fell from 64 to 25% [59]; among all caste/ethnic groups, Newar consistently had lowest poverty rates, while Dalit stood on the bottommost end of the spectrum [60, 61]. The contribution of mother's BMI to inequality has increased, owing to bigger contribution of overweight/obese mothers to inequality. Both in 1996 and 2016, stunting among children of overweight/obese mothers was disproportionately concentrated in richer households. In fact, a previous study has shown that compared to poorest households, the odds of being overweight and obese were higher among adult women belonging to the richest households [62]. However, compared to 1996, not only there was significant negative association between overweight/obese mothers with probability of stunting but also the proportion of mothers who were overweight/obese increased from just 1.8 to 17.6% in 2016. Other studies based on DHS data also show increasing prevalence of overweight and obesity among women of reproductive age [63, 64]. The rich poor differentials and growing prevalence of overweight/obesity has made mother's BMI a major contributor to inequality. These are indications of changing lifestyle and highlights the need to adjust the existing strategies so that multiple forms of malnutrition can be addressed. Similar to our results, a previous study reported significant association between higher birth order and stunting among children in Nepal [13]. Additionally, height-for-age z-scores, was also found to have significant negative association with higher fertility [65]. As opposed to 1996, the proportion of children born fifth or later has decreased in 2016, from 25.4 to 9.3% in our analysis; but this effect has been nullified by the increase in inequality – stunting concentration becoming even higher among the poor. In Nepal, even though women are having fewer children than before, the decline in total fertility rate in the poorer households is relatively slower compared to that in the richer households. In 2016, the total fertility rate was highest in the poorest wealth quintile at 3.2 compared to 1.6 in the richest quintile [10], which explains the negative shift of the concentration index in 2016 with regards to birth order. This is possibly the reason why contributions of birth order to inequality has remained similar over the years. A higher fertility not only has adverse consequences on mother's health, but also affects the child's birth weight and the mother's ability to feed and care for her child [18], which are possible pathways to stunting. This highlights the need to further improve family planning practices among the poor. Maternal short stature has intergenerational effects as measured by height of the mother on stunting [66]. A shorter height may lead to intrauterine growth retardation [67] and low birth weight [68], causing mortality and impaired child growth [2]. With higher positive effect and pro poor distribution, the contribution of maternal short stature to inequality has increased. Similarly, the concentration of small size of the baby at birth (perceived) has increased among the poor but due to its reduced positive effect on stunting as well as reduced prevalence, its overall contribution to inequality has decreased. A generation of poor women who were stunted as children may have given birth to children with similar impaired growth. Thus, it is crucial to formulate strategies that incorporate a life cycle approach to address nutrition issues during critical periods from conception to adulthood. Integration of nutrition counselling across maternal and child health and family planning interventions is also equally important. However, any new interventions and/or changes will need to be reflected at all levels of the government, especially at the local level, where services are delivered. Improvements in availability of health care services have been identified to be among the key drivers of reduction in undernutrition in Nepal [30, 32, 65]. We examined the contributions of ANC visits and institutional deliveries which have reduced overtime and together accounted for only 3.5% of inequality in 2016. Utilization of these services has improved remarkably over the years. But what is even more noteworthy is the reduction in stunting inequality in these groups of mothers as indicated by the large shift of concentration index towards the null. Yet still the quality of ANC services is a concern that threatens these gains made. Only 17% of health facilities in Nepal offered high quality ANC services in 2015 [69]. Poor-quality care is now a bigger barrier than insufficient access in countries like Nepal where multi stakeholder commitment is critical for quality improvements [70]. While interpreting results, the following caveats will need to be considered. First, although decomposition of concentration index helps identify factors that potentially contribute to socioeconomic inequality, it should not be interpreted as causal. Second, some important variables deemed to have association with stunting but not measured during 1996 survey were omitted. Third, analysis of dichotomous rather than continuous variables arguably weakens the power of statistical tests [71] but in exchange for easy interpretation and presentation of results we preferred the former. Besides, in case of stunting the cut off points are standardized and recognized world over and the sample sizes are large enough to greatly reduce this concern. In a previous study, conclusions did not change radically when height-for-age z-scores were used as outcome instead of stunting [19]. Fourth, the use of wealth indices for the measurement of socioeconomic position in low income countries is criticized for being sensitive to choice of assets and for not reflecting short term economic shocks [72] and food affordability [31]. However, in absence of direct measurements of household wealth, such asset based index is a good proxy for household socioeconomic status. Fifth, the design of the 1996 and 2016 NDHS may not match completely but since we did not conduct pooled analysis, the estimates from respective surveys stand on their own and proportions should be comparable without much concern. By applying standard analytical methods our results shed much needed light on the unequal progress in stunting decline in Nepal and has several implications. Nutrition sensitive interventions delivered via a multi-sectoral approach, including increasing overall investment in health and education is key. Increasing coverage of nutrition specific interventions using a life cycle approach in groups that are being left behind may address intergenerational stunting. Although recent efforts for policy coherence across the sectors [22] and rolling out of community level nutrition programmes is commendable; current strategies require targeted approach to address inequalities. By enhancing outreach services and strengthening the community system of service delivery with a focus on poor, disadvantaged and marginalized groups, coverage could be improved across all segments of the population. Efforts for poverty alleviation should go hand in hand with direct investments for reduction of undernutrition primarily in the poorest segments of the population. It is necessary to tackle structural factors that cause unequal wealth distribution through social protection programmes. They can also serve as a delivery platform for nutrition-specific interventions, potentially increasing their scale, coverage and effectiveness. With the current average annual rate of reduction Nepal will not achieve the World Health Assembly's target to reduce stunting to 24.3% by 2025 [22] which will in turn offset its course towards SDG 2030 target as well. Interventions should be targeted at the subnational level where the factors causing inequalities are rife. The mountain zone [73], Karnali province and Province 2 [74] not only have relatively higher rates of poverty but also lower levels of education attainment among women and higher total fertility rate [10]. Within these regions, rural areas and communities that suffer higher levels of deprivation should be prioritized to reduce inequalities in stunting. In doing so, leadership of the local governments will be critical in the new federal governance structure. The focus should be on community based primary care approaches using community health workers and volunteers who in the past have successfully contributed to reducing equity gaps and improving access [75]. Despite remarkable improvements in average stunting over the past two decades, substantial socioeconomic inequalities in stunting exists in Nepal. Not only health system functions such as controlling fertility and improving maternal health but also factors beyond the scope of health authorities and care delivery system such as asset-based wealth distribution and education are at play in determining inequalities. In the past two decades, the better off segments of the population have experienced larger and faster decline in stunting compared to the least affluent and contribution of household wealth to inequalities has increased. Policy interventions that are tailored to inequality patterns, which reach the most disadvantaged and vulnerable groups, might help to change these trends. Multi-sectoral efforts are needed to target nutrition specific and nutrition sensitive programs including social protection for the poor. Political commitments need to be translated in to large scale programs. Nevertheless, for targeting priority populations, subnational information might be needed, which is an area for future studies to delve into. Topographically, Nepal is divided in to three distinct ecological belts along a south-to-north transect: Terai (or plains) constitutes 23%, Hill constitutes 42% and Mountain constitutes 35% of total land. According to the latest national population census [57], 50% of the population live in the terai, about 42% live in the hills and only about 7 % live in the mountains. Because of the harsh terrain, transportation and communication facilities are very limited in the mountain zone. The new constitution of Nepal which came into effect on September 2015, restructured the country in to a federal democratic republic governed with three levels of government: a federal level, seven provinces and 753 local governments. The seven provinces are (starting from the East): Province 1, Province 2, Province 3, Gandaki Province, Province 5, Karnali Province and Sudur Paschim Province. Province 1, 2, 3 and 5 are yet to be named. Caste/Ethnicity groups as recoded by NDHS: Hill Brahmin, Hill Chhetri, Terai Brahmin/Chhetri, Other Terai caste, Hill Dalit, Terai Dalit, Newar, Hill Janajati, Terai Janajati, Muslim, Other. We would like to thank Nissim Raj Angdembay for reviewing the manuscript for language corrections. We are also grateful to the DHS program for providing the datasets for analysis. The datasets analysed during the current study are available in the DHS repository, https://dhsprogram.com/data/available-datasets.cfm MRA and BPD conceived and designed the study. MRA performed statistical analysis and drafted the manuscript. BPD and KB helped analysing the data and draft the manuscript. SK and BPD helped discuss the results. MRA made critical revisions and finalized the manuscript. All authors read and approved the final manuscript. Not applicable since the study uses secondary data from the NDHS series. World Health Organization. Nutrition Landscape Information System. Geneva: Switzerland World Health Organization; 2010.Google Scholar Black RE, Allen LH, Bhutta ZA, Caulfield LE, de Onis M, Ezzati M, Mathers C, Rivera J. Maternal and child undernutrition: global and regional exposures and health consequences. Lancet. 2008;371:243–60.CrossRefGoogle Scholar Victora CG, Adair L, Fall C, Hallal PC, Martorell R, Richter L, Sachdev HS. Maternal and child undernutrition: consequences for adult health and human capital. Lancet. 2008;371:340–57.CrossRefGoogle Scholar National Planning Commission [Government of Nepal]. Multi-sector Nutrition Plan I (2013-2017). Kathmandu: National Planning Commission. p. 2012.Google Scholar United Nations Children's Fund, World Health Organization, World Bank Group. Levels and trends in child malnutrition: Key findings of the 2018 Edition of the Joint Child Malnutrition Estimates. p. 2018.Google Scholar World Health Organization. Reducing stunting in children: equity considerations for achieving the global nutrition targets 2025. Geneva: World Health Organization; 2018.Google Scholar Ministry of Health [Nepal], New ERA, ORC Macro. Nepal Demographic and Health Survey 2001. Calverton, Maryland, USA: Family Health Division, Ministry of Health; New ERA; and ORC Macro; 2002.Google Scholar Ministry of Health and Population (MOHP) [Nepal], New ERA, ICF International Inc. Nepal Demographic and Health Survey 2011. Kathmandu: Ministry of Health and Population, New ERA, and ICF International, Calverton, Maryland; 2012.Google Scholar Ministry of Health and Population (MOHP) [Nepal], New ERA, Macro International Inc. Nepal Demographic and Health Survey 2006. Kathmandu: Ministry of Health and Population, New ERA, and Macro International Inc; 2007.Google Scholar Ministry of Health Nepal, New ERA, ICF. Nepal Demographic and Health Survey 2016, Kathmandu: Ministry of Health, Nepal. p. 2017.Google Scholar Pradhan A, Aryal RH, Regmi G, Ban B, Govindaswamy P. Nepal Family Health Survey 1996. Kathmandu, Nepal and Calverton, Maryland: Ministry of Health [Nepal], New ERA, and Macro International Inc; 1997.Google Scholar de Onis M, Branca F. Childhood stunting: a global perspective. Maternal & Child Nutrition. 2016;12:12–26.CrossRefGoogle Scholar Tiwari R, Ausman LM, Agho KE. Determinants of stunting and severe stunting among under-fives: evidence from the 2011 Nepal demographic and health survey. BMC Pediatr. 2014;14:239.CrossRefGoogle Scholar Corsi DJ, Mejía-Guevara I, Subramanian SV. Risk factors for chronic undernutrition among children in India: estimating relative importance, population attributable risk and fractions. Soc Sci Med. 2016;157:165–85.CrossRefGoogle Scholar Devkota MD, Adhikari RK, Upreti SR. Stunting in Nepal: looking back, looking ahead. Maternal & Child Nutrition. 2016;12:257–9.CrossRefGoogle Scholar Krishna A, Mejia-Guevara I, McGovern M. Trends in inequalities in child stunting in South Asia; 2017.Google Scholar Hangoma P, Aakvik A, Robberstad B. Explaining changes in child health inequality in the run up to the 2015 millennium development goals (MDGs): the case of Zambia. PLoS One. 2017;12:e0170995.CrossRefGoogle Scholar Huda TM, Hayes A, El Arifeen S, Dibley MJ. Social determinants of inequalities in child undernutrition in Bangladesh: a decomposition analysis. Matern Child Nutr. 2018;14.Google Scholar Rabbani A, Khan A, Yusuf S, Adams A. Trends and determinants of inequities in childhood stunting in Bangladesh from 1996/7 to 2014. Int J Equity Health. 2016;15:186.CrossRefGoogle Scholar United Nations. Transforming our world: the 2030 agenda for sustainable development. New York: United Nations; 2015.Google Scholar Ministry of Health and Population [Government of Nepal]: Nepal Health Sector Strategy 2015-2020. Kathmandu: Ministry of Health and Population 2015.Google Scholar National Planning Commission [Government of Nepal]. Multi-sector Nutrition Plan II (2018-2022). Kathmandu: National Planning Commission. p. 2017.Google Scholar World Health Organization (WHO) Multicentre Growth Reference Study Group. WHO child growth standards: length/height-for-age, weight-for-age, weight-for-length, weight-for-height and body mass index-for-age: methods and development. Geneva: World Health Organization; 2006.Google Scholar World Health Organization. Nutrition landscape information system (NLIS) country profile indicators: interpretation guide. Geneva: World Health Organization; 2010.Google Scholar Stewart CP, Iannotti L, Dewey KG, Michaelsen KF, Onyango AW. Contextualising complementary feeding in a broader framework for stunting prevention. Matern Child Nutr. 2013;9(Suppl 2):27–45.CrossRefGoogle Scholar Solar O, Irwin A. A conceptual framework for action on the social determinants of health. Social determinants of health discussion paper 2 (policy and practice). Geneva: World Health Organization; 2011.Google Scholar Pandey JP, Dhakal MR, Karki S, Poudel P, Pradhan MS. Maternal and child health in Nepal: the effects of caste, ethnicity, and regional identity: further analysis of the 2011 Nepal demographic and health survey. Nepal Ministry of Health and Population, New ERA, and ICF International: Calverton, Maryland, USA; 2013.Google Scholar United States Agency for International Development [Global Health Technical Assistance Project]. In: Adhikari RK, editor. Food utilization practices, beliefs and taboos in Nepal: An overview. Washington, DC: QED Group, LLC, with CAMRIS International and Social & Scientific Systems, Inc; 2010.Google Scholar Na M, Aguayo VM, Arimond M, Dahal P, Lamichhane B, Pokharel R, Chitekwe S, Stewart CP. Trends and predictors of appropriate complementary feeding practices in Nepal: An analysis of national household survey data collected between 2001 and 2014. Matern Child Nutr. 2018;14 Suppl 4:e12564.CrossRefGoogle Scholar Cunningham K, Headey D, Singh A, Karmacharya C, Pandey Rana P. Maternal and child nutrition in Nepal: examining drivers of progress from the mid-1990s to. Global Food Security. 2010s;13:30–7.CrossRefGoogle Scholar Headey D. Developmental drivers of nutritional change: a cross-country analysis. World Dev. 2013;42:76–88.CrossRefGoogle Scholar Headey DD, Hoddinott J. Understanding the rapid reduction of undernutrition in Nepal, 2001-2011. PLoS One. 2015;10:e0145738.CrossRefGoogle Scholar Hong R, Banta JE, Betancourt JA. Relationship between household wealth inequality and chronic childhood under-nutrition in Bangladesh. Int J Equity Health. 2006;5:15.CrossRefGoogle Scholar Houweling TA, Kunst AE, Huisman M, Mackenbach JP. Using relative and absolute measures for monitoring health inequalities: experiences from cross-national analyses on maternal and child health. Int J Equity Health. 2007;6:15.CrossRefGoogle Scholar O'Donnell O, Doorslaer EV, Wagstaff A, Lindelow M. Analyzing health equity using household survey data: a guide to techniques and their implementation. Washington, DC: World Bank Publications; 2008.Google Scholar Koolman X, van Doorslaer E. On the interpretation of a concentration index of inequality. Health Econ. 2004;13:649–56.CrossRefGoogle Scholar Wagstaff A. The bounds of the concentration index when the variable of interest is binary, with an application to immunization inequality. Health Econ. 2005;14:429–32.CrossRefGoogle Scholar Wagstaff A, van Doorslaer E, Watanabe N. On decomposing the causes of health sector inequalities with an application to malnutrition inequalities in Vietnam. J Econ. 2003;112:207–23.CrossRefGoogle Scholar van Doorslaer E, Koolman X, Jones AM. Explaining income-related inequalities in doctor utilisation in Europe. Health Econ. 2004;13:629–47.CrossRefGoogle Scholar Sozmen K, Baydur H, Simsek H, Unal B. Decomposing socioeconomic inequalities in self assessed health in Turkey. Int J Equity Health. 2012;11:73.CrossRefGoogle Scholar Yiengprugsawan V, Lim LL, Carmichael GA, Sidorenko A, Sleigh AC. Measuring and decomposing inequity in self-reported morbidity and self-assessed health in Thailand. Int J Equity Health. 2007;6:23.CrossRefGoogle Scholar Victora CG, Joseph G, Silva ICM, Maia FS, Vaughan JP, Barros FC, Barros AJD. The inverse equity hypothesis: analyses of institutional deliveries in 286 National Surveys. Am J Public Health. 2018;108:464–71.CrossRefGoogle Scholar Victora CG, Vaughan JP, Barros FC, Silva AC, Tomasi E. Explaining trends in inequities: evidence from Brazilian child health studies. Lancet. 2000;356:1093–8.CrossRefGoogle Scholar Subramanyam MA, Kawachi I, Berkman LF, Subramanian SV. Socioeconomic inequalities in childhood undernutrition in India: analyzing trends between 1992 and 2005. PLoS One. 2010;5:e11392.CrossRefGoogle Scholar Restrepo-Mendez MC, Barros AJ, Black RE, Victora CG. Time trends in socio-economic inequalities in stunting prevalence: analyses of repeated national surveys. Public Health Nutr. 2015;18:2097–104.CrossRefGoogle Scholar National Planning Commission [Government of Nepal]: Fourteen Development Plan (2016/17–2018/19) Kathmandu, Nepal; 2016.Google Scholar Black RE, Victora CG, Walker SP, Bhutta ZA, Christian P, de Onis M, Ezzati M, Grantham-McGregor S, Katz J, Martorell R, Uauy R. Maternal and child undernutrition and overweight in low-income and middle-income countries. Lancet. 2013;382:427–51.CrossRefGoogle Scholar Ruel MT, Alderman H. Nutrition-sensitive interventions and programmes: how can they help to accelerate progress in improving maternal and child nutrition? Lancet. 2013;382:536–51.CrossRefGoogle Scholar Grosh M, Del Ninno C, Tesliuc E, Ouerghi A. For protection and promotion: the design and implementation of eff ective safety nets. Washington, DC: World Bank; 2008.CrossRefGoogle Scholar Lagarde M, Haines A, Palmer N. Conditional cash transfers for improving uptake of health interventions in low- and middle-income countries: a systematic review. Jama. 2007;298:1900–10.CrossRefGoogle Scholar Adams AM, Rabbani A, Ahmed S, Mahmood SS, Al-Sabir A, Rashid SF, Evans TG. Explaining equity gains in child survival in Bangladesh: scale, speed, and selectivity in health and development. Lancet. 2013;382:2027–37.CrossRefGoogle Scholar Leroy JL, Gadsden P, Rodriguez-Ramirez S, de Cossio TG. Cash and in-kind transfers in poor rural communities in Mexico increase household fruit, vegetable, and micronutrient consumption but also lead to excess energy consumption. J Nutr. 2010;140:612–7.CrossRefGoogle Scholar Kumar A, Kumari D, Singh A. Increasing socioeconomic inequality in childhood undernutrition in urban India: trends between 1992-93, 1998-99 and 2005-06. Health Policy Plan. 2015;30:1003–16.CrossRefGoogle Scholar Aslam M, Kingdon GG. Parental education and child health—understanding the pathways of impact in Pakistan. World Dev. 2012;40:2014–32.CrossRefGoogle Scholar Miller JE, Rodgers YV. Mother's education and children's nutritional status: new evidence from Cambodia. Asian Development Review. 2009;26:131–65.Google Scholar National Planning Commission [Government of Nepal]. Nepal and the Millennium Development Goals: Final Status Report 2000-2015, Kathmandu: National Planning Commission. p. 2016.Google Scholar Central Bureau of Statistics [Nepal]. National Population and Housing Census 2011 (National Report). vol. 1. Kathmandu: National Planning Commission Secretariat, Central Bureau of Statistics; 2012.Google Scholar Ministry of Health and Population [Nepal]. Mind the gap: Policy brief. Kathmandu: Ministry of Health and Population; 2018.Google Scholar Uematsu H, Shidiq AR, Tiwari S. Trends and Drivers of Poverty Reduction in Nepal: A Historical Perspective: World Bank Group [Poverty and Equity Global Practice Group]; 2016.Google Scholar Lawyers' Association for Human Rights of Nepalese Indigenous Peoples (LAHURNIP), The International Work Group for Indigenous Affairs (IWGIA). A study on the Socio-Economic Status of Indigeneous Peoples in Nepal. Kathmandu: LAHURNIP and IWGIA; 2014.Google Scholar DFID Nepal, The World Bank. "Unequal citizens" gender, caste and ethnic exclusion in Nepal, summary. DFID Nepal and The World Bank: Kathmandu; 2006.Google Scholar Bishwajit G. Household wealth status and overweight and obesity among adult women in Bangladesh and Nepal. Obes Sci Pract. 2017;3:185–92.CrossRefGoogle Scholar Amugsi DA, Dimbuene ZT, Mberu B, Muthuri S, Ezeh AC. Prevalence and time trends in overweight and obesity among urban women: an analysis of demographic and health surveys data from 24 African countries, 1991-2014. BMJ Open. 2017;7:e017344.CrossRefGoogle Scholar Chowdhury MAB, Adnan MM, Hassan MZ. Trends, prevalence and risk factors of overweight and obesity among women of reproductive age in Bangladesh: a pooled analysis of five national cross-sectional surveys. BMJ Open. 2018;8:e018468.Google Scholar Headey D, Hoddinott J, Park S. Drivers of nutritional change in four south Asian countries: a dynamic observational analysis. Maternal & Child Nutrition. 2016;12:210–8.CrossRefGoogle Scholar Martorell R, Zongrone A. Intergenerational influences on child growth and undernutrition. Paediatr Perinat Epidemiol. 2012;26(Suppl 1):302–14.CrossRefGoogle Scholar Kramer MS. The epidemiology of adverse pregnancy outcomes: an overview. J Nutr. 2003;133:1592s–6s.CrossRefGoogle Scholar Maternal anthropometry and pregnancy outcomes. A WHO Collaborative Study. Bull World Health Organ. 1995;73 Suppl:1–98.Google Scholar Acharya S, Sharma S, Dulal B, Aryal K. Quality of Care and Client Satisfaction with Maternal Health Services in Nepal: Further Analysis of the 2015 Nepal Health Facility Survey. DHS Further Analysis Reports No. 112. Rockville, Maryland, USA: ICF; 2018.Google Scholar Kruk ME, Gage AD, Arsenault C, Jordan K, Leslie HH, Roder-DeWan S, Adeyi O, Barker P, Daelmans B, Doubova SV, et al. High-quality health systems in the sustainable development goals era: time for a revolution. Lancet Glob Health.Google Scholar Royston P, Altman DG, Sauerbrei W. Dichotomizing continuous predictors in multiple regression: a bad idea. Stat Med. 2006;25:127–41.CrossRefGoogle Scholar Howe LD, Hargreaves JR, Huttly SR. Issues in the construction of wealth indices for the measurement of socio-economic position in low-income countries. Emerg Themes Epidemiol. 2008;5:3.CrossRefGoogle Scholar Asian Development Bank: Country Poverty Analysis Nepal Manila, Philippines Asian Development Bank; 2013.Google Scholar National Planning Commission [Government of Nepal]. Nepal's Multidimensional Poverty Index: Analysis Towards Action. Kathmandu: National Planning Commission; 2018.Google Scholar Bhutta ZA, Bang A, Afsana K, Gyawali B, Mirzazada S, Jayatissa R. Rethinking community based strategies to tackle health inequities in South Asia. Bmj. 2018;363:k4884.CrossRefGoogle Scholar 1.HERD InternationalKathmanduNepal 2.Social Development and Promotion CentreLalitpurNepal 3.South Asian Infant Feeding Research Network – NepalKathmanduNepal Angdembe, M.R., Dulal, B.P., Bhattarai, K. et al. Int J Equity Health (2019) 18: 42. https://doi.org/10.1186/s12939-019-0944-z Received 24 October 2018 Accepted 25 February 2019
CommonCrawl
Genetics Selection Evolution Selection for feed efficiency in Atlantic salmon using individual indicator traits based on stable isotope profiling Hanne Dvergedal ORCID: orcid.org/0000-0003-2552-55001, Jørgen Ødegård1,2, Margareth Øverland1, Liv Torunn Mydland1 & Gunnar Klemetsdal1 Genetics Selection Evolution volume 51, Article number: 13 (2019) Cite this article We used stable isotope profiling (15N and 13C) to obtain indicator phenotypes for feed efficiency in aquaculture. Our objectives were to (1) examine whether atom percent of stable isotopes of nitrogen and carbon can explain more of the variation in feed conversion ratio than growth alone, and (2) estimate the heritabilities of and genetic correlations between feed efficiency, growth and indicator traits as functions of nitrogen and carbon metabolism in various tissues. A 12-day experiment was conducted with 2281 Atlantic salmon parr, with an average initial weight of 21.8 g, from 23 full-sib families that were allocated to 46 family tanks and fed an experimental diet enriched with 15N and 13C. Using leave-one-out cross-validation, as much as 79% of the between-tank variation in feed conversion ratio was explained by growth, indicator traits, and sampling day, compared to 62% that was explained by growth and sampling day alone. The ratio of tissue metabolism, estimated by a change in isotope fractions relative to body growth, was used as an individual indicator for feed efficiency. For these indicator ratio traits, the estimated genetic correlation to feed conversion ratio approached unity but their heritabilities were low (0.06 to 0.11). These results indicate that feed-efficient fish are characterized by allocating a high fraction of their metabolism to growth. Among the isotope indicator traits, carbon metabolism in the liver had the closest estimated genetic correlation with feed conversion ratio on a tank level (− 0.9) but a low estimated genetic correlation with individually recorded feed efficiency indicator ratio traits. The underlying determinants of these correlations are largely unknown. Our findings show that the use of indicator ratio traits to assess individual feed efficiency in Atlantic salmon has great prospects in selection programs. Given that large quantities of feeds with contrasting isotope profiles of carbon and/or nitrogen can be produced cost-effectively, the use of stable isotopes to monitor nitrogen and carbon metabolism in various tissues has potential for large-scale recording of individual feed efficiency traits, without requiring individual feed intake to be recorded. The steadily growing human population increases the demand for protein resources from both the livestock and aquaculture industries. In 2050, the number of mouths to feed is expected to reach ~ 9 billion [1]. In the near future, livestock and aquaculture production will be in competition with direct human consumption for many of the same protein resources and, therefore, efficiency must be increased. Selective breeding is, and has for several decades, been an important tool to improve feed efficiency in both livestock and farmed fish [2,3,4,5,6]. Feed efficiency can be defined as feed conversion ratio (FCR), which is the amount of feed consumed per unit growth, or alternatively, by its inverse, the feed efficiency ratio (FER), i.e., growth per unit of feed consumed [7]. Selective breeding for improved feed efficiency assumes that both individual growth and individual feed intake can be routinely recorded on a large number of individuals. In aquaculture, recording of individual growth rate is easily attainable, and it has been the major trait in breeding schemes of Atlantic salmon since the 1970s [8]. Various methods for recording individual feed intake have been proposed such as X-radiography, where generally radio-opaque ballotini glass beads are mixed into the feed, fish are X-rayed, and the number of pellets eaten is counted [9,10,11,12]. Video recording is another method for feed intake recording [13, 14], with manual feeding of pellets one by one and retrospective identification of individual fish from video analysis. However, since sib-testing of Atlantic salmon is carried out in large sea-cage units and since fish are communally fed with feed dispersed into the water, large-scale recording of individual feed intake with these methods is difficult to implement in selective breeding programs of Atlantic salmon. Hence, the first option in selective breeding for improved FCR has been to rely on selection for traits such as growth rate [15, 16], which has been shown to improve feed retention ratio and FCR [3, 16,17,18,19] because of the generally accepted high genetic correlation between FCR and growth rate, ranging from 0.63 to 0.99 in rainbow trout (Oncorhynchus mykiss) [20]. The effect of increased growth rate on feed efficiency is through reducing maintenance requirements per unit of growth produced, mainly by reducing time to slaughter. Kause et al. [12] proposed to add information from indicator traits such as the percentage of muscle lipid to enhance the genetic progress in feed efficiency, which could be an alternative to recording feed intake. In our study, we examined the potential use of stable isotopes to assess feed efficiency traits in Atlantic salmon, with the objective to establish indicator phenotypes that explain more of the genetic variation in feed efficiency than growth alone. McCarthy et al. [9] identified individual variation in protein metabolism, with feed efficient fish having a lower protein degradation for the same level of feed eaten than inefficient fish. The potential use of feeding stable isotope such as 15N to fish to assess individual protein metabolism was investigated in a previous study [21]. In this study, fish were fed a standard diet (low in 15N) followed by a 15N-enriched diet with various inclusion levels, which resulted in isotope profile changes of body nitrogen (protein metabolism), which is closely related to body growth. Using protein-bound 15N enrichment, significant correlations between relative weight gain and protein metabolism were found in muscle (r = 0.31–0.98) and in liver (r = 0.59–0.94) [21]. This study also found that not all individual variation in protein metabolism was explained by growth. Isotope profiles can be recorded individually, in contrast to the challenge of recording feed intake and feed efficiency at the individual level. If feed efficiency can be accurately predicted by atom percentages (atom%) of nitrogen and carbon stable isotopes, individual isotope profiles could be used for more direct selection for improved feed efficiency. However, first it is necessary to validate the method in an experiment in which both isotope profiles and feed efficiency are recorded and estimate associated genetic parameters, i.e., in family material. This requires a large-scale experiment, in which families are kept in separate (replicate) tanks, and feed consumption and growth are monitored at the tank level. Dvergedal et al. [21] reported a curvilinear increase in the level of isotopes in tissue over time, with the atom% reaching an asymptote when fish were fed until saturation, i.e., all fish will eventually approach equilibrium isotopic levels, reflecting that of the feed. This implies that length of the experiment is crucial for recording individual variation in metabolism, since individual variation in nitrogen and carbon metabolism can be detected only prior to the point when the fish are expected to be in equilibrium with the isotopic level in the feed. In this paper, we report the results of a large-scale experiment, in which families were kept separate in replicate tanks, growth and isotope profiles were recorded at the individual level, and feed consumption and FCR were recorded at the tank level. Feed was labelled with both 15N and 13C stable isotopes. One objective was to examine whether the atom% of stable isotopes of nitrogen and carbon can explain more of the variation in FCR than growth alone, i.e. to explore the potential of using indicator traits in selective breeding for improved feed efficiency in Atlantic salmon. Another objective was to estimate the heritabilities of and genetic correlations between feed efficiency, growth and indicator traits, as functions of nitrogen and carbon metabolism in various tissues. Fish and housing The experiment included 23 full-sib families (offspring of 23 dams and 22 sires) of Atlantic salmon (Salmo salar) from AquaGen's breeding population. To ensure clearly contrasted family groups with respect to growth potential and, potentially, feed efficiency, the parents of the families were selected for high/low estimated breeding values for growth in seawater, although the experiment was conducted in freshwater. From the eyed egg stage until the start of the experiment, all families were communally reared in a single tank. Before pit-tagging, 15 fish were individually weighed to establish whether they were ready for tagging. The fish were pit-tagged with a 2 × 12 mm unique glass tag (RFID Solutions, Hafrsfjord, Norway) and a fin-clip was collected for genotyping. All fish were genotyped using AquaGen's custom Axiom®SNP genotyping array from Thermo Fisher Scientific (San Diego, CA, USA), which includes 56,177 single-nucleotide polymorphisms (SNPs). Prior to the experiment, the parentage of each individual fish was established using genomic relationship likelihood for parentage assignment [22]. Based on parentage assignment, 100 family members were identified for each of the 23 families used in the experiment. These fish were randomly allocated to family tanks with 50 fish per tank and two tanks per family, except for nine tanks in which the number of fish varied between 42 and 54, due to some mortality prior to the start of the experiment and to a larger number in one tank because of a counting mistake. A single fish was allocated to an incorrect family tank but it was later identified. In total, 2281 fish were included in the experiment. The tanks, each with a 270-L capacity, were supplied with recirculated fresh water, at a flow rate of 7 to 8 L.min−1, and the fish were kept under 24 h light regime, with an average temperature of 14.5 °C. Dissolved oxygen was measured daily and maintained above 8 mg.L−1 in the outlet water (Handy Delta, OxyGuard® AS, Farum, Denmark). Dietary treatment and feeding A labelled diet with the stable isotopes 15N and 13C, with inclusion levels of 2% and 1% respectively, was fed during the experimental period of 12 days. Due to the large variation in growth rate and thus in the rate of inclusion of new nutrients among families, a pre-defined period of 12 days was set to feed the labelled feed, such that an equilibrium was not reached in any of the families. Termination of the experiment and tissue sampling were done over a 5-day period with different tanks being sampled each day, i.e., the dietary switch was done according to the pre-defined termination day of the tank. The formulation and analysed chemical composition of the diet are in Table 1. The diet was produced at the feed laboratory of the Norwegian University of Life Sciences, Aas, Norway, as explained by Dvergedal et al. [21]. The fish were fed twice daily (07:00 and 15:00) for a period of 1 h, by automatic belt feeders. The feeding level equalled 10% in excess, based on the level of uneaten feed. Registrations of uneaten feed and calculations of feed intake were performed according to Helland et al. [23]. The daily feed intake per tank was calculated by first collecting the waste feed on a wedge wire screen [24] and correcting the total waste feed for leasing losses. As explained by Shomorin et al. [24], the wedge wire is placed at an inclined position in the outlet water column of the tank. The design of the screen ensures efficient drainage so that uneaten feed that is trapped on the screen is exposed minimally to water. Then, the difference between total fed feed and total uneaten feed was calculated as g dry matter intake, after drying the uneaten feed at 105 °C overnight. Table 1 Formulation and analysed contenta of the experimental diet Sampling was carried out over 5 days, about 10 tanks were sampled each day, i.e. ~ 500 fish daily. Fish were anesthetized with metacaine (MS-222TM; 1 g.L−1 water) and killed with a sharp blow to the head prior to dissection. Whole body weight and length were recorded for all fish, and tissue samples from muscle, liver, and adipose were collected in a cryotube, snap-frozen in liquid nitrogen and stored at − 20 °C until stable isotope analysis. Tissue sampling was standardized; muscle was sampled in the front area of the dorsal fin (1 × 1 cm cube), the liver was divided into four small pieces, and adipose tissue was sampled from the fat that was deposited around the gut between the pyloric ceca and the distal intestine. The feed was dried and ground prior to analysis, and analyses were performed in duplicate for dry matter by drying to a constant weight at 104 °C, for ash by combustion at 550 °C, for crude protein by Kjeldahl nitrogen × 6.25 according to Commission Regulation (EC) No 152/2009, and for starch as described in McCleary et al. [25]. Lipid was determined after extraction with petroleum ether and acetone (70/30) on an accelerated solvent extractor (ASE 200) (Dionex Corp, Sunnyvale, CA, USA), while gross energy was established with a PARR 1281 Adiabatic bomb calorimeter (Parr Instruments, Moline, IL, USA) according to ISO 9831. Amino acids were analysed according to Commission Regulation (EC) No 152/2009, for all amino acids except tryptophan, on a Biochrom 30 amino acid analyser (Biochrom Ltd,. Cambridge, UK). Tryptophan was analysed according to Commission Regulation (EC) No 152/2009 with a Dionex Ultimate 3000 HPLC system (Dionex Softron GmbH, Germering, Germany) and a Shimadzu RF-535 fluorescence detector (Shimadzu Corporation, Kyoto, Japan). Stable isotope analysis Tissue samples were freeze-dried and homogenized, and samples of approximately 1 mg were weighed into small tin capsules (8 x 5 mm, Elemental Microanalysis, Devon, UK). Samples were analysed for N- and C-isotope compositions using a Nu Horizon isotope-ratio mass spectrometer (IRMS) (Nu Instruments, Wrexham, UK) coupled to a Eurovector element analyser (EA) 3028 (Eurovector S.p.A, Redavalle, Italy) at the Institute for Energy Technology (Kjeller, Norway). Analysed contents of 15N and 13C in the diet are in Table 1. Isotopic signatures were reported as \(\delta\) values, and Atom% was calculated as follows (taking 15N as an example) [26]: $${\text{Atom\% }}^{15} {\text{N}} = \left( {\frac{{\left( {\delta^{15} {\text{N}}_{Sample} + 1000} \right)}}{{\left( {\delta^{15} {\text{N}}_{Sample} + 1000 + \left( {\frac{1000}{{\delta^{15} {\text{N}}_{Standard} }}} \right)} \right)}}} \right) 100,$$ where \(\delta^{15} {\hbox{N}}_{Sample}\) (\(\delta^{13} {\text{C}}_{Sample}\)) and \(\delta^{15} {\text{N}}_{Standard}\) \((\delta^{13} {\text{C}}_{Standard}\)) are the proportion of 15N divided by the proportion of 14N in the sample and in the reference standard (air for nitrogen; \(\delta^{15} {\text{N}}_{Standard}\) = 0.003676 [27], and Vienna Pee Dee Belemnite for carbon (VPDB); \(\delta^{13} {\text{C}}_{Standard}\) = 0.0112372 [28]). The atom% 15N and 13C in excess (APE) after feeding with enriched feed is proportional to the fraction of newly deposited amino acids in the tissue, resulting from both tissue growth and replacement of previously deposited nitrogen and carbon, denoted as metabolism. Atom% 15N (13C) in excess is the total atom% 15N (13C) in the sample adjusted for the initial isotope percentage in the sample (IA%). Initial isotope profile was accounted for in the calculations of individual feed conversion ratio (IFCR) and of individual feed efficiency ratio (IFER) (described in the next paragraph). Prior to the experiment IA% was assessed by using 20 randomly sampled fish from the experimental population. The 15N average and standard deviations were 0.370 ± 0.0001 in muscle and 0.370 ± 0.0003 in liver. Corresponding values for 13C in muscle, liver, and adipose tissue were 1.087 ± 0.0005, 1.086 ± 0.0007 and 1.082 ± 0.0003, respectively. Calibration of 15N and 13C was performed against international certified reference materials and internal standards. The internal standard IFE Trout and USGS-41 were analysed as unknowns, and certified standards such as USGS-41 (certified value), IAEA (International Atomic Energy Agency) N-1, USGS-24, Isolife P10501 and IAEA 311 were used to define the calibration curve. Three calibration standards (USGS-41, USGS-24, and Isolife P10501) were analysed in each sequence, with ~ 60 samples per sequence. In addition, IAEA 303B (\(\delta\) 13CVPDB: 466 ± 3) was analysed on multiple occasions to verify the linearity of \(\delta\) 13CVPDB measurements above the Isolife P10501 standard. The \(\delta\) 15N composition of IFE trout was calibrated using a two-point calibration curve using IAEA 311 and IAEA-N-1 standards. The \(\delta\) 13C composition of IFE trout was calibrated against the USGS-24 standard. The average \(\delta\) 15N in IFE trout was 11.60‰ with a standard deviation of 0.20 and, correspondingly, for \(\delta\) 13C the average was − 20.22‰ with a standard deviation of 0.19. The corresponding \(\delta\) 15N values for samples analysed according to IAEA 311 were 4693 ± < 5.0‰, and for \(\delta\) 13C values according to USGS-24 the values were − 16.05 ± < 0.25‰. Phenotypes analysed When entering the tank, the initial weight of each fish \(i\) (\(IW_{i}\), g) was recorded. After the experiment, i.e. at sampling, final weight (\(FW_{i}\), g) was recorded. From these two variables, individual weight gain (\(WG_{i}\)) and relative weight gain (\(RG_{i}\)) were calculated as follows: $$\begin{aligned} WG_{i} & = FW_{i} - IW_{i} , \\ RG_{i} & = \left( {\left( {FW_{i} - IW_{i} } \right)/FW_{i} } \right) \times 100. \\ \end{aligned}$$ A total of 32 fish (1.4% of the total) were set to missing for these two variables, with four fish having either missing initial or final weights. Furthermore, missing was imposed for fish with an extremely low growth rate (N = 21) (relative weight gain less than 6.4%, corresponding to a growth rate of less than 1.3 g) or an extremely high growth rate (N = 7) (relative weight gain higher than 49%, not accompanied by a corresponding change in the isotope profile), indicating abnormal development and phenotyping error, respectively. From the tissue samples, the following Atom% variables were available at the individual level: Atom% for 13C in muscle (AMCi), 15N in muscle (AMNi), 13C in liver (ALCi), 15N in liver (ALNi) and 13C in adipose tissue (AACi). Lack of tissue sample resulted in nine fish with missing records for Atom% variables; AAC (5), AMC (1), AMN (1), ALC (1) and ALN (1). From feed recording at the tank level (\(t\) = 1…46), tank feed intake (\(FI_{t}\), g dry matter) was obtained, as well as the feed conversion ratio (\(FCR_{t}\)), which calculated as follows: $$FCR_{t} = \frac{{FI_{t} }}{{WG_{t} }},$$ where \(WG_{t}\) is the total \(WG\) in tank \(t\). As mentioned above, 32 fish had missing phenotypes for weight gain and thus were not included in the FCR calculation. Some of these fish had a low or even negative growth indicating that their contribution to the total tank feed intake was likely rather small. In any case, the fraction of fish that lacked growth records was low (< 1.4%), which implies that the potential bias in FCR is limited. From the individual levels of Atom% 13C (\(AMC_{i}\)) and Atom% 15N (\(AMN_{i}\)) in muscle, individual isotope-based indicator ratio traits for feed conversion ratio (\(IFCR\)) and feed efficiency ratio (\(IFER\)); \(IFCR\_AMC_{i}\), \(IFCR\_AMN_{i}\), \(IFER\_AMC_{i}\), and \(IFER\_AMN_{i}\), were defined as follows (taking 15N as an example): $$\begin{aligned} IFCR\_AMN_{i} & = \frac{{FW_{i} *APE_{Ni} }}{{FW_{i} - IW_{i} }}, \\\\ IFER\_AMN_{i} & = \frac{{FW_{i} - IW_{i} }}{{FW_{i} * APE_{Ni} }}, \\ \end{aligned}$$ where \(APE_{Ni} = (AMN_{i} - IA \% )\) with \(IA \%\) equal to 0.370% for 15N and 1.087% for 13C. After diet switching, the APE of a stable isotope in muscle tissue is expected to be proportional to the fraction of newly synthesized nutrients in the muscle, and the product of APE and final weight is expected to be proportional to the mass of new nutrients in body tissue. Because the \(IFCR\) ratio is expected to be proportional to the amount of newly deposited body nutrients per g increase in body weight, fish that exchange a larger fraction of the body mass per unit of growth will be less feed-efficient. Exchange of body tissue is traceable with stable-isotope profiling and is related to the feed intake of the individual, while the denominator of the ratio is the weight gain, and the ratio between these two variables is equal to \(IFCR\) or, alternatively, the inverse is equal to \(IFER\). Statistical analysis of FCR At the tank level, first we examined to what degree tank averages for \(\overline{WG}\) and \(\overline{RG}\), in addition to the tank average isotope content, could explain variation in \(FCR\) between tanks by using the following multiple regression model: $$FCR_{td} = \mu + \beta d + bX_{t} + e_{t} ,$$ where \(FCR_{td}\) is the observed FCR in tank \(t\) on sampling-day \(d\), the latter taking values 1 to 5 and was included as a covariate, since this gave better predictive ability, \(\beta\) is the corresponding regression coefficient, \(\it {\text{X}}_{t}\) is the covariate value for tank \(t\) based on one of the following covariates at a time: \(FI\), \(\overline{WG}\), \(\overline{RG}\), \(\overline{AMC}\), \(\overline{AMN}\), \(\overline{ALC}\), \(\overline{ALN}\) and \(\overline{AAC}\), \(b\) is the corresponding regression coefficient, and \(e_{t}\) is the tank residual. The final model was chosen by including the covariates: \(FI\), \(\overline{RG}\), \(\overline{AMC}\), \(\overline{AMN}\), \(\overline{ALC}\), \(\overline{ALN}\) and \(\overline{AAC}\) (\(k = 7\)) simultaneously using the following model: $$FCR_{td} = \mu + \beta d + \mathop \sum \limits_{j = 1}^{k} b_{j} X_{jt} + e_{t} .$$ Backward elimination with leave-one-out cross-validation was used to identify the model with the lowest predicted residual error sum of squares (PRESS). The analyses were conducted using PROC REG in SAS®. For all regression models, the bias of the model was calculated as the average difference between the observed phenotypes and predicted values obtained by PROC GLM in SAS®. Moreover, the coefficient of determination of prediction was computed as: $$\hat{R}^{2} = 1 - \frac{PRESS}{{SS_{tot} }},$$ where \(PRESS = \sum \left( {y_{t} - \hat{y}_{t} } \right)^{2}\) and \(\hat{y}_{t}\) is the predicted \(FCR\) phenotype for tank \(t\), using data from all other tanks in the analysis and \(SS_{tot}\) is the total sums of squares. The \(\hat{R}^{2}\) is an estimate of the fraction of variance in \(FCR\) explained by the model in the prediction of missing observations. Genetic analysis Genetic analysis of traits was performed using the ASReml4 software package [29]. Bivariate analyses were conducted between \(FCR\) and \(FI\) and of \(FCR\) and \(FI\) with each of the following traits: \(\overline{RG}\), \(\overline{WG}\), \(\overline{AMC}\), \(\overline{AMN}\), \(\overline{ALC}\), \(\overline{ALN}\), \(\overline{AAC}\), \(\overline{IFCR\_AMC}\), \(\overline{IFCR\_AMN}\), \(\overline{IFER\_AMC}\) and \(\overline{IFER\_AMN}\). For each bivariate analysis, the model was: $$\left[ {\begin{array}{*{20}c} {{\mathbf{y}_\mathbf{1}} } \\ {\mathbf{y}_\mathbf{2} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {\mathbf{X}_\mathbf{1} } & \mathbf{0} \\ \mathbf{0} & {\mathbf{X}_\mathbf{2} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\mathbf{b}_\mathbf{1} } \\ {\mathbf{b}_\mathbf{2} } \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {\mathbf{Z}_{\mathbf{a1}} } & \mathbf{0} \\ \mathbf{0} & {{\mathbf{Z}}_{\mathbf{a2}} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\mathbf{a}_\mathbf{1}} } \\ {{\mathbf{a}_\mathbf{2}} } \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {{\mathbf{e}_\mathbf{1}} } \\ {{\mathbf{e}_\mathbf{2}} } \\ \end{array} } \right],$$ where \({\mathbf{y}}_{1}\) is a vector of tank level phenotypes for \(FCR\) or \(FI\), \({\mathbf{y}}_{2}\) is a vector of (tank) phenotypes for one of the other traits; \(\overline{RG}\), \(\overline{WG}\), \(\overline{AMC}\), \(\overline{AMN}\), \(\overline{ALC}\), \(\overline{ALN}\), \(\overline{AAC}\), \(\overline{IFCR\_AMC}\), \(\overline{IFCR\_AMN}\), \(\overline{IFER\_AMC}\) and \(\overline{IFER\_AMN}\), \({\mathbf{b}}_\mathbf{1}\) and \({\mathbf{b}}_\mathbf{2}\) are vectors of fixed effects, including trait-specific intercepts and effects of sampling day, \(\left[ {\begin{array}{*{20}c} {{\mathbf{a}}_\mathbf{1} } \\ {{\mathbf{a}}_\mathbf{2} } \\ \end{array} } \right] \sim N\left( {\mathbf{0},{\mathbf{T}}_\mathbf{0} \otimes {\mathbf{G}}_{{\mathbf{T}}} } \right)\) is a vector of random additive genetic tank effects for the two traits, \(\left[ {\begin{array}{*{20}c} {{\mathbf{e}}_\mathbf{1} } \\ {{\mathbf{e}}_\mathbf{2} } \\ \end{array} } \right] \sim N\left( {\mathbf{0},{\mathbf{R}} \otimes {\mathbf{I}}} \right)\) is a vector of random tank residuals for the two traits. The \({\mathbf{X}}\) and \({\mathbf{Z}}\) matrices are appropriate incidence matrices, \({\mathbf{T}}_{0}\) is an additive genetic (co)variance matrix between traits at the tank level, \({\mathbf{G}}_{{\mathbf{T}}}\) is an (46 × 46) additive genetic relationship matrix that describes the average genomic relationships between fish in different tanks and \({\mathbf{R}}\) is the tank residual (co)variance matrix, which was diagonal. Matrix \({\mathbf{G}}_{{\mathbf{T}}}\) was calculated based on a subset of 51,543 SNPs of high genotype quality, covering all chromosomes and is defined as: $${\mathbf{G}}_{{\mathbf{T}}} = {\mathbf{TT}}\varvec{',}$$ where element \(tj\) in \({\mathbf{T}}\) (tank \(t\), locus \(j\)) is: \(T_{tj} = \frac{1}{{n_{t} }}\sum\nolimits_{i = 1}^{{n_{t} }} {\left( {M_{ij} - 2P_{j} } \right)}\), \(M_{ij}\) is the genotype of individual \(i\) within tank \(t\) at locus j, \(P_{j}\) is the allele frequency at locus \(j\), and \(n_{t}\) is the number of individuals in tank \(t\). Finally, the elements of \({\mathbf{G}}_{{\mathbf{T}}}\) were scaled such that the average of the diagonal elements in \({\mathbf{G}}_{{\mathbf{T}}}\) equalled 1.0. Narrow-sense heritability cannot be estimated for traits that are modelled at the tank level, i.e. \(FCR\) and \(FI\). Instead \(h_{t}^{2}\), which quantifies the fraction of the between-tank variance explained by genetics, was estimated as \(h_{t}^{2} = \frac{{\sigma_{{a_{t} }}^{2} }}{{\sigma_{{a_{t} }}^{2} + \sigma_{{e_{t} }}^{2} }}\), where \(\sigma_{{a_{t} }}^{2}\) \({\text{and }}\sigma_{{e_{t} }}^{2}\) are the estimates at the tank level of additive genetic and residual variance, respectively, of the trait. The individual phenotypes for \(RG\), \(WG\), \(AMC\), \(AMN\), \(ALC\), \(ALN\), \(AAC\), \(IFCR\_AMC\), \(IFCR\_AMN\), \(IFER\_AMC\), and \(IFER\_AMN\) were also analysed using bivariate models. For each bivariate analysis, the model was: $$\left[ {\begin{array}{*{20}c} {{\mathbf{y}}_\mathbf{1} } \\ {{\mathbf{y}}_\mathbf{2} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\mathbf{X}}_\mathbf{1} } & \mathbf{0} \\ \mathbf{0} & {{\mathbf{X}}_\mathbf{2} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\mathbf{b}}_\mathbf{1} } \\ {{\mathbf{b}}_\mathbf{2} } \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {{\mathbf{Z}}_{{{\mathbf{a1}}}} } & \mathbf{0} \\ \mathbf{0} & {{\mathbf{Z}}_{{{\mathbf{a2}}}} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\mathbf{a}}_\mathbf{1} } \\ {{\mathbf{a}}_\mathbf{2} } \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {{\mathbf{Z}}_{{{\mathbf{t1}}}} } & 0 \\ 0 & {{\mathbf{Z}}_{{{\mathbf{t2}}}} } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\mathbf{t}}_\mathbf{1} } \\ {{\mathbf{t}}_\mathbf{2} } \\ \end{array} } \right] + \left[ {\begin{array}{*{20}c} {{\mathbf{e}}_\mathbf{1} } \\ {{\mathbf{e}}_\mathbf{2} } \\ \end{array} } \right],$$ where \(\left[ {\begin{array}{*{20}c} {{\mathbf{y}}_{1} } \\ {{\mathbf{y}}_{2} } \\ \end{array} } \right]\) is a vector of individual phenotypes for the two traits analysed, \({\mathbf{b}}_\mathbf{1}\) and \({\mathbf{b}}_\mathbf{2}\) are vectors of fixed effects for the two traits as described above, \(\left[ {\begin{array}{*{20}c} {{\mathbf{a}}_\mathbf{1} } \\ {{\mathbf{a}}_\mathbf{2} } \\ \end{array} } \right] \sim N\left( {\mathbf{0},{\mathbf{G}}_\mathbf{0} \otimes {\mathbf{G}}} \right)\) is a vector of random additive genetic effects for the two traits, \(\left[ {\begin{array}{*{20}c} {{\mathbf{t}}_\mathbf{1} } \\ {{\mathbf{t}}_\mathbf{2} } \\ \end{array} } \right] \sim N\left( {\mathbf{0},{\mathbf{T}} \otimes {\mathbf{I}}} \right)\) is a vector of random tank effects for the two traits, and \(\left[ {\begin{array}{*{20}c} {{\mathbf{e}}_\mathbf{1} } \\ {{\mathbf{e}}_\mathbf{2} } \\ \end{array} } \right] \sim N\left( {\mathbf{0},{\mathbf{R}} \otimes {\mathbf{I}}} \right)\) is a vector of random residuals. The \({\mathbf{X}}\) and \({\mathbf{Z}}\) matrices are corresponding incidence matrices, \({\mathbf{G}}_\mathbf{0}\) is an additive genetic (co)variance matrix, \({\mathbf{G}}\) is the genomic relationship matrix, \({\mathbf{T}}\) is the tank (co)variance matrix, and \({\mathbf{R}}\) is the residual (co)variance matrix. The genomic relationship matrix was generated according to VanRaden's first method [30] and was used to account for stratification of the individuals by selection of families based on fast and slow growth rates (in seawater). Matrix \({\mathbf{G}}\) was calculated based on the same subset of SNPs as defined for \({\mathbf{G}}_{{\mathbf{T}}}\) above. Heritabilities of individual traits were estimated as: \(h^{2} = \frac{{\sigma_{a}^{2} }}{{\sigma_{a}^{2} + \sigma_{t}^{2} + \sigma_{e}^{2} }}\), where \(\sigma_{a}^{2}\), \(\sigma_{t}^{2} , {\text{and }}\sigma_{e}^{2}\) are the estimates of the individual additive genetic, tank environmental, and individual residual variance, respectively, of the trait. The fraction of variance explained by tank was estimated as: \(c^{2} = \frac{{\sigma_{t}^{2} }}{{\sigma_{a}^{2} + \sigma_{t}^{2} + \sigma_{e}^{2} }}\). Significance of the genetic effect was tested using a likelihood-ratio (\(LR\)) test-statistic, comparing a single-trait model with genetic effects (\({\text{H}}_{1}\)) to a model without genetic effects (\({\text{H}}_{0}\)) with the \({\mathbf{G}}\) matrices (\({\mathbf{G}}_{{\mathbf{T}}}\) and \({\mathbf{G}}\), respectively) in \({\text{H}}_{1}\): $$LR = 2\left( {\left( {\log L |\hat{\theta }_{{H_{1} }} } \right) - \left( {\log L |\hat{\theta }_{{H_{0} }} } \right)} \right).$$ The genetic effect was considered significant if \(LR < \chi_{{\left( {\alpha = 0.05; \,df = 1} \right)}}^{2}\). The diet was formulated for increased 15N and 13C isotope levels, by using 2% and 1% of 15N- and 13C-labelled spirulina, respectively, which resulted in an Atom% of 2.7 and 2.0 of 15N and 13C, respectively, in the diet (Table 1). All fish were healthy throughout the experiment and tanks were fed the diet at 10% in excess of uneaten feed. Table 2 shows the descriptive statistics of the data. The mean Atom% of 15N and 13C in muscle, liver, and adipose tissue ranged from 1.01 to 1.64% and from 1.17 to 1.59%, respectively. These results confirm that none of the tissues was in equilibrium with the diet that contained 2.7 and 2.0% 15N and 13C, respectively. Thus, variation in the Atom % of 15N and 13C could be determined between individuals. For the individually recorded traits, large differences in WG and RG were observed between families (Fig. 1a, b). However, for the tank-recorded traits, i.e. FI and FCR, larger differences were observed between families for FI than for FCR (Fig. 1c, d), which was reflected in the larger coefficient of variation for FI than for FCR (Table 2). This is logical because the coefficient of variation for FCR contains the standard deviation for WG, which was calculated from individual observations. Figure 2a–e show the Atom% of 15N and 13C in muscle, liver, and adipose tissue for all families, showing considerable differences between families. Table 2 Descriptive statistics of recorded trait phenotypes Averages per family for a weight gain (WG), b relative weight gain (RG), c feed intake (FI), and d feed conversion ratio (FCR = FI/WG) Averages per family for a Atom% 15N in muscle (AMN), b Atom% 15N in liver (ALN), c Atom% 13C in muscle (AMC), d Atom% 13C in liver (ALC), and e Atom% 13C in adipose tissue (AAC) Table 3 shows that \(\overline{RG}\) explained the largest fraction of variance in FCR as a single variable (in addition to day) (\(R^{2}\) = 62% and \(\hat{R}^{2}\) = 55%), followed by \(\overline{ALC}\) (\(R^{2}\) = 57% and \(\hat{R}^{2}\) = 52%) and \(\overline{WG}\) (\(R^{2}\) = 53% and \(\hat{R}^{2}\) = 46%). When simultaneously regressing all the explanatory variables on FCR and using backward elimination, the preferred model with the lowest PRESS value had an \(R^{2}\) of 79% (Table 3). This implies that the variables included in the model explained a major part of the variation between tanks with respect to FCR. The variables retained were Day, \(\overline{RG}\), \(\overline{AMN}\), \(\overline{ALC}\) and \(\overline{AAC}\). Using leave-one-out cross-validation, the coefficient of determination of the predicted tank averages was \(\hat{R}^{2}\) = 73%, i.e., even when predicting missing observations, the model explained most of the tank variation in FCR, while the bias was negligible. Moreover, when including interactions between indicator variables in the backward elimination process (data not shown), PRESS was reduced to 0.0118 in the preferred model, which had an \(R^{2}\) of 88%, while \(\hat{R}^{2}\) was 77% under prediction. Table 3 Results of regression analysis of tank level feed conversion rate on sampling day and each indicator trait, one by one, or when regressing on alla experimental variables, following backward elimination The results obtained for traits recorded at the tank level and analysed with model (1) showed that genetic background (family) explained 52 and 92% of the between-tank variation for FCR (p = 0.0002) and FI (p = 9.3 × 10−16), respectively (Table 4), i.e., the corresponding correlations between the average family phenotypes in different tanks were 0.72 and 0.96 for FCR and FI, respectively. For the individually recorded traits, significant (p < 0.05) heritabilities were estimated for all traits. The estimated heritability for WG was high (0.45), whereas heritabilities were moderate for RG, AMC, AMN, ALC, ALN and AAC (0.28, 0.18, 0.28, 0.15, 0.26 and 0.18, respectively), and relatively low for IFCR_AMC, IFCR_AMN, IFER_AMC and IFER_AMN (0.09, 0.06, 0.11 and 0.08, respectively). Non-genetic tank effects were generally low and explained 2 to 13% of the total phenotypic variance for individual traits. Table 4 Estimates with standard errors of genetic and residual variance components (\(\sigma_{a}^{2}\) and \(\sigma_{e}^{2}\), respectively), fraction of phenotypic variance explained by environmental tank effect (\(c^{2}\)), heritability (\(h^{2}\)), fraction of between-tank variance explained by genetics (\(h_{t}^{2}\)), as well as the χ2 statistics for the additive genetic family effect, with the corresponding level of significance (p) Genetic correlations between FCR/FI and all the other traits were estimated with model (1) and those between the remaining traits were estimated with model (2) (Table 5). Generally, estimates of the genetic correlation between FCR, measured at the tank level, with each other trait were negative, while those for the IFCR were positive, as expected. This means that FI, growth (WG and RG), and the indicator traits (i.e. the fraction of newly deposited tissue) all had favourable genetically correlations with FCR. For the indicator traits measured directly (excluding the indicator ratio traits), the closest genetic correlation with FCR was estimated for ALC (− 0.90 ± 0.11), followed by RG (− 0.82 ± 0.10), WG (− 0.74 ± 0.17), AMN (− 0.73 ± 0.14), AMC (− 0.69 ± 0.17), ALN (− 0.63 ± 0.19), FI (− 0.61 ± 0.21), and AAC (− 0.43 ± 0.28). In addition, a perfect genetic correlation was estimated between the indicator ratio traits IFCR_AMC, IFCR_AMN and IFER_AMN and FCR (1.0, 1.0 and − 1.0), except for IFER_AMC, which had a lower genetic correlation estimate with FCR (− 0.63 ± 0.30), albeit not significantly different from 1. Internally, IFCR and IFER variables had high estimated genetic correlations (− 0.71 to − 0.99). In general, estimated genetic correlations of the isotope content of the various tissues with growth (in particular RG) and FI were positive. Among the indicator traits, ALC had the lowest genetic correlation with the other isotope indicator traits (0.04–0.38) and with RG (0.12). AMN and ALN were closely genetically correlated to each other (0.89), which indicates that nitrogen metabolism in liver and in muscle are largely the same genetic trait. Estimates of the genetic correlation of AMN and ALN with RG were high (0.98 and 0.89, respectively). Likewise, AMC and AAC were closely genetically correlated with each other (0.73), with ALN and AMN (0.69 to 0.96) and with RG (0.78 to 0.92). FI was also closely genetically correlated with WG (0.98). For individual traits, phenotypic and genetic correlations were generally similar. Among the traits evaluated, ALC, IFCR (for both nitrogen and carbon) and IFER (for nitrogen) stood out as individual indicator traits for FCR. Estimates of the genetic correlation of ALC with the indicator ratio traits IFCR and IFER were low for both nitrogen and carbon (− 0.27 to 0.11). Table 5 Estimates of genetic (above diagonal) and phenotypic (below diagonal) correlations between traits, with standard errors In aquaculture, feed constitutes about half of the total production costs in the grow-out phase at sea [31]. Genetic improvement of feed efficiency will reduce production costs and, at the same time, have a favourable environmental impact by maximizing resource utilization and reducing nutrient load (e.g., nitrogen) to the environment. Protein metabolism is a major determinant of the conversion of feed into growth. Consequently, minimizing the energetic cost of protein metabolism is a strategic goal for enhancing fish growth and feed efficiency. Because Atom% of nitrogen and carbon, and functions thereof, can be individually recorded, these traits could be used as indicator traits for individual feed conversion ratio in growing fish. The observed between-family difference in nitrogen and carbon metabolism (Fig. 2a–e) have the potential to affect feed efficiency. To evaluate whether 15N and 13C stable isotopes can be used to capture variation in feed efficiency in Atlantic salmon, the observed tank level FCR was best predicted using a multiple regression model that included \(\overline{RG}\), \(\overline{AMN}\), \(\overline{ALC}\) and \(\overline{AAC}\) as covariates, in addition to sampling day. This prediction model explained 73% of the variation in masked FCR records (Table 3; \(\hat{R}^{2}\)). The single most important isotope variable for prediction was \(\overline{ALC}\), which together with sampling day explained 57% (\(R^{2}\)) of the variation in FCR (Table 3). In comparison, for similar models using \(\overline{WG}\) or \(\overline{RG}\), the proportions of variance in FCR explained were 53 and 62%, respectively (Table 3). Hence, by including information on isotope profiles, prediction of FCR data was substantially improved, compared to what was obtained by growth data alone (Table 3). This indicates that stable isotopes can be used to improve the prediction of individual FCR, which is of considerable value to fish breeding. The regression analyses revealed that, after adjusting for growth, improved feed efficiency was associated with reduced metabolism of nitrogen in muscle (AMN, β = 0.31 ± 0.17, results not shown) and reduced carbon metabolism in adipose tissue (AAC, β = 0.90 ± 0.41), but with elevated carbon metabolism in liver (ALC, β = − 0.75 ± 0.18). In fish, the main source of nitrogen in both liver and muscle is protein [7]. Similarly, the main sources of carbon in muscle is protein, but in liver the main sources are protein, fat and glycogen [32]. In adipose tissue, the main source of carbon is lipids, but they can originate from lipid biosynthesis from protein through oxidative degradation and deamination of amino acids, or from carbohydrates through acetyl-CoA formed in the mitochondria [33]. Fish are highly efficient in converting dietary protein into body protein [34]. This requires regulation of the flux of amino acids into metabolic fates such as oxidation, gluconeogenesis, and lipogenesis. Because salmonids are carnivores, they depend highly on glucose synthesis from non-carbohydrate sources. The surplus of amino acids has a major role in energy metabolism as oxidative substrates in many tissues. Fish with efficient growth seem to use a low-protein metabolism strategy [9, 35, 36]. It has also been reported that a reduced capacity for body lipid deposition is favourably associated with high protein growth efficiency [12]. Furthermore, low proteasome activity, i.e., reduced proteolysis in the liver has been linked to higher growth efficiency [37]. In this experiment, adjusted for growth, efficient fish were characterized by older nitrogen (i.e. protein) profiles of muscle tissues, which confirms earlier findings, since reduced proteolysis of body protein will preserve more of the old protein. These results suggest that efficient fish had newer liver carbon profiles (i.e., glycogen, fat, and protein, combined), which might be linked to the origin of the glycogen in the liver; efficient fish possibly synthesize relatively more of their glycogen through gluconeogenesis or lipogenesis in the liver and thus from nutrients that come directly from digestion of feed (new nutrients) and relatively less from proteolysis of older body protein. Our findings indicate that fast growth combined with reduced degradation rates of existing body tissues, especially in the muscle, is favourable, and that individual differences in these traits can be captured by nitrogen and carbon isotope profiling of the various tissues. The underlying biological mechanisms are likely complex and further studies are needed to elucidate the underlying factors relevant to feed efficiency. The prediction equation for individual feed efficiency shows that indicator traits add information to the prediction of feed efficiency beyond growth. The limitation of the prediction equation developed is that all variables are averages at the tank level because FCR was only recorded per tank. If this prediction equation was to be used to predict individual FCR, this would imply that the phenotypic and genetic correlations are assumed to be the same at both the individual and group levels, which may not be realistic. In addition, the prediction equation was estimated in freshwater during a phase of high growth and needs to be validated or re-estimated for larger fish in seawater, but this would require recording of feed intake in the sea. A prediction equation estimated during the freshwater phase may not predict the feed efficiency performance in the grow-out phase in the sea very well. During grow-out, other metabolic pathways such as lipid metabolism may explain more of the variation in feed efficiency in Atlantic salmon, since the relative weight gain decreases, which may leave more room for other factors than growth to contribute to feed efficiency, as previously demonstrated for large rainbow trout [12]. Hence, it is considered more effective to use individually measured phenotypes that are highly genetically correlated to feed efficiency to improve the feed efficiency indirectly. Indicator traits that are more highly correlated to feed efficiency in later life-stages could, therefore, be of high value. Estimates of genetic correlations (Table 5) revealed, as expected, that fast growth (WG and RG) is favourably associated with improved feed efficiency (r = − 0.74 and -0.82, respectively). The indicator traits AMN, AMC, and ALN were estimated to be highly genetically correlated with the growth traits and feed efficiency, as expected, since body growth depends on the deposition of new nutrients from enriched feed, which increased isotope levels in tissues. The estimate of the genetic correlation of carbon metabolism in adipose tissue with FCR (− 0.43) was moderate. The link between lipid deposition and FCR should, however, not be disregarded, since it is known to affect feed efficiency later in the life cycle of salmonids because lipid deposition is at its maximum first during the grow-out phase in the sea [12]. ALC had the closest estimated genetic correlation with FCR (− 0.90), but had lower genetic correlation estimates with the other indicator traits, which suggests that ALC might explain additional variation in the feed efficiency complex among the indicator traits considered here. As explained above, protein is likely the main source of nitrogen and carbon in muscle and nitrogen (but not necessarily carbon) in liver. This might explain the high genetic and phenotypic correlations of nitrogen and carbon metabolism in the muscle and nitrogen metabolism in the liver, since they all likely reflect protein metabolism. Compared with muscle, carbon metabolism in the liver (ALC) is affected by fat and glycogen to a larger extent and, thus, is expected to relate less to the other indicator traits. The IFCR and IFER variables for nitrogen and carbon in muscle are expected to be proportional to the mass of newly deposited nutrients in muscle and, as such, relate directly to the efficiency complex. Buchheister and Latour [38] proposed a ratio between specific growth rate and total metabolism, estimated from isotope profiling, as an indicator trait. A preliminary analysis showed that the trait definition of Buchheister and Latour was close to perfectly genetically correlated with the IFER indicators used in this study (results not shown). In our study, the estimate of the genetic correlation of IFCR with the observed FCR was very high, to the extent that the estimate was fixed at the border of the parameter space (\(r_{g} \sim 1.0\)) for both nitrogen and carbon metabolism in muscle, with a phenotypic (tank-level) correlation with observed FCR of 0.72 and 0.58, respectively. The IFER_AMN variable, being the inverse of IFCR_AMN, and correspondingly IFER_AMC were estimated with, respectively a highly negative genetic correlation (− 1.0) and a moderately negative, albeit highly uncertain, genetic correlation (− 0.63 ± 0.30) to FCR. These results indicate that the mass of new nutrients in the muscle is closely genetically associated with FCR at the tank level. Since the indicator ratio traits (IFCR/IFER) can be measured on individual fish, they are promising indicator traits for individual phenotyping of feed efficiency. However, the estimates of heritability of the indicator ratio traits were lower (0.06 to 0.11) than the estimates of heritability for the remaining traits. In addition, estimates of the genetic correlation of the indicator ratio traits IFCR_AMC, IFCR_AMN, IFER_AMC, and IFER_AMN with ALC were low. However, estimates of the genetic correlation of ALC and the indicator ratio traits with tank-FCR were high, which indicates that ALC explained individual variation in feed efficiency that was not explained by growth. The indicator ratio traits IFCR and IFER are intuitively appealing and can be easily interpreted biologically, compared to ALC, for which the underlying determinants are largely unknown. The efficiency of metabolization and allocation of nutrients for growth is closely related to the feed efficiency complex; using body tissue as fuel for, e.g., maintenance, is less efficient than using nutrients absorbed and metabolized from feed directly. However, there is some variation between individuals in the extent to which body tissues are used for maintenance [10, 39,40,41]. A lower exchange of body tissue components would result in more efficient use of protein and thus reduced feed costs [40]. The IFCR and IFER variables allow for direct measurement of nitrogen and carbon fluxes by using stable-isotope profiling to trace the contribution and allocation of nutrients from feed to growth in animal tissue [40,41,42] and are expected to have a universal relationship with FCR and could be useful independently of life-stage and species. The standard errors of the estimates of the genetic correlations were rather low in spite of the limited number of families in the study. However, the standard errors of the genetic correlations between our traits and FCR could be made smaller by increasing the size of the family dataset and could thus be used to validate our approach. Our experimental design made it possible to keep all individuals in one common environment until the start of the experiment, which strengthens our results by reducing the environmental variation between families. Our results indicate that the total variation between tanks was, to a large extent, explained by genetics, 52% for FCR and 92% for feed intake. Phenotyping of stable isotopes at the individual level requires liver and muscle samples, which normally implies that the fish are sacrificed. However, the isotope profile in muscle can be obtained from a muscle biopsy on live animals, which would allow these indicator ratio traits to be recorded even on selection candidates. Alternatively, if test fish have to be sacrificed through sib-testing, information on the full-sibs can be used to predict breeding values on the untested selection candidates. Genomic selection methods that use individual phenotypes and genotypes on training animals for selection among genotyped candidates are expected to be much more effective than traditional pedigree-based selection methods [43,44,45]. Hence, individual phenotyping is still very important, even for traits that cannot be recorded on the selection candidates. Thus, in full-sib testing an indicator trait is efficient if the estimated breeding value for the indicator ratio trait is estimated with high accuracy (which requires a considerable number of full-sibs), the indicator trait has a high genetic correlation with feed efficiency (as estimated for the IFCR phenotype), and feed efficiency has significant genetic variance (considered considerable, with 3% point standard deviation for FCR). A slaughter test using full-sibs of the breeding candidates is currently part of the breeding program and, thus, implementation of the indicator ratio traits can be carried out in the existing test under field conditions. Given that isotope-enriched feed can be produced at an acceptable cost, this study presents indicator ratio traits for individual FCR that might be recorded on a massive scale and used for selection, without requiring individual feed intake recording. This requires that the indicator ratio traits, IFCR and IFER, which have a strong genetic relationship to FCR (as reported here in freshwater) are also shown to have such a genetic relationship in the grow-out phase. FAO. 2050: A third more mouths to feed. Food and Agriculture Organization of the United Nations; 2018. http://www.fao.org/news/story/en/item/35571/icode/. Accessed 01 Nov 2018. Gjedrem T. The first family-based breeding program in aquaculture. Rev Aquac. 2010;2:2–15. Gjedrem T, Robinson N, Rye M. The importance of selective breeding in aquaculture to meet future demands for animal protein: A review. Aquaculture. 2012;350–3:117–29. Hayes BJ, Lewin HA, Goddard ME. The future of livestock breeding: genomic selection for efficiency, reduced emissions intensity, and adaptation. Trends Genet. 2013;29:206–14. Janssen K, Chavanne H, Berentsen P, Komen H. Impact of selective breeding on European aquaculture. Aquaculture. 2017;472:8–16. Lind CE, Ponzoni RW, Nguyen NH, Khaw HL. Selective breeding in fish and conservation of genetic resources for aquaculture. Reprod Domest Anim. 2012;47:255–63. Halver JE, Hardy RW. Fish nutrition. 3rd ed. Amsterdam: Academic Press; 2002. Gjedrem T. Improvement of productivity through breeding schemes. GeoJ. 1985;10:233–41. McCarthy ID, Houlihan DF, Carter CG. Individual variation in protein turnover and growth efficiency in rainbow trout, Oncorhynchus mykiss (Walbaum). Proc R Soc Lond B Biol Sci. 1994;257:141–7. McCarthy ID, Houlihan DF, Carter CG, Moutou K. Variation in individual food consumption rates of fish and its implications for the study of fish nutrition and physiology. Proc Nutr Soc. 1993;52:427–36. Jobling M, Covès D, Damsgård B, Kristiansen HR, Koskela J, Petursdottir TE, et al. Techniques for measuring feed intake. In: Houlihan D, Boujard T, Jobling M, editors. Food intake in fish. Oxford: Wiley-Blackwell; 2001. p. 49–87. Kause A, Kiessling A, Martin SAM, Houlihan D, Ruohonen K. Genetic improvement of feed conversion ratio via indirect selection against lipid deposition in farmed rainbow trout (Oncorhynchus mykiss Walbaum). Br J Nutr. 2016;116:1656–65. de Verdal H, Mekkawy W, Lind CE, Vandeputte M, Chatain B, Benzie JAH. Measuring individual feed efficiency and its correlations with performance traits in Nile tilapia, Oreochromis niloticus. Aquaculture. 2017;468:489–95. de Verdal H, Vandeputte M, Mekkawy W, Chatain B, Benzie JAH. Quantifying the genetic parameters of feed efficiency in juvenile Nile tilapia Oreochromis niloticus. BMC Genet. 2018;19:105. Thodesen J, Gjerde B, Grisdale-Helland B, Storebakken T. Genetic variation in feed intake, growth and feed utilization in Atlantic salmon (Salmo salar). Aquaculture. 2001;194:273–81. Thodesen J, Grisdale-Helland B, Helland SJ, Gjerde B. Feed intake, growth and feed utilization of offspring from wild and selected Atlantic salmon (Salmo salar). Aquaculture. 1999;180:237–46. Neely KG, Myers JM, Hard JJ, Shearer KD. Comparison of growth, feed intake, and nutrient efficiency in a selected strain of coho salmon (Oncorhynchus kisutch) and its source stock. Aquaculture. 2008;283:134–40. Ogata HY, Oku H, Murai T. Growth, feed efficiency and feed intake of offspring from selected and wild Japanese flounder (Paralichthys olivaceus). Aquaculture. 2002;211:183–93. Silverstein JT, Hostuttler M, Blemings KP. Strain differences in feed efficiency measured as residual feed intake in individually reared rainbow trout, Oncorhynchus mykiss (Walbaum). Aquacult Res. 2005;36:704–11. Henryon M, Jokumsen A, Berg P, Lund I, Pedersen PB, Olesen NJ, et al. Genetic variation for growth rate, feed conversion efficiency, and disease resistance exists within a farmed population of rainbow trout. Aquaculture. 2002;209:59–76. Dvergedal H, Ødegård J, Mydland LT, Øverland M, Hansen JØ, Ånestad RM, et al. Stable isotope profiling for large scale evaluation of feed efficiency in Atlantic salmon (Salmo salar). Aquacult Res. 2019;50:1153–61. Grashei KE, Ødegård J, Meuwissen THE. Using genomic relationship likelihood for parentage assignment. Genet Sel Evol. 2018;50:26. Helland SJ, Grisdale-Helland B, Nerland S. A simple method for the measurement of daily feed intake of groups of fish in tanks. Aquaculture. 1996;139:157–63. Shomorin GO, Storebakken T, Kraugerud OF, Øverland M, Hansen BR, Hansen JØ. Evaluation of wedge wire screen as a new tool for faeces collection in digestibility assessment in fish: The impact of nutrient leaching on apparent digestibility of nitrogen, carbon and sulphur from fishmeal, soybean meal and rapeseed meal-based diets in rainbow trout (Oncorhynchus mykiss). Aquaculture. 2019;504:81–7. McCleary BV, Solah V, Gibson TS. Quantitative measurement of total starch in cereal flours and products. J Cereal Sci. 1994;20:51–8. Fry B. Isotope notation and measurement. In: Fry B, editor. Stable isotope ecology. New York: Springer; 2006. p. 21–39. Coplen TB, Krouse HR, Böhlke JK. Reporting of nitrogen-isotope abundances (Technical Report). Pure Appl Chem. 1992;64:907–8. Craig H. Isotopic standards for carbon and oxygen and correction factors for mass-spectrometric analysis of carbon dioxide. Geochim Cosmochim Acta. 1957;12:133–49. Gilmour AR, Gogel BJ, Cullis BR, Welham SJ, Thompson R. ASReml user guide release 4.1. Hemel Hempstead: VSN International Ltd.; 2015. VanRaden PM. Efficient methods to compute genomic predictions. J Dairy Sci. 2008;91:4414–23. Directorate of Fisheries. Lønnsomhetsundersøkelse for laks og regnbueørret: Matfiskproduksjon. Directorate of Fisheries; 2018. https://www.fiskeridir.no/Akvakultur/Statistikk-akvakultur/Loennsomhetsundersoekelse-for-laks-og-regnbueoerret/Matfiskproduksjon-laks-og-regnbueoerret. Accessed 01 Nov 2018 (In Norwegian). Krogdahl Å, Sundby A, Olli JJ. Atlantic salmon (Salmo salar) and rainbow trout (Oncorhynchus mykiss) digest and metabolize nutrients differently. Effects of water salinity and dietary starch level. Aquaculture. 2004;229:335–60. Tocher DR. Metabolism and functions of lipids and fatty acids in teleost fish. Rev Fish Sci. 2003;11:107–84. Tacon AGJ, Cowey CB. Protein and amino acid requirements. In: Tytler P, Calow P, editors. Fish energetics: new perspectives. Dordrecht: Springer; 1985. p. 155–83. Carter CG, Houlihan DF, Owen SF. Protein synthesis, nitrogen excretion and long-term growth of juvenile Pleuronectes flesus. J Fish Biol. 1998;53:272–84. Carter CG, Houlihan DF, Brechin J, McCarthy ID. The relationships between protein intake and protein accretion, synthesis, and retention efficiency for individual grass carp, Ctenopharyngodon idella (Valenciennes). Can J Zool. 1993;71:392–400. Dobly A, Martin SAM, Blaney SC, Houlihan DF. Protein growth rate in rainbow trout (Oncorhynchus mykiss) is negatively correlated to liver 20S proteasome activity. Comp Biochem Physiol A Mol Integr Physiol. 2004;137:75–85. Buchheister A, Latour RJ. Turnover and fractionation of carbon and nitrogen stable isotopes in tissues of a migratory coastal predator, summer flounder (Paralichthys dentatus). Can J Fish Aquat Sci. 2010;67:445–61. Sun ZL, Gao QF, Dong SL, Shin PKS, Wang F. Estimates of carbon turnover rates in the sea cucumber Apostichopus japonicus (Selenka) using stable isotope analysis: the role of metabolism and growth. Marine Ecol Progress Ser. 2012;457:101–12. Barreto-Curiel F, Focken U, D'Abramo LR, Cuarón JA, Viana MT. Use of isotopic enrichment to assess the relationship among dietary protein levels, growth and nitrogen retention in juvenile Totoaba macdonaldi. Aquaculture. 2018;495:794–802. Xia B, Gao QF, Li H, Dong SL, Wang F. Turnover and fractionation of nitrogen stable isotope in tissues of grass carp Ctenopharyngodon idellus. Aquacult Environ Interact. 2013;3:177–86. Gamboa-Delgado J, Peña-Rodríguez A, Ricque-Marie D, Cruz-Suárez LE. Assessment of nutrient allocation and metabolic turnover rate in Pacific white shrimp Litopenaeus vannamei co-fed live macroalgae Ulva clathrata and inert feed: dual stable isotope analysis. J Shellfish Res. 2011;30:969–78. Vallejo RL, Leeds TD, Gao G, Parsons JE, Martin KE, Evenhuis JP, et al. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture. Genet Sel Evol. 2017;49:17. Ødegård J, Moen T, Santi N, Korsvoll SA, Kjøglum S, Meuwissen THE. Genomic prediction in an admixed population of Atlantic salmon (Salmo salar). Front Genet. 2014;5:402. Vela-Avitúa S, Meuwissen THE, Luan T, Ødegård J. Accuracy of genomic selection for a sib-evaluated trait using identity-by-state and identity-by-descent relationships. Genet Sel Evol. 2015;47:9. HD, JØ, MØ, LTM, and GK designed the experiment. HD was responsible for feed production and carried out the experiment. All authors contributed during sampling. HD prepared samples for stable isotope analysis. HD and JØ conducted the statistical analysis. HD, JØ and GK developed the methods, variables, and approaches, wrote and finalized the manuscript. MØ and LTM read and corrected language. All authors read and approved the final manuscript. We thank Bjørn Reidar Hansen, Harald Støkken and Bjørn Frode Eriksen for help and assistance at the fish laboratory, and Ricardo Tavares Benicio, Ragnhild Ånestad, Milena Bjelanovic, Jon Øvrum Hansen, Mathabela Nelson, Shomorin Oluwaseun George and Ingrid Marie Håkenåsen for their help during feed production, experiment, and sampling. A thank goes to all that contributed during sampling of experimental material. The genotypic data are owned by AquaGen AS, used under license for this study, and not publicly available. Phenotypic data can be made available on request. The experiment used phenotypic data, which were collected from a family experiment with Atlantic salmon carried out at the fish laboratory, Norwegian University of Life Sciences (NMBU), Aas, Norway, following the laws and regulations for experiments on live animals in EU (Directive 2010/637EU) and Norway (FOR-2015-06-18-761). The experiment was approved by the Norwegian Food Safety Authority (FOTS ID 11676). This study was supported by The Norwegian University of Life Sciences, AquaGen AS and Foods of Norway, a Centre for Research-based Innovation (the Research Council of Norway; Grant No. 237841/O30). Department of Animal and Aquacultural Sciences, Faculty of Biosciences, Norwegian University of Life Sciences, Post Box 5003, 1433, Ås, Norway Hanne Dvergedal, Jørgen Ødegård, Margareth Øverland, Liv Torunn Mydland & Gunnar Klemetsdal AquaGen AS, Post Box 1240, 7462, Trondheim, Norway Jørgen Ødegård Hanne Dvergedal Margareth Øverland Liv Torunn Mydland Gunnar Klemetsdal Correspondence to Hanne Dvergedal. Dvergedal, H., Ødegård, J., Øverland, M. et al. Selection for feed efficiency in Atlantic salmon using individual indicator traits based on stable isotope profiling. Genet Sel Evol 51, 13 (2019). https://doi.org/10.1186/s12711-019-0455-9 Submission enquiries: [email protected]
CommonCrawl
Analyzing hack subnetworks in the bitcoin transaction graph Daniel Goldsmith1, Kim Grauer1 & Yonah Shmalo1 Hacks are one of the most damaging types of cryptocurrency related crime, accounting for billions of dollars in stolen funds since 2009. Professional investigators at Chainalysis have traced these stolen funds from the initial breach on an exchange to off-ramps, i.e. services where criminals are able to convert the stolen funds into fiat or other cryptocurrencies. We analyzed six hack subnetworks of bitcoin transactions known to belong to two prominent hacking groups. We analyze each hack according to eight network features, both static and temporal, and successfully classify each hack to its respective hacking group through our newly proposed method. We find that the static features, such as node balance, in degree, and out degree are not as useful in classifying the hacks into hacking groups as temporal features related to how quickly the criminals cash out. We validate our operating hypothesis that the key distinction between the two hacking groups is the acceleration with which the funds exit through terminal nodes in the subnetworks. The Bitcoin network is a distributed, public ledger, secured through blockchain technology. All transactions occur between two distinct public addresses and are permanently recorded on the specific blockchain built for bitcoin. The process of securing these transactions is handled by bitcoin miners, who use their computing power to solve complex cryptographic problems and in the process verify blocks and transactions (Nakomoto 2009). Anyone can create a bitcoin address to receive funds through a variety of software projects such as Blockchain.info (BLOCKCHAIN LUXEMBOURG S.A 2011) or Electrum wallets (Electrum 2011). Additionally, there is no limit to the number of bitcoin addresses that any individual or organization can make. There are also no requirements for verifying your identity in the process of address creation. It is completely free to make an address, however, it costs money to transfer money on the network by paying transaction fees. Because of the ease of transactions between pseudonymous addresses, cryptocurrencies, and bitcoin in particular have been especially attractive to criminals who both exploit technological vulnerabilities and prefer to move funds through the pseudonymous bitcoin transaction network to avoid detection by law enforcement (Huang and et al. 2018). Indeed, the amount of cybercrime involving cryptocurrencies has grown via ransomware (Huang and et al. 2018), scamming activity, phishing scams, and hacking of exchanges or wallets (Chainalysis 2019). There have been several attempts to quantify the scale of criminal activity as well (Yin and et al. 2017). Notably, exchange hacks are one of the most costly types of cryptocurrency related crime. Hackers have stolen $1.7 billion dollars worth of cryptocurrency from exchanges since 2011 (Chainalysis 2019). Tracing stolen funds in order to freeze the assets of the perpetrators is one of the most effective ways of safeguarding against future attacks, as this method removes bad actors from the ecosystem and disincentivizes similar activity from other actors. Typically, either government or private cyberinvestigators, take up the task of tracing stolen cryptocurrency funds. Their investigations begin with a known address that has been hacked. They then follow the funds through up to thousands of different addresses until the funds hit a service (an off-ramp), i.e. an alternative means of cashing out the stolen bitcoin. Ideally, an investigator will trace funds to a service so that a subpoena can be issued to the service to unmask the identity of the criminal. These investigations result in traced out subnetworks representing the flow of stolen bitcoin from the point of breech on an exchange through exit ramps. We obtained six subnetworks from investigators at Chainalysis, a firm specializing in blockchain investigations. These were investigations carried out over several months, and which effectively trace all of the stolen funds through the entire bitcoin transaction graph. Each edge is a transfer of the stolen money to a node which is controlled by the hacker. The size and complexity of these graphs vary according to the amount of effort the hacker used to move funds and that hacker's level of technological sophistication. Similar subnetworks can be collectively generated by the community of users that trace funds on the public Bitcoin ledger and often does occur after a criminal steals cryptocurrency on a public ledger (ErgoBTC 2019). We present research to algorithmically visualize and analyze hack cash out subnetworks that capture the temporal behavior of hackers and locate the stolen funds. We then build similarity matrices based on eight graph features, run community detection over those matrices, and successfully classify certain hacks to the known hacking organization to have carried out the attack. We find that temporal features, such as the rate at which the hackers send funds to exit ramps, are the most effective features to use for grouping specific hacks together and classifying them to their hacking groups. We find that this method might prove useful as a component of some automated classification system designed for anti-money laundering or anti-fraud detection of transaction ledgers, not only for the specific use case that we describe in the work below as specific to these investigations, Chainalysis, or even bitcoin as a whole. Algorithmically traversing hack subnetworks and its limitations We investigate bitcoin hacks by traversing subnetworks of nodes that have been built out by professional crime investigators. These hack subnetworks are comprised of nodes that have either directly or indirectly received hacked funds, see Fig. 1 for visualization. Sample Hack Subnetwork, Hack A3 We then create visualizations to identify trends in the hack and to better understand the time patterns specific to each hack as the stolen bitcoin flows to the boundary of the networks generated, see Fig. 2. In some cases, when the level of obfuscation is minimal, investigations tracking stolen funds often terminate at services (see Methodology section on identifying services), simply because criminals want to change their stolen bitcoin for fiat currency, or at least convert it to a another cryptocurrency. Amount in Play over Time Yet cryptocurrency investigations are usually much more complex then this (Nouh and et al. 2019). Often, the investigator may not know if a node belongs to a service, particularly in the case of a mixing service. Furthermore, stolen bitcoin from some of the largest hacks may utilize laundering mechanisms in which OTC brokers act as third party sellers allowing for a change of hands to an entity that is no longer behind the hack. This activity can not be detected through blockchain analytics unless their is a source of ground truth confirming the funds passing through on OTC broker. Without this confirmation, the funds would appear to move from one pseudonymous node to another. Sometimes the investigations are so complex that the investigator simply cannot go through the process of tracing every single stolen bitcoin to an cash out point. In this case, the investigator may choose to chase particularly promising leads, rather then spend the time to analyze every single transaction that occurred. At any given time, stolen funds may be sitting idly in non-service clusters for extended periods of time. In practice, it is common for funds to slowly leak out of these "holding" clusters (Chainalysis 2019). Generally, as networks are built out manually by subject matter experts, methods such as the one proposed below can help ensure that the proper classifications of these networks have been achieved. We first gather subnetworks of known hacks that have been built out by professional investigators. Due to the sensitivity of this data and relative infrequency of hack events, the result of this process provided a small set of anonymized, curated subnetworks that trace stolen funds from the origin of the hacks to all end points of interest. It is at this point that we introduce a new tool for analyzing these subnetworks for additional insights that we can eventually return to the investigators and compliance officers at exchanges. We traverse these subnetworks from the starting clusters through the boundary of the subnetwork. An element of complexity emerges in this analysis that requires additional attention, namely that the terminal nodes require a more rigorous definition than any cluster sitting on the outskirts of the subnetwork since many of these terminal nodes act as sinks but still slowly leak funds despite maintaining control over the majority of their hacked balance. This definition will be fleshed out in the subsection "Defining Terminal Nodes." Additionally, as seen in Table 1, the simple static network characteristics demonstrate that the data is tree like, with low average degrees (in- and out-degrees are equivalent on average) and low clustering coefficients. Yet the complexity due to the temporal nature of the subnetworks as well as the nature of these terminal nodes require additional features to be defined before information can be meaningfully extracted from the data, since it is not always the leaves of these tree-like subnetworks that play important roles, either from the temporally - in that they arrive latest - or topologically - they sit farthest in the transaction graph. Table 1 Summary Statistics for Each Hack Next, to better visualize the temporal activity in the hacks, we create two time series that display the activity of the hacked funds. First, we measure how active the hackers are over time by computing the number of transfers the hackers make each day, as seen in Fig. 3. Transactions over Time Second, we measure the funds traced as they move to terminal nodes, as seen in Fig. 2. As the funds move through terminal nodes, the share of funds still held by hackers decreases. A fully tracked hack subnetwork would be visualized by the funds decreasing from 100% to 0% of funds still held by the hacker over the number of days that it takes to fully exit the funds through terminal nodes. We then generate distributions for the following features for each hack subnetwork: Logarithm of Hack balance of all nodes, see Fig. 4. Distribution of Log Balance Weighted In-degree of all nodes, see Fig. 5. Distribution of In Degree Weighted out-degree of all nodes, see Fig. 6. Distribution of Out Degree Average number of transactions to terminal nodes per day, across all ρ values, derived from data shown in Fig. 3. Terminal Nodes as a function of ρ, see Fig. 7. Number of Terminal Nodes as function of ρ, TvR Logarithmic difference of the average percent of funds still in play, across all ρ values, derived from data shown in Fig. 2. Second difference of the average percent of funds still in play, across all ρ values, derived from data shown in Fig. 2. Logarithmic difference of the standard deviation of the percent of funds still in play, across all ρ values, derived from data shown in Fig. 8. Standard Deviation of Amount in Play over Time Afterwards, we create similarity matrices corresponding to each distribution, whose elements are the pairwise similarities of the distributions corresponding to each of the hack subnetworks via the 1-Dimensional Wasserstein Distance, i.e. the Earthmover Distance (Villani 2003;). We run two community detection algorithms, Modularity Optimization (Clauset and et al. 2004) and Walktrap (Pons and et al. 2013). We compare the output of the overall approach across the similarity matrices for all the distributions against our ground truth attribution of the two underlying hacking groups and demonstrate the potential for such a method by properly reattributing the hack networks to their respective groups. Both this step and the previous step are motivated by the idea that relational data is best analyzed using the tools of network science and the similarity of the distributions between the hacks in question fall into relational data. For a larger range of approaches utilizing complex networks for more general data clustering see (de Arruda and et al. 2012). The reason we employ purely topological distance here, rather than the exponent of a related distance as suggested in de Arruda and et al. (2012), is due to the inherent assumption that the behavior of the underlying hacking groups are similar to the point of minor perturbations in the underlying distributions of activity, which we believe the Earthmover Distance is particularly well suited to detect. Lastly, we review the output communities and test our hypothesis that the features relating to the hack dynamics are more informative in classifying the hacking groups than the static network features. Identifying services A typical service can control thousands of addresses, while larger services can even manage into the millions. We identify services by exploiting features unique to the Bitcoin blockchain. There are many different approaches that blockchains employ to cryptographically verify transactions, but the Bitcoin blockchain relies on Unspent Transaction Outputs (UTXO's) to record all transactions. A UTXO is the unspent output of a previous transaction that a user is entitled to transfer to another bitcoin address. Every wallet that holds a positive bitcoin balance is in possession of at least one UTXO. When multiple UTXO's are held by a single user and spent together in a transaction, it then becomes possible to definitively ascribe common ownership to all of the UTXO's that were spent together. This concept of a cospend is the basis of the clustering activity used by blockchain analysis firms such as Chainalysis to identify clusters of addresses controlled by a single entity. The network then becomes comprised of cospend clusters, i.e. nodes, composed of multiple addresses rather than long chains of single-use addresses (Meiklejohn and et al. 2013; Akcora and et al. 2019). Once addresses have been mapped to a node through cospending activity, the node can be mapped to a named entity by interacting directly with it. For the example of an exchange, this process can occur by visiting an exchange's website, depositing funds on the exchange, and tracing that transaction via a block explorer (BLOCKCHAIN LUXEMBOURG S.A 2011). Only services with publicly available address information can be identified in this way. When stolen funds arrive at a known service, such as a an exchange, we can assume that the hackers have attempted to cash out their funds. Professional investigators trace funds through these nodes to create hack subnetworks that capture as much of the meaningful movement of the stolen funds as possible. Defining terminal nodes There are two types of terminal nodes discussed in this paper. 1) A known service terminal node that is a confirmed service through the process mentioned above of pairing ground truth knowledge with cospending activity. These services can be exchanges, mixers, gambling sites, merchant service platforms, or any exit ramp through which a criminal can off-load stolen bitcoin to an institutional cryptocurrency player. 2) An unknown service node, where the investigator has reason to believe a node is behaving like a service and will therefore terminate the investigation at that point. One problem may arise when the investigator simply chooses to stop pursuing a lead. At this point, the boundary of their investigated subnetwork might resemble a terminal node. This limitation should be further investigated in future work. In the cases of the subnetworks chosen for this research, the investigators followed all leads, which limited the terminal nodes to those described above. By default, terminal nodes are the edges of the graph subnetwork. Ideally, a subnetwork of a hack would track 100% of the funds from the point of a hack through all exit ramps. This would allow us to set ρ=0.00, as the terminal nodes would simply be all the natural edges of the graph. In this case, the investigator would trace funds to a service, whether it be an exchange, mixing site, gambling site, etc. ρ=0.00 indicates that a node has only ever received funds within the subnetwork. We focus on the ratio rather than the difference of funds sent to received because we want to maximize the number of meaningful leads for investigators rather than raw amount due to hacked funds. By returning this normalized list of terminal nodes and resulting charts, we find all partial sinks "of interest" in the subnetwork that may facilitate the issuance of subpoenas or other leads, as well as wallets to watch because they still contain funds, large or small. As a secondary filter, we can sort by balance due to the hack, but this feature is only relevant in the operational stage for investigators, not when conducting our analysis. We define ρ as: $$\rho = \frac{\textit{weighted in-degree}}{\textit{weighted out-degree}}\,, $$ i.e. the ratio for a given node of the total amount of funds it sent to the total amount of funds it received. Others have proposed using ratios of the in/out degrees when studying the Bitcoin Transaction Graph, but in different contexts and not as a node-level feature (Bovet and et al. 2018). We introduce this ratio as a means of classifying individual nodes based on features specific to networks of financial transactions. This is particularly important when trying to capture the underlying behavior of the nodes over time, as value flows in the temporal network that they collectively compose. Subnetworks that vary over time, such as hack investigations, generate terminal nodes throughout the duration of the network's activity. Terminal nodes with high ρ values should represent an optimal list of possible leads for an investigation, since they represent sinks of value in the transaction graph and are therefore plausibly operated by the true perpetrator of the hack or another entity of interest. Figure 9 shows the spectrum of ρ values and their subsequent interpretation. Spectrum of rho values and their significance Visualizing temporal behavior in the hack subnetworks The temporal visualizations are shown in Figs. 3 and 2. Figure 3 shows the number of transfers over time within the hack subnetwork so that the investigator can get a sense of how active the hackers are over time. They can answer questions such as: does the hacking group consistently make transactions over time, or do they tend to move funds according to a temporal pattern. A pattern may be indicative of an algorithm moving the funds, as opposed to actual individuals approving the transactions. Figure 2 shows how the funds exit over time through terminal nodes. It allows an investigator to see the exiting strategy of the hacking group in time. For example, do the hackers exit the funds in one period of time, or consistently over a longer duration of time? Each of these strategies has implications for how the investigator profiles the hacking group overall. For example, a hacking group that exits all the funds through one exchange in one day may be less organized and less well-funded than a hacking group that gradually, through thousands of strategic transactions, exits the funds over a long period of time. The trends are made visible by restructuring the hack subnetworks into time series. Figure 3 demonstrates how active the hackers are by using the number of transactions they carry out as proxies. Figure 3 allows us to see the way the hackers utilize terminal nodes. Hacking group alpha (A1) is much more active, slowly moving funds through terminal nodes over a shorter period. Hacking group beta (B1) utilizes fewer transactions in general, but tends to send all of their transfers to terminal nodes in a short period of time. In the case of chart B1 in Fig. 3, the hackers sat on their funds for a long period of time before abruptly exiting over 70% of the funds through a few exit ramps within a one week period. To test the hypothesis that the hackers are best classified using temporal features such as the rate at which funds cash out at terminal nodes, we vary ρ in the following sensitivity analysis section to observe stolen bitcoin exiting through terminal nodes under a range of conditions. Sensitivity analysis of ρ We allowed ρ to range from 0.02 to 0.98 to test the implications of gradually change the ρ parameter. A cluster with a very low ρ value, e.g. ρ=0.1, would have to hold on to more 90% of the funds it received to be considered a terminal node. On the other hand, a very high ρ value, e.g. ρ=0.9, allows a cluster to retain only 10% of the funds it received from the hack in order for it to be considered a terminal node. A higher ρ will capture many more terminal nodes, as it is an easier condition for nodes to meet. A lower ρ value means that the there are fewer terminal nodes picked up in the graph, and the criteria for being "of interest" to an investigator is extremely high. A very low ρ specifies that wallets of interested are those which may only hold small amounts of the total funds that it received. A node holding over 90% of the funds might be a holding wallet gradually leaking out funds, it might be a consolidation wallet for a criminal ring, a wallet associated with other types of criminal activity, or even a point of conversion to another cryptocurrency if, for example, the wallet is an Exodus wallet, which allows for wallet level cryptocurrency conversions. Choosing the right value for ρ allows us to optimally grow the hack subnetwork such that it would include the paths of interest without becoming too large to meaningfully analyze. We found that setting the ratio too high resulted in a less meaningful yet larger hack subnetwork, where the terminal nodes did not adequately capture dynamics of interest, and setting the ratio to be too low did not include clusters that likely should have been included. Applying a range of ρ from ρ=0.02 through ρ=0.98, in increments of 0.02, had very large implications for the amount of funds considered to be tracked. While changing ρ typically revealed how much of the funds the investigator tracked, at the same time, changing the ρ value does not impact the overall cash out trend witnessed by the investigator. These results indicate that varying ρ may not be useful for understanding the behaviors of the hacker, but is a useful tool for identifying nodes of interest that could be possible leads to the investigator. Indeed the variance in the ρ parameter proved one of the most useful tools for running community detection. We finally then needed to handle the introduction of funds at a time later than the hack by either the same or different user. To account for this, we either add these new flows to the funds at the start and work with the new total as our amount of hacked funds, or we incorporate these flows into our ρ definition, by stating a further constraint that if ρ>1, then it is a terminal node and we do not follow its flows forward in time. In the case of the former, we can track all funds engaged in clearly illicit activity, regardless of source, while in the case of the latter, we are actively restricting the subnetwork to funds that explicitly originated from the source of the hack. Feature definitions The goal when selecting which distributions to analyze was to capture the behavior of movement of the hacked funds in a precise way. To confirm the hypothesis that the two hacking groups exhibit different cashout strategies, we decided to consider the empirical distributions of 8 different features, as mentioned in Step 4 of the Pipeline. In the following definitions, the expectations are defined over the nodes of the subnetworks (and terminal nodes in the case of Transactions). Additionally, the time units are discretized at the daily level. Lastly, the Initial Hack Amount is the value stolen from the exchange by the hacking group which was the source of the investigated subnetworks. We define several of the features in our analysis as follows: Amount in Play. \({AIP} = {Initial\ Hack\ Amount} - \sum _{terminal nodes}{{weighted\ in-degree}} \) Hack balance of all nodes. Bal= log(weightedin−degree − weightedout−degree) Logarithmic first difference of the average, LDA, percent of amounts still in play, AIP, across all ρ values. \({LDA} = \log \left (\frac {\mathbf {E}[{AIP}(t+1)]}{\mathbf {E}[{AIP}(t)]}\right)\) Second difference of AIP, across all ρ values. \({Second\ Diff(AIP)} = \frac {LDA(t+1)\ -\ LDA(t)}{LDA(t)}\) Logarithmic difference of the standard deviation, LDST, of the AIP, across all ρ values. \(\textit {LDST} = \log \left (\frac {\mathbf {E}[({AIP}(t+1)-\mathbf {E}[{AIP}(t+1)])^{2}]}{\mathbf {E}[({AIP}(t)-\mathbf {E}[{AIP}(t)])^{2}]}\right)\) Average number of transactions to terminal nodes, TTN per day, across all ρ values. Transactions=E[TTN] Similarity matrices Once all of the normalized histograms were generated, we measure the pair-wise similarity between them, per variable, via the 1-Dimensional Wasserstein Distance, a.k.a. the Earthmover Distance or L1 Norm. Generally, the Lp Norm is defined as: $$W_{p}(F,G) = \left(\int_{0}^{1} |F^{-1}(u)-G^{-1}(u)|^{p} \; du\right)^{1/p}, $$ where F and G are empirical distribution functions with generalized inverses, F−1 and G−1 (Villani 2003;). Community detection After the similarity matrices are computed for the distributions of interest, the goal becomes differentiating between the two hacking groups. We propose a method of representing the similarity matrices as networks and searching for two distinct communities via both Modularity Optimization and Walktrap and comparing the results. Modularity Optimization (Clauset and et al. 2004) consists of finding a near maximal value for Modularity, Q, returned from the communities applied to some null model of network formation, typically a Random Network. $$Q = \frac{1}{2m}\sum\limits_{vw}\left[\left[A_{vw} - \frac{k_{v}k_{w}}{2m}\right]\delta(c_{v},c_{w})\right], $$ where m is the number of edges in the network, Avw is 1 when nodes v and w are connected and 0 otherwise, kv is the sum of Avw over w, and δ(i,j) is 1 when i and j are equal and 0 otherwise. Walktrap (Pons and et al. 2013) operates similarly, also attempting to optimize the same modularity, but with a focus on short random walks exiting communities as the explicit motivation and approach. Both algorithms are built for analyzing large networks, and their true modularity optimization functions are not explicitly the Q written above, but a derived form. We utilized both methods as independent confirmation rather than any benefits from their relative optimizations. As the resulting networks are small, with one node corresponding to each hack, are eight distributions analyzed, and two applications of community detection, any conclusions drawn from our method are only tentative since no conclusive results can be drawn from such small amounts of data. Nevertheless, we propose the full method as technically sound and a novel tool in the analysis of hack subnetworks in the bitcoin blockchain. As discussed in the Methodology, the communities shown in Fig. 10 correspond to those identified by two clustering algorithms with the first two rows being Walktrap's output communities on each distribution's similarity network as seen in Fig. 11, and the second two rows being the results obtained via Modality Optimization. As can be seen, similarity matrices derived from different distribution comparisons, whether analyzed by the same or different algorithm lead to different observed communities. Though they are often different, the communities do share some common characteristics with each other. For example, for all but the clustering of Balance similarity and TvR, nodes {B1,B2,B3} are always clustered together. Furthermore, 9 out of the 16 clusters have at least two members of group A together. Communities for all features' similarity matrices - First by Walktrap then Modularity Optimization Similarity Matrices of Feature Distributions for Hacking Groups A and B To better quantify consensus among the results in Fig. 10, we first find one node N which remains in the same group through all of the methods (we chose node B6) so as to establish a common group naming (in other words, it is no longer the case that a node is either in the blue or the red group seen in Fig. 10, rather that each node is either in the same group as our fixed node or in the opposite group), and then we generate a number ni,j associated to each node i and community j, with j∈{1,2,3,…,16}, setting ni,j=1 if i is in the same group as N and ni,j=0 otherwise. We then compute the probability of node i being in the same group as N with \(p = \frac {\sum _{j = 1}^{16}n_{i,j}}{16}\). Finally we bisect the vector of values to along its median and obtain the grouping {A1,A2,A3}, {B1,B2,B3}. This process was repeated using two feature set combinations. The first set contained all 8 features, and its resulting vector was (0.625, 0.5, 0.1875, 0.8125, 1, 1). The second set included only temporal features, namely: LDA, Second Diff(AIP), LDST, and ATVR and had a resulting vector of (0.25, 0.5, 0, 1, 1, 1). Note, that the ground-truth vector is simply (0, 0, 0, 1, 1, 1). In both cases, the bisection works to successfully find the two communities. In the case of only temporal features, the results are even more compelling where 0.5 can be used to bisect the set of hacks into their respective communities. We ran this analysis on historical hacks curated by Chainalysis investigators. The 6 hacks analyzed were carried out by 2 distinct and well-known hacking groups that have been active for the past several years. Each hack was manually classified by the investigators into one of the two groups, which we take as ground truth. We did not include images of these investigations because they visually did not contribute towards understanding the hacking methods. Analyzing the subnetworks using our proposed methodology allowed investigators to observe the cash out methods for the different hacking groups. Furthermore, the analysis of each subnetwork based on the features above facilitated greater understanding of each specific hack and hacking group, as well as the ability to successfully classify the subnetworks into their respective hacking groups via our pipeline. Hacking group alpha We analyzed three distinct hacks carried out by hacking group alpha. Hacking group alpha is a large, well-funded organization. The hacks analyzed in this paper reveal that the subnetworks tracing funds stolen by hacking group alpha are highly complex, with the stolen funds moving through many nodes. The stolen bitcoins are slowly cashed out through terminal nodes overtime. Investigators confirmed this trend. Funds flowing to terminal nodes from the three hacks visualized in Fig. 2 further confirm this trend. Stolen bitcoin being moved by hacking group alpha appear to slowly leak out of possession of the hackers through terminal nodes. Taking both the first and second differences for the amount in play visualized in Fig. 2 demonstrates that the acceleration at which stolen funds exit through terminal nodes is a significant means of clustering the graphs. Just taking first differences successfully clusters hack A3 and hack A1 together. Visually, A1 and A3 are more similar. Looking at the second differences, i.e. the acceleration, for the amount in play visualized in Fig. 2 is most successful at finding communities of hacks. Running community detection on the similarity matrices for the second differences of the amount in play successfully identifies that A1, A2, and A3 belong in the same community. The number of transfers that the hackers use to move the funds has also proven significant for helping to effectively classify the hacks according to their hacking groups. As shown in Fig. 3, hack A2 and A3 appear to have similar trends in terms of the number of transfers made each day following the hack. The community detection that we ran on the hacks classified these two hacks together when looking only at trends in the frequency of transactions sent to terminal nodes. Analysing the variance in the ρ parameter, as visualized by Fig. 8 captures how the share of funds exiting through terminal nodes changes as ρ approaches 1. The standard deviation for the ρ parameter as ρ approaches 1 approximates the variety in behavior for terminal nodes. Using the log difference in standard deviation across the amount in play by varying ρ allows us to classify hacks A3 and A1 together. Both these hacks had similar changes in the amount in play for each ρ over time, whereas A2 had some uncharacteristic behavior for hacking group alpha around day 250. A2 was a much smaller sized subnetwork, with only 55 nodes, than A1 and A3, with 1257 and 218 respectively. This made the standard deviation of the amount exiting through terminal node more sensitive as ρ increased. We investigated whether the distribution of balances across all the nodes in the hack would be a useful indicator to help classify hacks. This was one of the weakest features used to classify the hacks into hacking groups. As shown in Fig. 4 there is a wide variety in the distribution across all the nodes in the graph based on their hack balances. Hack A3's distribution, for example, had a higher peak, meaning many of the hacks in A3 held a similar balance. Yet A2 had much more variety across the nodes within the graph in terms of how much stolen bitcoins each node ended up holding. Using the distribution of the log balance by nodes was not useful on its own to help classify hacks, and caused one of the few instances of mistakenly grouping hacks A3 and B4 together as seen in Fig. 10. Hacking group beta We then analyzed three hacks carried out by the second hacking organization referred to here as hacking group beta. When visualizing the hack subnetworks for hacking group beta, there are striking differences in the cash out mechanisms. Hacking group beta tends to send a majority of its funds through terminal nodes over a short period of time. They tend to sit on their funds quietly, sometimes moving some funds through wallets of interest, but have a characteristically abrupt cash out pattern. This pattern is visualized in Fig. 2, where hacks B1, B2 and B3 all have notable vertical drops, representing abrupt moments of cashing out through terminal nodes. Running our community detection algorithms on the first differences of this activity successfully classified all B hacks as belonging together, see Fig. 10, yet also identified hack A2 as fitting a similar pattern. The second differences for the amount in play chart is the best at predicting the proper community assignment. Its top performance can be attributed to its correctly capturing the acceleration of the funds exiting through terminal nodes, which confirms the hypothesis put forward by investigators about temporal trends in exiting funds. All of the hacks from hacking group beta have a large variance for ρ as ρ approaches one, which can also be visualized in 2. This signifies a large range in sending versus receiving behavior for the nodes within the hacking group beta hacks. Funds are exiting through a wide variety of nodes, and not simply hitting one exit point which only ever received funds. Looking at the distribution of balances held by the nodes within the subnetwork demonstrates the variety of node behaviors present. However, this was again a weak feature when it came to classifying the hacks through community detection. Hack B1 had many nodes that passed through mixing services which were unclustered in the subnetwork. The mixers would siphon off parts of the stolen funds into consolidator wallets in similar patterns. The investigator only tracked the fattest paths, leaving many of the known nodes passing through mixers with a similar balance. Using balance distribution when the graph is not fully built out was shown not to be useful for community detection. We next looked at the variation in the AIP over all ρ as visualized in Fig. 8. The shape of this graph visualizes how ρ affects the share of funds exiting through terminal nodes. Almost all of hack B1's funds exit through a wide variety of terminal nodes on the first day. The standard deviation peaks at this point, followed by a long period of no fund movements. We successfully classified hacks B1, B2, and B3 together using our community detection algorithms, but hack A2 was mistakenly grouped in when using this feature, as shown in Fig. 10. We then analyzed the number of transactions going to terminal nodes in Fig. 3. The number of transactions showed no clear visible pattern to help classify the hacks into hacking groups. While the community detection algorithms successfully classified all three hacks from hacking group beta together, it also picked up hack A1. We began this analysis by talking with Chainalysis investigators about what they knew about the hacking groups. They indicated that the key differentiation between the two groups, is the pattern by which they hold funds and the subsequent rate at which they cash them out. Our analysis confirms this hypothesis. We conclude that static features of the charts, such as balance distributions, in degrees, and out degrees are not useful features for classifying the hacks into hacking groups. There are many limitations to these static features. To start, they likely require a fully built out, comprehensive graph. Many of the graphs we chose to analyze were incomplete from the start. This means the takeaways from the static features of the charts were also fundamentally incomplete. Table 1 contains general summary statistics that further reinforce the relative scarcity of meaningful information from the static features for the hacks. More importantly, our hypothesis of focusing on the temporal features of the subnetworks, rather than the static features was validated. The results indicate that the patterns by which the subnetworks evolve over time serve as useful features for optimal classification based on the method described in this paper. The optimal classifications in Fig. 10, specifically the second difference - or acceleration - of AIP, are most characteristic of the subnetworks temporal nature. Varying ρ to alter our level of resolution into terminal nodes also plays a role in the usefulness of our temporal features and the resulting classifications. The correct classifications were obtained when similarity matrices were built from these temporal features and the community detection algorithms was subsequently run to differentiate the hacking groups based on these features exclusively. Hacks represent an important challenge for law enforcement, the Bitcoin community, and financial institutions. There is opportunity for an algorithmically informed approach to analysis of existing hacks as well as real time monitoring of hacks. This research represents an attempt at building a more rigorous framework for such an approach via an analysis of both the static and temporal features of hack subnetworks and suggests that the temporal features represent an important avenue of exploration for a deeper understanding of the hack subnetworks. In this paper, we have described our proposed approach for analyzing characteristics of the hack subnetworks within the broader bitcoin transaction graph as a means of classifying specific hacks to their respective perpetrating hacking groups. We find that specifically, the temporal characteristics are the most effective for allowing this categorization to occur. Our methods, however, can also be used in other contexts. Open source investigations, for example, can exploit these methods to more effectively track stolen funds from the breach point on the exchange that has been hacked. This technique also, for example, could be used even in fiat systems such as the swift network. For example, once a potential fraud flag is raised on an account, this method could be used to learn from the behavior of the fraudulent actors. There are, however, limitations to extending this method to the fiat system. One key distinction between our use case and the fiat example stems from how we knew the hack was initiated by either of only two actors. If however one considers a much larger system with many more potential criminal actors, it might take many ground truth examples and a more robust learning algorithm to distinguish between the broader scope of potential illicit actors. Due to the sensitivity of the underlying data, we cannot currently release our dataset. AIP: Amount in Play Bal: LDA: Logarithmic first difference of the average TvR: Number of Terminal Nodes vs ρ UTXO: Unspent Transaction Output Akcora, CG, et al. (2019) BitcoinHeist: Topological Data Analysis for Ransomware Detection on the Bitcoin Blockchain. BLOCKCHAIN LUXEMBOURG S.A (2011) Block Explorer. https://www.blockchain.com/explorer. Accessed 12 Dec 2019. Bovet, A, et al. (2018) Network-based indicators of Bitcoin bubbles. Chainalysis (2019) Chainalysis Cryptocrime Report 2019. https://blog.chainalysis.com/2019-cryptocrime-review. Accessed 12 Dec 2019. Clauset, A, et al. (2004) Finding community structure in very large networks. Phys Rev E 70:66–111. de Arruda, GF, et al. (2012) A complex networks approach for data clustering. Phys A 391:6174–6183. Electrum (2011) Electrum Wallet. https://electrum.org. Accessed 12 Dec 2019. ErgoBTC (2019) Tracking Plustoken Funds. https://medium.com/@ErgoBTC/ tracking-the-plustoken-whale-attempted-bitcoin-laundering-and-its-impact-on-wasabi-wallet-787c0d240192. Accessed 12 Dec 2019. Huang, DY, et al. (2018) Tracking Ransomware End-to-end In: 2018 IEEE Symposium on Security and Privacy (SP): 20-24 May 2018, 618–631.. IEEE, San Francisco. Meiklejohn, S, et al. (2013) A fistful of bitcoins: characterizing payments among men with no names In: IMC '13 Proceedings of the 2013 conference on Internet measurement conference: 23 - 25 October 2013, 127–140.. ACM, Barcelona. Nakomoto, S (2009) Bitcoin: A Peer-to-Peer Electronic Cash System. https://bitcoin.org/en/bitcoin-paper. Accessed 12 Dec 2019. Nouh, M, et al. (2019) Cybercrime Investigators are Users Too! Understanding the Socio-Tehnical Challenges Faced by Law Enforcement In: Proceedings of the 2019 Workshop on Usable Security (USEC) at the Network and Distributed System Security Symposium (NDSS), 24-27 February 2019.. ACM, San Diego. Pons, P, et al. (2013) Computing communities in large networks using random walks In: IMC '13 Proceedings of the 2013 conference on Internet measurement conference: 23 - 25 October 2013, 127–140.. ACM, Barcelona. Villani, C (2003) Topics in Optimal Transportation, Graduate Studies in Mathematics. Am Math Soc. https://doi.org/10.1090/gsm/058. Villani, COptimal Transport: Old and New. Springer. Yin, S, et al. (2017) A first estimation of the proportion of cybercrminal entities in the bitcoin ecosystem using supervised machine learning In: 2017 IEEE International Conference on Big Data (Big Data), 17504747. https://doi.org/10.1109/bigdata.2017.8258365. We thank the Chainalysis investigators for their collaboration. This research was funded by Chainalysis. Chainalysis, New York, USA Daniel Goldsmith, Kim Grauer & Yonah Shmalo Daniel Goldsmith Kim Grauer Yonah Shmalo DG, KG, and YS designed research, performed research, and wrote the paper. All authors read and approved the final manuscript. Correspondence to Daniel Goldsmith. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Goldsmith, D., Grauer, K. & Shmalo, Y. Analyzing hack subnetworks in the bitcoin transaction graph. Appl Netw Sci 5, 22 (2020). https://doi.org/10.1007/s41109-020-00261-7 Temporal networks Sociotechnical systems Structure and dynamics of crime
CommonCrawl
Life beyond the Tanimoto coefficient: similarity measures for interaction fingerprints Anita Rácz ORCID: orcid.org/0000-0001-8271-98411, Dávid Bajusz ORCID: orcid.org/0000-0003-4277-94812 & Károly Héberger ORCID: orcid.org/0000-0003-0965-939X1 Interaction fingerprints (IFP) have been repeatedly shown to be valuable tools in virtual screening to identify novel hit compounds that can subsequently be optimized to drug candidates. As a complementary method to ligand docking, IFPs can be applied to quantify the similarity of predicted binding poses to a reference binding pose. For this purpose, a large number of similarity metrics can be applied, and various parameters of the IFPs themselves can be customized. In a large-scale comparison, we have assessed the effect of similarity metrics and IFP configurations to a number of virtual screening scenarios with ten different protein targets and thousands of molecules. Particularly, the effect of considering general interaction definitions (such as Any Contact, Backbone Interaction and Sidechain Interaction), the effect of filtering methods and the different groups of similarity metrics were studied. The performances were primarily compared based on AUC values, but we have also used the original similarity data for the comparison of similarity metrics with several statistical tests and the novel, robust sum of ranking differences (SRD) algorithm. With SRD, we can evaluate the consistency (or concordance) of the various similarity metrics to an ideal reference metric, which is provided by data fusion from the existing metrics. Different aspects of IFP configurations and similarity metrics were examined based on SRD values with analysis of variance (ANOVA) tests. A general approach is provided that can be applied for the reliable interpretation and usage of similarity measures with interaction fingerprints. Metrics that are viable alternatives to the commonly used Tanimoto coefficient were identified based on a comparison with an ideal reference metric (consensus). A careful selection of the applied bits (interaction definitions) and IFP filtering rules can improve the results of virtual screening (in terms of their agreement with the consensus metric). The open-source Python package FPKit was introduced for the similarity calculations and IFP filtering; it is available at: https://github.com/davidbajusz/fpkit. Interaction fingerprints are a relatively new concept in cheminformatics and molecular modeling [1]. As molecular fingerprints are binary (or bitstring) representations of molecular structure, analogously, interaction fingerprints are binary (or bitstring) representations of 3D protein–ligand complexes. Each bit position of an interaction fingerprint corresponds to a specific amino acid of the protein and a specific interaction type. A value of 1 ("on") denotes that the given interaction is established between the given amino acid and the small-molecule ligand (a 0, or "off" value denotes the lack of that specific interaction). Two such fingerprints are most commonly compared with the Tanimoto similarity metric (taking a value between 0 and 1, with 1 corresponding to identical fingerprints, i.e. protein–ligand interaction patterns). In the most common setting, the Tanimoto similarity is calculated between a reference fingerprint (usually belonging to a known active molecule) and many query fingerprints. Despite the straightforward definition, interaction fingerprints have been implemented by various research groups and commercial software developers with slight differences in the specifics. The first interaction fingerprint was termed structural interaction fingerprint (SIFt) and was introduced by Deng et al. [2]. This implementation contained originally seven interaction types (any contact, backbone contact, sidechain contact, polar contact, hydrophobic contact, H-bond donor and acceptor), and was later extended to include aromatic and charged interactions as well [3]. This modified version is implemented in the popular Schrödinger molecular modeling suite, which we also applied in this work, see Table 1 [4]. A similar implementation was published by Cao and Wang [5], containing 10 interactions per residue, and termed ligand-based interaction fingerprint (LIFt). Table 1 Summary of the bit definitions of the modified SIFt implemented in the Schrödinger Suite and applied in this work A widely-applied variant, simply termed interaction fingerprint (IFP) was introduced by Marcou and Rognan [6], containing seven interactions per residue. A marked difference between SIFt and IFP is that IFP differentiates aromatic interactions by their orientations (face-to-face vs. edge-to-face), and charged interactions by the specific charge distribution (i.e. cation on the ligand vs. anion on the ligand). Furthermore, IFPs can be configured to include less common interaction types, such as weak H-bonds or cation–π interactions. Later, the same group has introduced triplet interaction fingerprints (TIFPs), which encodes triplets of interaction points to a fixed length of 210 bits [7]. Mpamhanga et al. [8] have introduced three types of interaction fingerprints in their work in 2006, out of which the one termed CHIF is probably the most prominent. Atom-pairs based interaction fingerprint (APIF) is a variant implemented by Pérez-Nueno et al. [9] in the MOE SVL scripting language [10]. APIF accounts for the relative positions of pairs of interactions (based on their binned distances) and stores them in a count-based fingerprint with a fixed length (294 bits). Da and Kireev [11] have introduced SPLIF (Structural protein–ligand interaction fingerprints), whose main difference with respect to SIFt is that the interactions are encoded only implicitly by encoding the interacting ligand and protein fragments (whereas in SIFt the interaction type explicitly defines the given bit in the bitstring). In the same year, Sato and Hirokawa [12] have introduced another approach called PLIF (protein–ligand interaction fingerprints), which relies on the per-residue identification of the number of interacting atoms (with the ligand). To our knowledge, the most recent novel interaction fingerprint implementation is the PADIF (Protein per atom score contributions derived interaction fingerprint) approach of Jasper et al. [13]. PADIF incorporates the strengths of the different interactions by exploiting the per atom score contributions of the protein atoms, which are calculated for each pose during docking with GOLD, or with any other scoring function that can output atom contributions [14]. As a consequence, PADIF is an atom-based interaction fingerprint. Interaction fingerprints have been applied numerous times to complement docking scores in virtual screening campaigns, e.g. for the discovery of GPCR (G-protein coupled receptor) ligands [15] or kinase inhibitors [16]. In more complex examples, they have been applied for interpreting activity landscapes [17], for training machine learning models [18], and for identifying covalently targetable cysteine residues in the human kinome [19]. Additionally, interaction fingerprints are applied to support large, specialized structural databases, such as GPCRdb (for GPCRs) [20], KLIFS (for kinases) [21, 22] or PDEstrian (for phosphodiesterases) [23]. Binary similarity measures are applied in various scientific fields to compare binary and continuous data vectors. To our knowledge the most comprehensive collection of similarity measures was published by Todeschini et al. [24], listing 51 similarity measures (out of which seven have been shown to perfectly correlate with others). For binary data (e.g. for two interaction fingerprints), similarity measures are calculated from the contingency table (or confusion matrix) shown in Table 2, containing the frequencies of four events: (a) 1–1 (interaction present in both complexes), (b) 1–0 (interaction present in the first complex and absent from the second), (c) 0–1 (interaction absent from the first complex but present in the second), and (d) 0–0 (interaction absent from both complexes). With these parameters (along with the fingerprint length p), various similarity measures can be calculated, as exemplified here: $$SM = \frac{a + d}{p}$$ $$JT = \frac{a}{a + b + c}$$ $$BUB = \frac{{\sqrt {ad} + a}}{{\sqrt {ad} + a + b + c}}$$ In the examples, SM is the simplest similarity coefficient (called simple matching, or Sokal–Michener), JT corresponds to the Jaccard–Tanimoto coefficient (the de facto standard of the cheminformatics community), and BUB is the Baroni–Urbani–Buser coefficient that was suggested in our recent work as a good similarity metric for metabolomics fingerprints [25]. Table 2 Confusion matrix for a pair of interaction fingerprints, containing the frequencies of common on bits (a), common off bits (d), and exclusive on bits for Complex 1 (b) and Complex 2 (c) The values of similarity measures usually range from 0 to 1 (as for the above examples), but many of them (e.g. correlation-based measures) are defined to other ranges, such as − 1 to + 1. Such measures can be rescaled to the range [0, 1], based on this formula: $$s^{\prime} = \frac{s + \alpha }{\beta }$$ where α and β are the scaling parameters compiled by Todeschini et al. [24]. Similarity measures can be categorized according to symmetricity and metricity. A similarity coefficient is called symmetric (S) if it considers d (number of common off bits) equally to a (number of common on bits), intermediate (I) if d is underweighted with respect to a, or asymmetric (A) if d is not considered at all. Additionally, the work of Todeschini et al. denotes correlation-based metrics with the letter Q. Metricity specifies whether a similarity measure can be transformed into a metric distance, i.e. one that complies with the criteria of non-negativity, identity of indiscernible, symmetry (dA,B = dB,A) and triangle inequality. These can be called (similarity) metrics and are denoted with M, while non-metric measures are denoted with N. In this work, we have adapted the abbreviations introduced by Todeschini et al. [24]. In our related earlier works, we have confirmed the choice of the Tanimoto coefficient for molecular fingerprints (by a comparison of eight commonly available measures) [26], and more recently we have suggested the Baroni–Urbani–Buser (BUB) and Hawkins–Dotson (HD) coefficients for metabolomic fingerprints [25]. We should note however, that due to the highly different data structure, these conclusions are not transferrable to interaction fingerprints (or other fingerprint types). In this work, our goals were to (1) compare and rank these 44 similarity measures for their use with interaction fingerprint data, and (2) to dissect the interaction fingerprints and investigate how changes in the data structure affect the ranking of similarity coefficients. Also, we aimed to answer some specific questions considering interaction fingerprints, regarding e.g. the usefulness of IFP filtering schemes (i.e. exclusion of certain bit positions or blocks), or of general interaction definitions (e.g. "Any contact"). We note here that we use the abbreviation IFP throughout this work to refer to interaction fingerprints in general, not to the specific fingerprinting method of Marcou and Rognan [6]. (The specific method we used here is a modified version of SIFt [2], implemented in the Schrödinger Suite [4].) Ten protein targets were applied for the comparison, which were selected from the DUD datasets [27] based on the following criteria: (1) a crystal structure of the human protein from the PDB database must be available, (2) the co-crystallized ligand should have a reported bioactivity data (if more structures were available, the one with the most active ligand was selected), and (3) we strived to compile a set of proteins that are as diverse as possible. The applied protein targets and ligand sets are summarized in Table 3. Table 3 Summary of the applied protein targets and ligand sets The case studies correspond to ten virtual screening scenarios, where IFPs are used for retrieving the active molecules from among the chemically similar, but not active decoy compounds. A standard tool for evaluating virtual screenings is the area under the receiver operating characteristic curve (ROC AUC, or AUC for even shorter). The AUC can take values between 0 and 1, and corresponds to the probability of ranking a randomly selected active compound higher than a randomly selected inactive compound (as a consequence, an AUC value of 0.5 corresponds to random ranking) [28]. In this work, we have used AUC values as a first approach to evaluating the various IFP-similarity measure combinations, followed by a more detailed statistical analysis, as explained below. Generation of interaction fingerprints All the preprocessing procedures for the protein targets and ligands were carried out with the relevant Schrödinger software (LigPrep, Protein Preparation Wizard etc.) [29]. Standard (default) protocols were used for grid generation and ligand docking (Glide) [30, 31]. The IFPs were also generated with a Schrödinger module based on the docked poses, and contained by default all of the nine interactions listed in Table 1. To study the effects of the more general interaction definitions (bits), we have generated two more sets of IFPs, where we have omitted (1) the Any Contact (Any), and (2) the Any Contact (Any), Backbone Interaction (BB), and Sidechain Interaction (SC) definitions. We have labeled the resulting IFPs ALL (original), WO1 (without Any), WO3 (without Any, BB and SC). Additionally, we have implemented two IFP filtering rules to get rid of the large set of bits in the IFPs, which are consistently 0 across the whole ligand set. Briefly, residue-based filtering (RES) excludes any residue from the IFP that is found to be consistently non-interacting across the whole dataset, while interaction-based filtering (INTS) additionally omits any individual interaction that is never established in the whole dataset. The filtering rules are summarized and illustrated in Fig. 1. a Docked complex of a small-molecule virtual hit (green sticks) to JAK2 [16]. Potentially interacting residues in the vicinity of the ligand are highlighted in red. b Excerpt from the interaction fingerprint of the docked complex. Interacting residues are highlighted in red, while non-interacting residues are represented as gray blocks. Inside the red blocks, those interactions are grayed out that cannot be established by definition. c Short definition of the SIFt filtering rules implemented in this work. Residue-based filtering (RES) omits any residue that is found to be consistently non-interacting across the whole docked dataset. Interaction-based filtering (INTS) additionally omits any individual interaction that is not established even once across the whole dataset. The latter includes (but is not restricted to) those interactions that cannot be established by definition (grayed-out interactions inside red blocks); for example the "Aromatic" bit will be 0 for any residue that lacks an aromatic ring Similarity metrics We have implemented a Python module (FPKit) to calculate 44 similarity measures (collected by Todeschini et al. [24]) on plain bitstrings. The definitions of these similarity measures can be found in the original publication of Todeschini et al., and as a supplement to our recent (open access) article on metabolomic profiles [25]. Those measures that do not, by definition, produce values in the [0, 1] range are scaled with the α and β scaling parameters, published together with the definitions (see also Eq. 4). In some instances, we needed to correct some of these scaling parameters and implement additional checks to avoid division-by-zero errors: these are summarized in Additional file 1. The Python module additionally contains the implemented filtering rules, and is available at: https://github.com/davidbajusz/fpkit. Sum of ranking differences (implemented as a Microsoft Excel VBA macro) was used for the evaluation of the similarity values in each of the ten datasets. The similarity measures were scaled with Eq. 4 using the α and β parameters published in [24] (and corrected by us in a few cases, see Additional file 1), but even after scaling, some of the measures produced similarity values in highly different ranges between 0 and 1, therefore additional data pretreatment was used to obtain a balanced set of data, which can be compared in a fair way. The following options were considered for data pretreatment: autoscaling (a.k.a. standardization), rank transformation (i.e. assigning ranks to the values according to increasing magnitude) and range scaling (sometimes wrongly termed interval scaling). The workflow for generating the input matrices for SRD analysis is presented in Fig. 2. Ninety variants of SRD input matrices were calculated based on the different bit selections and filtering rules for each protein target. The input data matrices for SRD analysis contained the similarity values of the molecules, calculated with each of the 44 similarity measures. Workflow of the input matrix generation and the complete protocol of the study SRD is a novel algorithm based on the calculation of the differences between the object-wise ranks produced by a vector (corresponding to a method, model, similarity metric, etc.), as compared to a reference vector [32, 33]. The reference can be experimental values as a gold standard, or a consensus produced by data fusion, such as row-average, minimum or maximum, etc. This is related to the basic idea of multicriteria decision making, where the objective is to rank the objects simultaneously by each criterion: using that terminology, the criteria would be the various similarity measures in this case. The basic steps of the protocol are the following: (1) ranking the samples (here, ligands) in their order of magnitude by each column vector (similarity measure), (2) for each sample (ligand), calculating the differences between the ranks produced by each similarity measure and the reference, and (3) summing up the absolute values of the differences for each similarity measure. The resulting sums are called SRD values and can be used to compare the similarity measures: the smaller the SRD value, the closer the measure is to the reference (in terms of ranking behavior). A detailed animation of the calculation procedure can be found as a supplement to our earlier work [26]. The method is validated with cross-validation and a randomization test as well. The MS Excel SRD macro is freely available for download at: http://aki.ttk.mta.hu/srd We should note that besides SRD, a number of methods for the comparison of rankings is reported in the literature, or used routinely by statisticians. Spearman's rank correlation coefficient—probably the most commonly used rank-based statistical test—has been compared to SRD in the paper of Héberger and Kollár-Hunek [34] as early as 2011, and we have also shown in our recent work the more sophisticated discriminatory power of SRD as compared to Spearman's rho [35]. An interesting novel application of SRD is in Post-Pareto optimality analysis, where it was clearly shown to be a well-suited decision support tool (by ranking the solutions along the Pareto front) [36]. More generally: while it is also based on a comparison of rankings, the SRD workflow can be clearly distinguished from rank-based statistical tests, as it involves not one, but three essential steps. The first of these is the definition of the reference vector (i.e. reference ranking), which—depending of the problem—can be a "gold standard" (such as experimental values for the comparison of computational methods for modeling/predicting the same property) or a consensus of the existing (compared) methods, produced with a suitable data fusion technique, such as average, minimum, maximum, etc. This is again problem-dependent, as the reference vector must always represent a hypothetical optimum (or ideal) ranking. (It may involve more than one data fusion technique, if necessary, e.g. in the present work, the hypothetical best similarity measure would be one that produces the highest possible similarity value for active molecules and the lowest possible value for inactives, so our current solution involved the use of maximum values for actives, and minimum values for inactives, see Results section.) Definition of a reference vector is not part of any rank-based statistical test we are aware of. The second step is the calculation of the distance measure itself between the reference vector (ranking) and the rankings produced by the compared methods (here, similarity measures). In the current implementation of SRD, the Manhattan distance is applied: in the case where there are no tied ranks, this is identical to another rank-based distance measure, the Spearman footrule metric [37]. Koziol related SRD to another distance measure for permutations—namely, the inversion number [38], but it has less discriminatory power, and has not found any applications yet (to the best of our knowledge). The third step is the application of a meticulous validation approach, involving a randomization (permutation) test and leave-one-out or leave-many-out cross-validation. This step instantly provides answers to two important questions: whether the SRD values characterizing two compared methods (i.e. rankings) are significantly different from each other (cross-validation), and whether there is any among the compared methods (i.e. rankings) that is not significantly better (i.e. not closer to the reference vector) than random rankings (randomization test). The further statistical analysis of SRD values was carried out by factorial analysis of variance (ANOVA). This method is based on the comparison of the average values for the different groups of samples. The input matrices contained the SRD values and several grouping factors such as similarity metrics, symmetricity, metricity, bit selection and filtering rule. The complete procedure of statistical analysis was carried out three times with different pretreatment methods (rank transformation, range scaling, autoscaling). STATISTICA 13 (Dell Inc., Tulsa, OK, USA) was used for the analysis. Comparison based on AUC values As a first strategy, we have used AUC values for the 10 datasets as a basis for comparison and analysis. The AUC values were calculated with the scikit-learn Python package for each dataset and for each of the 44 similarity measures [39]. However, a detailed factorial ANOVA analysis revealed that the AUC values are not fit for the proper evaluation of similarity metrics, because the applied ten protein datasets have very different AUC values, leading to different means and very high standard deviations. In this sense, the AUC values are not sensitive enough to find the most or least consistent similarity measures, when using more than one dataset. Figure 3 illustrates the big differences between the protein targets in terms of AUC values, ranging from excellent classification (2H8H and 3ERT, or SRC kinase and estrogen receptor, respectively) to worse than random classification (4M0F and 4XZH, or acetylcholine esterase and aldose reductase, respectively). There is also no clear consensus regarding the relative performances of the various similarity measures, as the shapes of the curves in Fig. 3 are visibly different (and in some cases display opposite trends). Factorial ANOVA with the use of the protein targets and the similarity measures as factors. (AUC values are plotted against the similarity metrics.) The protein targets (with PDB codes) are marked with different colors and marks on the plot. Average values (dots) and 95% confidence intervals (lines) are shown in each case Results based on SRD values Because of the problem detailed above, we have decided to apply the SRD method for the statistical comparison. Selecting the reference value (data fusion) was not trivial in this particular case, since we have active and inactive ligands as well, where the ideal behavior for a similarity measure is to produce the highest and the smallest similarity values, respectively. Thus, the reference was defined as the minimum or maximum value among the similarity values, depending on the activity of the specific ligand (if it was active, the row-maximum was used, if it was inactive, then row-minimum was used). The analysis was run 90 times altogether, corresponding to each possible combination of 10 protein targets, 3 bit selections, and 3 filtering rules. The original input matrices contained the 44 similarity measures for the different molecules in each case study, but the ranges of these measures were sometimes very different. For example, values close to 0 were typical for the Mou (Mountford) similarity, but values close to 1 were typical for the Yu1 (Yule) similarity. Obviously, in such cases, taking the row minimum as the reference value would favor the former, regardless of the ligand being active or inactive. Thus, an additional round of data pretreatment was essential for the analysis, to provide a valid basis of comparison. Autoscaling, range scaling and rank transformation were applied for this purpose. One example of the original plots produced by the SRD script can be seen in Additional file 1: Figure S1, where the normalized (scaled) SRD values are plotted in increasing magnitude and the distribution of random SRD rankings (for random numbers) is plotted as a basis of comparison. SRD analysis was performed with fivefold cross-validation to every combination of the original parameters (bit selection, filtering, scaling) and the results (the SRD values) for each similarity measure were collected from every dataset (see Fig. 2) for a final factorial ANOVA analysis. The collected SRD values for the ten datasets (i.e. target proteins) were used together for the further ANOVA analysis, to allow us more general conclusions. First, we have compared the data pretreatment methods, to select a suitable one for the rest of the analyses. The effect of pretreatment was significant according to ANOVA, meaning that the results were significantly different for the different scaling options, as seen in Fig. 4. For the further analyses, we have chosen to use autoscaling, as range scaling and rank transformation are more biased and more sensitive to outliers. Additionally, autoscaling can be considered as a consensual choice between the other two (see Fig. 4). Factorial ANOVA with the use of scaling and similarity metrics as factors. Normalized SRD values [%] are plotted against the similarity metrics. The different scaling methods are marked with different symbols and lines. (RGS: range scaling, RANK: rank transformation, AUTO: autoscaling.) From this point on, standardized data were used for the further ANOVA analyses. The input matrix contained a total of 23,760 rows, corresponding to SRD values for each possible combination of 44 similarity measures, three filtering rules, three types of bit selections, six cross-validation rounds (fivefold cross-validation, including one round using the whole dataset, "All"), and ten datasets. (Part of the input matrix can be seen in Additional file 1: Table S1 for the better understanding of the ANOVA procedure.) We examined all the possible factors: similarity measures (44), bit selections (3), filtering rules (3), symmetricity (4) and metricity (2). With the use of these dependent factors, we can conclude whether their effects (one by one, or in combination) were significant on the α = 0.05 level based on the normalized SRD values. In the case of similarity measures the final outcome can be seen in Fig. 5. Factorial ANOVA with the similarity measures as the factor. Average values are marked with blue dots and the blue lines below and above the dots denote 95% confidence intervals. Normalized SRD values [%] are plotted against the similarity measures. The red dashed lines are arbitrary thresholds defined to select the best few metrics, and to identify the region with the less consistent similarity measures We can observe that there are some measures with very high SRD values (i.e. producing very different rankings as compared to the reference/consensus method), for example RR (Russel–Rao), Mic (Michael) or CT3 (Consonni–Todeschini 3). On the other hand, one can identify the best measures (i.e. closest to the reference) as SM (simple matching) [40], RT (Rogers–Tanimoto) [41], SS2 (Sokal–Sneath 2) [42], CT1 (Consonni–Todeschini 1), CT2 (Consonni–Todeschini 2) [43] or AC (Austin–Colwell) [44]. These similarity measures are closer to the reference and can be recommended for usage. The JT (Jaccard–Tanimoto) metric, which is the de facto standard of cheminformatics (simply called the "Tanimoto coefficient" in most of the related scientific literature) is located relatively close to the reference, but somewhat farther than those mentioned above, meaning that the SM, RT, SS2, CT1, CT2 and AC metrics could be considered as viable alternatives of the Tanimoto coefficient. If we examine the effects of the bit selection and filtering rule together, the ANOVA plot can be seen in Fig. 6. Significant differences between the filtering methods and the bit selections can be clearly observed. Interaction-based filtering (INTS) clearly improves the results, and so does residue-based filtering (RES) to a smaller extent. The differences between the bit selections are also clear: omitting the "Any contact" bit (WO1), results in a slight, but significant improvement, but omitting the BB and SC bits (Backbone and Sidechain interactions) causes a serious deterioration of SRD values. In summary, the best combination is the use of interaction-based filtering (INTS), while omitting the "Any contact" bit. Factorial ANOVA with the bit selection and the filtering rule as dependent factors. SRD values [%] are plotted against the bit selection options. Interaction based filtering (INTS) is marked with a blue dotted line, no filtering (NO) is marked with a red continuous line and residue based filtering (RES) is marked with a green dashed line The similarity measures can be grouped by symmetricity and metricity (see Introduction). ANOVA plots based on these factors are included in Fig. 7. It is clearly seen on Fig. 7a that metric similarity measures give, on average, much closer results to the ideal reference method than non-metric measures. According to Fig. 7b, symmetric and intermediately symmetric similarity measures tend to give more consistent results with the reference method. Both factors gave statistically significant differences (at α = 0.05) between the groups. The result of ANOVA analysis with metricity (a) and symmetricity (b) as factors. SRD values [%] are plotted against the different groups of similarity measures. Average values are plotted and the 95% confidence intervals are indicated with whiskers The preference for symmetric measures over asymmetric ones is somewhat surprising, considering that one would expect symmetric measures to be affected by the amount of "off" bits (and consequently, the number of common "off" bits, d) more than asymmetric ones. If we look at the effects of the filtering rules (and therefore the amount of "off" bits) on the SRD values of the similarity metrics separately (Additional file 1: Figure S2), we find that this assumption is confirmed, but only partially: similarity measures, where we can observe major differences are Mic (Michael), HD (Hawkins–Dotson), Den (Dennis), dis (dispersion), SS4 (Sokal–Sneath 4), Phi (Pearson–Heron), Coh (Cohen), Pe1, Pe2 (Peirce), MP (Maxwell–Pilliner), and HL (Harris–Lahey). These are symmetric and correlation-based coefficients, without exception. The associated ANOVA plots are included in Additional file 1: Figure S2. In this study forty-four similarity measures were compared based on ten case studies, corresponding to interaction fingerprint-based virtual screening scenarios. The effects of the applied set of bits (interaction types) and filtering rules were studied in detail. The comparison was carried out with a novel algorithm, sum of ranking differences (SRD), coupled with analysis of variance (ANOVA). This work complements our earlier comparative studies on metabolomic fingerprints [25] and molecular fingerprints [26]. There are several similarity metrics that are worth consideration as viable alternatives of the popular Jaccard–Tanimoto coefficient, namely: Sokal–Michener (SM), Rogers–Tanimoto (RT), Sokal–Sneath 2 (SS2), Consonni–Todeschini 1 and 2 (CT1, CT2) and Austin–Colwell (AC). These six similarity measures gave the most consistent results with the "ideal" (hypothetical best) reference method in our evaluations using 10 highly diverse protein data sets. We can also conclude that metric similarities are usually more consistent with the reference method than non-metric ones. Similarly, symmetric and intermediately symmetric measures gave more consistent results than asymmetric and correlation-based ones. Finally, there are important and significant differences with regard to the applied bit definitions and filtering rules. As a general conclusion, we can recommend omitting the "Any contact" bit definition from IFP-based analyses, as it will not deteriorate the results in a virtual screening scenario (however, omitting the backbone and sidechain interaction bits, BB and SC, is not recommended). Similarly, applying a bit filtering rule, such as interaction-based filtering (omitting any interaction that is not established even once in the whole dataset) can improve the results on average. The open-source Python-based FPKit (FingerPrint Kit) package applied for IFP filtering and similarity calculations is freely available at: https://github.com/davidbajusz/fpkit. The abbreviations and definitions of similarity metrics can be found in the work of Todeschini et al. [24] and our recent open access article on metabolomics fingerprints [25]. Bit selections ALL: all interactions; WO1: all interactions, except "Any contact"; WO3: all interactions, except "Any contact", "Backbone interaction" and "Sidechain interaction". Filtering rules INTS: interaction-based filtering; NO: no filtering; RES: residue-based filtering. ANOVA: analysis of variance; SRD: sum of (absolute) ranking differences. Data pretreatment AUTO: autoscaling (or standardization); RGS: range (interval) scaling; RANK: rank transformation. Bajusz D, Rácz A, Héberger K (2017) Chemical data formats, fingerprints, and other molecular descriptions for database analysis and searching. In: Chackalamannil S, Rotella DP, Ward SE (eds) Comprehensive medicinal chemistry III. Elsevier, Oxford, pp 329–378 Deng Z, Chuaqui C, Singh J (2004) Structural interaction fingerprint (SIFt): a novel method for analyzing three-dimensional protein–ligand binding interactions. J Med Chem 47:337–344 Mordalski S, Kosciolek T, Kristiansen K et al (2011) Protein binding site analysis by means of structural interaction fingerprint patterns. Bioorg Med Chem Lett 21:6816–6819. https://doi.org/10.1016/j.bmcl.2011.09.027 Small-Molecule Drug Discovery Suite 2017-4, Schrödinger, LLC, New York, NY, 2017. https://www.schrodinger.com/citations Cao R, Wang Y (2016) Predicting molecular targets for small-molecule drugs with a ligand-based interaction fingerprint approach. ChemMedChem 11:1352–1361. https://doi.org/10.1002/cmdc.201500228 Marcou G, Rognan D (2007) Optimizing fragment and scaffold docking by use of molecular interaction fingerprints. J Chem Inf Model 47:195–207. https://doi.org/10.1021/ci600342e Desaphy J, Raimbaud E, Ducrot P, Rognan D (2013) Encoding protein–ligand interaction patterns in fingerprints and graphs. J Chem Inf Model 53:623–637. https://doi.org/10.1021/ci300566n Mpamhanga CP, Chen B, McLay IM, Willett P (2006) Knowledge-based interaction fingerprint scoring: a simple method for improving the effectiveness of fast scoring functions. J Chem Inf Model 46:686–698. https://doi.org/10.1021/ci050420d Pérez-Nueno VI, Rabal O, Borrell JI, Teixidó J (2009) APIF: a new interaction fingerprint based on atom pairs and its application to virtual screening. J Chem Inf Model 49:1245–1260. https://doi.org/10.1021/ci900043r Molecular Operating Environment (MOE), 2013.08 (2018) Chemical Computing Group ULC, QC, Canada. https://www.chemcomp.com/Research-Citing_MOE.htm Da C, Kireev D (2014) Structural protein–ligand interaction fingerprints (SPLIF) for structure-based virtual screening: method and benchmark study. J Chem Inf Model 54:2555–2561 Sato M, Hirokawa T (2014) Extended template-based modeling and evaluation method using consensus of binding mode of GPCRs for virtual screening. J Chem Inf Model 54:3153–3161. https://doi.org/10.1021/ci500499j Jasper JB, Humbeck L, Brinkjost T, Koch O (2018) A novel interaction fingerprint derived from per atom score contributions: exhaustive evaluation of interaction fingerprint performance in docking based virtual screening. J Cheminform 10:15. https://doi.org/10.1186/S13321-018-0264-0 Jones G, Willett P, Glen RC et al (1997) Development and validation of a genetic algorithm for flexible docking. J Mol Biol 267:727–748. https://doi.org/10.1006/jmbi.1996.0897 de Graaf C, Kooistra AJ, Vischer HF et al (2011) Crystal structure-based virtual screening for fragment-like ligands of the human histamine H1 receptor. J Med Chem 54:8195–8206 Bajusz D, Ferenczy GG, Keserű GM (2016) Discovery of subtype selective Janus kinase (JAK) inhibitors by structure-based virtual screening. J Chem Inf Model 56:234–247. https://doi.org/10.1021/acs.jcim.5b00634 Méndez-Lucio O, Kooistra AJ, de Graaf C et al (2015) Analyzing multitarget activity landscapes using protein–ligand interaction fingerprints: interaction cliffs. J Chem Inf Model 55:251–262. https://doi.org/10.1021/ci500721x Smusz S, Mordalski S, Witek J et al (2015) Multi-step protocol for automatic evaluation of docking results based on machine learning methods—a case study of serotonin receptors 5-HT 6 and 5-HT 7. J Chem Inf Model 55:823–832. https://doi.org/10.1021/ci500564b Zhao Z, Liu Q, Bliven S et al (2017) Determining cysteines available for covalent inhibition across the human kinome. J Med Chem 60:2879–2889. https://doi.org/10.1021/acs.jmedchem.6b01815 Pándy-Szekeres G, Munk C, Tsonkov TM et al (2018) GPCRdb in 2018: adding GPCR structure models and ligands. Nucleic Acids Res 46:D440–D446. https://doi.org/10.1093/nar/gkx1109 van Linden OPJ, Kooistra AJ, Leurs R et al (2014) KLIFS: a knowledge-based structural database to navigate kinase–ligand interaction space. J Med Chem 57:249–277. https://doi.org/10.1021/jm400378w Kooistra AJ, Kanev GK, van Linden OPJ et al (2016) KLIFS: a structural kinase–ligand interaction database. Nucleic Acids Res 44:D365–D371. https://doi.org/10.1093/nar/gkv1082 Jansen C, Kooistra AJ, Kanev GK et al (2016) PDEStrIAn: a phosphodiesterase structure and ligand interaction annotated database as a tool for structure-based drug design. J Med Chem 59:7029–7065. https://doi.org/10.1021/acs.jmedchem.5b01813 Todeschini R, Consonni V, Xiang H et al (2012) Similarity coefficients for binary chemoinformatics data: overview and extended comparison using simulated and real data sets. J Chem Inf Model 52:2884–2901. https://doi.org/10.1021/ci300261r Rácz A, Andrić F, Bajusz D, Héberger K (2018) Binary similarity measures for fingerprint analysis of qualitative metabolomic profiles. Metabolomics 14:29. https://doi.org/10.1007/s11306-018-1327-y Bajusz D, Rácz A, Héberger K (2015) Why is Tanimoto index an appropriate choice for fingerprint-based similarity calculations? J Cheminform 7:20. https://doi.org/10.1186/s13321-015-0069-3 Huang N, Shoichet B, Irwin J (2006) Benchmarking sets for molecular docking. J Med Chem 49:6789–6801 Jain AN, Nicholls A (2008) Recommendations for evaluation of computational methods. J Comput Aided Mol Des 22:133–139. https://doi.org/10.1007/s10822-008-9196-5 Sastry GM, Adzhigirey M, Day T et al (2013) Protein and ligand preparation: parameters, protocols, and influence on virtual screening enrichments. J Comput Aided Mol Des 27:221–234. https://doi.org/10.1007/s10822-013-9644-8 Friesner RA, Banks JL, Murphy RB et al (2004) Glide: a new approach for rapid, accurate docking and scoring. 1. Method and assessment of docking accuracy. J Med Chem 47:1739–1749 Halgren TA, Murphy RB, Friesner RA et al (2004) Glide: a new approach for rapid, accurate docking and scoring. 2. Enrichment factors in database screening. J Med Chem 47:1750–1759 Héberger K (2010) Sum of ranking differences compares methods or models fairly. TrAC Trends Anal Chem 29:101–109. https://doi.org/10.1016/j.trac.2009.09.009 Kollár-Hunek K, Héberger K (2013) Method and model comparison by sum of ranking differences in cases of repeated observations (ties). Chemom Intell Lab Syst 127:139–146. https://doi.org/10.1016/j.chemolab.2013.06.007 Héberger K, Kollár-Hunek K (2011) Sum of ranking differences for method discrimination and its validation: comparison of ranks with random numbers. J Chemom 25:151–158. https://doi.org/10.1002/cem.1320 Andrić F, Bajusz D, Rácz A et al (2016) Multivariate assessment of lipophilicity scales—computational and reversed phase thin-layer chromatographic indices. J Pharm Biomed Anal 127:81–93. https://doi.org/10.1016/j.jpba.2016.04.001 Lourenco JM, Lebensztajn L (2018) Post-pareto optimality analysis with sum of ranking differences. IEEE Trans Magn 54:1–10. https://doi.org/10.1109/TMAG.2018.2836327 Sipos L, Gere A, Popp J, Kovács S (2018) A novel ranking distance measure combining Cayley and Spearman footrule metrics. J Chemom 32:e3011. https://doi.org/10.1002/cem.3011 Koziol JA (2013) Sums of ranking differences and inversion numbers for method discrimination. J Chemom 27:165–169. https://doi.org/10.1002/cem.2504 Pedregosa F, Varoquaux G, Gramfort A et al (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825–2830 Sokal R, Michener C (1958) A statistical method for evaluating systematic relationships. Univ Kansas Sci Bull 28:1409–1438 Rogers D, Tanimoto T (1960) A computer program for classifying plants. Science 132:1115–1118. https://doi.org/10.1126/science.132.3434.1115 Sokal R, Sneath P (1963) Principles of numerical taxonomy. W. H, Freeman, San Francisco, CA Consonni V, Todeschini R (2012) New similarity coefficients for binary data. MATCH Commun Math Comput Chem 68:581–592 Austin B, Colwell R (1977) Evaluation of some coefficients for use in numerical taxonomy of microorganisms. Int J Syst Bacteriol 27:204–210 AR conducted the virtual screening campaigns. AR and KH conducted the statistical analyses. DB implemented and calculated the similarity metrics, and supervised the project. All authors have participated in preparing the manuscript. All authors read and approved the final manuscript. The authors thank Prof. Roberto Todeschini for his advice regarding the implementation of some of the similarity metrics. This work was supported by the National Research, Development and Innovation Office of Hungary under Grant Numbers OTKA K 119269 and KH_17 125608. The FPKit package is freely available at: https://github.com/davidbajusz/fpkit. Plasma Chemistry Research Group, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Magyar tudósok krt. 2, Budapest, 1117, Hungary Anita Rácz & Károly Héberger Medicinal Chemistry Research Group, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Magyar tudósok krt. 2, Budapest, 1117, Hungary Dávid Bajusz Search for Anita Rácz in: Search for Dávid Bajusz in: Search for Károly Héberger in: Correspondence to Dávid Bajusz. 13321_2018_302_MOESM1_ESM.docx Additional file 1. Supplementary information. Rácz, A., Bajusz, D. & Héberger, K. Life beyond the Tanimoto coefficient: similarity measures for interaction fingerprints. J Cheminform 10, 48 (2018) doi:10.1186/s13321-018-0302-y Interaction fingerprint FPKit Binary fingerprints
CommonCrawl
A destination-preserving model for simulating Wardrop equilibria in traffic flow on networks NHM Home (Almost) Everything you always wanted to know about deterministic control problems in stratified domains December 2015, 10(4): 837-855. doi: 10.3934/nhm.2015.10.837 Regularity of densities in relaxed and penalized average distance problem Xin Yang Lu 1, Department of Mathematical Sciences, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, 15213, United States Received March 2014 Revised June 2015 Published October 2015 The average distance problem finds application in data parameterization, which involves ``representing'' the data using lower dimensional objects. From a computational point of view it is often convenient to restrict the unknown to the family of parameterized curves. The original formulation of the average distance problem exhibits several undesirable properties. In this paper we propose an alternative variant: we minimize the functional \begin{equation*} \int_{{\mathbb{R}}^d\times \Gamma_\gamma} |x-y|^p {\,{d}}\Pi(x,y)+\lambda L_\gamma +\varepsilon\alpha(\nu) +\varepsilon' \eta(\gamma)+\varepsilon''\|\gamma'\|_{TV}, \end{equation*} where $\gamma$ varies among the family of parametrized curves, $\nu$ among probability measures on $\gamma$, and $\Pi$ among transport plans between $\mu$ and $\nu$. Here $\lambda,\varepsilon,\varepsilon',\varepsilon''$ are given parameters, $\alpha$ is a penalization term on $\mu$, $\Gamma_\gamma$ (resp. $L_\gamma$) denotes the graph (resp. length) of $\gamma$, and $\|\cdot\|_{TV}$ denotes the total variation semi-norm. We will use techniques from optimal transport theory and calculus of variations. The main aim is to prove essential boundedness, and Lipschitz continuity for Radon-Nikodym derivative of $\nu$, when $(\gamma,\nu,\Pi)$ is a minimizer. Keywords: average-distance, regularity., optimal transport, Kantorovich potential, Nonlocal variational problem. Mathematics Subject Classification: Primary: 49Q20, 49K10, 49Q10, 35B6. Citation: Xin Yang Lu. Regularity of densities in relaxed and penalized average distance problem. Networks & Heterogeneous Media, 2015, 10 (4) : 837-855. doi: 10.3934/nhm.2015.10.837 L. Ambrosio, N. Gigli and G. Savaré, Gradient Flow in Metric Spaces and in the Space of Probability Measures,, Second Editon, (2005). Google Scholar G. Buttazzo, E. Mainini and E. Stepanov, Stationary configurations for the average distance functional and related problems,, Control Cybernet., 38 (2009), 1107. Google Scholar G. Buttazzo, E. Oudet and E. Stepanov, Optimal transportation problems with free Dirichlet regions,, Progr. Nonlinear Differential Equations Appl., 51 (2002), 41. Google Scholar G. Buttazzo and F. Santambrogio, A mass transportation model for the optimal planning of an urban region,, SIAM J. Math. Anal., 37 (2005), 514. doi: 10.1137/S0036141003438313. Google Scholar G. Buttazzo and E. Stepanov, Minimization problems for average distance functionals,, in Calculus of Variations: Topics from the Mathematical Heritage of Ennio De Giorgi (ed. D. Pallara), 14 (2004), 48. Google Scholar G. Buttazzo and E. Stepanov, Optimal transportation networks as free Dirichlet regions for the Monge-Kantorovich problem,, Ann. Sc. Norm. Sup. Pisa Cl. Sci., 2 (2003), 631. Google Scholar P. Drineas, A. Frieze, R. Kannan, S. Vempala and V. Vinay, Clustering large graphs via the singular value decomposition,, Mach. Learn., 56 (2004), 9. doi: 10.1023/B:MACH.0000033113.59016.96. Google Scholar T. Duchamp and W. Stuetzle, Extremal properties of principal curves in the plane,, Ann. Statist., 24 (1996), 1511. doi: 10.1214/aos/1032298280. Google Scholar T. Duchamp and W. Stuetzle, Geometric properties of principal curves in the plane,, in Robust Statistics, 109 (1996), 135. doi: 10.1007/978-1-4612-2380-1_9. Google Scholar A. Fischer, Selecting the length of a principal curve within a Gaussian model,, Electron. J. Statist., 7 (2013), 342. doi: 10.1214/13-EJS775. Google Scholar W. Gangbo and R. J. McCann, The geometry of optimal transportation,, Acta Math., (1996), 113. doi: 10.1007/BF02392620. Google Scholar T. Hastie, Principal Curves and Surfaces,, Ph. D Thesis, (1984). Google Scholar T. Hastie and W. Stuetzle, Principal curves,, J. Amer. Statist. Assoc., 84 (1989), 502. doi: 10.1080/01621459.1989.10478797. Google Scholar B. Kégl, Principal Curves: Learning, Design, and Applications,, Ph.D thesis, (1999). Google Scholar K. Kégl and K. Aetal, Learning and design of principal curves,, IEEE Trans. Pattern Anal. Mach. Intell., 22 (2000), 281. Google Scholar A. Lemenant, A presentation of the average distance minimizing problem,, J. Math. Sci. (N.Y.), 181 (2012), 820. doi: 10.1007/s10958-012-0717-3. Google Scholar X. Y. Lu, Example of minimizer of the average distance problem with non closed set of corners,, Rend. Sem. Mat. Univ. Padova, (). Google Scholar X. Y. Lu and D. Slepčev, Properties of minimizers of average distance problem via discrete approximation of measures,, SIAM J. Math. Anal., 45 (2013), 3114. doi: 10.1137/130905745. Google Scholar U. Ozertem and D. Erdogmus, Locally defined principal curves and surfaces,, J. Mach. Learn. Res., 12 (2011), 1249. Google Scholar E. Paolini and E. Stepanov, Qualitative properties of maximum and average distance minimizers in $\mathbbR^n$,, J. Math. Sci. (N.Y.), 122 (2004), 3290. doi: 10.1023/B:JOTH.0000031022.10122.f5. Google Scholar P. Polak and G. Wolanski, The lazy traveling salesman problem in $\mathbbR^2$,, ESAIM: Control Optim. Calc. Var., 13 (2007), 538. doi: 10.1051/cocv:2007025. Google Scholar D. Slepčev, Counterexample to regularity in average-distance problem,, Ann. Inst. H. Poincaré (C), 31 (2014), 169. doi: 10.1016/j.anihpc.2013.02.004. Google Scholar A. J. Smola, S. Mika, B. Schölkopf and R. C. Williamson, Regularized principal manifolds,, J. Mach. Learn., 1 (2001), 179. doi: 10.1162/15324430152748227. Google Scholar R. Tibshirani, Principal curves revisited,, Stat. Comput., 2 (1992), 183. doi: 10.1007/BF01889678. Google Scholar C. Villani, Optimal Transport, Old and New,, Grundlehren der mathematischen Wissenschaften, (2009). doi: 10.1007/978-3-540-71050-9. Google Scholar Abbas Moameni. Invariance properties of the Monge-Kantorovich mass transport problem. Discrete & Continuous Dynamical Systems - A, 2016, 36 (5) : 2653-2671. doi: 10.3934/dcds.2016.36.2653 Hang-Chin Lai, Jin-Chirng Lee, Shuh-Jye Chern. A variational problem and optimal control. Journal of Industrial & Management Optimization, 2011, 7 (4) : 967-975. doi: 10.3934/jimo.2011.7.967 Jesus Garcia Azorero, Juan J. Manfredi, I. Peral, Julio D. Rossi. Limits for Monge-Kantorovich mass transport problems. Communications on Pure & Applied Analysis, 2008, 7 (4) : 853-865. doi: 10.3934/cpaa.2008.7.853 Orlando Lopes. Uniqueness and radial symmetry of minimizers for a nonlocal variational problem. Communications on Pure & Applied Analysis, 2019, 18 (5) : 2265-2282. doi: 10.3934/cpaa.2019102 Giuseppe Buttazzo, Eugene Stepanov. Transport density in Monge-Kantorovich problems with Dirichlet conditions. Discrete & Continuous Dynamical Systems - A, 2005, 12 (4) : 607-628. doi: 10.3934/dcds.2005.12.607 Zuo Quan Xu, Jia-An Yan. A note on the Monge-Kantorovich problem in the plane. Communications on Pure & Applied Analysis, 2015, 14 (2) : 517-525. doi: 10.3934/cpaa.2015.14.517 Sunra J. N. Mosconi. Optimal elliptic regularity: A comparison between local and nonlocal equations. Discrete & Continuous Dynamical Systems - S, 2018, 11 (3) : 547-559. doi: 10.3934/dcdss.2018030 Christian Léonard. A survey of the Schrödinger problem and some of its connections with optimal transport. Discrete & Continuous Dynamical Systems - A, 2014, 34 (4) : 1533-1574. doi: 10.3934/dcds.2014.34.1533 Cédric Villani. Regularity of optimal transport and cut locus: From nonsmooth analysis to geometry to smooth analysis. Discrete & Continuous Dynamical Systems - A, 2011, 30 (2) : 559-571. doi: 10.3934/dcds.2011.30.559 Pierluigi Colli, Gianni Gilardi, Jürgen Sprekels. Distributed optimal control of a nonstandard nonlocal phase field system with double obstacle potential. Evolution Equations & Control Theory, 2017, 6 (1) : 35-58. doi: 10.3934/eect.2017003 Nassif Ghoussoub. A variational principle for nonlinear transport equations. Communications on Pure & Applied Analysis, 2005, 4 (4) : 735-742. doi: 10.3934/cpaa.2005.4.735 Qiang Du, Jingyan Zhang. Asymptotic analysis of a diffuse interface relaxation to a nonlocal optimal partition problem. Discrete & Continuous Dynamical Systems - A, 2011, 29 (4) : 1443-1461. doi: 10.3934/dcds.2011.29.1443 Giulia Cavagnari. Regularity results for a time-optimal control problem in the space of probability measures. Mathematical Control & Related Fields, 2017, 7 (2) : 213-233. doi: 10.3934/mcrf.2017007 Qinglan Xia. An application of optimal transport paths to urban transport networks. Conference Publications, 2005, 2005 (Special) : 904-910. doi: 10.3934/proc.2005.2005.904 G. Idone, A. Maugeri. Variational inequalities and a transport planning for an elastic and continuum model. Journal of Industrial & Management Optimization, 2005, 1 (1) : 81-86. doi: 10.3934/jimo.2005.1.81 Wilfrid Gangbo, Andrzej Świech. Optimal transport and large number of particles. Discrete & Continuous Dynamical Systems - A, 2014, 34 (4) : 1397-1441. doi: 10.3934/dcds.2014.34.1397 Dario Cordero-Erausquin, Alessio Figalli. Regularity of monotone transport maps between unbounded domains. Discrete & Continuous Dynamical Systems - A, 2019, 39 (12) : 7101-7112. doi: 10.3934/dcds.2019297 Yupeng Li, Wuchen Li, Guo Cao. Image segmentation via $ L_1 $ Monge-Kantorovich problem. Inverse Problems & Imaging, 2019, 13 (4) : 805-826. doi: 10.3934/ipi.2019037 Xiaolong Han, Guozhen Lu. Regularity of solutions to an integral equation associated with Bessel potential. Communications on Pure & Applied Analysis, 2011, 10 (4) : 1111-1119. doi: 10.3934/cpaa.2011.10.1111 Yan Jin, Jürgen Jost, Guofang Wang. A new nonlocal variational setting for image processing. Inverse Problems & Imaging, 2015, 9 (2) : 415-430. doi: 10.3934/ipi.2015.9.415 Xin Yang Lu
CommonCrawl
Multiplication of Functions Function is a correspondence $f$ between elements of a space $X$ and those of a space $Y$ such that any element $x$ of $X$ has a unique corresponding element $y$ of $Y$ which is denoted $y = f(x).$ The following is a widely used shorthand for "a function $f$ from $X$ to $Y$" $f:\,X\rightarrow Y.$ In most cases $f(x)$ is defined for all $x\in X,$ but sometimes its domain of definition is smaller. The reason to allow such aberration is that most of the elementary functions are real $(X = R)$ and real-valued $(Y = \mathbb{R})$ and are defined by some kind of formulas. E.g., $f(x) = x^{2} - 10.$ However, some formulas are not defined for all real values as, for example, $f(x) = 1/(x-1)$ which is not defined for $x = 1.$ The right way to described this latter function is $(-\infty,1)\cup (1,\infty)\rightarrow\mathbb{R}.$ Which is a little awkward. Thus we often write $1/(x-1): \mathbb{R}\rightarrow \mathbb{R}$ and try not to forget that $x = 1$ is an exceptional point that does not belong to the domain of definition of that function. Returning to the question of function multiplication, I can think of four quite different operations meaningfully and fruitfully defined for functional spaces (i.e., spaces that contain functions, or whose points are functions.) Componentwise multiplication. If $Y$ is a multiplicative group we may look at functions $f: X\rightarrow Y$ as vectors with components in $Y$ indexed by elements from $X.$ In this case, we can sensibly define a componentwise multiplication: $(fg)(x) = f(x)g(x)$ where, as is customary, I have dropped the symbol of multiplication. For example, if $f(x) = x+1$ and $g(x) = x - 1$ then $(fg)(x) = x^{2} - 1.$ Associativity, commutativity and the distributive law are inherited from $Y.$ However, if, for whatever reason, $Y$ is not a multiplicative group, the function multiplication won't be a group operation either. If $Y$ has a unit element $1,$ then the function $f(x)$ which is identically equal to $1,$ will serve as the unit element for the componentwise multiplication. If $Y = \mathbb{R}$ where all elements but $0$ have inverses, then a function $f: X\rightarrow \mathbb{R}$ among whose values there is a $0,$ won't have an inverse. Whatever the case, the important thing is that we only can multiply functions from the same space $Y^{X}.$ Componentwise multiplication is not defined for functions with different domains of definition. Assume we are given two functional spaces $Y^{X}$ and $Z^{Y}.$ We may construct elements from $Z^{X}$ in the following manner: if $y = g(x)$ and $z = f(y)$ then $(fg)(x) = z.$ In other words, $(fg)(x) = f(g(x)).$ The most interesting case is when $X = Y = Z$ so that we deal with a single space $X^{X}$ throughout. If, for a given $f: X\rightarrow X,$ the inverse function exists, it naturally belongs to the same space and constitutes the inverse element $f^{-1}.$ A function $f$ has an inverse iff it's 1-1 and on. The latter means that the inverse function is defined on the whole of $X.$ The identity function $f(x) = x$ serves as the unit element for the composition. A common notation for the composition $f(g(x))$ is $(g\circ f)(x).$ Associativity is straightforward, but the commutative law does not hold. For $f(x) = x+5$ and $g(x) = x^{2},$ $f(g(x)) = x^{2}+5,$ while $g(f(x)) = (x+5)^{2}.$ Scalar Product. Scalar product for functions is pretty much the same as the scalar product for vectors. If $X$ is an infinite set then a finite sum must be replaced with a sum of an infinite series (if $X$ is countable) or an integral (if $X$ is not.) In both cases it may not be defined for every $f\in Y^{X}.$ Subspaces of $Y^{X}$ for which the scalar product is defined are analogous to finite-dimensional vector spaces. We may consider orthogonal functions, and posit a problem of finding a basis in such a space. The quest for bases in function spaces with scalar product led to the theory of Fourier series and Fourier transform, to the theory of generalized functions and most recently to the discovery of wavelets. As in the finite-dimensional case, scalar product is not a group operation because its result does not belong to the space itself. Convolution. Assume $X = Y = R$. Then along with every function $f: \mathbb{R}\rightarrow\mathbb{R}$ we may consider a whole family of functions $f_{a}(x) = f(a - x).$ These are reflected shifts of the function $f.$ If we now restrict ourselves to the subspaces of $\mathbb{R}^{\mathbb{R}}$ on which a scalar product is defined, we may form a product $g*f = g\cdot f_{a},$ where by $h\cdot k$ I denote the scalar product of the functions $h$ and $k.$ Strictly speaking, $g*f$ depends on $a\in\mathbb{R}$ and is itself a real number. Therefore, it may be regarded as a function $g*f: \mathbb{R}\rightarrow\mathbb{R}.$ This function is known as the convolution of $f$ and $g.$ Convolutions arise naturally when differential equations are being replaced with integral ones. Fourier transform of a convolution coincides with the (componentwise) product of Fourier transforms. Thus integral equations with convolutions may be further reduced to algebraic equations. The unit element for the convolution is the well-known Dirac $\delta$ function. This is a "function" that was invented by P. Dirac in 1920s. It's zero everywhere except for the origin where its value is such that the total integral of the function equals $1!$ Later S. L. Sobolyev and L. Schwartz developed a framework of generalized functions in which such a definition indeed made sense. Functional Notations and Terminology Examples of Functions Addition of Functions Addition and Subtraction of Functions Cantor Set and Function Limit and Continuous Functions Sine And Cosine Are Continuous Functions Composition of Functions, an Exercise What Can Be Multiplied? What Is Multiplication? Multiplication of Equations Multiplication of Matrices Multiplication of Numbers Peg Solitaire and Group Theory Multiplication of Permutations Multiplication of Sets Multiplication of Vectors Multiplication of a Vector by a Matrix Vector Space and Spaces with the Scalar Product Addition and Multiplication Tables in Various Bases Multiplication of Points on a Circle Multiplication of Points on an Ellipse |Contact| |Front page| |Contents| |Up|
CommonCrawl
Session G34: Atoms in Reduced Dimensions and Novel Geometries Sponsoring Units: DAMOP Chair: Lincoln Carr, Colorado School of Mines G34.00001: Magnetism on the Lieb Lattice Fr\'ederic H\'ebert, Vladimir Iglovikov, Richard Scalettar, George Batrouni The fermionic Hubbard model on a square lattice is known to exhibit antiferromagnetism at half-filling for arbitrarily weak interactions, due to the nesting of its Fermi surface and the divergence of the density of states. This talk presents a determinant Quantum Monte Carlo study of the magnetic properties of the Hubbard model on the ``Lieb lattice,'' which is obtained from the square lattice by removing 1/4 of the sites in a regular pattern. This model exhibits a flat band at half-filling, surrounded by two dispersive bands. The non interacting states at half-filling are localized, which allows us to study the magnetic properties of a system that does not have a Fermi surface, although there is an infinite density of states at half-filling. Other magnetic phases may also appear away from half-filling. [Preview Abstract] G34.00002: Quantum Monte Carlo Study of Superconductivity on the Lieb Lattice Vladimir Iglovikov, Richard Scalettar, Fr\'ederic H\'ebert, George Batrouni The attractive fermion Hubbard model on a square lattice is known to have a finite temperature (Kosterlitz-Thouless) transition to a superconducting ground state away from half-filling. This talk presents determinant Quantum Monte Carlo results for pairing correlations on a ``Lieb lattice,'' which is obtained from the square lattice by regular pattern removal of 1/4 of the sites. The Lieb lattice has two dispersing bands, and one flat band. We will show how superconductivity differs when the chemical potential lies in the flat band. For large values of the on-site attractive interaction, the fermions form tightly bound bosonic pairs. In that limit, our work addresses the nature of Bose-Einstein condensation when the non-interacting band structure has no wave-vector which minimizes the energy. [Preview Abstract] G34.00003: FFLO pairing correlations in a trapped quasi one-dimensional Fermi gas Dominique Gautreau, Stephen Kudla, Daniel Sheehy Recent work has pursued the possibility of a Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phase of imbalanced Fermi gases in one dimension, in which an imposed population imbalance between two species of interacting fermion leads to spatially-modulated local pairing correlations. While experiments at Rice (Liao et al, Nature 467, 567 (2010)) show consistency with the expected bulk phase diagram using the local density approximation, little is known about how the FFLO pairing correlations will be revealed experimentally. Using a simple variational wavefunction ansatz for the FFLO state of a trapped 1D gas, in which the population imbalance leads to an imbalance in pairing among harmonic oscillator states, we compute predicted experimental signatures of the FFLO phase in a trapped one-dimensional fermionic atomic gas. [Preview Abstract] G34.00004: ABSTRACT WITHDRAWN [Preview Abstract] G34.00005: Heat capacity and sound velocities of low dimensional Fermi gases P. Salas, M.A. Solis We report the heat capacity ratio and sound velocities for an interactionless Fermi gas immersed in periodic structures such as penetrable multilayers or multitubes created by one (planes) or two perpendicular (tubes) external Dirac comb potentials. The isobaric specific heat of the fermion gas presents the dimensional crossover previously observed in the isochoric specific heat [1] - from 3D to 2D or to 1D -. The quotient between the two quantities has a prominent bump related to the confinement, and as the temperature increases, it goes towards the monoatomic classical gas value 5/3. We present the isothermal and the adiabatic sound velocities of the fermion gas which show anomalous behavior at temperatures below $T_F$ due to the dimensionality of the system, while at higher temperatures again we recover the behavior of a classical Fermi gas. Furthermore, as the temperature goes to zero the sound velocity has a finite value, as expected.\\[4pt] [1] P. Salas and M.A. Sol\'is, ``Trapping effect of periodic structures on the thermodynamic properties of a Fermi gas,'' J. Low Temp, Phys. (2013) DOI 10.1007/s10909-013-0939-x. [Preview Abstract] G34.00006: Spin imbalance in 1D trapped attractive Fermi atoms: T\textgreater 0 quantum Monte Carlo results ChangMo Yang, D.M. Ceperley Spin imbalance in a one-dimensional system of fermions with short-ranged attractive interactions is studied with continuous-space path-integral Monte Carlo simulation. Following closely the experiment with ${}^6$Li atoms~[1], the pair momentum distribution is calculated at various spin polarizations. FFLO-type pairing is confirmed at the experimental temperature and coupling strength. We compare our results to those of other numerical methods~[2-4] and discuss the prospects for experimental detection.\\[4pt] [1] Y.-A. Liao et al.,~\textit{Nature},~467, 567 (2010).\\[0pt] [2] H. Lu et al.,~\textit{Phys.~Rev.~Lett.},~108, 225302 (2012).\\[0pt] [3] F. Heidrich-Meisner et al.,~\textit{Phys.~Rev.~A},~81, 023629 (2010).\\[0pt] [4] M. Takahashi,~\textit{Prog.~Theor.~Phys,}~46, 1388 (1971).\\[0pt] [5] M. Casula et al.,~\textit{Phys.~Rev.~A},~78, 033607, (2008). [Preview Abstract] G34.00007: Quasi-two-dimensional Fermi gases Meera Parish, Andrea Fischer, Jesper Levinsen We consider a two-component gas of fermionic atoms confined to a quasi-two-dimensional geometry by a harmonic trapping potential in the transverse direction. We construct a mean field theory of the BCS-BEC crossover that allows us to extrapolate to an infinite number of transverse harmonic oscillator levels. Even when the interactions are weak and the Fermi energy is less than the confinement frequency, we find that the higher transverse levels can substantially modify fermion pairing. We also investigate the effect of confinement on few-body correlations, and we demonstrate the existence of a universal tetramer for mass ratios $> 5.0$. [Preview Abstract] G34.00008: Pairing correlations in a trapped quasi one-dimensional Fermi gas Stephen Kudla, Dominique Gautreau, Daniel Sheehy We utilize a BCS-type variational wavefunction to study attractively-interacting quasi one-dimensional fermionic atomic gases, motivated by cold-atom experiments that access this regime using a anisotropic harmonic trapping potential (characterized by $\omega_x = \omega_y\gg \omega_z$) that confines the gas to a cigar-shaped geometry. To handle the presence of the trap along the $z$ direction, we construct our variational wavefunction from the harmonic oscillator Hermite functions that are the eigenfunctions of the single-particle problem. Using an analytic determination of the effective interaction among Hermite function states along with a numerical calculation of the resulting variational equations, we make specific experimental predictions for how local pairing correlations will be revealed in experimental probes like the local density, the momentum distribution, and the momentum correlation function. [Preview Abstract] G34.00009: Finite-temperature dynamical structure factor of the one-dimensional Bose gas: From the Gross-Pitaevskii equation to the Kardar-Parisi-Zhang universality class of dynamical critical phenomena Austen Lamacraft, Manas Kulkarni We study the finite-temperature dynamical structure factor S(k,$\omega$) of a one-dimensional Bose gas using numerical simulations of the Gross-Pitaevskii equation appropriate to a weakly interacting system. The line shape of the phonon peaks in S(k,$\omega$) has a width proportional to $|$k$|^{3/2}$ at low wave vectors. This anomalous width arises from resonant three-phonon interactions, and reveals a remarkable connection to the Kardar-Parisi-Zhang universality class of dynamical critical phenomena. [Preview Abstract] G34.00010: Casimir interaction between mobile impurities in one-dimensional quantum liquids Michael Schecter, Alex Kamenev At zero temperature virtual phonons of a quantum liquid scatter off impurities and mediate a long-range interaction, analogous to the Casimir effect. At finite temperature, moving impurities also experience a correlated friction due to coherent exchange of real phonons. In one dimension the effect is universal and the induced interaction decays as $1/r^3$, much slower than the van der Waals interaction $\sim1/r^6$ where $r$ is the impurity separation. The magnitude of the effect is characterized by the product of impurity-phonon scattering amplitudes, which are seen to vanish for the class of integrable impurity models. By tuning the parameters near integrability one can thus observe an attractive interaction turned into a repulsive one. [Preview Abstract] G34.00011: Spreading of correlations in the XXZ chain at finite temperatures Lars Bonnes, Andreas L\"auchli In a quantum quench, for instance by abruptly changing the interaction parameter in a spin chain, correlations can spread across the system but have to obey a speed limit set by the Lieb-Robinson bound. This results into a causal structure where the propagation front resembles a light-cone. One can ask how fast a correlation front actually propagates and how its velocity depends on the nature of the quench. This question is addressed by performing global quenches in the XXZ chain initially prepared in a finite-temperature state using minimally entangled typical thermal states (METTS). We provide numerical evidence that the spreading velocity of the spin correlation functions for the quench into the gapless phase is solely determined by the value of the final interaction and the amount of excess energy of the system. This is quite surprising as the XXZ model is integrable and its dynamics is constrained by a large amount of conserved quantities. In particular, the spreading velocity seems to interpolate linearly from a universal value at $T=\infty$ to the spin wave velocity of the final Hamiltonian in the limit of zero excess energy for $\Delta_{\mathrm{final}} > 0$. [Preview Abstract] G34.00013: Order-by-disorder of interacting bosons on the dice lattice under a synthetic gauge field Matjaz Payrits, Ryan Barnett We consider a gas of interacting bosons in the two-dimensional dice lattice in the presence of a half-elementary magnetic flux threading each plaquette. The single particle spectrum of the system consists of three doubly-degenerate completely flat bands, which indicates a large ground state degeneracy. It is shown how this degeneracy is partially lifted in the superfluid regime at the mean-field level. Furthermore, it is shown how quantum and thermal fluctuations conclusively remove the remaining accidental degeneracy between the mean field states, thus selecting a unique state up to overall symmetries. This can be elegantly described by means of the distribution of condensate vortices in the Kagom\'{e} vortex lattice, which is dual to the dice lattice. [Preview Abstract] G34.00014: Collective Excitations in Quasi-2D and 3D Condensates Dan Lobser, Andrew Barentine, Heather Lewandowski, Eric Cornell Collective motion of a Bose-Einstein condensate has been a system of interest since the discovery of BEC. Boltzmann proved that the monopole mode of a thermal gas in an isotropic, harmonic and 3D trap is undamped [1, 2]. BECs are not classical gases and their weakly interacting nature causes damping in a 3D monopole mode. However, experimental limitations have precluded studies of this behavior in very spherical traps. Quantum gases confined to lower dimensions exhibit remarkable physical properties such as the Berezkinskii-Kosterlitz-Thouless transition or the Tonks-Girardeau gas. Confinement effects in a quasi-2D condensate are predicted to shift the frequency of the monopole mode [3]. One correction in particular connects a small frequency shift with certain quantum corrections [4]. Current results of our studies in 2D and 3D will be presented. \\[4pt] [1] ``Transverse Breathing Mode of an Elongated Bose-Einstein Condensate,'' F. Chevy, V. Bretin, P. Rosenbusch, K. W. Madison, and J. Dalibard, Phys. Rev. Lett. 88 250402 (2002). \\[0pt] [2] L. Boltzmann, Wissenschaftliche Abhandlungen, edited by F. Hasenorl (Barth, Leipzig, 1909), Vol. II \\[0pt] [3] Olshanii et al., Phys. Rev. Lett. 105, 095302 (2010) \\[0pt] [4] Hu, et al., Phys. Rev. Lett. 107, 110401 (2011) [Preview Abstract] G34.00015: Universal high frequency high momentum behavior of dynamic structure factor in one dimensional interacting boson gas Ran Qi, Michael Pustilnik, Shina Tan We study the short-distance and short-time structure of density-density correlation in one dimensional repulsively interacting boson gas. A compact universal formula is obtained for the high frequency high momentum asymptotic behavior of dynamic structure factor. We observe non-monotonous behavior in the dependence of DSF on interacting strength and qualitative change in the singular behavior in different region of $q^2/\omega$. Possible experimental applications are discussed. [Preview Abstract]
CommonCrawl
Earthquake-induced landslide scenarios for seismic microzonation: application to the Accumoli area (Rieti, Italy) S.I.: Seismic Microzonation of Central Italy S. Martino ORCID: orcid.org/0000-0003-1277-77841, S. Battaglia1, F. D'Alessandro1, M. Della Seta1, C. Esposito1, G. Martini2, F. Pallone1 & F. Troiani1 Bulletin of Earthquake Engineering volume 18, pages 5655–5673 (2020)Cite this article A Correction to this article was published on 25 April 2019 Scenarios of earthquake-induced landslides are necessary for seismic microzonation (SM) studies since they must be integrated with the mapping of instability areas. The PARSIFAL (Probabilistic Approach to pRovide Scenarios of earthquake‐Induced slope FAiLures) approach provides extensive analyses, over tens to thousands of square kilometers, and is designed as a fully comprehensive methodology to output expected scenarios which depend on seismic input and saturation conditions. This allows to attribute a rating, in terms of severity level, to the landslide-prone slope areas in view of future engineering studies and designs. PARSIFAL takes into account first-time rock- and earth-slides as well as re-activations of existing landslides performing slope stability analyses of different failure mechanisms. The results consist of mapping earthquake-induced landslide scenarios in terms of exceedance probability of critical threshold values of co-seismic displacements (P[D ≥ Dc|a(t),ay]). PARSIFAL was applied in the framework of level 3 SM studies over the municipality area of Accumoli (Rieti, Italy), strongly struck by the 2016 seismic sequence of Central Apennines. The use of the PARSIFAL was tested for the first time to screen the Susceptibility Zones (ZSFR) from the Attention Zones (ZAFR) in the category of the unstable areas, according to the guidelines by Italian Civil Protection. The results obtained were in a GIS-based mapping representing the possibility for a landslide to be induced by an earthquake (with a return period of 475 years) in three different saturation scenarios (i.e. dry, average, full). Only 41% of the landslide-prone areas in the Municipality of Accumoli are existing events, while the remaining 59% is characterized by first-time earth- or rock-slides. In dry conditions, unstable conditions or P[D ≥ Dc|a(t),ay] > 0 were for 54% of existing landslides, 17% of first-time rock-slides and 1% of first-time earth-slides. In full saturation conditions, the findings are much more severe since unstable conditions or P[D ≥ Dc|a(t),ay] > 0 were found for 58% of the existing landslides and for more than 80% of first-time rock- and earth-slides. Moreover, comparison of the total area of the ZAFR versus ZSFR, resulted in PARSIFAL screening reducing of 22% of the mapped ZAFR. The reconstruction of landslide scenarios is a milestone for environmental planning in terms of both hazard mapping and strategic and engineering solutions to be adopted in urbanised areas. In particular, the awareness of the spatial distribution of earthquake-induced landslides support the competent local authorities to plan intervention priorities aimed at managing the natural hazard on the basis of the available economic resources. The combined nature of the hazard related to earthquake-induced landslides implies that such processes should be analyzed taking into account both the proneness to failure (landslide susceptibility), and the probability of occurrence of the triggering event (seismic hazard). However, the predisposing conditions to slope failures vary according to the environmental constraints, among which the water saturation of covers and deposits or the presence of pore water pressures related to groundwater flow, more commonly originated by impulsive events such as heavy rainfall. Nevertheless, for the purposes of spatial planning and its technical products (such as SM maps), the susceptibility mapping or the analysis of the spatial distribution of hazard related to the trigger action is not sufficient. Instead, it is necessary to resort to the reconstruction of scenario maps which have the dual purpose of: (1) providing a spatial distribution of effects to a certain probability of occurrence; and (2) make it possible to take into account their variability as a function of the predisposing environmental conditions, including soil saturation. Since the late '80s the prefiguration of earthquake-induced landslides scenarios, which means analyzing over large areas the proneness of slopes to fail under seismic loading, was strongly requested by scientific and technical communities for risk mitigation purposes. This goal was initially pursued by analyzing the maximum runout distances for the expected landslide occurrence (Keefer 1984; Rodriguez et al. 1999) on the basis of datasets obtained on a global scale. Afterwards, mapping solutions were proposed based on susceptibility or hazard analyses, whose development in the last few decades—significantly favored by the constant improvement over the years of computing facilities and GIS platforms—represent a well-established topic in the scientific and technical communities dealing with landslide risk reduction. A first category of methods can be classified as physically-based approaches that are aimed at computing and mapping the safety factors and/or co-seismic displacements (Newmark 1965; Rathje and Bray 2000; Rathje and Antonakos 2011) under specific seismic loading (e.g., Prestininzi and Romeo 2000; Romeo 2000; Jibson et al. 2000; Jibson 2007; Hsieh and Lee 2011; Bozzano et al. 2013; Tsai et al. 2019). Such methods have the great advantage to explicitly consider the interaction slope-seismic waves, but also require a detailed knowledge of physical–mechanical properties of the slopes, which is not easy to achieve as the stability analyses should be extensively performed over wide areas. This issue can be relevant especially for first-time slope failures, while is less significant for landslide reactivation under seismic conditions if the geomechanical model of the slope is sufficiently constrained. Other approaches strongly rely on indirect susceptibility or hazard assessments, such as those developed and widely used in the last decades. These are extensively described in the review papers by Reichenbach et al. (2018), encompassing a wide range of methods to assess at several scales both qualitatively and quantitatively the proneness of territories to almost each type of landslides. Most of the quantitative approaches are designed to perform susceptibility or hazard assessments on large scales; statistically-based models are commonly used. Due to the implications for earthquake-triggered landslide applications, it is worth highlighting one of the basic principles of this kind of methods: the assumption that past and present are the key to the future (Varnes and IAEG Commission on Landslides and other Mass-Movements, 1984), which means that these analyses are basically back analyses. Susceptibility models, especially if based on statistic approaches, are in fact trained and validated on already occurred landslides; this implies the availability of reliable landslide inventories. To turn susceptibility analyses in proper hazard analyses, the reference inventory should be referred to a specific triggering event with its temporal probability of occurrence as an earthquake (Harp et al. 2011). As a matter of fact, several earthquake-triggered landslide susceptibility analyses were performed by means of statistically-based methods exploiting event-based inventories (e.g., Xu et al. 2012a; García-Rodríguez and Malpica 2010; Kamp et al. 2008), which is by itself a way to consider, although implicitly, the role of the trigger. In other studies trigger-related factors (such as PGA, shake maps, Arias Intensity) referred to the triggering event have been explicitly considered as variables in the susceptibility function (e.g. Lee et al. 2008; Bai et al. 2012; Xu et al. 2012b; Lee et al. 2002; Tian et al. 2019). Such studies can be formally regarded as hazard analyses but being trained on event-based inventories and event-specific triggers, their results are not easily generalizable to triggering events with different magnitudes. Based on these premises, the PARSIFAL method (Esposito et al. 2016) has been developed trying to put together the benefits of both approaches in a comprehensive, multi-step methodology: a susceptibility assessment under static conditions (i.e., regardless the availability of an event-based inventory) is a screening tool to identify areas actually prone to landslides. The analysis can be carried out with the most reliable method depending on the data availability, i.e. quantitative statistically-based techniques or qualitative heuristic approaches, to provide the "background noise" in terms of landslide proneness of a given area, regardless the triggering factor. Subsequent physically-based analyses can be performed in the so-identified areas to assess landslide scenarios according to "customized" seismic inputs. This kind of integrated approach can be particularly suitable for sites featured by high levels of both seismic and landslide hazards, where no event-specific inventories are available and, thus, neither event-specific scenarios can be assessed by means of susceptibility approaches. Furthermore, the first step (slope analysis, explained further in more detail), which is actually the integration of landslide inventorying and susceptibility assessment for soil slopes and kinematic compatibility analyses for rock slopes, provides a basic and single input for pseudo-static and pseudo-dynamic analyses that can be performed under different seismic scenarios. The PARSIFAL method considers different landslide typologies (new and existing in rock or soil) and different failure mechanisms. Some test sites of the method have been: (1) a coastal sector of Calabria in Southern Italy (Bozzano et al. 2013) as part of a POR project funded by the Calabria Region, (2) some municipalities of Central Italy (Esposito et al. 2016) as part of pilot studies promoted by the Lazio Region with high seismic hazard and, (3) in Spain (Martino et al. 2018), the Alcoy area in the Community of Valencia, historically hit by strong earthquakes and today intensely urbanized. The recent seismic events that struck the Central Apennines in 2016–2017 have launched an impressive reconstruction program that involves 15 municipalities in the province of Rieti, Lazio region. Thanks to the Ordinance of the Presidency of the Council of Ministers No. 24 of 12/05/2017 (OPCM 24/2017), technical support was provided for the design phase of the reconstruction by means of quantitative studies for SM which, following the guidelines issued in 2008 by the Italian Department of Civil Protection (DPC) (ICMS 2008), have provided the inventory of already unstable and potentially unstable slopes with the purpose of defining specific areas of attention for landslides (ZAFR). More specific landslides guidelines were recently published by the DPC (CTMS, 2017). According to these guidelines the areas of attention for slope failures (ZAFR) must be selected among the areas of susceptibility (ZSFR) that, compared to the former, constitute the subset of the areas for which: (1) the attitude to earthquake-induced failure can be evaluated quantitatively by means of simplified analytical approaches; (2) an area of invasion by debris accumulation or an area of expansion (AE) (for enlargement, advancement or retraction) should be associated. In this study the procedure adopted in the Municipality of Accumoli is described. The SM studies followed ODPCM 24/2017) to obtain the perimeter of the ZSFR starting from the inventory of landslide areas (ZF) and potentially unstable slopes (ZPF) which, together, define the zones of attention for landslide (ZAFR). This procedure, based on the construction of scenario maps referring to different types of failure mechanisms and to different conditions of saturation of deposits and covers, has been implemented starting from the previously experienced applications to SM studies and to the needs mapping standards of the DPC guidelines. Geological and geomorphological setting The geological and structural setting of the central Apennines is strongly influenced by the variable geodynamic context that characterized the Mediterranean area from the Triassic to the Neogene (Centamore et al. 2012; Cosentino et al. 2010). The Accumoli area is characterised by a Meso-Cenozoic geological succession of limestones and marls with a thickness of about 8000 meters. The attitude of strata is strongly controlled by the folding and faulting responsible for the growth of the Apennine chain (Bigi et al. 2009) which evolved through a general northeastward migration towards the Adriatic foreland. This caused the formation of inner basin systems which hosted a thick turbiditic sedimentation during the Miocene (e.g. Laga Formation; Falcini et al. 2009, and references therein). The present morpho-structural setting of Central Apennine is strongly driven by active extensional tectonics that characterizes this area since the Upper Pliocene. This extensional phase generated NW–SE-trending normal fault systems, which are responsible for the development of intermontane basins and include the main seismogenic sources of this sector of the Apennines (Tondi and Cello 2003; Galadini and Galli 2003; Boncio et al. 2004; Pizzi and Galadini 2009). The Municipality of Accumoli is located in the piedmont zone of the Central Apennines, between the Sibillini Mountains to the west and the Laga Mountains to the east. Most of the area is characterised by the outcropping of deposits ascribable to the flysch of the Laga Formation while the western portion is characterized by the outcropping of Meso-Cenozoic limestones and marls. In general, the lithological variability of the study area accounts for different erosional behaviour along the main slopes (Fig. 1) and represents one of the main predisposing factors for intense mass wasting of slopes due to gravitational process. Lithotechnical map of the Municipality of Accumoli with main morphostructural elements: (1) slope and alluvial deposits, (2) mostly arenaceous flysch, (3) mostly pelitic flysch, (4) marly clay, (5) marly limestone, (6) limestone Tectonics and climate influenced the long-term landscape evolution of this sector of the Apennines and the drainage network underwent important phases of plano-altimetric re-organization, mainly consisting in stream piracy and cyclic alternation of erosional and aggradational episodes, as already testified within adjacent river basins in central Apennines (Aringoli et al. 2014; Fubelli et al. 2014). The geomorphological setting of the western portion of the area is characterized by deeply incised, mainly V-shaped valleys and diffuse erosional landforms such as sub-vertical rocky cliff and ridges, which are particularly prone to landslides. The remaining part of the area is divided by the Tronto River trunk-valley and presents a morphological asymmetry due to different characteristics of the outcropping bedrock (Fig. 1). The left valley-side, mostly carved into the arenaceous member of the Laga Formation, is featured by steep or sub-vertical slopes, ridges and V-shaped incised valley often interrupted, at various heights, by fluvial terraces and terrace-like morphologies of different origin. The opposite valley-flank, where the mostly pelitic member of the Laga Formation crops out, shows a gentle morphology and a quasi-continuous vegetation cover. The Municipality of Accumoli is affected by several typologies of landslides mainly including roto-translational slides involving both rock masses as well as soils and rock-falls (sensu Hungr et al. 2014). The mostly pelitic flysch of the Laga Formation is most affected by instability, since the presence of low shear strength pelitic interlayers, is the main predisposing factor to landslides activation. The strata attitude has great influence on the hillslope erosion, which is characterized by steeper anti-dip slopes with respect to dip slopes. These geological conditions represent a preparatory factor for planar sliding of rock blocks (Cacciuni et al. 1995). The PARSIFAL method The PARSIFAL method deals with the analysis of both re-activation of pre-existing landslides and the occurrence of first-time failures (Martino et al. 2018). First-time failures occur on slopes previously not affected by instabilities, while re-activation refer to already existing landslide that can be affected by further displacements. Both processes are triggered by either external or internal forces, such as seismic action and pore pressures increase related to intense rainfall events, respectively. Specifically, PARSIFAL analyzes re-activations of roto/translational slides and first-time failures that fall into two main categories: rock-slope failures and shallow earth-slides. To properly address the first stage (slope analysis) of the methodology, field surveys are of crucial importance to inventory already existing landslides and characterize rock masses and shallow deposits to assess their proneness to first-time failures. The main output of PARSIFAL consists in a complete mapping of the probability of exceedance of earthquake‐induced landslide displacement in respect to an assumed threshold, for different scenarios (i.e., return times of seismic hazard and saturation conditions). PARSIFAL is a sequential procedure based on the following three main steps (Fig. 2): Flow chart illustrating the multi-step PARSIFAL approach Step 1 Slope Analysis (SA)—The territory is analyzed in terms of landslide susceptibility, regardless the dynamic input. As above mentioned, the methodology aims at analyzing the most common co-seismic landslides: re-activation of already existing roto/translational landslides, first-time rock-slope failures and newly generated shallow landslides of soil covers over the bedrock. The susceptibility to re-activation is assessed by detecting and inventorying the already existing landslides which, in terms of GIS analysis, are mapped as polygons (Fig. 3). For first-time rock failures, the procedure is more articulated. In a first instance the territory is divided by identifying, on the basis of field surveys and related geo-structural and geomechanical measurements, zones that are homogeneous in terms of lithology and structural features (homogeneous geo-structural zones—HGZ). Each zone is then featured by a specific geometric pattern of discontinuity sets, that are in turn characterized in terms of geomechanical properties. The HGZs are then partitioned into a grid of square cells. A test of kinematic compatibility to planar/wedge sliding and toppling is performed by GIS routines on a cell-by-cell basis by comparing dip and dip direction of each discontinuity sets (or trend and plunge of each line of intersection in the case of wedge mechanism) with slope and aspect of the corresponding cell, calculated starting from the available DEM (Digital Elevation Model) of the area. It is worth noting that each cell can be potentially compatible with more than one failure mechanism. With regards to first-time shallow landslides, i.e. the deposits potentially involved in shallow landslides, the proneness to failure is assessed after dividing each polygon representing a significant outcrop of soil cover into Unique Condition Units, i.e. sub-areas characterized by specific classes of slope exposure and lithology. The proneness of each UCU to landslides is then assessed by applying heuristic or quantitative susceptibility methods depending on data availability. Examples of landslide inventorying for PARSIFAL approach: a contour of landslide mass on satellite image (Google Earth), b contour of landslide mass on DEM, c picture of an inventoried earthquake-induced rock fall from the on-line CEDIT catalogue, d picture of an inventoried earthquake-induced rock slide from the on-line CEDIT catalogue The territorial units resulting as susceptible to failure (grid cells in HGZs and UCU) or to co-seismic further displacements (polygons representing the inventoried existing landslides) are analyzed in the next step. Step 2 Slope Stability (SS)—This step consists in the assessment of the actual potential of failure/re-activation under combined seismic and saturation loading of the areas resulted prone to failure in the previous phase. Specifically, this step aims at the computation of an exceedance probability of co‐seismic displacements based on an assumed failure threshold and evaluated by the Newmark (1965) approach, i.e. considering a critical pseudostatic acceleration, ay, as unelastic displacement threshold. The degree of mobility is expressed by the exceedance probability of a critical displacement (P[D ≥ Dc|a(t),ay]) responsible of the slope collapse, which can be assumed according to literature, as specified in the following. To calculate the degree of stability of landslides under seismic conditions, the input data considered are natural accelerometric records selected considering the seismological characteristics of the most severe seismogenic source among the ones that contribute to the seismic hazard of the site. In case that the failure displacement threshold (ay) is not exceeded, a safety factor (SF) is computed as to provide a stability level of the slope. The SF is computed according to Limit Equilibrium Methods, selected on the basis of the landslide shape H/L (where H is the depth and L is the length of the landslide mass). In particular, according to Dewitte and Demoulin (2005): infinite slope method was adopted in case of H/L < 0.01, the Janbu (1973) method was applied for slides characterized by 0.01 ≤ H/L ≤ 0.1, i.e. with a mainly translational mechanism, the Bishop (1955) method was applied for a roto-translational mechanism (where H/L > 0.1). Based on the previous reported definitions, three possible conditions can be expected by the computing: 1) for ay = 0 and SF ≤ 1 then P[D ≥ Dc|a(t),ay] = 1; 2) for ay > PGA and SF > 1 then P[D ≥ Dc|a(t),ay] = 0; 3) for ay ≤ PGA and SF ≤ 1 then 0 < P[D ≥ Dc|a(t),ay] < 1. Step 3 Resulting scenario (RS)—This step aims at providing, for each considered scenario of combined seismic and saturation conditions, a synthetic and integrated GIS mapping of the slope stability results. The scenario map shows the different territorial units defined as elementary areas able to represent the slope stability results in terms of either failure exceedance probability or safety factor. To provide a unique value of P[D ≥ Dc|a(t),ay] into the same cell unit, the following criteria were assumed as reported in Martino et al. (2017): (1) for the same mechanism, the P[D ≥ Dc|a(t),ay] computed for different block volumes are solved in a unique value by performing a weight averaging; (2) combined probability is computed for each different mechanism (i.e., toppling, wedge sliding, planar sliding) using the probability values computed for each block volume; (3) in case that only SF values are computed for each block volume and landslide mechanism the lowest SF value is attributed to the cell unit, adopting a conservative solution. More in particular, in case of simultaneous presence of n-unstable blocks with P(D ≥ Dc) > 0 for the same type of instability, the weighted exceedance probability, P(D ≥ Dc)w, is computed on the basis of the mobilized block volumes, according to the formula: $$P\left( {D \ge D_{c} } \right)_{w} \, = \,\frac{{\sum\nolimits_{i = 1}^{n} {P\left( {D \ge D_{c} } \right)_{i} \cdot V_{i} } }}{{\sum\nolimits_{i} {V_{i} } }}$$ where P(D ≥ Dc)i is the probability of exceedance P[D ≥ Dc|a(t),ay] of the ith rocky block size and Vi its volume. In case more than one type of instability can occur, the overall failure probability is computed, according to the total probability theorem: $$P_{f} \, = \,1\, - \,\prod\nolimits_{j} {\left( {1 - P\left( {D \ge D_{c} } \right)_{j} } \right)}$$ where P(D ≥ Dc)j is the exceedance probability P[D ≥ Dc|a(t),ay] of the jth type of instability (weighted, if any). From seismic inputs to exceedance probability In order to define a set of natural accelerometric records representative of the local expected ground-motion to be used in the evaluation of co-seismic displacements by Newmark (1965) approach, a constraint imposed by the general activity of level 3 SM in the Central Italy area struck by 2016-17 seismic sequence (ODPCM 24/2017) has been adopted. In detail, Luzi (2017) defined the seismic hazard level for the municipality area of Accumoli, according to the Italian national seismic map (Montaldo et al. 2007) and the Italian building code (NTC08—CS.LL.PP. 2008), for the selection of 7 natural accelerometric time-histories to be used as seismic input for local seismic response analysis by 1D and 2D engineering-geology numerical models. The accelerometric records have been selected by the REXELite code (Iervolino et al. 2011) as to obtain time-histories spectrum-compatible with a target response spectrum evaluated for a Return Time of 475 years (i.e. a 10% exceedance probability within a reference time period of 50 years; obtained PGA = 0.26 g), a Eurocode 8 site ground type "A" (hard rock/rock or Vs30 ≥ 800 m/s) and ground "flat" condition (i.e., slope angle < 15°). Further seismological parameters has been set in: (1) magnitude range 5.2–6.2 MW or ML; (2) seismic source-site maximum distance 30 km; focal mechanism "normal". The selection has been performed on the INGV ITalian ACcelerometric Archive (Luzi et al. 2008) increased with the records obtained by the temporary accelerometric network installed during the 2016–2017 Central Italy seismic sequence. To increase the set of accelerograms to be used for the computation of the probability distributions of co-seismic Newmark's displacements, a further selection was performed (Martino et al. 2018) by querying additional accelerometric databases like ESM-Engineering Strong-Motion database release 1.0 (Luzi et al. 2016) or Japanese NIED strong-motion seismograph networks (website: http://www.kyoshin.bosai.go.jp/). The previously mentioned selection criteria are extended to: (1) a Eurocode 8 site ground type "A" and "B", which are the most representative for the thickness as well as for the geotechnical and geophysical parameters of the deposits outcropping in the study area; (2) "free-field" or "ground" recording-sites; iii) Focal mechanism "normal" or "normal-oblique", i.e. in the rake interval − 20°/− 70° (normal left-lateral oblique); − 70°/− 110° (normal); − 110°/− 160° (normal right-lateral oblique); iv) hypocentral depth h ≤ 30 km. Additional constraints involved the selected time-histories in term of: PGA which must fulfill the condition 0.67 ≤ PGA ≤ 1.5 of local seismic hazard; Arias intensities (in base-10 logarithm values) lying inside the 95% confidence interval of that computed from the selected records; shape similarity of response spectra compared to the target one, evaluated for a Return Time of 475 years, for different period intervals. Finally, the resulting time-histories are scaled to the local seismic hazard PGA, namely 0.26 g. Considering both the selection criteria, a set of 25 natural accelerometric records is collected (Fig. 4a) and used for the evaluation of Newmark (1965) co-seismic displacements. Assuming a critical displacement threshold of 10 cm for earth‐slides and 5 cm for rock‐slides, according to Romeo (2000), the probability of exceedance P[D ≥ Dc|a(t),ay] for each landslide or instability area can be evaluated Procedure and data used for the assessment of exceedance probability in an example instability (herein rock-slide): a response spectra of selected input time-histories (thin grey lines) compared with the target response spectrum (bold red line), b cumulative co-seismic displacements evaluated by Newmark (1965) by an example ay pseudo-static critical acceleration, c distribution of relative frequencies of maximum displacements, in terms of base-10 logarithm, compared with the x-axis position of the critical displacement threshold (5 cm for rock-slide), d probability functions from maximum cumulative displacements distribution: in blue the Cumulative Distribution Function with overlapped the probability of each maximum co-seismic displacement, in red the Probability Density Function of obtained standard normal distribution compared with the probability of the critical displacement threshold expressed in standard unit (z-Score) and (light red area) the resulting exceedance probability In detail, for every local ground instability, the pseudo-static critical acceleration (ay) evaluated at different saturation conditions allow the identification of accelerations which can induce a co-seismic displacement. By a double-integration of time-history accelerations overcoming the ay acceleration, a cumulative co-seismic displacement is calculated (Fig. 4b). In this way, a set of 25 maximum cumulative co-seismic displacement values is obtained: Fig. 4c shows the distribution of relative frequencies of obtained maximum displacements, in terms of base-10 logarithm, compared as example with the x-axis position of the critical displacement threshold for rock‐slides. It is therefore assumed that the co-seismic displacements obtained by applying the Newmark's method to the selected time-histories are distributed according to a normal distribution characterized by an average (\(\mu\)): $$\mu = \frac{{\mathop \sum \nolimits_{i = 1}^{Nth} log10\left( {Sp_{i} } \right)}}{N}$$ and a standard deviation (\(\sigma )\): $${ \sigma} = \sqrt {\frac{{\mathop \sum \nolimits_{\rm{i = 1}}^{\rm{Nth}} [ {\text{log10}( {\text{Sp}_{\rm{i}} }) - \mu } ]^{2} }}{{\text{N}( {\text{N} - 1} )}}}$$ where N is the number of selected time-histories and log10(Spi) is the base-10 logarithm of i-th maximum cumulative co-seismic displacement. The probability of occurrence of a critical displacement calculated by means of the standard normal distribution (i.e. with mean μ = 0 and standard deviation σ = 1) is given by the equation: $$Prob\left( x \right) = 1 - \left( {\mathop \smallint \limits_{ - \infty }^{SpCr} \frac{1}{{\sqrt {2\pi } }}e^{{ - \frac{1}{2} z^{2} }} } \right)$$ where x is the base-10 logarithm of critical displacement, log10(SpCr), and z is the critical displacement expressed in standard unit (z-Score): $$z = \frac{\left[ \left( {\textit{log10}}\left( {\rm SpCr} \right) - \mu \right) \right]}\sigma$$ Therefore, for a given local ground instability and pseudo-static critical acceleration (ay) evaluated at different saturation conditions, the exceedance probability of a critical displacement (Fig. 4d) is obtained from the equation: $$\text{P}\left[ {\text{D} \ge \text{Dc}|\text{a}(\text{t}),\text{a}_{\text{y}} } \right] = 1 - { \Pr }\text{ob}(x)$$ For both first-time failures and re-activations the Newmark (1965) co-seismic displacements were computed by using the selected time histories and by considering the previously derived ay values for each typical rock block and earth-slide mass at different saturation conditions. Following this approach, the landslide mass is considered like a rigid block sliding on an inclined plane (in case of planar sliding) or on a curved surface (in case of roto-translational slides) as well as on an intersection line (in case of wedge sliding). For the rock blocks, only toppling mechanism was also accounted for and driving forces are considered to produce an overturning moment. Each block has ay values which represents the threshold seismic acceleration required for triggering the landslide failure. Thematic maps were obtained through GIS by synthetically reporting the slope stability conditions under dynamic action as well as the probability of exceedance of the critical displacement referred to scenarios of different saturation conditions. Each territorial unit corresponding to a landslide mass or to a landslide prone area was classified with a color scale corresponding to the computed probability of exceedance (Fig. 5). Legend to the PARSIFAL output: the colour scales correspond to different P[D ≥ Dc|a(t),ay] values for soil and rock landslides respectively In case of null probability (i.e. no statistic of Newmark displacement computed) the resulting safety factor was attributed to the corresponding territorial unit. As it resulted by the PARSIFAL application to Accumoli area, up to 2% of the entire municipal territory correspond to already unstable or landslide-prone areas. Within this area, 59% is represented by first-time landslides (33% rock mass failures and 26% shallow debris covers) and 41% correspond to re-activation of already existing landslides. In the dry condition RS (Fig. 6) for the first-time landslides in rock mass, 17% are unstable and in no case has a probability of exceedance been obtained (0 < P[D ≥ Dc|a(t),ay] < 1). The first- time landslides in shallow debris cover are stable for the 99% while for the 1% a (0 < P[D ≥ Dc|a(t),ay] < 1) was computed. Among the already existing landslides, the 46% have a (0 < P[D ≥ Dc|a(t),ay] < 1) and the 8% are unstable (P[D ≥ Dc|a(t),ay] = 1). Resulting scenario from PARSIFAL referred to dry conditions (see Fig. 6 for legend), the zoomed excerpts are details of the mapping In the middle saturation RS (Fig. 7) for the first-time landslides in rock mass, 83% are unstable and in no case has a probability of exceedance been obtained (0 < P[D ≥ Dc|a(t),ay] < 1). Among the first-time landslides, in shallows debris cover it was obtained a probability of exceedance (0 < P[D ≥ Dc|a(t),ay] < 1) for the 41%. For already existing landslides, 29% are unstable (P[D ≥ Dc|a(t),ay] = 1) and it was obtained a probability of exceedance (0 < P[D ≥ Dc|a(t),ay] < 1) for the 42%. Resulting scenario from PARSIFAL referred to middle-saturation conditions (see Fig. 6 for legend), the zoomed excerpts are details of the mapping In the full saturation scenario (Fig. 8), due to the severity of the assumed condition, for all the shallow debris cover susceptible for first-time landslides was obtained a probability of exceedance (0 < P[D ≥ Dc|a(t),ay] < 1) while the results for first-time landslides in rock mass do not change respect to the middle saturation scenario. Among the already existing landslides the 58% became unstable (P[D ≥ Dc|a(t),ay] = 1) and for the 17% probability of exceedance (0 < P[D ≥ Dc|a(t),ay] < 1) have been computed. Resulting scenario from PARSIFAL referred to full saturation conditions (see Fig. 6 for legend), the zoomed excerpts are details of the mapping Following the DPC guidelines (ICMS 2008; CTMS, 2017) referred to landslides, the Zones of Attention (ZAFR) are areas in which the available data indicate the presence of conditions predisposing the slope failures but are not sufficient to define if instability can actually occur in case of earthquake; nevertheless, through the consultation of inventories like the on-line CEDIT catalogue (Martino et al. 2014; Martino 2017—www.ceri.uniroma1.it/index.php/web-gis/cedit), it is possible to document the occurrence of landslide events during historical earthquakes. Susceptibility Zones (ZSFR) are areas to which it is possible to attribute a non-null earthquake-induced hazard through simplified methods of analysis. Respect Zones (ZRFR) are areas in which it is possible to quantify the caused danger as well as the involvement area due to an earthquake-induced landslide through quantitative approaches. The identification of the ZAFR allows to perform, in a subsequent level of analysis, a "screening" operation that output from the mapped areas (including the already unstable and the potentially unstable ones) those that can be activated in case of earthquake by the use of simplified methods (pseudo-static). The landslide areas that should be unstable in case of earthquake occurrence will constitute the ZSFR dataset and its related mapping. Only more specific and in-depth studies will lead to verify the specific stability conditions of the landslide slopes leading to obtain the ZRFR map starting from the ZSFR one. The application of the PARSIFAL methodology has made it possible to define scenario maps referring to different conditions of debris and cover saturation. In particular, the methodology aimed at deducting the ZSFR map starting from the scenario maps initially obtained; they were taken into account all the results obtained in the three considered scenarios reconstructed by PARSIFAL, therefore as the saturation conditions range from zero to the admissible maximum, the final susceptibility was attributed through a weighting of the probabilities of exceedance resulting from all the reconstructed scenarios. If in two of the three considered saturation scenarios the probability of exceedance was non-zero, i.e. a statistic for the earthquake-induced displacements was calculated, the ZAFR was converted into a ZSFR and deemed to be subject to further in-depth studies for evaluating its admissibility as ZRFR, i.e. for the purposes of planning engineering designs to be provided. Expansion areas (AE) with a buffer of 10 meters around the landslide, following the DPC recommendations (ICMS 2008; CTMS 2017), have been defined only for already existing landslides, since in case of first-time debris slides the distribution of activity (sensu Varnes 1978) cannot be attributed. In case of rock slides and toppling, the AE were defined according to the shadow cone approach (as suggested by ICMS 2008), involving the maximum distances that could be reached by the detached blocks (runout distance). The shadow cones were defined through the Conefall 1.0 software (http://conefall.software.informer.com/1.0/), which allows to obtain an estimate approximate area of invasion of an unstable block from the local DEM. Through the performed application PARSIFAL proved to be a valid methodological approach to make the screening between ZAFR and ZSFR (Fig. 9), as required by the DPC guidelines with the advantages: to return the set of ZF (already existing landslides) and ZPF (first generation landslides) that can be integrated with the landslide bodies already registered by the IFFI (Inventario dei Fenomeni Franosi in Italia—http://www.isprambiente.gov.it/it/progetti/suolo-e-territorio-1/iffi-inventario-dei-fenomeni-franosi-in-italia) and PAI (Piani di Assetto Idrogeologico - http://www.regione.lazio.it/prl_ambiente) catalogues, assigning to each identified unstable area a level of hazard, expressed as the exceedance probability with respect to a trigger threshold; to allow a screening between ZAFR and ZSFR based on a comparison among different analyses of scenario that take into account the variability of environmental predisposing conditions (specifically the saturation of land and cover); to be flexible with respect to specific needs related to the SM activities envisaged by ODPCM 24/2017, including in particular: i) the preservation of areas already bound by the PAI among the ZSFR (regardless of scenario solutions); ii) the pre-selection of ZPF areas by means of dimensional threshold spatial filters, commensurate with the size of the building and therefore of the vulnerable elements; iii) the epistemic error due to the variability/uncertainness of mechanical parameters values that could be taken into account through parametric analyses. Flow chart illustrating the link between the outputs from PARSIFAL and the products of level 3 SM (following ICMS 2008) More in detail, for the Municipality of Accumoli, based on the results obtained by the screening performed through PARSIFAL, the ZSFR result as sub-dataset of the ZAFR (Fig. 10). By comparing the total area of the ZAFR with the total area of the ZSFR, it results that the PARSIFAL screening led to a reduction of 22% of the mapped ZAFR. Maps of ZAFR and ZSFR. The ZSFR map was derived after the PARSIFAL screening adding the AE to the screened ZAFR A limitation to the current application of PARSIFAL consists in not being able to take into account the anthropic slopes coinciding with the cuts of the road trenches both for lack of resolution of the DEM used, and for the necessity to verify the suitability of the slope-to-road system to different types of failure mechanisms: (1) shallow sliding of debris covers existing upstream of the cut; (2) sliding of the road plan; (3) sliding of the slope immediately downstream of the roadway. Quantitative studies of SM provided by ODPCM 24/2017 have made it possible to test the PARSIFAL approach for the screening of landslide areas (ZAFR), including the already existing landslides and potentially unstable areas, for the purpose of selecting zones susceptible to landslide (ZSFR), according to the provisions of the DPC guidelines for SM studies. To this aim, PARSIFAL has been used to compare different scenarios of earthquake-induced landslides in the Municipality of Accumoli, thus being able to weigh the predisposition to the mobilization of already unstable or prone to failure slopes (the latter both in soil and in rock) in relation to predisposing environmental conditions, in particular the saturation from zero to the maximum admissible. Therefore, all the ZAFR were considered susceptible to landslide (ZSFR) if in two of the three scenarios reconstructed by the saturation conditions it was possible to calculate a non-zero probability of exceedance with respect to the assumed threshold of co-seismic displacement considered for the failure. The ZSFR also included all the areas already subject to hydrogeological restrictions by the current technical legislation (PAI). In perspective, PARSIFAL can be considered a valid screening tool between ZAFR and ZSFR with the advantage of being able to be applied to extended territorial scales and with the possibility to be implemented in the future for taking into account anthropic cuts, i.e. in correspondence with the road trenches where, based on the evidence gathered during the seismic crisis of 2016–2017 in the Central Apennines, the highest percentage of landslide rock falls occurred. During some overlook in typesetting process low-resolution files for Fig. 1 was used. The correct Fig. 1 is given. Aringoli D, Cavitolo P, Farabollini P, Galindo-Zaldivar J, Gentili B, Giano SI, Lòpez-Garrido AC, Materazzi M, Nibbi L, Pedrera A, Pambianchi G, Ruano P, Ruiz-Constàn A, Sanz de Galdeano C, Savelli D, Tondi E, Troiani F (2014) Morphotectonic characterization of the quaternary intermontane basins in the Umbria-Marche Apenniness (Italy). Rendiconti Lincei 25(2):11–128 Bai S, Wang J, Zhang Z, Cheng C (2012) Combined landslide susceptibility mapping after Wenchuan earthquake at the Zhouqu segment in the Bailongjiang Basin, China. CATENA 99:18–25 Bigi S, Milli S, Corrado S, Casero P, Aldega L, Botti F, Moscatelli M, Stanzione O, Falcini F, Marini M, Cannata D (2009) Stratigraphy, structural setting and burial history of the Messinian Laga basin in the context of Apennine foreland basin system. J Mediterr Earth Sci 1:61–84 Bishop AW (1955) The use of the slip circle in the stability analysis of slopes. Géotechnique 5:7–17 Boncio P, Lavecchia G, Pace B (2004) Defining a model of 3D seismogenic sources for seismic hazard assessment applications: the case of central Apenniness (Italy). J Seismol 8:407–425 Bozzano F, Esposito C, Martini G, Martino S, Prestininzi A, Rinaldis D, Romeo RW, Scarascia Mugnozza G (2013) Earthquake-reactivated landslide scenarios in Southern Italy based on spectral-matching input analysis. Bull Earthq Eng 11:1927–1948 Cacciuni A, Centamore E, Di Stefano R, Dramis F (1995) Evoluzione morfotettonica della conca di Amatrice. Studi Geologici Camerti 1995(2):95–100 Centamore E, Fumanti F, Nisio S (2012) The Central-Northern Apennines geological evolution from Triassic to Neogene time. Boll Soc Geol It 1(2002):181–197 Cosentino D, Cipollari P, Marsili P, Scrocca D (2010) Geology of the central Apennines: a regional review. J Virtual Explorer. https://doi.org/10.3809/jvirtex.2010.00223 CS.LL.PP. (2008). DM 14 gennaio, Norme tecniche per le costruzioni. Gazzetta Ufficiale della Repubblica Italiana 29 CTMS—Commissione tecnica per la microzonazione sismica (2017). Linee guida per la gestione del territorio in aree interessate da instabilità di versante sismoindotte (FR). https://www.centromicrozonazionesismica.it/it/download/category/13-linee-guida-per-la-gestione-del-territorio-in-aree-interessate-da-instabilita-di-versante-cosismiche Dewitte O, Demoulin A (2005) Morphometry and kinematics of landslides inferred from precise DTMs in West Belgium. Nat Hazards Earth Syst Sci 5:259–265 Esposito C, Martino S, Pallone F, Martini G, Romeo R (2016) A methodology for a comprehensive assessment of earthquake-induced landslide hazard, with an application to pilot sites in Central Italy. In: Landslides and engineered slopes. Experience, theory and practice, Taylor and Francis Inc. 2: 869–877 Falcini F, Marini M, Milli S, Moscatelli M (2009) An inverse problem to infer paleoflow conditions from turbidites. J Geophys Res 114:C10019 Fubelli G, Della Seta M, Amato G (2014) Drainage system adjustment in response to the opening of the Rieti intermontane basin (Central Italy): geostatistical reconstruction of the PaleoFarfa River alluvial plain. Rendiconti Lincei 25(S2):167–176 Galadini F, Galli P (2003) Paleoseismology of silent faults in the central Apennines (Italy): the Mt. Vettore and Laga Mts. faults. Ann Geofis 46:815–836 García-Rodríguez MJ, Malpica JA (2010) Assessment of earthquake-triggered landslide susceptibility in El Salvador based on an Artificial Neural Network model. Nat Hazards Earth Syst Sci 10:1307–1315 Harp EL, Keefer DK, Sato HP, Yagi H (2011) Landslide inventories: the essential part of seismic landslide hazard analyses. Eng Geol 122:9–21 Hsieh SU, Lee CT (2011) Empirical estimation of the Newmark displacement from the Arias intensity and critical acceleration. Eng Geol 122:34–42 Hungr O, Leroueil S, Picarelli L (2014) The Varnes classification of landslide types, an update. Landslides 11(2):167–194 ICMS (2008) Gruppo di lavoro MS, Indirizzi e criteri per la microzonazione sismica, Conferenza delle Regioni e delle Province autonome—Dipartimento della protezione civile, Roma, 2008, 3 vol. e Dvd Iervolino I, Galasso C, Paolucci R, Pacor F (2011) Engineering ground motion record selection in the ITalian ACcelerometric Archive. Bull Earthq Eng 9(6):1761–1778 Janbu N (1973) Slope stability computations Embankment Dam Engineering Casagrande Memorial Volume. Wiley, New York, pp 47–86 Jibson RW (2007) Regression models for estimating coseismic landslide displacement. Eng Geol 91:209–218 Jibson RW, Harp EL, Michael JA (2000) A method for producing digital probabilistic seismic landslide hazard maps. Eng Geol 58:271–289 Kamp U, Growley BJ, Khattak GA, Owen LA (2008) GIS-based landslide susceptibility mapping for the 2005 Kashmir earthquake region. Geomorphology 101:631–642 Keefer DK (1984) Landslides caused by earthquakes. Geol Soc Am Bull 95(4):406 Lee S, Choi J, Min K (2002) Landslide susceptibility analysis and verification using the Bayesian probability model. Environ Geol 43(1–2):120–131 Lee CT, Huang CC, Lee JF, Pan KL, Lin ML, Dong JJ (2008) Statistical approach to earthquake-induced landslide susceptibility. Eng Geol 100:43–58 Luzi L (2017) Rapporto tecnico sulla selezione di un insieme di 7 accelerogrammi compatibili con lo spettro di normativa (NTC08), da utilizzare come input per le simulazioni numeriche della microzonazione sismica di terzo livello (Ordinanza n. 24 del 12 maggio 2017) Luzi L, Hailemikael S, Bindi D, Pacor F, Mele F, Sabetta F (2008) ITACA (ITalian ACcelerometric Archive): a web portal for the dissemination of Italian Strong-motion Data. Seismol Res Lett 79(5):716–722 Luzi L, Puglia R, Russo E, ORFEUS WG5 (2016) Engineering strong motion database, version 1.0. Istituto Nazionale di Geofisica e Vulcanologia, Observatories & Research Facilities for European Seismology Martino S (2017) Earthquake-induced landslides in Italy: from the distribution of effects to the hazard mapping. Ital J Eng Geol Environ 1(2017):53–67 Martino S, Prestininzi A, Romeo RW (2014) Earthquake-induced ground failures in Italy from a reviewed database. Nat Hazards Earth Syst Sci 14:799–814 Martino S, Battaglia S, Delgado J, Esposito C, Martini G, Missori C (2018) Probabilistic approach to provide scenarios of earthquake-induced slope failures (PARSIFAL) applied to the Alcoy Basin (South Spain). Geosciences 8(57):1–20 Montaldo V, Meletti C, Martinelli F, Stucchi M, Locati M (2007) On-line seismic hazard data for the new italian building code. J Earthq Eng 11(S1):119–132 Newmark NM (1965) Effects of earthquakes on dams and embankments. Geotechnique 15(2):139–159 ODPCM (2017) Ordinanza n. 24 del 12 maggio 2017, Assegnazione dei finanziamenti per gli studi di microzonazione sismica di III livello ai Comuni interessati dagli eventi sismici verificatisi a far data dal 24 agosto 2016 e proroga di termini di cui all'ordinanza n. 13 del 9 gennaio 2017. https://sisma2016.gov.it/wp-content/uploads/2017/05/Ordinanza-24-del-12-maggio-2017.pdf Pizzi A, Galadini F (2009) Pre-existing cross-structures and active fault segmentation in the northern-central Apennines (Italy). Tectonophysics 476(1–2):304–319 Prestininzi A, Romeo R (2000) Earthquake-induced ground failures in Italy. Eng Geol 58:387–397 Rathje EM, Antonakos G (2011) A unified model for predicting earthquake-induced sliding displacements of rigid and flexible slopes. Eng Geol 122:51–60 Rathje EM, Bray JD (2000) Nonlinear coupled seismic sliding analysis of earth structures. J Geotech Geoenviron Eng ASCE 126(11):1002–1014 Reichenbach P, Rossi M, Malamud BD, Mihir M, Guzzetti F (2018) A review of statistically-based landslide susceptibility models. Earth Sci Rev 180:60–91 Rodriguez CE, Bommer JJ, Chandler RJ (1999) Earthquake induced landslides: 1980–1997. Soil Dynam Earthq Eng 18:325–346 Romeo RW (2000) Seismically induced landslide displacements: a predictive model. Eng Geol 58:337–351 Tian Y, Xu C, Hong H, Zhou Q, Wang D (2019) Mapping earthquake-triggered landslide susceptibility by use of artificial neural network (ANN) models: an example of the 2013 Minxian (China) Mw 5.9 event. Geomat Nat Hazards Risk 10:1–25 Tondi E, Cello G (2003) Spatio-temporal evolution of the central Apenniness fault system (Italy). J Geodyn 36:113–128 Tsai HY, Tsai CC, Chang WC (2019) Slope unit based approach for assessing regional seismic landslide displacement for deep and shallow failure. Eng Geol 248:124–139 Varnes DJ (1978) Slope movements types and processes. In: Schuster RL, Krizek RJ (ed) Landslides-analysis and control. 12–33, National Academy of Sciences and Transportation Research Board, Special report 176 Varnes DJ (1984) IAEG Commission on landslide and other mass movement on slopes (1984) Landslide hazard zonation: A review of principles and practice. The UNESCO Press, Paris, p 63 Xu C, Dai F, Xu X, Lee YH (2012a) GIS-based support vector machine modeling of earthquake-triggered landslide susceptibility in the Jianjiang River watershed, China. Geomorphology 145–146:70–80 Xu C, Xu X, Dai F, Saraf AK (2012b) Comparison of different models for susceptibility mapping of earthquake triggered landslides related with the 2008 Wenchuan earthquake in China. Comput Geosci 46:317–329 Aknowledgements The Authors wish to thank the Municipality of Accumoli for the logistic support to the field activities; G. Di Martino, L. Falconi, C. Missori and C. Puglisi for collecting part of the field data; C. Di Marzio and D. Marchetti for providing the public technical data and reports on seismic microzonation studies that they performed at Accumoli. S. Martino coordinated the research activities; M. Della Seta and F. Troiani performed the geomorphological surveys; F. D'Alessandro performed the engineering-geological surveys and the slope stability analyses; S. Battaglia, C. Esposito, G. Martini and F. Pallone managed the PARSIFAL procedure, computation and mapping. The study was carried out in the framework of the Ordinance of the Presidency of the Council of Ministers No. 24 of 12/05/2017 (ODPCM 24/2017) managed by the Centro per la Microzonazione Sismica e le sue Applicazioni of the Italian National Research Council (CNR). The authors recall with esteem and affection the late colleague Roberto Romeo with whom this study began. Dipartimento di Scienze della Terra e Centro di Ricerca per i Rischi Geologici (CERI), Università di Roma Sapienza, P.za A. Moro 5, 00185, Rome, Italy S. Martino, S. Battaglia, F. D'Alessandro, M. Della Seta, C. Esposito, F. Pallone & F. Troiani Agenzia Nazionale per le Nuove Tecnologie, l'Energia e lo Sviluppo Economico Sostenibile (ENEA), Via Enrico Fermi, 45, 00044, Frascati, Rome, Italy G. Martini S. Martino S. Battaglia F. D'Alessandro M. Della Seta C. Esposito F. Pallone F. Troiani Correspondence to S. Martino. Martino, S., Battaglia, S., D'Alessandro, F. et al. Earthquake-induced landslide scenarios for seismic microzonation: application to the Accumoli area (Rieti, Italy). Bull Earthquake Eng 18, 5655–5673 (2020). https://doi.org/10.1007/s10518-019-00589-1 Earthquake-induced landslides Scenario mapping Seismic microzonation
CommonCrawl
The self-normalized Donsker theorem revisited Peter Parczewski We extend the Poincaré–Borel lemma to a weak approximation of a Brownian motion via simple functionals of uniform distributions on n-spheres in the Skorokhod space $D([0,1])$. This approach is used to simplify the proof of the self-normalized Donsker theorem in Csörgő et al. (2003). Some notes on spheres with respect to $\ell _{p}$-norms are given. Asymptotic behavior of functionals of the solutions to inhomogeneous Itô stochastic differential equations with nonregular dependence on parameter Grigorij Kulinich Svitlana Kushnirenko The asymptotic behavior, as $T\to \infty $, of some functionals of the form $I_{T}(t)=F_{T}(\xi _{T}(t))+{\int _{0}^{t}}g_{T}(\xi _{T}(s))\hspace{0.1667em}dW_{T}(s)$, $t\ge 0$ is studied. Here $\xi _{T}(t)$ is the solution to the time-inhomogeneous Itô stochastic differential equation \[d\xi _{T}(t)=a_{T}\big(t,\xi _{T}(t)\big)\hspace{0.1667em}dt+dW_{T}(t),\hspace{1em}t\ge 0,\hspace{2.5pt}\xi _{T}(0)=x_{0},\] $T>0$ is a parameter, $a_{T}(t,x),x\in \mathbb{R}$ are measurable functions, $|a_{T}(t,x)|\le C_{T}$ for all $x\in \mathbb{R}$ and $t\ge 0$, $W_{T}(t)$ are standard Wiener processes, $F_{T}(x),x\in \mathbb{R}$ are continuous functions, $g_{T}(x),x\in \mathbb{R}$ are measurable locally bounded functions, and everything is real-valued. The explicit form of the limiting processes for $I_{T}(t)$ is established under nonregular dependence of $a_{T}(t,x)$ and $g_{T}(x)$ on the parameter T. Quantifying non-monotonicity of functions and the lack of positivity in signed measures Youri Davydov Ričardas Zitikis In various research areas related to decision making, problems and their solutions frequently rely on certain functions being monotonic. In the case of non-monotonic functions, one would then wish to quantify their lack of monotonicity. In this paper we develop a method designed specifically for this task, including quantification of the lack of positivity, negativity, or sign-constancy in signed measures. We note relevant applications in Insurance, Finance, and Economics, and discuss some of them in detail. Weighted entropy: basic inequalities Mark Kelbert Izabella Stuhl Yuri Suhov This paper represents an extended version of an earlier note [10]. The concept of weighted entropy takes into account values of different outcomes, i.e., makes entropy context-dependent, through the weight function. We analyse analogs of the Fisher information inequality and entropy power inequality for the weighted entropy and discuss connections with weighted Lieb's splitting inequality. The concepts of rates of the weighted entropy and information are also discussed. Random iterations of homeomorphisms on the circle Katrin Gelfert Örjan Stenflo We study random independent and identically distributed iterations of functions from an iterated function system of homeomorphisms on the circle which is minimal. We show how such systems can be analyzed in terms of iterated function systems with probabilities which are non-expansive on average. On singularity of distribution of random variables with independent symbols of Oppenheim expansions Liliia Sydoruk Grygoriy Torbin The paper is devoted to the restricted Oppenheim expansion of real numbers ($\mathit{ROE}$), which includes already known Engel, Sylvester and Lüroth expansions as partial cases. We find conditions under which for almost all (with respect to Lebesgue measure) real numbers from the unit interval their $\mathit{ROE}$-expansion contain arbitrary digit i only finitely many times. Main results of the paper state the singularity (w.r.t. the Lebesgue measure) of the distribution of a random variable with i.i.d. increments of symbols of the restricted Oppenheim expansion. General non-i.i.d. case is also studied and sufficient conditions for the singularity of the corresponding probability distributions are found.
CommonCrawl
Dimension dependence of factorization problems: Biparameter Hardy spaces Author: Richard Lechner Journal: Proc. Amer. Math. Soc. 147 (2019), 1639-1652 MSC (2010): Primary 46B07, 30H10, 46B25, 60G46 DOI: https://doi.org/10.1090/proc/14364 Published electronically: January 8, 2019 Abstract: Given $1 \leq p,q < \infty$, and $n\in \mathbb {N}_0$, let $H_n^p(H_n^q)$ denote the finite-dimensional building blocks of the biparameter dyadic Hardy space $H^p(H^q)$. Let $(V_n : n\in \mathbb {N}_0)$ denote either $\bigl (H_n^p(H_n^q) : n\in \mathbb {N}_0\bigr )$ or $\bigl ( (H_n^p(H_n^q))^* : n\in \mathbb {N}_0\bigr )$. We show that the identity operator on $V_n$ factors through any operator $T : V_N\to V_N$ which has a large diagonal with respect to the Haar system, where $N$ depends linearly on $n$. G. Blower, The Banach space $B(l^2)$ is primary, Bull. London Math. Soc. 22 (1990), no. 2, 176–182. MR 1045291, DOI https://doi.org/10.1112/blms/22.2.176 J. Bourgain, On the primarity of $H^{\infty }$-spaces, Israel J. Math. 45 (1983), no. 4, 329–336. MR 720307, DOI https://doi.org/10.1007/BF02804016 J. Bourgain and L. Tzafriri, Invertibility of "large� submatrices with applications to the geometry of Banach spaces and harmonic analysis, Israel J. Math. 57 (1987), no. 2, 137–224. MR 890420, DOI https://doi.org/10.1007/BF02772174 J. L. B. Gamlen and R. J. Gaudet, On subsequences of the Haar system in $L_{p}$ $[1,\,1](1\leq p\leq \infty )$, Israel J. Math. 15 (1973), 404–413. MR 328575, DOI https://doi.org/10.1007/BF02757079 Peter W. Jones, BMO and the Banach space approximation problem, Amer. J. Math. 107 (1985), no. 4, 853–893. MR 796906, DOI https://doi.org/10.2307/2374360 Niels Jakob Laustsen, Richard Lechner, and Paul F. X. Müller, Factorization of the identity through operators with large diagonal, J. Funct. Anal. 275 (2018), no. 11, 3169–3207. MR 3861733, DOI https://doi.org/10.1016/j.jfa.2018.02.010 R. Lechner, Direct sums of finite dimensional $SL^\infty _n$ spaces. ArXiv e-prints, Sept. 2017. R. Lechner, Dimension dependence of factorization problems: bi-parameter Hardy spaces. ArXiv e-prints, Feb. 2018. R. Lechner, Dimension dependence of factorization problems: Hardy spaces and $SL_n^\infty$. Israel J. Math, to appear. Richard Lechner, Factorization in mixed norm Hardy and BMO spaces, Studia Math. 242 (2018), no. 3, 231–265. MR 3794334, DOI https://doi.org/10.4064/sm8751-6-2017 Richard Lechner and Paul F. X. Müller, Localization and projections on bi-parameter BMO, Q. J. Math. 66 (2015), no. 4, 1069–1101. MR 3436171, DOI https://doi.org/10.1093/qmath/hav028 Joram Lindenstrauss and Lior Tzafriri, Classical Banach spaces. II, Ergebnisse der Mathematik und ihrer Grenzgebiete [Results in Mathematics and Related Areas], vol. 97, Springer-Verlag, Berlin-New York, 1979. Function spaces. MR 540367 Paul F. X. Müller, On projections in $H^1$ and BMO, Studia Math. 89 (1988), no. 2, 145–158. MR 955660, DOI https://doi.org/10.4064/sm-89-2-143-158 Paul F. X. Müller, Isomorphisms between $H^1$ spaces, Instytut Matematyczny Polskiej Akademii Nauk. Monografie Matematyczne (New Series) [Mathematics Institute of the Polish Academy of Sciences. Mathematical Monographs (New Series)], vol. 66, Birkhäuser Verlag, Basel, 2005. MR 2157745 Paul F. X. Müller, Two remarks on primary spaces, Math. Proc. Cambridge Philos. Soc. 153 (2012), no. 3, 505–523. MR 2990628, DOI https://doi.org/10.1017/S0305004112000199 H. M. Wark, A class of primary Banach spaces, J. Math. Anal. Appl. 326 (2007), no. 2, 1427–1436. MR 2280992, DOI https://doi.org/10.1016/j.jmaa.2006.03.047 H. M. Wark, The $l^\infty $ direct sum of $L^p$ $(1<p<\infty )$ is primary, J. Lond. Math. Soc. (2) 75 (2007), no. 1, 176–186. MR 2302737, DOI https://doi.org/10.1112/jlms/jdl010 J. L. B. Gamlen and R. J. Gaudet, On subsequences of the Haar system in $L_{p}$ $[1, 1](1\leq p\leq \infty )$, Israel J. Math. 15 (1973), 404–413. MR 0328575, DOI https://doi.org/10.1007/BF02757079 Joram Lindenstrauss and Lior Tzafriri, Classical Banach spaces. II: Function spaces, Ergebnisse der Mathematik und ihrer Grenzgebiete [Results in Mathematics and Related Areas], vol. 97, Springer-Verlag, Berlin-New York, 1979. MR 540367 Paul F. X. Müller, Isomorphisms between $H^1$ spaces, Instytut Matematyczny Polskiej Akademii Nauk. Monografie Matematyczne (New Series) [Mathematics Institute of the Polish Academy of Sciences. Mathematical Monographs (New Series)], vol. 66, Birkha̋user Verlag, Basel, 2005. MR 2157745 H. M. Wark, The $l^\infty$ direct sum of $L^p$ $(1<p<\infty )$ is primary, J. Lond. Math. Soc. (2) 75 (2007), no. 1, 176–186. MR 2302737, DOI https://doi.org/10.1112/jlms/jdl010 Retrieve articles in Proceedings of the American Mathematical Society with MSC (2010): 46B07, 30H10, 46B25, 60G46 Retrieve articles in all journals with MSC (2010): 46B07, 30H10, 46B25, 60G46 Richard Lechner Affiliation: Institute of Analysis, Johannes Kepler University Linz, Altenberger Strasse 69, A-4040 Linz, Austria MR Author ID: 1058338 Email: [email protected] Keywords: Factorization, local theory, almost-diagonalization, classical Banach spaces, Hardy spaces, BMO Received by editor(s): February 20, 2018 Received by editor(s) in revised form: August 14, 2018 Additional Notes: This work was supported by the Austrian Science Foundation (FWF) Pr.Nr. P28352. Communicated by: Stephen Dilworth
CommonCrawl
Wide beam steering by slow-light waveguide gratings and a prism lens Hiroyuki Ito, Yuma Kusunoki, Jun Maeda, Daichi Akiyama, Naoya Kodama, Hiroshi Abe, Ryo Tetsuya, and Toshihiko Baba Hiroyuki Ito, Yuma Kusunoki, Jun Maeda, Daichi Akiyama, Naoya Kodama, Hiroshi Abe, Ryo Tetsuya, and Toshihiko Baba* Department of Electrical and Computer Engineering, Yokohama National University, 95-5 Tokiwadai, Hodogayaku, Yokohama 240-8501, Japan *Corresponding author: [email protected] Jun Maeda https://orcid.org/0000-0003-0842-2307 H Ito Y Kusunoki J Maeda D Akiyama N Kodama H Abe R Tetsuya T Baba •https://doi.org/10.1364/OPTICA.381484 Hiroyuki Ito, Yuma Kusunoki, Jun Maeda, Daichi Akiyama, Naoya Kodama, Hiroshi Abe, Ryo Tetsuya, and Toshihiko Baba, "Wide beam steering by slow-light waveguide gratings and a prism lens," Optica 7, 47-52 (2020) Beam fanning Beam steering Grating prisms Microelectromechanical systems Revised Manuscript: December 1, 2019 Manuscript Accepted: December 1, 2019 A lattice-shifted photonic crystal waveguide (LSPCW) maintains slow light as a guided mode and works as an optical antenna when a kind of double periodicity is introduced. Selecting one LSPCW from its array and converting the fan beam to a spot beam using a collimator lens allows non-mechanical, two-dimensional beam steering. We employed a shallow-etched grating into the LSPCW as the double periodicity to increase the upward emission efficiency and designed a bespoke prism lens to convert the steering angle in a desired direction while maintaining the collimation condition for the steered beam. As a result, a sharp spot beam with an average beam divergence of 0.15° was steered in the range of ${40}^\circ \; \times \;{4.4}^\circ $ without precise adjustment of the lens position. The number of resolution points obtained was 4256. This method did not require complicated and power-consuming optical phase control like that in optical phased arrays, so it is expected to be applied in complete solid-state light detection and ranging. Thermally controlled Si photonic crystal slow light waveguide beam steering device Goro Takeuchi, Yosuke Terada, Moe Takeuchi, Hiroshi Abe, Hiroyuki Ito, and Toshihiko Baba Opt. Express 26(9) 11529-11537 (2018) Two-dimensional beam-steering device using a doubly periodic Si photonic-crystal waveguide Hiroshi Abe, Moe Takeuchi, Goro Takeuchi, Hiroyuki Ito, Tomoki Yokokawa, Keisuke Kondo, Yuya Furukado, and Toshihiko Baba Prism lens for beam collimation in a silicon photonic crystal beam-steering device Jun Maeda, Daichi Akiyama, Hiroyuki Ito, Hiroshi Abe, and Toshihiko Baba Wavelength-division multiplexing Si photonic crystal beam steering device for high-throughput parallel sensing Hiroyuki Ito, Tomoki Tatebe, Hiroshi Abe, and Toshihiko Baba Lens-based integrated 2D beam-steering device with defocusing approach and broadband pulse operation for Lidar application Chao Li, Xianyi Cao, Kan Wu, Xinwan Li, and Jianping Chen I. Puente, H. Gonzalez-Jorge, J. Martinez-Sanchez, and P. Arias, "Review of mobile mapping and surveying technologies," Measurement 46, 2127–2145 (2013). Y. Wang, G. Zhou, X. Zhang, K. Kwon, P. Blanche, N. Triesault, K. Yu, and M. C. Wu, "2D broadband beamsteering with large-scale MEMS optical phased array," Optica 6, 557–562 (2019). K. V. Acoleyen, W. Bogaerts, J. Jágerská, N. L. Thomas, R. Houdré, and R. Baets, "Off-chip beam steering with a one-dimensional optical phased array on silicon-on-insulator," Opt. Lett. 34, 1477–1479 (2009). J. K. Doylend, M. J. R. Heck, J. T. Bovington, J. D. Peters, M. L. Davenport, L. A. Coldren, and J. E. Bowers, "Hybrid III/V silicon photonic source with integrated 1D free-space beam steering," Opt. Lett. 37, 4257–4259 (2012). J. Sun, E. Timurdogan, A. Yaacobi, E. S. Hosseini, and M. R. Watts, "Large-scale nanophotonic phased array," Nature 493, 195–199 (2013). D. Kwong, A. Hosseini, J. Covey, Y. Zhang, X. Xu, H. Subbaraman, and R. T. Chen, "On-chip silicon optical phased array for two-dimensional beam steering," Opt. Lett. 39, 941–944 (2014). H. Abediasl and H. Hashemi, "Monolithic optical phased-array transceiver in a standard SOI CMOS process," Opt. Express 23, 6509–6519 (2015). D. N. Hutchison, J. Sun, J. K. Doylend, R. Kumar, J. Heck, W. Kim, C. T. Phare, A. Feshali, and H. Rong, "High-resolution aliasing-free optical beam steering," Optica 3, 887–890 (2016). M. Zadka, Y. Chang, A. Mohanty, C. T. Phare, S. P. Roberts, and M. Lipson, "On-chip platform for a phased array with minimal beam divergence and wide field-of-view," Opt. Express 26, 2528–2534 (2018). C. V. Poulton, P. Russo, B. Moss, M. Khandaker, M. J. Byrd, J. Tran, E. Timurdogan, D. Vermeulen, and M. R. Watts, "Small-form-factor optical phased array module for technology adoption in custom applications," in Conference on Lasers and Electro-Optics, OSA Technical Digest (Optical Society of America, 2019), paper JTh5B.6. J. J. López, S. A. Skirlo, D. Kharas, J. Sloan, J. Herd, P. Juodawlkis, M. Soljačić, and C. Sorace-Agaskar, "Planar-lens enabled beam steering for chip-scale LIDAR," in Conference on Lasers and Electro-Optics, OSA Technical Digest (online) (Optical Society of America, 2018), paper SM3I.1. D. Inoue, T. Ichikawa, A. Kawasaki, and T. Yamashita, "Demonstration of a new optical scanner using silicon photonics integrated circuit," Opt. Express 27, 2499–2508 (2019). C. V. Poulton, A. Yaacobi, D. B. Cole, M. J. Byrd, M. Raval, D. Vermeulen, and M. R. Watts, "Coherent solid-state LIDAR with silicon photonic optical phased arrays," Opt. Lett. 42, 4091–4094 (2017). C. V. Poulton, M. J. Byrd, P. Russo, E. Timurdogan, M. Khandaker, D. Vermeulen, and M. R. Watts, "Long-range LiDAR and free-space data communication with high-performance optical phased arrays," IEEE J. Sel. Top. Quantum Electron 25, 7700108 (2019). K. Kondo, T. Tatebe, S. Hachuda, H. Abe, F. Koyama, and T. Baba, "Fan beam steering device using a photonic crystal slow-light waveguide with surface diffraction grating," Opt. Lett. 42, 4990–4993 (2017). H. Abe, M. Takeuchi, G. Takeuchi, H. Ito, T. Yokokawa, K. Kondo, Y. Furukado, and T. Baba, "Two-dimensional beam-steering device using a doubly periodic Si photonic-crystal waveguide," Opt. Express 26, 9389–9397 (2018). J. Maeda, D. Akiyama, H. Ito, H. Abe, and T. Baba, "Prism lens for beam collimation in silicon photonic crystal beam-steering device," Opt. Lett. 44, 5780–5783 (2019). Y. Terada, K. Miyasaka, K. Kondo, N. Ishikura, T. Tamura, and T. Baba, "Optimized optical coupling to silica-clad photonic crystal waveguide," Opt. Lett. 42, 4695–4698 (2017). T. DiLazaro and G. Nehmetallah, "Multi-terahertz frequency sweeps for high-resolution, frequency-modulated continuous wave ladar using a distributed feedback laser array," Opt. Express 25, 2327–2340 (2017). G. Takeuchi, Y. Terada, M. Takeuchi, H. Abe, H. Ito, and T. Baba, "Thermally controlled Si photonic crystal slow light waveguide beam steering device," Opt. Express 26, 11529–11537 (2018). R. Tetsuya, H. Abe, H. Ito, and T. Baba, "Efficient light transmission, reception and beam forming in photonic crystal beam steering device in a phased array configuration," Jpn. J. Appl. Phys. 58, 082002 (2019). Abe, H. Abediasl, H. Acoleyen, K. V. Akiyama, D. Arias, P. Baba, T. Blanche, P. Bogaerts, W. Bovington, J. T. Byrd, M. J. Chang, Y. Chen, R. T. Coldren, L. A. Cole, D. B. Covey, J. Davenport, M. L. DiLazaro, T. Doylend, J. K. Feshali, A. Furukado, Y. Gonzalez-Jorge, H. Hachuda, S. Hashemi, H. Heck, J. Heck, M. J. R. Herd, J. Hosseini, A. Hosseini, E. S. Houdré, R. Hutchison, D. N. Ichikawa, T. Inoue, D. Ishikura, N. Ito, H. Jágerská, J. Juodawlkis, P. Kawasaki, A. Khandaker, M. Kharas, D. Kim, W. Kondo, K. Koyama, F. Kumar, R. Kwon, K. Kwong, D. López, J. J. Maeda, J. Martinez-Sanchez, J. Miyasaka, K. Mohanty, A. Moss, B. Nehmetallah, G. Phare, C. T. Poulton, C. V. Puente, I. Raval, M. Roberts, S. P. Rong, H. Russo, P. Skirlo, S. A. Sloan, J. Soljacic, M. Sorace-Agaskar, C. Subbaraman, H. Sun, J. Takeuchi, G. Takeuchi, M. Tamura, T. Tatebe, T. Terada, Y. Tetsuya, R. Thomas, N. L. Timurdogan, E. Tran, J. Triesault, N. Vermeulen, D. Wang, Y. Watts, M. R. Wu, M. C. Xu, X. Yaacobi, A. Yamashita, T. Yokokawa, T. Yu, K. Zadka, M. Zhang, X. Zhang, Y. Zhou, G. IEEE J. Sel. Top. Quantum Electron (1) Jpn. J. Appl. Phys. (1) » Visualization 1 Raster scanning of an emitted optical beam on graph paper » Visualization 2 Zigzag scanning of an emitted optical beam on graph paper » Visualization 3 Figure eight scanning of an emitted optical beam on graph paper Fig. 1. Schematics of a slow-light beam steering device and 2D beam steering, where (I)–(III) are the solutions for problems in the previous study [16] and for wider 2D beam steering. (a) LSPCW with shallow grating, which improves the upper emission intensity. (b) 2D beam steering by LSPCW array and prism lens that maintains the collimation condition for the wide range of $\theta $. (c) Beam steering in the $\phi $ direction by selecting one LSPCW from its array, which is the same concept as in Ref. [16]. (d) Continuous beam steering in the ${\pm }\theta^{\prime}$ direction including $\theta^{\prime}={0}^\circ $ by converting $\theta $ into $\theta^{\prime}$ using the prism lens and switching the direction of light incidence on the LSPCW. Fig. 2. Fabricated device and 1D beam steering. (a) Top view of fabricated chip. (b) SEM image of LSPCW. Magnified view shows the third-row lattice shifts and shallow grating. (c) Prism lens loaded above the device. (d) 1D steering of fan beam without lens for wavelength sweeping. The FFPs are overlapped with 0.1° spacing. (e) Wavelength dependence of $\theta $. Attached FFPs show a fan beam and a spot beam at $\lambda ={1.53}\;\unicode{x00B5}{\rm m}$. (f), (g) Beam divergence $\delta \theta $ and $\delta \phi $. Red and black show with and without the lens, respectively. Fig. 3. Switching of light. (a) Top view of MZ switch. (b) Temperature distribution at the switch, which was observed by thermal microscope at $P={51}\;{\rm mW}$. (c) Emission from LSPCW. Light is coupled via a spot size converter on the left (shown by arrow) and emitted from the first LSPCW after passing through the switch tree along the dotted line. Similar switching is confirmed for other LSPCWs. Fig. 4. Observed 2D beam steering characteristics. (a) Overlapped FFP image of steered spots. (b) Steering angle $\theta $. (c) Beam divergence $\delta \theta $ (black) and $\delta \phi $ (red). Fig. 5. Flexible 2D beam steering: (a) 256 spot beams projected onto the screen. The distance from the device to the screen is approximately 2.3 m. (b)–(d) Various types of scanning: (b) zigzag, (c) spiral, (d) figure eight. The spots arranged in a trapezoidal area were due to the beam irradiating obliquely on the paper.
CommonCrawl
All-----TitleAuthor(s)AbstractSubjectKeywordAll FieldsFull Text-----About Nagoya Mathematical Journal Nagoya Math. J. Volume 214 (2014), 53-77. Normal functions and the height of Gross–Schoen cycles Robin de Jong More by Robin de Jong Full-text: Open access PDF File (299 KB) Article info and citation We prove a variant of a formula due to Zhang relating the Beilinson–Bloch height of the Gross–Schoen cycle on a pointed curve with the self-intersection of its relative dualizing sheaf. In our approach, the height of the Gross–Schoen cycle occurs as the degree of a suitable Bloch line bundle. We show that the Chern form of this line bundle is nonnegative, and we calculate its class in the Picard group of the moduli space of pointed stable curves of compact type. The basic tools are normal functions and biextensions associated to the cohomology of the universal Jacobian. Nagoya Math. J., Volume 214 (2014), 53-77. First available in Project Euclid: 15 January 2014 Permanent link to this document https://projecteuclid.org/euclid.nmj/1389795890 doi:10.1215/00277630-2413391 Mathematical Reviews number (MathSciNet) MR3211818 Zentralblatt MATH identifier Primary: 14G40: Arithmetic varieties and schemes; Arakelov theory; heights [See also 11G50, 37P30] Secondary: 14C25: Algebraic cycles 14D06: Fibrations, degenerations de Jong, Robin. Normal functions and the height of Gross–Schoen cycles. Nagoya Math. J. 214 (2014), 53--77. doi:10.1215/00277630-2413391. https://projecteuclid.org/euclid.nmj/1389795890 [1] S. Y. Arakelov, An intersection theory for divisors on an arithmetic surface (in Russian), Izv. Akad. Nauk. SSSR Ser. Mat. 38 (1974), 1179–1192. Mathematical Reviews (MathSciNet): MR472815 [2] E. Arbarello and M. Cornalba, The Picard groups of the moduli spaces of curves, Topology 26 (1987), 153–171. Digital Object Identifier: doi:10.1016/0040-9383(87)90056-5 [3] A. Beauville, "Quelques remarques sur la transformation de Fourier dans l'anneau de Chow d'une variété abelienne" in Algebraic Geometry (Tokyo/Kyoto, 1982), Lecture Notes in Math. 1016, Springer, Berlin, 1983, 238–260. [4] A. Beilinson, "Height pairing between algebraic cycles" in Current Trends in Arithmetical Algebraic Geometry (Arcata, Calif., 1985), Contemp. Math. 67, Amer. Math. Soc., Providence, 1987, 1–24. Digital Object Identifier: doi:10.1090/conm/067/902590 [5] S. Bloch, "Height pairings for algebraic cycles" in Proceedings of the Luminy Conference on Algebraic $K$-theory (Luminy, 1983), J. Pure Appl. Algebra 34, 1984, 119–145. Digital Object Identifier: doi:10.1016/0022-4049(84)90032-X [6] S. Bloch, "Cycles and biextensions" in Algebraic $K$-theory and Algebraic Number Theory (Honolulu, 1987), Contemp. Math. 83, Amer. Math. Soc., Providence, 1989, 19–30. [7] J.-B. Bost, H. Gillet, and C. Soulé, Heights of projective varieties and positive Green forms, J. Amer. Math. Soc. 7 (1994), 903–1027. Mathematical Reviews (MathSciNet): MR1260106 Digital Object Identifier: doi:10.1090/S0894-0347-1994-1260106-X [8] E. Colombo and B. van Geemen, Note on curves in a Jacobian, Compos. Math. 88 (1993), 333–353. [9] R. de Jong, Second variation of Zhang's $\lambda$-invariant on the moduli space of curves, Amer. J. Math. 135 (2013), 275–290. Digital Object Identifier: doi:10.1353/ajm.2013.0008 [10] P. Deligne, "Le déterminant de la cohomologie" in Current Trends in Arithmetical Algebraic Geometry (Arcata, Calif., 1985), Contemp. Math. 67, Amer. Math. Soc., Providence, 1987, 93–177. [11] C. Deninger and J. Murre, Motivic decomposition of abelian schemes and the Fourier transform, J. Reine Angew. Math. 422 (1991), 201–219. [12] G. Faltings, Calculus on arithmetic surfaces, Ann. of Math. (2) 119 (1984), 387–424. Digital Object Identifier: doi:10.2307/2007043 [13] B. H. Gross and C. Schoen, The modified diagonal cycle on the triple product of a pointed curve, Ann. Inst. Fourier (Grenoble) 45 (1995), 649–679. Digital Object Identifier: doi:10.5802/aif.1469 [14] R. Hain, Biextensions and heights associated to curves of odd genus, Duke Math. J. 61 (1990), 859–898. Digital Object Identifier: doi:10.1215/S0012-7094-90-06133-2 Project Euclid: euclid.dmj/1077296997 [15] R. Hain, "Normal functions and the geometry of moduli spaces of curves" in Handbook of Moduli, Adv. Lectures Math. 24, International Press, Boston, 2013, 527–556. [16] R. Hain and D. Reed, Geometric proofs of some results of Morita, J. Algebraic Geom. 10 (2001), 199–217. [17] R. Hain and D. Reed, On the Arakelov geometry of moduli spaces of curves, J. Differential Geom. 67 (2004), 195–228. Project Euclid: euclid.jdg/1102536200 [18] B. Harris, Harmonic volumes, Acta Math. 150 (1983), 91–123. Digital Object Identifier: doi:10.1007/BF02392968 [19] B. Iversen, Cohomology of Sheaves, Universitext, Springer, Berlin, 1986. [20] I. Kausz, A discriminant and an upper bound for $\omega^{2}$ for hyperelliptic arithmetic surfaces, Compos. Math. 115 (1999), 37–69. Digital Object Identifier: doi:10.1023/A:1000580901251 [21] O. Meyer, Über Biextensionen und Höhenpaarungen algebraischer Zykel, Ph.D. dissertation, University of Regensburg, Regensburg, Germany, 2003. [22] L. Moret-Bailly, "Métriques permises" in Seminar in Arithmetic Bundles: The Mordell Conjecture (Paris, 1983/84), Astérisque 127, Soc. Math. France, Paris, 1985, 29–87. [23] L. Moret-Bailly, La formule de Noether pour les surfaces arithmétiques, Invent. Math. 98 (1989), 491–498. [24] S. Müller-Stach, $\mathbb{C}^{*}$-extensions of tori, higher Chow groups and applications to incidence equivalence relations for algebraic cycles, $K$-Theory 9 (1995), 395–406. [25] M. J. Pulte, The fundamental group of a Riemann surface: Mixed Hodge structures and algebraic cycles, Duke Math. J. 57 (1988), 721–760. [26] M. Seibold, Bierweiterungen für algebraische Zykel und Poincarébundel, Ph.D. dissertation, University of Regensburg, Regensburg, Germany, 2007. [27] T. Szamuely, Galois Groups and Fundamental Groups, Cambridge Stud. Adv. Math. 117, Cambridge University Press, Cambridge, 2009. [28] S.-W. Zhang, Admissible pairing on a curve, Invent. Math. 112 (1993), 171–193. [29] S. Zhang, Gross–Schoen cycles and dualising sheaves, Invent. Math. 179 (2010), 1–73. Digital Object Identifier: doi:10.1007/s00222-009-0209-3 [30] S. Zhang, Positivity of heights of codimension $2$ cycles over function field of characteristic $0$, preprint, arXiv:1001.4788v1 [math.AG]. arXiv: 1001.4788v1 What is MathJax? Picard–Lefschetz oscillators for the Drinfeld–Lafforgue–Vinberg degeneration for SL2 Schieder, Simon, Duke Mathematical Journal, 2018 Equidistribution in Bun2(P1) Shende, Vivek and Tsimerman, Jacob, Duke Mathematical Journal, 2017 Néron's pairing and relative algebraic equivalence Pépin, Cédric, Algebra & Number Theory, 2012 Jumps in the Archimedean height Brosnan, Patrick and Pearlstein, Gregory, Duke Mathematical Journal, 2019 On the Arakelov Geometry of Moduli Spaces of Curves Hain, Richard and Reed, David, Journal of Differential Geometry, 2004 Tautological module and intersection theory on Hilbert schemes of nodal curves Ran, Ziv, Asian Journal of Mathematics, 2013 Multiplicative differential algebraic $K$-theory and applications Bunke, Ulrich and Tamme, Georg, Annals of K-Theory, 2016 On the Chow-forms of elliptic normal curves of degree 4 Tanaka, Tatsuji, Tsukuba Journal of Mathematics, 2000 A tropical approach to nonarchimedean Arakelov geometry Gubler, Walter and Künnemann, Klaus, Algebra & Number Theory, 2017 Holomorphic Principal Bundles Over Elliptic Curves II: The Parabolic Construction Friedman, Robert and Morgan, John W., Journal of Differential Geometry, 2000 euclid.nmj/1389795890
CommonCrawl
Difference between revisions of "Colloquia/Fall18" Nagreen (talk | contribs) (→‎Abstracts) This lecture has been conceived for a broad audience. Accordingly, unnecessary technicalities will be avoided. ===Friday, March 17 at 4:00pm: Lillian Pierce=== "p-torsion in class groups of number fields of arbitrary degree" Abstract: Fix a number field K of degree n over the rationals, and a prime p, and consider the p-torsion subgroup of the class group of K. How big is it? It is conjectured that this p-torsion subgroup should be very small (in an appropriate sense), relative to the absolute discriminant of the field; this relates to the Cohen-Lenstra heuristics and various other arithmetic problems. So far it has proved extremely difficult even to beat the trivial bound, that is, to show that the p-torsion subgroup is noticeably smaller than the full class group. In 2007, Ellenberg and Venkatesh shaved a power off the trivial bound by assuming GRH. This talk will discuss several new, contrasting, methods that recover or improve on this bound for almost all members of certain infinite families of fields, without assuming GRH. === Wednesday, March 29 at 3:30PM (Wasow): Sylvia Serfaty (NYU)=== Monday, January 9, 9th floor Miklos Racz (Microsoft) Statistical inference in networks and genomics Valko January 13, B239 Mihaela Ifrim (Berkeley) Two dimensional water waves Angenent Tuesday, January 17, B139 Fabio Pusateri (Princeton) The Water Waves problem Angenent January 20, B239 Sam Raskin (MIT) Tempered local geometric Langlands Arinkin Monday, January 23, B239 Tamas Darvas (Maryland) Geometry on the space of Kahler metrics and applications to canonical metrics Viaclovsky January 27 Reserved for possible job talks February 3, 9th floor Melanie Matchett Wood (UW-, Madison) Random groups from generators and relations Monday, February 6, B239 (Wasow lecture) Benoit Perthame (University of Paris VI) Models for neural networks; analysis, simulations and behaviour Jin February 10 (WIMAW lecture), B239 Alina Chertock (NC State Univ.) Numerical Method for Chemotaxis and Related Models WIMAW February 17, 9th floor Gustavo Ponce (UCSB) The Korteweg-de Vries equation vs. the Benjamin-Ono equation Minh-Binh Tran Monday, February 20, 9th floor Amy Cochran (Michigan) Mathematical Classification of Bipolar Disorder Smith March 3, B239 Ken Bromberg (University of Utah) Renormalized volume for hyperbolic 3-manifolds Dymarz Tuesday, March 7, 4PM, 9th floor (Distinguished Lecture) Roger Temam (Indiana University) On the mathematical modeling of the humid atmosphere Smith Wednesday, March 8, 4PM, B239 Roger Temam (Indiana University) Weak solutions of the Shigesada-Kawasaki-Teramoto system. Smith March 10 No Colloquium Wednesday, March 15, 4PM Enrique Zuazua (Universidad Autónoma de Madrid) Control and numerics: Recent progress and challenges Jin & Minh-Binh Tran March 17 Lillian Pierce (Duke University) p-torsion in class groups of number fields of arbitrary degree M. Matchett Wood Wednesday, March 29 at 3:30PM (Wasow) Sylvia Serfaty (NYU) Microscopic description of Coulomb-type systems Tran April 7 Hal Schenck Erman April 14 Wilfrid Gangbo Feldman & Tran April 21 Mark Andrea de Cataldo (Stony Brook) TBA Maxim April 28 Thomas Yizhao Hou TBA Li September 8 TBA September 15 TBA Wednesday, September 20, LAA lecture Andrew Stuart (Caltech) TBA Jin October 6 TBA October 13 TBA October 20 Pierre Germain (Courant, NYU) TBA Minh-Binh Tran November 3 TBA November 10 Reserved for possible job talks TBA November 24 Thanksgiving break TBA December 1 Reserved for possible job talks TBA September 16: Po-Shen Loh (CMU) Title: Directed paths: from Ramsey to Pseudorandomness Abstract: Starting from an innocent Ramsey-theoretic question regarding directed paths in graphs, we discover a series of rich and surprising connections that lead into the theory around a fundamental result in Combinatorics: Szemeredi's Regularity Lemma, which roughly states that every graph (no matter how large) can be well-approximated by a bounded-complexity pseudorandom object. Using these relationships, we prove that every coloring of the edges of the transitive N-vertex tournament using three colors contains a directed path of length at least sqrt(N) e^{log^* N} which entirely avoids some color. The unusual function log^* is the inverse function of the tower function (iterated exponentiation). September 23: Gheorghe Craciun (UW-Madison) Title: Toric Differential Inclusions and a Proof of the Global Attractor Conjecture Abstract: The Global Attractor Conjecture says that a large class of polynomial dynamical systems, called toric dynamical systems, have a globally attracting point within each linear invariant space. In particular, these polynomial dynamical systems never exhibit multistability, oscillations or chaotic dynamics. The conjecture was formulated by Fritz Horn in the early 1970s, and is strongly related to Boltzmann's H-theorem. We discuss the history of this problem, including the connection between this conjecture and the Boltzmann equation. Then, we introduce toric differential inclusions, and describe how they can be used to prove this conjecture in full generality. September 30: Akos Magyar (University of Georgia) Title: Geometric Ramsey theory Abstract: Initiated by Erdos, Graham, Montgomery and others in the 1970's, geometric Ramsey theory studies geometric configurations, determined up to translations, rotations and possibly dilations, which cannot be destroyed by finite partitions of Euclidean spaces. Later it was shown by ergodic and Fourier analytic methods that such results are also possible in the context of sets of positive upper density in Euclidean spaces or the integer lattice. We present a new approach, motivated by developments in arithmetic combinatorics, which provide new results as well new proofs of some classical results in this area. October 14: Ling Long (LSU) Title: Hypergeometric functions over finite fields Abstract: Hypergeometric functions are special functions with lot of symmetries. In this talk, we will introduce hypergeometric functions over finite fields, originally due to Greene, Katz and McCarthy, in a way that is parallel to the classical hypergeometric functions, and discuss their properties and applications to character sums and the arithmetic of hypergeometric abelian varieties. This is a joint work with Jenny Fuselier, Ravi Ramakrishna, Holly Swisher, and Fang-Ting Tu. Tuesday, October 25, 9th floor: Stefan Steinerberger (Yale) Title: Three Miracles in Analysis Abstract: I plan to tell three stories: all deal with new points of view on very classical objects and have in common that there is a miracle somewhere. Miracles are nice but difficult to reproduce, so in all three cases the full extent of the underlying theory is not clear and many interesting open problems await. (1) An improvement of the Poincare inequality on the Torus that encodes a lot of classical Number Theory. (2) If the Hardy-Littlewood maximal function is easy to compute, then the function is sin(x). (Here, the miracle is both in the statement and in the proof). (3) Bounding classical integral operators (Hilbert/Laplace/Fourier-transforms) in L^2 -- but this time from below (this problem originally arose in medical imaging). Here, the miracle is also known as 'Slepian's miracle' (this part is joint work with Rima Alaifari, Lillian Pierce and Roy Lederman). October 28: Linda Reichl (UT Austin) Title: Microscopic hydrodynamic modes in a binary mixture Abstract: Expressions for propagation speeds and decay rates of hydrodynamic modes in a binary mixture can be obtained directly from spectral properties of the Boltzmann equations describing the mixture. The derivation of hydrodynamic behavior from the spectral properties of the kinetic equation provides an alternative to Chapman-Enskog theory, and removes the need for lengthy calculations of transport coefficients in the mixture. It also provides a sensitive test of the completeness of kinetic equations describing the mixture. We apply the method to a hard-sphere binary mixture and show that it gives excellent agreement with light scattering experiments on noble gas mixtures. Monday, October 31: Kathryn Mann (Berkeley) Title: Groups acting on the circle Abstract: Given a group G and a manifold M, can one describe all the actions of G on M? This is a basic and natural question from geometric topology, but also a very difficult one -- even in the case where M is the circle, and G is a familiar, finitely generated group. In this talk, I'll introduce you to the theory of groups acting on the circle, building on the perspectives of Ghys, Calegari, Goldman and others. We'll see some tools, old and new, some open problems, and some connections between this theory and themes in topology (like foliated bundles) and dynamics. November 7: Gaven Martin (New Zealand Institute for Advanced Study) Title: Siegel's problem on small volume lattices Abstract: We outline in very general terms the history and the proof of the identification of the minimal covolume lattice of hyperbolic 3-space as the 3-5-3 Coxeter group extended by the involution preserving the symmetry of this diagram. This gives us the smallest regular tessellation of hyperbolic 3-space. This solves (in three dimensions) a problem posed by Siegel in 1945. Siegel solved this problem in two dimensions by deriving the signature formula identifying the (2,3,7)-triangle group as having minimal co-area. There are strong connections with arithmetic hyperbolic geometry in the proof, and the result has applications in the maximal symmetry groups of hyperbolic 3-manifolds in much the same way that Hurwitz's 84g-84 theorem and Siegel's result do. Wednesday, November 16 (9th floor): Kathryn Lindsey (U Chicago) Title: Shapes of Julia Sets Abstract: The filled Julia set of a complex polynomial P is the set of points whose orbit under iteration of the map P is bounded. William Thurston asked "What are the possible shapes of polynomial Julia sets?" For example, is there a polynomial whose Julia set looks like a cat, or your silhouette, or spells out your name? It turns out the answer to all of these is "yes!" I will characterize the shapes of polynomial Julia sets and present an algorithm for constructing polynomials whose Julia sets have desired shapes. November 18: Andrew Snowden (University of Michigan) Title: Recent progress in representation stability Abstract: Representation stability is a relatively new field that studies somewhat exotic algebraic structures and exploits their properties to prove results (often asymptotic in nature) about objects of interest. I will describe some of the algebraic structures that appear (and state some important results about them), give a sampling of some notable applications (in group theory, topology, and algebraic geometry), and mention some open problems in the area. Monday, November 21: Mariya Soskova (University of Wisconsin-Madison) Title: Definability in degree structures Abstract: Some incomputable sets are more incomputable than others. We use Turing reducibility and enumeration reducibility to measure the relative complexity of incomputable sets. By identifying sets of the same complexity, we can associate to each reducibility a degree structure: the partial order of the Turing degrees and the partial order of the enumeration degrees. The two structures are related in nontrivial ways. The first has an isomorphic copy in the second and this isomorphic copy is an automorphism base. In 1969, Rogers asked a series of questions about the two degree structures with a common theme: definability. In this talk I will introduce the main concepts and describe the work that was motivated by these questions. Friday, December 2: Hao Shen (Columbia) Title: Singular Stochastic Partial Differential Equations - How do they arise and what do they mean? Abstract: Systems with random fluctuations are ubiquitous in the real world. Stochastic PDEs are default models for these random systems, just as PDEs are default models for deterministic systems. However, a large class of such stochastic PDEs were poorly understood until very recently: the presence of very singular random forcing as well as nonlinearities render it challenging to interpret what one even means by a ``solution". The recent breakthroughs by M. Hairer, M. Gubinelli and other researchers including the speaker not only established solution theories for these singular SPDEs, but also led to an explosion of new questions. These include scaling limits of random microscopic models, development of numerical schemes, ergodicity of random dynamical systems and a new approach to quantum field theory. In this talk we will discuss the main ideas of the recent solution theories of singular SPDEs, and how these SPDEs arise as limits of various important physical models. Monday, December 5: Botong Wang (UW-Madison) Title: Enumeration of points, lines, planes, etc. Abstract: It is a theorem of de Bruijn and Erdos that n points in the plane determine at least n lines, unless all the points lie on a line. This is one of the earliest results in enumerative combinatorial geometry. We will present a higher dimensional generalization of this theorem, which confirms a "top-heavy" conjecture of Dowling and Wilson in 1975. I will give a sketch of the key ideas of the proof, which are the hard Lefschetz theorem and the decomposition theorem in algebraic geometry. I will also talk about a log-concave conjecture on the number of independent sets. These are joint works with June Huh. Friday, December 9: Aaron Brown (U Chicago) Lattice actions and recent progress in the Zimmer program Abstract: The Zimmer Program is a collection of conjectures and questions regarding actions of lattices in higher-rank simple Lie groups on compact manifolds. For instance, it is conjectured that all non-trivial volume-preserving actions are built from algebraic examples using standard constructions. In particular—on manifolds whose dimension is below the dimension of all algebraic examples—Zimmer's conjecture asserts that every action is finite. I will present some background, motivation, and selected previous results in the Zimmer program. I will then explain two of my results within the Zimmer program: (1) a solution to Zimmer's conjecture for actions of cocompact lattices in SL(n,R) (joint with D. Fisher and S. Hurtado); (2) a classification (up to topological semiconjugacy) of all actions on tori whose induced action on homology satisfies certain criteria (joint with F. Rodriguez Hertz and Z. Wang). Monday, December 19: Andrew Zimmer (U Chicago) Metric spaces of non-positive curvature and applications in several complex variables Abstract: In this talk I will discuss how to use ideas from the theory of metric spaces of non-positive curvature to understand the behavior of holomorphic maps between bounded domains in complex Euclidean space. Every bounded domain has an metric, called the Kobayashi metric, which is distance non-increasing with respect to holomorphic maps. Moreover, this metric often satisfies well-known non-positive curvature type conditions (for instance, Gromov hyperbolicity or visibility) and one can then use these conditions to understand the behavior of holomorphic maps. Some of what I will talk about is joint work with Gautam Bharali. Monday, January 9: Miklos Racz (Microsoft) Statistical inference in networks and genomics Abstract: From networks to genomics, large amounts of data are increasingly available and play critical roles in helping us understand complex systems. Statistical inference is crucial in discovering the underlying structures present in these systems, whether this concerns the time evolution of a network, an underlying geometric structure, or reconstructing a DNA sequence from partial and noisy information. In this talk I will discuss several fundamental detection and estimation problems in these areas. I will present an overview of recent developments in source detection and estimation in randomly growing graphs. For example, can one detect the influence of the initial seed graph? How good are root-finding algorithms? I will also discuss inference in random geometric graphs: can one detect and estimate an underlying high-dimensional geometric structure? Finally, I will discuss statistical error correction algorithms for DNA sequencing that are motivated by DNA storage, which aims to use synthetic DNA as a high-density, durable, and easy-to-manipulate storage medium of digital data. Friday, January 13: Mihaela Ifrim (Berkeley) Two dimensional water waves The classical water-wave problem consists of solving the Euler equations in the presence of a free fluid surface (e.g the water-air interface). This talk will provide an overview of recent developments concerning the motion of a two dimensional incompressible fluid with a free surface. There is a wide range of problems that fall under the heading of water waves, depending on a number of assumptions that can be applied: surface tension, gravity, finite bottom, infinite bottom, rough bottom, etc., and combinations thereof. We will present the physical motivation for studying such problems, followed by the discussion of several interesting mathematical questions related to them. The first step in the analysis is the choice of coordinates, where multiple choices are available. Once the equations are derived we will discuss the main issues arising when analysing local well-posedness, as well as the long time behaviour of solutions with small, or small and localized data. In the last part of the talk we will introduce a new, very robust method which allows one to obtain enhanced lifespan bounds for the solutions. If time permits we will also introduce an alternative method to the scattering theory, which in some cases yields a straightforward route to proving global existence results and obtaining an asymptotic description of solutions. This is joint work with Daniel Tataru, and in part with John Hunter. Tuesday, January 17: Fabio Pusateri (Princeton) The Water Waves problem We will begin by introducing the free boundary Euler equations which are a system of nonlinear PDEs modeling the motion of fluids, such as waves on the surface of the ocean. We will discuss several works done on this system in recent years, and how they fit into the broader context of the study of nonlinear evolution problems. We will then focus on the question of global regularity for water waves, present some of our main results - obtained in collaboration with Ionescu and Deng-Ionescu-Pausader - and sketch some of the main ideas. Friday, January 20: Sam Raskin (MIT) Tempered local geometric Langlands The (arithmetic) Langlands program is a cornerstone of modern representation theory and number theory. It has two incarnations: local and global. The former conjectures the existence of certain "local terms," and the latter predicts remarkable interactions between these local terms. By necessity, the global story is predicated on the local. Geometric Langlands attempts to find similar patterns in the geometry of curves. However, the scope of the subject has been limited by a meager local theory, which has not been adequately explored. The subject of this talk is a part of a larger investigation into local geometric Langlands. We will give an elementary overview of the expectations of this theory, discuss a certain concrete conjecture in the area (on "temperedness"), and provide evidence for this conjecture. Monday, January 23: Tamas Darvas (Maryland) Geometry on the space of Kahler metrics and applications to canonical metrics A basic problem in Kahler geometry, going back to Calabi in the 50's, is to find Kahler metrics with the best curvature properties, e.g., Einstein metrics. Such special metrics are minimizers of well known functionals on the space of all Kahler metrics H. However these functionals become convex only if an adequate geometry is chosen on H. One such choice of Riemannian geometry was proposed by Mabuchi in the 80's, and was used to address a number of uniqueness questions in the theory. In this talk I will present more general Finsler geometries on H, that still enjoy many of the properties that Mabuchi's geometry has, and I will give applications related to existence of special Kahler metrics, including the recent resolution of Tian's related properness conjectures. Friday, February 3: Melanie Matchett Wood (UW-Madison) Random groups from generators and relations We consider a model of random groups that starts with a free group on n generators and takes the quotient by n random relations. We discuss this model in the case of abelian groups (starting with a free abelian group), and its relationship to the Cohen-Lenstra heuristics, which predict the distribution of class groups of number fields. We will explain a universality theorem, an analog of the central limit theorem for random groups, that says the resulting distribution of random groups is largely insensitive to the distribution from which the relations are chosen. Finally, we discuss joint work with Yuan Liu on the non-abelian random groups built in this way, including the existence of a limit of the random groups as n goes to infinity. Monday, February 6: Benoit Perthame (University of Paris VI) Models for neural networks; analysis, simulations and behaviour Neurons exchange informations via discharges, propagated by membrane potential, which trigger firing of the many connected neurons. How to describe large networks of such neurons? What are the properties of these mean-field equations? How can such a network generate a spontaneous activity? Such questions can be tackled using nonlinear integro-differential equations. These are now classically used in the neuroscience community to describe neuronal networks or neural assemblies. Among them, the best known is certainly Wilson-Cowan's equation which describe spiking rates arising in different brain locations. Another classical model is the integrate-and-fire equation that describes neurons through their voltage using a particular type of Fokker-Planck equations. Several mathematical results will be presented concerning existence, blow-up, convergence to steady state, for the excitatory and inhibitory neurons, with or without refractory states. Conditions for the transition to spontaneous activity (periodic solutions) will be discussed. One can also describe directly the spike time distribution which seems to encode more directly the neuronal information. This leads to a structured population equation that describes at time $t$ the probability to find a neuron with time $s$ elapsed since its last discharge. Here, we can show that small or large connectivity leads to desynchronization. For intermediate regimes, sustained periodic activity occurs. A common mathematical tool is the use of the relative entropy method. This talk is based on works with K. Pakdaman and D. Salort, M. Caceres, J. A. Carrillo and D. Smets. February 10: Alina Chertock (NC State Univ.) Numerical Method for Chemotaxis and Related Models Chemotaxis is a movement of micro-organisms or cells towards the areas of high concentration of a certain chemical, which attracts the cells and may be either produced or consumed by them. In its simplest form, the chemotaxis model is described by a system of nonlinear PDEs: a convection-diffusion equation for the cell density coupled with a reaction- diffusion equation for the chemoattractant concentration. It is well-known that solutions of such systems may develop spiky structures or even blow up in finite time provided the total number of cells exceeds a certain threshold. This makes development of numerical methods for chemotaxis systems extremely delicate and challenging task. In this talk, I will present a family of high-order numerical methods for the Keller-Segel chemotaxis system and several related models. Applications of the proposed methods to to multi-scale and coupled chemotaxis–fluid system and will also be discussed. Friday, February 17: Gustavo Ponce(UCSB) The Korteweg-de Vries equation vs. the Benjamin-Ono equation In this talk we shall study the [math]k[/math]-generalized Korteweg-de Vries [math](k[/math]-KdV) equation [math]\partial_t u + \partial_x^3u +u^k\,\partial_xu=0,\;\;\;\;\;\;\;x,t\in\Bbb R,\, k\in \Bbb Z^+, [/math] and the [math]k[/math]-generalized Benjamin-Ono ([math]k[/math]-BO) equation [math]\partial_t u-\partial_x^2\mathcal {H} u+u^k\,\partial_x u=0, \;\;\;\;\;\;\;x,t\in\Bbb R,\, k\in \Bbb Z^+,[/math] where [math]\mathcal {H}[/math] denotes the Hilbert transform, [math]\mathcal {H} f(x)=\frac{1}{\pi}\, {p.v.}\big(\frac{1}{x}\ast f\big)(x)=(-i\,sgn(\xi) \widehat{f}(\xi))^{\vee}(x).[/math] The goal is to review and analyze results concerning solutions of the initial value properties associated to these equations. These include a comparison of the local and global well-posedness and unique continuation properties as well as special features of the special solutions of these models. Monday, February 20, Amy Cochran (Michigan) Mathematical Classification of Bipolar Disorder Bipolar disorder is a chronic disease of mood instability. Longitudinal patterns of mood are central to any patient description, but are condensed into simple attributes and categories. Although these provide a common language for clinicians, they are not supported by empirical evidence. In this talk, I present patient-specific models of mood in bipolar disorder that incorporate existing longitudinal data. In the first part, I will describe mood as a Bayesian nonparametric hierarchical model that includes latent classes and patient-specific mood dynamics given by discrete-time Markov chains. These models are fit to weekly mood data, revealing three patient classes that differ significantly in attempted suicide rates, disability, and symptom chronicity. In the second part of the talk, I discuss how combined statistical inferences from a population do not support widely held assumptions (e.g. mood is one-dimensional, rhythmic, and/or multistable). I then present a stochastic differential equation model that does not make any of these assumptions. I show that this model accurately describes the data and that it can be personalized to an individual. Taken together, this work moves forward data-driven modeling approaches that can guide future research into precise clinical care and disease causes. Friday, March 3, Ken Bromberg (Utah) "Renormalized volume for hyperbolic 3-manifolds" Motivated by ideas in physics Krasnov and Schlenker defined the renormalized volume of a hyperbolic 3-manifold. This is a way of assigning a finite volume to a hyperbolic 3-manifold that has infinite volume in the usual sense. We will begin with some basic background on hyperbolic geometry and hyperbolic 3-manifolds before defining renormalized volume with the aim of explaining why this is a natural quantity to study from a mathematician's perspective. At the end will discuss some joint results with M. Bridgeman and J. Brock. Tuesday, March 7: Roger Temam (Indiana University) On the mathematical modeling of the humid atmosphere The humid atmosphere is a multi-phase system, made of air, water vapor, cloud-condensate, and rain water (and possibly ice / snow, aerosols and other components). The possible changes of phase due to evaporation and condensation make the equations nonlinear, non-continuous (and non-monotone) in the framework of nonlinear partial differential equations. We will discuss some modeling aspects, and some issues of existence, uniqueness and regularity for the solutions of the considered problems, making use of convex analysis, variational inequalities, and quasi-variational inequalities. Wednesday, March 8: Roger Temam (Indiana University) Weak solutions of the Shigesada-Kawasaki-Teramoto system We will present a result of existence of weak solutions to the Shigesada-Kawasaki-Teramoto system, in all dimensions. The method is based on new a priori estimates, the construction of approximate solutions and passage to the limit. The proof of existence is completely self-contained and does not rely on any earlier result. Based on an article with Du Pham, to appear in Nonlinear Analysis. Wednesday, March 15: Enrique Zuazua (Universidad Autónoma de Madrid) Control and numerics: Recent progress and challenges In most real life applications Mathematics not only face the challenge of modelling (typically by means of ODE and/or PDE), analysis and computer simulations but also the need control and design. And the successful development of the needed computational tools for control and design cannot be achieved by simply superposing the state of the art on Mathematical and Numerical Analysis. Rather, it requires specific tools, adapted to the very features of the problems under consideration, since stable numerical methods for the forward resolution of a given model, do not necessarily lead to stable solvers of control and design problems. In this lecture we will summarize some of the recent work developed in our group, motivated by different applications, that have led to different analytical and numerical methodologies to circumvent these difficulties. The examples we shall consider are motivated by problems of different nature and lead to various new mathematical developments. We shall mainly focus on the following three topics: - Inverse design for hyperbolic conservation laws, - The turnpike property: control in long time intervals, - Collective behavior: guidance by repulsion. We shall also briefly discuss the convenience of using greedy algorithms when facing parameter-dependence problems. Friday, March 17 at 4:00pm: Lillian Pierce Wednesday, March 29 at 3:30PM (Wasow): Sylvia Serfaty (NYU) Microscopic description of Coulomb-type systems We are interested in systems of points with Coulomb, logarithmic or more generally Riesz interactions (i.e. inverse powers of the distance). They arise in various settings: an instance is the classical Coulomb gas which in some cases happens to be a random matrix ensemble, another is vortices in the Ginzburg-Landau model of superconductivity, where one observes in certain regimes the emergence of densely packed point vortices forming perfect triangular lattice patterns named Abrikosov lattices, a third is the study of Fekete points which arise in approximation theory. After reviewing the motivations, we will take a point of view based on the detailed expansion of the interaction energy to describe the microscopic behavior of the systems. In particular a Central Limit Theorem for fluctuations and a Large Deviations Principle for the microscopic point processes are given. This allows to observe the effect of the temperature as it gets very large or very small, and to connect with crystallization questions. The main results are joint with Thomas Leblé and also based on previous works with Etienne Sandier, Nicolas Rougerie and Mircea Petrache. Retrieved from "https://www.math.wisc.edu/wiki/index.php?title=Colloquia/Fall18&oldid=13496"
CommonCrawl
qfm19 program overview (pdf file) opening - Frank Jülicher, director of the MPIPKS and the scientific coordinators Ferromagnetic quantum criticality (chair: Dietrich Belitz) 09:00 - 09:30 Valentin Taufour (University of California at Davis) Suppressing ferromagnetism: How to do it? What does it tell us? I will review some of our efforts to study what happens when ferromagnets are squeezed until suppression of the magnetic order. Various interesting behaviors are observed experimentally in the temperature-pressure ($T$-$p$) phase diagrams of UGe$_2$, LaCrGe$_3$, and CeTiGe$_3$, LaCrSb$_3$. Namely, either the ferromagnetic-paramagnetic transition becomes of the first-order at a tricritical point before being suppressed such as in UGe$_2$ [1,2], or transitions to modulated magnetic phases appear such as in LaCrGe$_3$ [3,4] and CeTiGe$_3$[5]. We have shown that the addition of a magnetic field ($H$) can lead to new quantum critical points at the end of "wings" in the $T$-$p$-$H$ phase diagram in both UGe$_2$ and LaCrGe$_3$ [5] or to quantum tricritical points in CeTiGe$_3$. Our careful study of the "wings" near the tricritical point reveal new rules that apply to the $T$-$p$-$H$ phase diagram [6]. We discuss how our experimental $T$-$p$-$H$ phase diagrams of UGe$_2$, LaCrGe$_3$, and CeTiGe$_3$ illustrate different strength of quantum fluctuations based on recent theoretical results [7]. I will also discuss how new promising compounds to study quantum ferromagnetism can be identified. [1] V. Taufour et al. Phys. Rev. Lett. 105, 217201 (2010). [2] H. Kotegawa et al. J. Phys. Soc. Jpn., 80, 8, 083703 (2011). [3] V. Taufour et al. Phys. Rev. Lett. 117, 037207 (2016). [4] U. S. Kaluarachchi et al. Nature Communications 8, 546 (2017). [5] U. S. Kaluarachchi et al. Phys. Rev. B 97, 045139 (2018). [6] V. Taufour et al. Phys. Rev. B 94 060410 (2016). [7] Belitz et al. Phys. Rev. Lett. 119 267207 (2017). 09:30 - 10:00 Thomas Vojta (Missouri University of Science and Technology) Collective modes at a disordered quantum phase transition FM criticality in d-electron systems (chair: Andrew Huxley) 10:30 - 11:00 Michael Baenitz (Max-Planck-Institut für Chemische Physik fester Stoffe) Binary Fe-based semi metals at the verge of order: correlations and criticality by the local NQR probe Magnetic resonance is a very suitable microscopic tool for correlated matter at the verge of long range magnetic ordering and aims in particular to expose the real nature of the magnetic fluctuations [antiferromagnetic (afm) versus ferromagnetic (fm)] by temperature and field scaling. Among 3d magnets tunable ferromagnetic quantum criticality could be found in itinerant Fe-based systems like $NbFe_2$ and $(Ta,V)Fe_2$ but also in systems with more localized Fe moments like $YFe_2Al_{10}$ and $YbFe_2Al_{10}$. Signatures of Kondo type of correlations are also found in some Fe-based binary semimetals. FeSi, $FeSb_2$, and $FeGa_3$ attracted great attention because of their nonmagnetic ground state and their promising low temperature thermoelectric performance assigned to correlated in gap states. Metallic behavior and Fe-based magnetism could be introduced by controlled substitutions on the Fe- or the framework site. Here we present a NQR study on the electron doped systems $Fe(Sb,Te)_2$ and $Fe(Ga,Ge)_3$. Being a local probe at zero field NQR could capture both most relevant points: (a) the degree of disorder upon doping and (b) the evolution of electronic correlations and the onset of critical fluctuations at the verge of long range order. Whereas in $Fe(Sb,Te)_2$ the predominant electronic NQR broadening effect provides a microscopic evidence for the electronic Griffith phase formation in $Fe(Ga,Ge)_3$ an absence of induced disorder was found from NQR. Here a crossover from an insulator to an antiferromagnetic correlated local moment metal in the low-doping regime and the evolution of itinerant ferromagnetism upon further doping is found. For the nearly critical concentration at the threshold of ferromagnetic order, the spin lattice relaxation rate exhibits a pronounced T −4/3 power law over two orders of magnitude in temperature, which indicates three-dimensional quantum critical ferromagnetic fluctuations. 11:00 - 11:30 Meigan Aronson (The University of British Columbia) Local quantum criticality and emergent moments It is now well accepted that the suppression of ordered states, such as magnetism, can give rise to a novel state with highly anomalous metallic characteristics. It remains a challenge to understand the role of the quantum critical fluctuations associated with the T=0 phase transition in inducing this new state, and if there is feedback between the fluctuations and the essential properties of the quasiparticles in the non-Fermi liquid electronic state. A lack of detailed experimental results on suitable QC systems has slowed progress towards this understanding. The quasi-two dimensional metal YFe2Al10 is a very promising system, comprised of layers of nearly square nets of Fe atoms. Despite the strong divergence of the susceptibility,\chi^{''}(T)~T^{-1.4} there is no evidence for magnetic order above 0.02 K. Inelastic neutron scattering measurements find that the scattering has no indication of incipient magnetic order, rather there is no measureable wave vector dependence, beyond that of the form factor. However, the scattering displays a strong energy divergence, and the Kramers-Kronig analysis indicates that it is these quantum critical excitations that are responsible for the divergence in the magnetic susceptibility. The scattering is also temperature independent, evidence that the imaginary part of the dynamical susceptibility \chi^{''}"(E,T) displays E/T scaling, where the absence of any characteristic energy scale beyond temperature is the hallmark of quantum critical systems. The quantum critical fluctuations are very strong in YFe2Al10, signaling that it is very close to a T=0 phase transition. The neutron scattering measurements reveal that the critical fluctuations are completely local, with each moment fluctuating incoherently, each with the same spectrum of excitations. These findings rule out the possibility that YFe2Al10 is near magnetic ordering, and instead it seems likely that the phase transition corresponds to a purely electronic phase transition, possibly an orbital selective Mott transition, where localized magnetic moments first emerge. FM criticality in quasi-1D systems (chair: Hilbert von Löhneysen) 11:30 - 12:00 Kristin Kliemt (Johann Wolfgang Goethe-Universität Frankfurt) Evolution from ferromagnetic quantum criticality to Fermi liquid behaviour in \(YbNi_4(P_{1-x}As_x)_2\) 12:00 - 12:30 Huiqiu Yuan (Zhejiang University) Ferromagnetic quantum critical point in a Ce-based heavy fermion compound Due to the low energy scales, the ground state of heavy fermion compounds can be readily tuned by parameters such as pressure, magnetic fields or doping. There have been numerous studies into antiferromagnetic quantum criticality and unconventional superconductivity, which have been revealed a possible role for spin fluctuations in the superconducting pairing [1, 2]. However, ferromagnetism in heavy fermion systems has been less frequently investigated [3, 4]. Here we report the resistivity and specific heat measurements of the ferromagnetic compound CeRh6Ge4 under pressure, which show the presence of a ferromagnetic critical point in this system. Similar results were also obtained by element substitutions. References: [1] N. D. Mathur et al., Nature 394, 39 (1998). [2] Z. F. Weng et al., Rep. Prog. Phys. 79, 094503 (2016) [3] A. Steppke et al. Science 339, 933 (2013). [4] M. Brando et al., Rev. Mod. Phys. 88, 025006 (2016). Magnetism and superconductivity (chair: Georg Knebel) 14:30 - 15:00 Kenji Ishida (Kyoto University) NMR studies on U-based ferromagnetic superconductors Since the discovery of superconductivity in ferromagnet UGe$_2$ under pressure [1], U-based ferromagnetic (FM) superconductors have attracted much attention since spin-triplet superconductivity is anticipated. Within them, URhGe[2] and UCoGe[3] show superconductivity at ambient pressure, and the latter has the highest superconducting (SC) transition temperature $T_{\rm Super}$ = 0.57 K below FM ordering at $T_{\rm Curie}$ =2.5 K. We have studied single-crystal UCoGe with microscopic measurements of $^{59}$Co nuclear magnetic resonance (NMR) and nuclear quadrupole resonance (NQR). We showed that superconductivity occurs in the FM region [4] and that both phenomena originate from U 5-$f$ electrons[5], resulting in the microscopic coexistence of ferromagnetism and superconductivity realized in UCoGe. We also studied the spin-dynamic properties from the measurements of $1/T_1$ and Knight shift along the each crystalline axis. The results show that both static and dynamic susceptibilities possess the strong Ising anisotropy along the $c$ axis being the easy axis and that the FM fluctuations are predominant at low temperatures and persist even below $T_{\rm Curie}$ [6]. From the angle-resolved NMR measurements, we found that the magnetic field along the $c$ axis ($H \parallel c$) strongly suppresses both the FM Ising-type fluctuations and superconductivity in the same manner [7]. On the other hand, we found that the field along the $b$ axis ($H \parallel b$) above 5 T enhances the Ising FM fluctuations at low temperatures, and that the superconductivity also becomes robust in the same field region[8]. These results strongly suggest that the characteristic FM fluctuations tuned by external fields induce unique spin-triplet superconductivity in UCoGe. This scenario is also supported by the recent $^{59}$Co-NQR/NMR measurements under pressure[9]. We will introduce NMR results on single-crystal URhGe [10,11] and UGe$_2$ done by other groups, and discuss the similarity and difference in these superconductors. The NMR/NQR studies on single-crystal UCoGe were done in the collaboration with T. Hattori, M. Manago, S. Kitagawa, K. Karube, Y. Ihara, K. Deguchi, N. K. Sato, and T. Yamamura. References [1] S.S. Saxena et al., Nature 406, 587 (2000). [2] D. Aoki et al, Nature 413, 613 (2001). [3] N. T. Huy et al., Phys.Rev. Lett. 99, 067006 (2007). [4] T. Ohta et al., J. Phys. Soc. Jpn. 77, 023707 (2008). [5] K. Karube et al. J. Phys. Soc. Jpn. 80, 064711 (2011). [6] Y. Ihara et al., Phys. Rev. Lett. 105, 206403 (2010). [7] T. Hattori et al. Phys. Rev. Lett. 108, 066403 (2012). [8] T. Hattori et al. J. Phys. Soc. Jpn. 83, 073708 (2014). [9] M. Manago et al. submitted to PRL. [10] Y. Tokunaga et al. Phys. Rev. Lett. 114, 216401 (2015). [11] Y. Tokunaga et al. Phys. Rev. B 93, 201112(R) (2016) 15:00 - 15:30 Christoph Geibel (Max-Planck-Institut für Chemische Physik fester Stoffe) \(CeRh_2As_2\): a new heavy-fermion superconductor with Rashba and quadrupolar interactions One of the hot topic in the field of superconductivity is the effect of breaking inversion symmetry in the presence of strong spin orbit interaction. This leads to a spin-splitted Fermi surface with unique momentum-locked spin polarization even in the absence of an external or an exchange field. Superconductivity appearing from such polarized bands is robust against Zeeman pair-breaking effect and can host mixed-parity pairing providing exotic superconducting states. So far, this kind of phenomena has only been studied in a handful of non-centrosymmetric system such as CeRhSi3 under pressure and quasi-two-dimensional transition-metal dichalcogenide superconductors. Here, we report the discovery of heavy-fermion superconductivity in CeRh2As2 with Tc ~ 0.25 K at ambient pressure. This compounds crystallizes in the CaBe2Ge2 structure type where inversion symmetry is locally broken at the Ce site, but is still present for the global structure. We observe a huge upper critical field of ≳ 12 T for the out-of-plane direction surpassing the Pauli-paramagnetic limit of ~ 0.5 T. This provides a clear signature of a Rashba-type in-plane spin polarization arising from an alternating asymmetric potential due to the broken local inversion symmetry. In addition our results indicate this system to be very close to a quantum critical point (QCP) with a further transition at To ~ 0.4 K. The presence of a quasi-quartet crystal electric field ground state as well as a very unusual B-T phase diagram suggest this transition to be a quadrupolar one. Therefore CeRh2As2 is a promising candidate for studying how heavy-fermion superconductivity behaves under the influence of a Rashba-type interactions and a possible multipolar QCP. 15:30 - 16:00 Daniel Braithwaite (Université Grenoble Alpes and CEA) Tuning ferromagnetism and superconductivity in URhGe with uniaxial stress qfm19 colloquium (chair: Inti Sodemann, MPIPKS) Andrey Chubukov (University of Minnesota) Interplay between superconductivity and non-Fermi liquid above a quantum critical point in a metal I discuss the interplay between non-Fermi liquid behavior and superconductivity near a quantum-critical point (QCP) in a metal. It is widely thought that the tendency towards superconductivity and towards non-Fermi liquid behavior compete with each other, and if the pairing interaction is reduced below a certain threshold, the system displays a naked non-Fermi liquid QC behavior. I show that the situation is more complex. First, there is a difference between spin-triple and spin-singlet superconductivity. For spin-triplet pairing, thermal fluctuations are crucial and make superconducting transition first order. For spin-singlet pairing, they are essentially irrelevant, and the transition is second order. Second, I show that for spin-singlet pairing, there are multiple solutions with the same gap symmetry. For all solutions, except one, Tc vanishes when the pairing interaction drops below the threshold. For the one special solution, Tc remains finite even when the pairing interaction is arbitrary small, despite that there is no Cooper logarithm. I argue that superconductivity between this Tc and a lower T, when other solutions appear, is special, as it is entirely induced by fermions with the first Matsubara frequency. I show that this has specific implications for the observable quantities, such as the density of states and the spectral function. Spin dynamics in ferromagnets (chair: Stephen Hayden) 09:00 - 09:30 Dmitry Sokolov (Max-Planck-Institut für Chemische Physik fester Stoffe) Hard-axis ordering in metallic ferromagnets 09:30 - 10:00 Marc Janoschek (Paul Scherrer Institut) High-precision studies of ferromagnetic quantum matter Strongly correlated metals near magnetic quantum phase transitions (QPTs) are prime examples for quantum matter. For clean ferromagnetic metals, Belitz-Kirkpatrick-Vojta (BKV) theory shows in excellent agreement with experiment that ferromagnetic QPTs are generally first-order due to the coupling of the magnetization to electronic soft modes, in contrast to the classical analogue that is an archetypical second-order phase transition. Our recent extensive magnetization study on the disordered ferromagnetic metal $UCo_{1-x}Fe_xGe$ demonstrate that BKV theory even correctly predicts that the second order nature of the QPT is restored because the electronic soft modes change their nature from ballistic to diffusive [1]. Despite this significant progress in understanding ferromagnetic metals, to date, little information is available on the underlying low-energy spin fluctuations that drive exotic behavior such as partial magnetic order, and topological non-Fermi liquid behavior, and unconventional spin-triplet superconductivity that is frequently observed in the vicinity of ferromagnetic QPTs. Notably, the outstanding challenge is that the underlying characteristic energy scales that drive these quantum matter states are tiny compared to typical electronic energy scales in solids, and are, in turn, notoriously difficult to measure. Here we discuss some of our recent results to showcase how current advances in the resolution of neutron spectroscopy. Using the novel Modulated IntEnsity by Zero Effort (MIEZE) technique implemented at the neutron spectrometer RESEDA in Munich, we achieve ultra-high energy resolution of 1 μeV and reveal that the spin fluctuations in UGe2 exhibit a dual nature arising from the interplay of localized and itinerant electronic degrees of freedom in consistent with spin-triplet superconductivity proposed for this material [2]. [1] K. Huang, S. Eley, P. F. S. Rosa, L. Civale, E. D. Bauer, R. E. Baumbach, M. B. Maple, and M. Janoschek, Phys. Rev. Lett. 117, 237202 (2016). [2] F. Haslbeck, S. Säubert, M. Seifert, C. Franz, M. Schulz, A. Heinemann, T. Keller, P. Das, J.D. Thompson, E. D. Bauer, C. Pfleiderer, and M. Janoschek, Phys. Rev. B 99, 014429 (2019). 10:00 - 10:30 Philipp Niklowitz (Royal Holloway, University of London) Neutron scattering of modulated magnetic order and low-energy excitations near the masked ferromagnetic quantum critical point in \(NbFe_2\) U-based FM superconductors (chair: Almut Schroeder) 11:00 - 11:30 Dai Aoki (Tohoku University) Ferromagnetism and unconventional superconductivity in uranium compounds 11:30 - 12:00 Nicholas Butch (National Institute of Standards and Technology, Gaithersburg) Spin triplet superconductivity in nearly ferromagnetic \(UTe_2\) I will discuss newly-discovered spin-triplet superconductivity in UTe2. Although the normal state is paramagnetic, unusual magnetization-field scaling suggests strong magnetic correlations and proximity to a ferromagnetic instability. The superconducting order parameter arises from strong ferromagnetic fluctuations, yielding a very high upper critical field with similar anisotropy to that found in established ferromagnetic superconductors. There is a large residual specific heat that implies that while electrons with parallel spins pair, only half of the available electrons participate, yielding a spin-polarized condensate that coexists with a spin-polarized metal. 12:00 - 12:30 Maria Teresa Mercaldo (Università degli studi di Salerno) Magnetically driven topological phases in spin-triplet superconductors Ferromagnetism in exotic Fermi liquids (chair: Thomas Vojta) 14:30 - 15:00 Theodore R. Kirkpatrick (University of Maryland) The quantum ferromagnetic transition in clean Dirac metals The nature of the quantum ferromagnetic phase transition in clean Dirac metals is studied. Generally it is shown that the transition is discontinuous one from a Dirac metal to a Weyl metal. Possible exceptions are pointed out. The results are somewhat surprising since strong spin-orbit scattering naively suppresses the mechanism that causes the first order transition ordinary metals. The important role of chirality is stressed. 15:00 - 15:30 Andreas Rost (University of St Andrews) Magnetism in topological semimetals - creating and controlling Weyl fermions 3D Dirac semimetals have attracted wide interest for their unique magnetotransport properties, many of which are related to the topological properties of their low energy band structure. A recent key focus is on the exploration and potential exploitation of new phenomena through the introduction of magnetism to such systems, with only a few candidate materials known to date. Here I will present magnetotransport, thermodynamic and neutron scattering experiments combined with DFT band structure calculations establishing the phase diagram of the magnetic inverse perovskite Eu3PbO. Detailed analysis across the low temperature phases reveals that this material hosts not only one, but a multitude of different topological phases that are easily controllable by an external magnetic field. Examples are a ferromagnetic Weyl phase, an antiferromagnetic Dirac phase, and two canted antiferromagnetic phases with both Weyl points and nodal lines. I will discuss the key features of the unusual band dispersions as well as the implications for surface states and the Hall effect. This opens up the possibility to manipulate the interplay of band topology, magnetism, and transport and I will discuss pathways to quantum phase transitions in such systems. 15:30 - 16:00 Andrey Chubukov (University of Minnesota) Collective modes near a Pomeranchuk instability I will discuss the behavior of collective excitations in a Fermi liquid near a Pomeranchuk transition towards a state with an order parameter characterized by angular momentum l. For each value of l, I will track the evolution of spin and/or charge collective modes with increasing magnitude of the Landau parameter F_l. I will argue that for each l there is one zero-sound mode, whose velocity vanishes at the Pomeranchuk instability, when F_l =-1. In a clean Fermi liquid the critical mode near the transition is purely relaxational in some cases; in others, it is almost propagating. This leads to qualitatively different time dependence of the susceptibility both near and at the transition. In a dirty Fermi liquid, the critical mode becomes purely relaxational in all cases. The situation is somewhat special for the l=1 order parameter coinciding with the spin or charge current. In this case, the residue of the critical mode vanishes at the Pomeranchuk transition, consistent with earlier results that the static susceptibility does not diverge at F_1 = -1. I will show, however, that the critical mode can be identified at any distance from the transition, and moves into the upper frequency half-plane at F_1 <-1, signaling that a Fermi surface gets distorted. The only peculiarity of a charge/spin current order parameter is that it takes longer to reach an equilibrium state at F_1 <-1. group photo (to be published on the workshop website) & coffee break FM-AFM/SDW competition (chair: Philipp Niklowitz) 09:00 - 09:30 Sven Friedemann (University of Bristol) Quantum tricritical points in \(NbFe_2\) Quantum phase transitions ferromagnetic metals impose long-standing questions including non-Fermi liquid behaviour with incompatible temperature dependencies in transport and thermodynamic properties. In most materials, ferromagnetic quantum critical points (QCPs) are avoided - most commonly through a change to 1st order as an initially 2nd order transition is tuned towards zero temperature. Here, we discuss our results on NbFe$_2$ in which the ferromagnetic quantum critical point is avoided through an intervening antiferromagnetic phase. We demonstrate that NbFe$_2$ is well described by a very general two-order-parameter model for a large set of samples with different compositions across the buried ferromagnetic QCP. This analysis establishes the presence of quantum tricritical points where both the uniform and finite wavelength susceptibility diverge. As our model is very general, we expect it to underlie a whole new class of quantum criticality in ferromagnets and other systems with two order parameters. 09:30 - 10:00 Cornelius Krellner (Johann Wolfgang Goethe-Universität Frankfurt) Evolution from ferromagnetism to antiferromagnetism in \(Yb(Rh_{1-x}Co_x)_2Si_2\) single crystals 10:00 - 10:30 Andrew G. Green (University College London) Anisotropy in critical itinerant ferromagnets FM-AFM/SDW competition (chair: Frank Krüger) 11:00 - 11:30 Sung-Sik Lee (McMaster University) Antiferromagnetic quantum critical metal Non-Fermi liquids are exotic metallic states which do not support well defined quasiparticles. Due to strong quantum fluctuations and the presence of extensive gapless modes near the Fermi surface, it has been difficult to understand universal low-energy properties of non-Fermi liquids. In this talk, we discuss recent progress made on field theories for non-Fermi liquids. Based on a dimensional regularization scheme which tunes the co-dimension of Fermi surface, critical exponents that control universal scaling behaviors of physical observables can be computed in a controlled way. The systematic expansion also provides important insight into strongly interacting non-Fermi liquids. This allows us to find the non-perturbative solution for the strange metal realized at the antiferromagnetic quantum critical point in 2+1 dimensions, and predict the exact critical exponents that can be experimentally tested in layered compounds. 11:30 - 12:00 Stephen Hayden (University of Bristol) Spin density wave order and fluctuations controlled by a magnetic field in \(Sr_3Ru_2O_7\) The layered oxide metal Sr$_3$Ru$_2$O$_7$ is close to ferromagnetic order at low temperatures. Thus it provides and model system to investigate the effect of a varying field induced exchange splitting on a 2D metal. For fields near 8T applied perpendicular to the RuO$_2$ planes spin density wave order is induced below 1K. I will describe our characterisation of the SDW ordered states using neutron scattering and show how the low-frequency magnetic quantum fluctuations can be controlled using a magnetic field. Our results suggest that the spin density wave order forms in a background of strong spin fluctuations. I will also show the magnetic excitations can be used to infer the low temperature specific heat. 12:00 - 12:30 Veronika Fritsch (Universität Augsburg) Suppression of ferromagnetic order in \(CeTiGe_3\) by V substitution CeTiGe$_3$ is a ferromagnetic Kondo system with a Curie temperature $T_C \approx 14$ K. The critical properties of pure CeTiGe3 follow the predictions of a classical 3D ferromagnet, despite a strong uniaxial anisotropy with the easy axis along the c-axis. This anisotropy is reduced with the substitution of Ti with V; CeVGe3 is an antiferromagnet with the ab-plane as easy plane. Furthermore the substitution of Ti with V results in a suppression of the ferromagnetic order with $T_C \rightarrow 0$ for $x \approx 0.4$. Around the critical concentration thermodynamic and NMR measurements point toward a mixture of ferromagnetic and antiferromagnetic spin-fluctuations. Overall our data indicate the existence of a quantum critical point in CeTi$_{1-x}$V$_x$Ge$_3$, which is, however, rather complex involving ferromagnetic and antiferromagnetic correlations. meeting at the institute's reception for the hiking tour meeting at guest house no. 4 for the visit of the exhibition terra mineralia in Freiberg meeting at the institute's reception for the guided city tour with an organ recital Quantum criticality in FM disordered systems (chair: Theodore R. Kirkpatrick) 09:00 - 09:30 Ryan Baumbach (Florida State University) Disordered ferromagnetic quantum phase transition in \(CePd_2P_2\)with Pd \(\rightarrow\)Ni substitution The Belitz-Kirkpatrick-Vojta (BKV) theory has emerged as a leading framework within which to describe ferromagnetic quantum phase transitions, and there already are many systems that have been described in this context. This includes both $d$- and $f$-electron magnets, which often feature intertwining energy scales such as the magnetic exchange (both sign and strength), the spin orbit interaction, the Kondo interaction, magnetic ion valence stability, and other factors. Furthermore, chemical/structural disorder plays an important role. Thus, it remains important to probe the universality of this theory by studying even more examples that span different parts of the electronic phase space. In this talk I will report results for the isoelectronic chemical substitution series $Ce(Pd_{1-x}Ni_x)_2Si_2$, where a possible ferromagnetic quantum phase transition is uncovered in the temperature-concentration ($T−x$) phase diagram. This behavior results partly from the contraction of the unit cell volume, which (i) tunes the relative strengths of the Kondo and Ruderman–Kittel–Kasuya–Yosida (RKKY) interactions and (ii) modifies the $f$-electron valence stability. The behavior is also influenced by the introduction of disorder through alloying. Near the critical region at $x_{\rm{cr}}$ $\approx$ 0.7, the rate of contraction of the unit cell volume strengthens, indicating that the cerium $f$ valence crosses over from trivalent to a noninteger value. Consistent with this picture, x-ray absorption spectroscopy measurements reveal that while $CePd_2P_2$ has a purely trivalent cerium $f$ state, $CeNi_2P_2$ has a small (< 10 \%) tetravalent contribution. In a broad region around $x_{\rm{cr}}$, there is a breakdown of Fermi-liquid temperature dependence seen in the heat capacity, signaling the influence of quantum critical fluctuations and disorder effects. Electrical transport measurements of clean $CePd_2P_2$ furthermore show that applied pressure has an initial effect similar to alloying on the ferromagnetic order. From these results, $CePd_2P_2$ emerges as a keystone system to test the BKV model in $f$-electron metals, where distinct behaviors are expected in the dirty and clean limits. 09:30 - 10:00 Gregory R. Stewart (University of Florida) Evolution of physical properties with decreasing size in \(U_2Pt_2In\), an undoped nFl compound In 2009, Kim, Stewart, and Samwer (PRB 79, 165119) investigated how the magnetic susceptibility, the magnetization, and the specific heat evolved with decreasing size in quantum critical CeRu0.8Rh1.2Si2 down to a particle size of 0.6 microns. They found evidence for Griffiths phase behavior with decreasing particle size, which below 3 microns began to be masked by uncompensated local moment defects seen in small particles of Ce compounds before. In order to avoid this masking behavior in small-sized CeRu0.8Rh1.2Si2, we report here on a similar study on samples of small particles of the undoped nFl compound U2Pt2In, also down to 0.6 microns in size. Uncompensated local moment defect peaks in the low temperature specific heat of CeRu0.8Rh1.2Si2 are avoided in our data for U2Pt2In, allowing a clearer look at the properties of this bulk nFl compound as a function of decreasing size. Properties of particles of U2Pt2In in the following size windows are reported: 10 - 20 microns; 3 - 10 microns; 1.2 - 3 microns; and 0.6 - 1.2 microns. 10:00 - 10:30 Matthias Vojta (Technische Universität Dresden) Cluster-glass phase in pyrochlore XY magnets with quenched disorder We study the impact of quenched disorder (random exchange couplings or site dilution) on easy-plane pyrochlore antiferromagnets. In the clean system, order-by-disorder selects a magnetically ordered state from a classically degenerate manifold. In the presence of randomness, however, different orders can be chosen locally depending on details of the disorder configuration. Using a combination of analytical considerations and classical Monte-Carlo simulations, we argue that any long-range-ordered magnetic state is destroyed beyond a critical level of randomness where the system breaks into magnetic domains due to random exchange anisotropies, becoming, therefore, a glass of spin clusters, in accordance with the available experimental data. These random anisotropies originate from off-diagonal exchange couplings in the microscopic Hamiltonian, establishing their relevance to other magnets with strong spin-orbit coupling. Quantum criticality in FM disordered systems (chair: Michael Baenitz) 11:00 - 11:30 Almut Schroeder (Kent State University) Ferromagnetic order and disorder in Ni-V alloys The Ni-alloy, Ni$_{1-x}$V$_x$, presents the opportunity to study a ferromagnetic (FM) quantum phase transition (QPT) with strong ''disorder'' introduced by ''random'' chemical substitution. Using complementary methods, neutron scattering, muon spin relaxation ($\mu $ SR) and magnetization, we show evidence of disorder and probe signatures of quantum Griffiths singularities close to the quantum critical concentration $x_c \approx11.6\%$ where the FM magnetic order is suppressed [1]. The key findings in Ni-V are: A detailed pair density function (PDF) analysis confirms that Ni-V is one of the rare binary Ni-alloys that form a solid solution (with random atomic occupation on a fcc-lattice) in the relevant Ni-rich regime. Field-dependent magnetization data display power laws with non-universal exponents in an extended concentration region around $x_c$. This signals the first evidence of quantum Griffiths phase (QGP) in the FM phase. $\mu$SR data also recognize a remaining dynamic magnetic cluster distribution besides the FM ordered response in the FM phase close to $x_c$ [1]. New small angle neutron scattering (SANS) data reveal a short-range magnetic cluster contribution and evidence for long-range FM domains for the same samples. The experimental signatures of a QGP with its limits are discussed in this Ni-alloy with strong short range and weak long-range interaction. in collaboration with: A. Gebretsadik, S. Bhattarai, J.-G. Lussier, A. Alyami, R. Wang, Kent State University; T. Vojta, Missouri University of S & T; K. Page, L. Debeer-Schmitt, ORNL; K. Krycka, NIST; P. J. Baker, F. L. Pratt, ISIS, STFC RAL; S. J. Blundell, T. Lancaster, J. S. Möller, Oxford [1] R. Wang, A. Gebretsadik, S. Ubaid-Kassis et al., Phys. Rev. Lett. 118, 267202 (2017) 11:30 - 12:00 Hilbert von Löhneysen (Karlsruher Institut für Technologie) Anomalous quantum-critical dynamic scaling the weak itinerant ferromagnet \(Sr_{1-x}Ca_{x}RuO_{3}\) The dynamics of continuous phase transitions is governed by the dynamic scaling exponent relating the correlation length and correlation time. For transitions at finite temperature, thermodynamic critical properties are independent of the dynamic scaling exponent z. In contrast, at quantum phase transitions where the transition temperature becomes zero, static and dynamic properties are inherently entangled by virtue of the uncertainty principle. Consequently, thermodynamic scaling equations explicitly contain the dynamic exponent. Basic thermodynamic measurements of thr magnetization M and the specific heat C (as a function of temperature and magnetic field) for the itinerant ferromagnet $Sr{1−x}Ca_xRuO_3$ show that the transition temperature becomes zero for x = 0.7. For samples of this Ca concentraton, we find dynamic scaling of M and C with highly unusual quantum critical dynamics. We observe a small dynamic scaling exponent z of 1.76 strongly deviating from current models of ferromagnetic quantum criticality and likely being governed by strong disorder in conjunction with strong electron–electron coupling. Work done in collaboration with C. L. Huang, D. Fuchs, M. Wissinger, R. Schneider, M. C. Ling, M. S. Scheurer, and J. Schmalian 12:00 - 12:30 Brian C Sales (Oak Ridge National Laboratory) Quantum criticality in the asymptotic limit of high disorder: a new twist on structural alloys U-based FM compounds (chair: Daniel Braithwaite) 14:30 - 15:00 Anne de Visser (University of Amsterdam) Superconducting and ferromagnetic phase diagram of UCoGe The intermetallic compound UCoGe has become a laboratory tool to study the unusual coexistence of itinerant ferromagnetism ($T_C$ = 3 K) and superconductivity ($T_s$ = 0.5 K) on the microscopic scale at ambient pressure [1]. Superconductivity in UCoGe is unconventional, as is demonstrated by the strongly anisotropic upper critical field and the extremely large $B_{c2}(0)$ values for magnetic field directions perpendicular to the ordered moment [2]. Here we report thermal expansion measurements in zero and applied magnetic fields on a single crystal of UCoGe around the magnetic and superconducting phase transitions [3]. The thermal expansion cell was mounted on a piezo-electric rotator in order to allow for a precise alignment of the magnetic field with the main crystal axes. The superconducting and magnetic phase diagram has been determined. With our bulk technique we confirm the S-shape of the upper-critical field for $B \parallel b$ and reinforcement of superconductivity above 6 T. At the same time the Curie point shifts towards lower temperatures when the field is applied along the $b$-axis. Our results lend further support to theoretical proposals of spin-fluctuation mediated reinforcement of superconductivity for $B \parallel b$. [1] N.T. Huy et al., Phys. Rev. Lett. 99, 067006 (2007). [2] D. Aoki et al., J. Phys. Soc. Jpn 78, 113709 (2009). [3] A.M. Nikitin et al., Phys. Rev. B 85, 115151 (2017). 15:00 - 15:30 Hisashi Kotegawa (Kobe University) NMR study on ferromagnetic critical point of \(UGe_2\) and the investigation of phase diagrams of some ferromagnets We report new NMR results on UGe2 with focusing the electronic state near the critical point between FM1 and FM2 phases. At the phase boundary, NMR can evaluate the microscopic states in each phase separately and detect the magnetic fluctuations originated from the critical point. We will discuss the relationship between superconductivity and the observed fluctuations. In the latter part of the presentation, we will show the pressure-temperature-magnetic field phase diagrams of some f-electrons ferromagnets. 15:30 - 16:00 Malte Grosche (University of Cambridge) Superconductivity in metallic magnets The threshold of metallic magnetism throws up fundamental questions that continue to challenge our understanding of correlated systems. Among these are the complex phase diagrams arising from the interplay of ferromagnetic and spin-density wave order, the non-Fermi liquid states observed near second order but in some cases also near first order quantum phase transitions, the consequences of disorder and inhomogeneity, the opportunities for topological defects to form, and the possibility of unconventional superconductivity. I will review progress with the layered iron-based superconductor YFe$_2$Ge$_2$ [1], which according to DFT [3] and neutron scattering studies [3] is closely balanced near the threshold of ferromagnetic and antiferromagnetic order. The high Sommerfeld ratio of order 100 mJ/molK$^2$ and a $T^{3/2}$ temperature dependence of the electrical resistivity at low temperature $T$ indicate that YFe$_2$Ge$_2$ is governed by strong electronic correlations. Our crystal growth study found that superconductivity in YFe$_2$Ge$_2$ is very sensitive to disorder scattering, pointing towards an unconventional pairing state. Only the most recent crystals, which reach resistance ratios of order 400, show a sharp superconducting heat capacity anomaly and enable a first examination of the superconducting state. 1. J. Chen et al. Phys. Rev. Lett. 116, 127001 (2016) and Phys. Rev. B 99, 020501 (2019). 2. D. J. Singh, Phys. Rev. B 89, 024505 (2014); A. Subedi Phys. Rev. B 89, 024504 (2014). 3. H. Wo, H. et al. cond-mat 180807262 (2018). Strong anisotropic FM systems and spin liquids (chair: Maria Teresa Mercaldo) 16:30 - 17:00 Natalia Perkins (University of Minnesota) Observing spin fractionalization in the Kitaev spin liquid Motivated by the ongoing effort to search for high-resolution signatures of quantum spin liquids, we investigate the temperature dependence of the indirect resonant inelastic x-ray scattering (RIXS) response for the Kitaev honeycomb model. We find that, as a result of spin fractionalization, the RIXS response changes qualitatively at two well-separated temperature scales, $T_L$ and $T_H$, which correspond to the characteristic energies of the two kinds of fractionalized excitations, Z2 gauge fluxes and Majorana fermions, respectively. While thermally excited Z2 gauge fluxes at temperature TL lead to a general broadening and softening of the response, the thermal proliferation of Majorana fermions at temperature $T_H\sim 10$ $T_L$ results in a significant shift of the spectral weight, both in terms of energy and momentum. Due to its exclusively indirect nature, the RIXS process we consider gives rise to a universal magnetic response and, from an experimental perspective, it directly corresponds to the K-edge of Ru$^{3+}$ in the Kitaev candidate material $\alpha$-RuCl$_3$. 17:00 - 17:30 Anton Jesche (Universität Augsburg) Anisotropy driven spontaneous magnetization A seemingly ferromagnetic state can emerge in an Ising system with sufficiently large anisotropy even when effective exchange interactions are prohibited by strong dilution. A prime example is given by iron-doped lithium nitride, $Li_2(Li_{1-x}Fe_x)N$ with x << 1. The basic magnetic unit is not a cluster or a magnetic domain but the magnetic moment of single, isolated iron atoms, which are embedded in the non-magnetic lithium nitride matrix [1]. This novel model system shows an extremely large magnetic anisotropy and allows to study quantum tunneling of the magnetization in a rather simple, inorganic material. At the origin of the outstanding properties is the special geometry that the iron finds itself in: the linear coordination between two nitrogen atoms is not subject to a Jahn-Teller distortion and gives rise to an unquenched orbital moment. Accordingly, this rare-earth-free material shows a huge hysteresis with coercivity fields of more than 11 Tesla. Recently we have found an extreme field dependence of the spin reversal process in $Li_2(Li_{1-x}Fe_x)N$ [2]. The spin-flip probability strongly increases in transverse magnetic fields that proves the resonant character of this magnetic tunneling process. Applied Longitudinal fields, on the other hand, lift the ground-state degeneracy and destroy the tunneling condition. An increase of the relaxation time by four orders of magnitude in applied fields of only a few milliTesla reveals exceptionally sharp tunneling resonances. Therefore, it is possible either to freeze the orientation and mimic ferromagnetic ordering or to promote the flip of a spin-state by tiny applied fields. The up and down states of the iron atom's spin have been made switchable and provide an 'atomic quantum bit' at easily accessible liquid helium temperatures. [1] A. Jesche et al. Nat. Commun. 5:3333 (2014) doi: 10.1038/ncomms4333 [2] M. Fix, J. H. Atkinson, P. C. Canfield, E. del Barco & A. Jesche Phys. Rev. Lett. 120, 147202 (2018) workshop dinner Lifshitz transitions in FM QC systems (chair: Cornelius Krellner) 09:00 - 09:30 Heike Pfau (Lawrence Berkeley National Laboratory) Magnetic field induced Lifshitz transitions in the ferromagnetic Kondo lattice \(YbNi_4P_2\) The composite heavy quasiparticles in Kondo lattice systems formed below the Kondo temperature T$_K$ hybridize with the conduction electrons. This leads to the formation of flat bands close to the Fermi level which alter the Fermi surface. Applying a magnetic field has two effects on this Fermi surface: First, it leads to Zeeman splitting of the flat bands, which can undergo Lifshitz transitions. Second, the composite quasiparticles eventually break up. The standard tool to determine Fermi surfaces are quantum oscillation measurements. However, they have to be performed at high magnetic fields, while their results are interpreted with the help of band structure calculations at zero field. The ability of quantum oscillation measurements to interpret zero field properties is therefore under intense discussion and a general understanding of the development of Kondo lattices in magnetic fields with an energy of the order of T$_K$ is needed. We studied the magnetic field development of the ferromagnetic Kondo lattice YbNi$_4$P$_2$ using low temperature thermopower and resistivity measurements. We observe several Zeeman driven Lifshitz transitions on top of a continuous suppression of the Kondo effect. The large number of Lifshitz transitions observed within a small energy window is the result of the flat renormalized band structure with strong 4f-electron character shaped by the Kondo lattice effect. Some of the Lifshitz transitions show a particularly strong signature in thermodynamic probes which can be a hint towards the involvement of the predicted quasi one-dimensional Fermi surface sheets. 09:30 - 10:00 Alexandre Pourret (CEA Grenoble) Thermoelectric power close to ferromagnetic instabilities 10:00 - 10:30 Joseph Betouras (Loughborough University) Effects of Lifshitz transitions in ferromagnetic superconductors: the case of URhGe In ferromagnetic superconductors, like URhGe, superconductivity coexists with magnetism near zero field, but then reappears in a finite field range, where the system also displays mass enhancement in the normal state. We present the theoretical understanding of this nonmonotonic behavior. We explore the multiband nature of URhGe and associate reentrant superconductivity and mass enhancement with the topological transition (Lifshitz) in one of the bands in a finite magnetic field. We find excellent agreement between our theory and a number of experimental results for URhGe, such as weakly first-order reentrant transition, the dependence of superconducting T$_c$ on a magnetic field, and the field dependence of the effective mass, the specific heat, and the resistivity in the normal state. Our theory can be applied to other ferromagnetic multiband superconductors. Spin dynamics in ferromagnets (chair: Dai Aoki) 11:00 - 11:30 Christian Pfleiderer (Technische Universität München) Neutron scattering studies of ferromagnetic quantum phase transition 11:30 - 12:00 Yasutomo Uemura (Columbia University) MuSR studies of itinerant electron systems MnSi, (Mn,Fe)Si and MnGe Muon Spin Relaxation (MuSR) studies reveal several fundamental aspects of quantum and thermal phase transitions and Skyrmions in itinerant electron magnets MnSi, (Mn,Fe)Si and MnGe. In quantum phase transitions of MnSi tuned by hydrostatic pressure and in (Sr,Ca)RuO3 by (Sr,Ca) substitutions, we found clear signature of first order transition associated with phase separation [1]. With (Mn,Fe) substitutions of 15 %, (Mn,Fe)Si exhibits features expected in second order transitions in pressure-tuned quantum evolution, which can be related to the effect of disorder [2]. Both in bulk MnSi, (Mn,Fe)Si and thin film of MnSi, we detected significant spin dynamics in the Skyrmion state, associated with dynamic critical behavior in the thermal transition from paramagnetic state in bulk MnSi and (Mn,Fe)Si, but without such dynamic critical behavior in the thin film of MnSi. In MnGe, we found peaking of 1/T1 relaxation rate and dynamic critical behavior in the boundary between the field induced ferromagnetic phase and helical phase with a 3-d Skyrmion lattice. The absence of critical behavior at boundary between para and 2-d Skyrmion phase with different winding number in MnSi thin film, and the existence of critical behavior between ferro and 3-d Skyrmion lattice with the same zero winding number in MnGe are consistent with effect of winding-number topology on critical behavior. [1] Y.J. Uemura et al., Nature Physics 3 (2007) 29-35 [2] T. Goko et al., npj Quantum Materials 2 (2017) 44. Skyrmion work was performed in collaboration with the RIKEN/Tokyo group of Y. Tokura, N. Kanazawa and N. Nagaosa. Magnetism and superconductivity (chair: Gregory R. Stewart) 14:00 - 14:30 Marcin M. Wysokiński (International Research Centre MagTop, Warszawa) Two-channel Stoner mechanism for ferromagnetic/antiferromagnetic phase transitions in metallic magnets Recent discovery of the pressure-induced phase transition to an antiferromagnetic state in itinerant ferromagnet $LaCrGe_3$ [1] has been interpreted as a genuine manifestation of ferromagnetic quantum critical effects. I will present an alternative approach to a switching between ferromagnetic and antiferromagnetic groundstates in d-electron magnets [2], the Stoner-like microscopic mechanism, that relies on a very generic electronic structure properties rather than quantum criticality. Mechanism can be conveniently parametrized to mirror general energy scales and oxidation states of the real material. In particular, it provides not only an interpretation of the pressure-induced ferromagnetic to antiferromagnetic transition in $LaCrGe_3$, but also lack thereof in $LaV_{0.16}Cr_{0.84}Ge_3$ [3]. I will also demonstrate applicability of the mechanism to $ZrZn_2$ and CrAs. If time permits, I will additionally comment on the interpretation of the pressure-induced antiferromagnetic to ferromagnetic phase transition in $USb_2$ based on similar physical principles [4]. [1] V. Taufour, et al., Phys. Rev. Lett. 117, 037207 (2016) [2] M. M. Wysokiński, arXiv:1808.04109v2 [3] X. Lin, et al., Phys. Rev. B 88, 094405 (2013) [4] M. M. Wysokiński, Phys. Rev. B 96 201115(R) (2017) 14:30 - 15:00 Georg Knebel (CEA Grenoble) UCoGe under extreme conditions Frustrated spin-density waves (chair: Andrew G. Green) 15:00 - 15:30 Chris O'Neill (The University of Edinburgh) Quantum criticality in a magnetically ordered stacked triangular lattice 15:30 - 16:00 Frank Krüger (University College London) Geometrically frustrated spin-density waves and Kondo physics in \(UAu_2\) discussion & departure
CommonCrawl
Topological obstructions to dominated splitting for ergodic translations on the higher dimensional torus DCDS Home On a new two-component $b$-family peakon system with cubic nonlinearity November 2018, 38(11): 5389-5413. doi: 10.3934/dcds.2018238 On the concentration of semiclassical states for nonlinear Dirac equations Xu Zhang , School of Mathematics and Statistics, Central South University, Changsha 410083, Hunan, China * Corresponding author: Xu Zhang Received July 2017 Revised December 2017 Published August 2018 Fund Project: The first author is supported by the China Postdoctoral Science Foundation 2017M611160 In this paper, we study the following nonlinear Dirac equation $\begin{equation*}-i\varepsilonα·\nabla w+aβ w+V(x)w = g(|w|)w, \ x∈ \mathbb{R}^3, \ {\rm for}\ w∈ H^1(\mathbb R^3, \mathbb C^4), \end{equation*}$ $a > 0$ is a constant, $α = (α_1, α_2, α_3)$ $α_1, α_2, α_3$ $β$ $4×4$ Pauli-Dirac matrices. Under the assumptions that $V$ $g$ are continuous but are not necessarily of class $C^1$ , when is super-linear growth at infinity we obtain the existence of semiclassical solutions, which converge to the least energy solutions of its limit problem as $\varepsilon \to 0$ Keywords: Dirac equation, semiclassical solutions, concentration. Mathematics Subject Classification: Primary: 35Q40; Secondary: 49J35. Citation: Xu Zhang. On the concentration of semiclassical states for nonlinear Dirac equations. Discrete & Continuous Dynamical Systems - A, 2018, 38 (11) : 5389-5413. doi: 10.3934/dcds.2018238 A. Ambrosetti, A. Malchiodi and W. M. Ni, Singularly perturbed elliptic equations with symmetry: existence of solutions concentrating on spheres Ⅰ, Comm. Math. Phys., 235 (2003), 427-466. doi: 10.1007/s00220-003-0811-y. Google Scholar T. Bartsch and Y. H. Ding, Solutions of nonlinear Dirac equations, J. Differential Equations, 226 (2006), 210-249. doi: 10.1016/j.jde.2005.08.014. Google Scholar J. Byeon and Z.-Q. Wang, Standing waves with a critical frequency for nonlinear Schrödinger equations, Arch. Rational Mech. Anal., 165 (2002), 295-316. doi: 10.1007/s00205-002-0225-6. Google Scholar J. Byeon and Z.-Q. Wang, Standing waves with a critical frequency for nonlinear Schrödinger equations. Ⅱ, Calc. Var. Partial Differential Equations, 18 (2003), 207-219. doi: 10.1007/s00526-002-0191-8. Google Scholar R. Dautray and J. L. Lions, Mathematical Analysis and Numerical Methods for Science and Technology, vol. 3, Springer, Berlin, 1990. Google Scholar P. D'Avenia, A. Pomponio and D. Ruiz, Semiclassical states for the onlinear Schrödinger equation on saddle points of the potential via variational methods, J. Funct. Anal., 262 (2012), 4600-4633. doi: 10.1016/j.jfa.2012.03.009. Google Scholar M. Del Pino and P. Felmer, Semi-classical states for nonlinear Schrödinger equations, J. Funct. Anal., 149 (1997), 245-265. doi: 10.1006/jfan.1996.3085. Google Scholar M. Del Pino and P. Felmer, Multi-peak bound states for nonlinear Schrödinger equations, Ann. Inst. H. Poincaré Anal. Non Lináire, 15 (1998), 127-149. doi: 10.1016/S0294-1449(97)89296-7. Google Scholar M. Del Pino, M. Kowalczyk and J. C. Wei, Concentration on curves for nonlinear Schrödinger equations, Comm. Pure Appl. Math., 60 (2007), 113-146. doi: 10.1002/cpa.20135. Google Scholar Y. H. Ding, Variational Methods for Strongly Indefinite Problems, Interdiscip. Math. Sci., vol. 7. World Scientific Publ., Singapore, 2007. doi: 10.1142/9789812709639. Google Scholar Y. H. Ding, Semi-classical ground states concentrating on the nonlinear potential for a Dirac equation, J. Differential Equations, 249 (2010), 1015-1034. doi: 10.1016/j.jde.2010.03.022. Google Scholar Y. H. Ding, C. Lee and B. Ruf, On semiclassical states of a nonlinear Dirac equation, Proc. Roy. Soc. Edinburgh Sect. A, 143 (2013), 765-790. doi: 10.1017/S0308210511001752. Google Scholar Y. H. Ding and X. Y. Liu, Semi-classical limits of ground states of a nonlinear Dirac equation, J. Differential Equations, 252 (2012), 4962-4987. doi: 10.1016/j.jde.2012.01.023. Google Scholar Y. H. Ding and B. Ruf, Solutions of a nonlinear Dirac equation with external fields, Arch. Ration. Mech. Anal., 190 (2008), 57-82. doi: 10.1007/s00205-008-0163-z. Google Scholar Y. H. Ding and B. Ruf, Existence and concentration of semiclassical solutions for Dirac equations with critical nonlinearities, SIAM J. Math. Anal., 44 (2012), 3755-3785. doi: 10.1137/110850670. Google Scholar Y. H. Ding and J. C. Wei, Stationary states of nonlinear Dirac equations with general potentials, J. Math. Phys., 20 (2008), 1007-1032. doi: 10.1142/S0129055X0800350X. Google Scholar Y. H. Ding, J. C. Wei and T. Xu, Existence and concentration of semi-classical solutions for a nonlinear Maxwell-Dirac system, J. Math. Phys., 54 (2013), 061505, 33pp. doi: 10.1063/1.4811541. Google Scholar Y. H. Ding and T. Xu, Localized concentration of semi-classical states for nonlinear Dirac equations, Arch. Rational Mech. Anal., 216 (2015), 415-447. doi: 10.1007/s00205-014-0811-4. Google Scholar M.J. Esteban, M. Lewin and E. Séré, Variational methods in relativistic quantum mechanics, Bull. Amer. Math. Soc. (N.S.), 45 (2008), 535-593. doi: 10.1090/S0273-0979-08-01212-3. Google Scholar M. J. Esteban and E. Séré, Stationary states of the nonlinear Dirac equation: A variational approach, Commun. Math. Phys., 171 (1995), 323-350. doi: 10.1007/BF02099273. Google Scholar G. M. Figueiredo and M. T. O. Pimenta, Existence of ground state solutions to Dirac equations with vanishing potentials at infinity, J. Differential Equations, 262 (2017), 486-505. doi: 10.1016/j.jde.2016.09.034. Google Scholar A. Floer and A. Weistein, Nonspreading wave pockets for the cubic Schrödinger equation with a bounded potential, J. Funct. Anal., 69 (1986), 397-408. doi: 10.1016/0022-1236(86)90096-0. Google Scholar D. Gilbarg and N. Trudinger, Elliptic Partial Differential Equations of Second Order, Springer-Verlag, Berlin Heidelberg New York, 1997. Google Scholar L. Jeanjean and K. Tanaka, Singularly perturbed elliptic problems with superlinear or asymptotically linear nonlinearities, Calc. Var. Partial Differential Equations, 21 (2004), 287-318. doi: 10.1007/s00526-003-0261-6. Google Scholar X. Kang and J. C. Wei, On interacting bumps of semi-classical states of nonlinear Schrödinger equations, Adv. Differ. Equ., 5 (2000), 899-928. Google Scholar Y. Y. Li, On a singularly perturbed elliptic equation, Adv. Diff. Equ., 2 (1997), 955-980. Google Scholar P. L. Lions, The concentration-compactness principle in the calculus of variations. The locally compact case, parts 1 and 2, Ann. Inst. H. Poincaré Anual. Non Linéair, 1 (1984), 109-145,223-283. doi: 10.1016/S0294-1449(16)30422-X. Google Scholar S. B. Liu, On superlinear Schrödinger equations with periodic potential, Calc. Var. Partial Differential Equations, 45 (2012), 1-9. doi: 10.1007/s00526-011-0447-2. Google Scholar F. Merle, Existence of stationary states for nonlinear Dirac equations, J. Differential Equations, 74 (1988), 50-68. doi: 10.1016/0022-0396(88)90018-6. Google Scholar Y. G. Oh, Existence of semiclassical bound states of nonlinear Schrödinger equations with potentials of the class $(V)_a$, Comm. Partial. Diff. Eq., 13 (1988), 1499-1519. doi: 10.1080/03605308808820585. Google Scholar A. Pankov, On decay of solutions to nonlinear Schrödinger equations, Proc. Amer. Math. Soc., 136 (2008), 2565-2570. doi: 10.1090/S0002-9939-08-09484-7. Google Scholar P. H. Rabinowitz, On a class of nonlinear Schrödinger equations, Z. Angew. Math. Phys., 43 (1992), 270-292. doi: 10.1007/BF00946631. Google Scholar A. Szulkin and T. Weth, Ground state solutions for some indefinite variational problems, J. Funct. Anal., 257 (2009), 3802-3822. doi: 10.1016/j.jfa.2009.09.013. Google Scholar X. Wang, On concentration of positive bound states of nonlinear Schrödinger equations, Comm. Math. Phys., 153 (1993), 229-244. doi: 10.1007/BF02096642. Google Scholar X. Wang and B. Zeng, On concentration of positive bound states of nonlinear Schrödinger equations with competing potential functions, SIAM J. Math. Anal., 28 (1997), 633-655. doi: 10.1137/S0036141095290240. Google Scholar J. Zhang, X. Tang and W. Zhang, Ground state solutions for nonperiodic Dirac equation with superquadratic nonlinearity, J. Math. Phys., 54 (2013), 101502, 10pp. doi: 10.1063/1.4824132. Google Scholar J. Zhang, X. Tang and W. Zhang, On ground state solutions for superlinear Dirac equation, Acta Math. Sci. Ser. B Engl. Ed., 34 (2014), 840-850. doi: 10.1016/S0252-9602(14)60054-0. Google Scholar Jian Zhang, Wen Zhang, Xiaoliang Xie. Existence and concentration of semiclassical solutions for Hamiltonian elliptic system. Communications on Pure & Applied Analysis, 2016, 15 (2) : 599-622. doi: 10.3934/cpaa.2016.15.599 Xudong Shang, Jihui Zhang. Multiplicity and concentration of positive solutions for fractional nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2018, 17 (6) : 2239-2259. doi: 10.3934/cpaa.2018107 Anatoli Babin, Alexander Figotin. Newton's law for a trajectory of concentration of solutions to nonlinear Schrodinger equation. Communications on Pure & Applied Analysis, 2014, 13 (5) : 1685-1718. doi: 10.3934/cpaa.2014.13.1685 A. Carati. On the existence of scattering solutions for the Abraham-Lorentz-Dirac equation. Discrete & Continuous Dynamical Systems - B, 2006, 6 (3) : 471-480. doi: 10.3934/dcdsb.2006.6.471 César E. Torres Ledesma. Existence and concentration of solutions for a non-linear fractional Schrödinger equation with steep potential well. Communications on Pure & Applied Analysis, 2016, 15 (2) : 535-547. doi: 10.3934/cpaa.2016.15.535 Min Liu, Zhongwei Tang. Multiplicity and concentration of solutions for Choquard equation via Nehari method and pseudo-index theory. Discrete & Continuous Dynamical Systems - A, 2019, 39 (6) : 3365-3398. doi: 10.3934/dcds.2019139 Zhongyuan Liu. Concentration of solutions for the fractional Nirenberg problem. Communications on Pure & Applied Analysis, 2016, 15 (2) : 563-576. doi: 10.3934/cpaa.2016.15.563 Shuangjie Peng, Jing Zhou. Concentration of solutions for a Paneitz type problem. Discrete & Continuous Dynamical Systems - A, 2010, 26 (3) : 1055-1072. doi: 10.3934/dcds.2010.26.1055 Yemin Chen. Analytic regularity for solutions of the spatially homogeneous Landau-Fermi-Dirac equation for hard potentials. Kinetic & Related Models, 2010, 3 (4) : 645-667. doi: 10.3934/krm.2010.3.645 Harald Friedrich. Semiclassical and large quantum number limits of the Schrödinger equation. Conference Publications, 2003, 2003 (Special) : 288-294. doi: 10.3934/proc.2003.2003.288 Silvia Cingolani, Mnica Clapp, Simone Secchi. Intertwining semiclassical solutions to a Schrödinger-Newton system. Discrete & Continuous Dynamical Systems - S, 2013, 6 (4) : 891-908. doi: 10.3934/dcdss.2013.6.891 Linfeng Mei, Wei Dong, Changhe Guo. Concentration phenomenon in a nonlocal equation modeling phytoplankton growth. Discrete & Continuous Dynamical Systems - B, 2015, 20 (2) : 587-597. doi: 10.3934/dcdsb.2015.20.587 Olivier Bonnefon, Jérôme Coville, Guillaume Legendre. Concentration phenomenon in some non-local equation. Discrete & Continuous Dynamical Systems - B, 2017, 22 (3) : 763-781. doi: 10.3934/dcdsb.2017037 Marcel Braukhoff. Semiconductor Boltzmann-Dirac-Benney equation with a BGK-type collision operator: Existence of solutions vs. ill-posedness. Kinetic & Related Models, 2019, 12 (2) : 445-482. doi: 10.3934/krm.2019019 Jun Wang, Lu Xiao. Existence and concentration of solutions for a Kirchhoff type problem with potentials. Discrete & Continuous Dynamical Systems - A, 2016, 36 (12) : 7137-7168. doi: 10.3934/dcds.2016111 Wenmin Gong, Guangcun Lu. On Dirac equation with a potential and critical Sobolev exponent. Communications on Pure & Applied Analysis, 2015, 14 (6) : 2231-2263. doi: 10.3934/cpaa.2015.14.2231 Shuji Machihara. One dimensional Dirac equation with quadratic nonlinearities. Discrete & Continuous Dynamical Systems - A, 2005, 13 (2) : 277-290. doi: 10.3934/dcds.2005.13.277 Xiaoyan Lin, Xianhua Tang. Solutions of nonlinear periodic Dirac equations with periodic potentials. Discrete & Continuous Dynamical Systems - S, 2019, 12 (7) : 2051-2061. doi: 10.3934/dcdss.2019132 Yu Chen, Yanheng Ding, Tian Xu. Potential well and multiplicity of solutions for nonlinear Dirac equations. Communications on Pure & Applied Analysis, 2020, 19 (1) : 587-607. doi: 10.3934/cpaa.2020028 Silvia Cingolani, Mónica Clapp. Symmetric semiclassical states to a magnetic nonlinear Schrödinger equation via equivariant Morse theory. Communications on Pure & Applied Analysis, 2010, 9 (5) : 1263-1281. doi: 10.3934/cpaa.2010.9.1263 PDF downloads (102) HTML views (85) Xu Zhang
CommonCrawl
Arc length contest! Minimize the arc length of $f(x)$ when given three conditions. Contest: Give an example of a continuous function $f$ that satisfies three conditions: $f(x) \geq 0$ on the interval $0\leq x\leq 1$; $f(0)=0$ and $f(1)=0$; the area bounded by the graph of $f$ and the $x$-axis between $x=0$ and $x=1$ is equal to $1$. Compute the arc length, $L$, for the function $f$. The goal is to minimize $L$ given the three conditions above. $\mathbf{\color{red}{\text{Contest results:}}}$ $$ \begin{array}{c|ll} \hline \text{Rank} & \text{User} & {} & {} & \text{Arc length} \\ \hline \text{1} & \text{robjohn $\blacklozenge$} & {} & {} & 2.78540 \\ \text{2} & \text{Glen O} & {} & {} & 2.78567 \\ \text{3} & \text{mickep} & {} & {} & 2.81108 \\ \text{4} & \text{mstrkrft} & {} & {} & 2.91946 \\ \text{5} & \text{MathNoob} & {} & {} & 3.00000 \\\hline \text{-} & \text{xanthousphoenix} & {} & {} & 2.78540 \\ \text{-} & \text{Narasimham} & {} & {} & 2.78 \\ \end{array}$$ Original question after contest statement: The contest question was inspired by this paper. Can anyone come up with a different entry than those listed in the table below? $$ \begin{array}{c|ll} \hline \text{Rank} & \text{Function} & {} & {} & \text{Arc length} \\ \hline \text{1} & 1.10278[\sin(\pi x)]^{0.153764} & {} & {} & 2.78946 \\ \text{2} & (8/\pi)\sqrt{x-x^2} & {} & {} & 2.91902 \\ \text{3} & 1.716209468\sqrt{x}\,\mathrm{arccos}(x) & {} & {} & 2.91913 \\ \text{4} & (8/\pi)x\,\mathrm{arccos}(x) & {} & {} & 3.15180 \\ \text{5} & (15/4)x\sqrt{1-x} & {} & {} & 3.17617 \\ \text{6} & -4x\ln x & {} & {} & 3.21360 \\ \text{7} & 10x(1-\sqrt{x}) & {} & {} & 3.22108 \\ \text{8} & -6x^2+6x & {} & {} & 3.24903 \\ \text{9} & 9.1440276(2^x-x^2-1) & {} & {} & 3.25382 \\ \text{10} & (-12/5)(x^3+x^2-2x) & {} & {} & 3.27402 \\ \end{array}$$ calculus contest-math recreational-mathematics big-list calculus-of-variations Daniel W. Farlow Daniel W. FarlowDaniel W. Farlow 18k1111 gold badges4848 silver badges9090 bronze badges $\begingroup$ This sounds like a calculus of variations problem, but I'm not too familiar with the subject. Someone who is might want to consider adding the tag. $\endgroup$ – DanielV Jan 28 '15 at 5:29 $\begingroup$ @DanielV. Yes, given area for minimum length with constrained /fixed boundary line slope. $\endgroup$ – Narasimham Feb 6 '15 at 16:28 $\begingroup$ Is it possible to determine the minimum/infimum of all possible arc lengths without giving the function explicitly using calc of variations? $\endgroup$ – dalastboss Feb 6 '15 at 21:01 $\begingroup$ A familiar problem. Looks like Stewart should have credited Riddle (and maybe he does somewhere in the front matter or back matter). $\endgroup$ – alex.jordan Feb 9 '15 at 7:09 $\begingroup$ @M.Wind I don't understand your quibble--have you not seen some of the other contests on this site where nothing at all is given and users are encouraged to contribute something valuable? Or did you just look through all of my questions and try to find something you could harp on? Also, reputation does not matter in a case like this--I have had some of my own questions closed, downvoted, deleted, etc. Meanwhile, Andre Nicolas (330k) has received very poor treatment from a number of users bent on deleting questions to which he has provided answers. So: what exactly is your point? $\endgroup$ – Daniel W. Farlow Jul 9 '15 at 16:04 Find the Shape of the Graph We wish to minimize $$ \int_0^1\sqrt{f'(x)^2+1}\,\mathrm{d}x\tag{1} $$ while keeping $$ \int_0^1f(x)\,\mathrm{d}x=1\tag{2} $$ This means that we wish to find an $f$ so that the variation of length is $0$ $$ \int_0^1\frac{f'(x)\,\delta f'(x)}{\sqrt{f'(x)^2+1}}\,\mathrm{d}x=0\tag{3} $$ which, after integration by parts, noting that $\delta f(0)=\delta f(1)=0$, becomes $$ \int_0^1\frac{f''(x)\,\delta f(x)}{\sqrt{f'(x)^2+1}^{\,3}}\,\mathrm{d}x=0\tag{4} $$ for all variations of $f$, $\delta f$, so that the variation of area is $0$ $$ \int_0^11\,\delta f(x)\,\mathrm{d}x=0\tag{5} $$ This means that $\frac{f''(x)}{\sqrt{f'(x)^2+1}^{\,3}}$ is perpendicular to all $\delta f$ that $1$ is. This is so only when there is a $\lambda$ so that $$ \frac{f''(x)}{\sqrt{f'(x)^2+1}^{\,3}}=\lambda\tag{6} $$ However, $(6)$ just says that the curvature of the graph of $f$ is $\lambda$. That is, the graph of $f$ is an arc of a circle. Find the Length of the Arc Since the length of the chord of the circle we want is $1$, we have $$ 2r\sin\left(\frac\theta2\right)=1\tag{7} $$ Since the area cut off by this chord is $1$, we have $$ r^2\left[\frac\theta2-\sin\left(\frac\theta2\right)\cos\left(\frac\theta2\right)\right]=1\tag{8} $$ Square $(7)$ to get $$ 2r^2(1-\cos(\theta))=1\tag{9} $$ and rewrite $(8)$ to get $$ \frac12r^2(\theta-\sin(\theta))=1\tag{10} $$ Solve $4(1-\cos(\theta))=\theta-\sin(\theta)$ to get $$ \theta=4.3760724130128873845\tag{11} $$ and then $(7)$ gives $$ r=0.61313651252231835636\tag{12} $$ This would lead to a minimum length of $$ L=r\theta=2.6831297778598481320\tag{13} $$ Unfortunately, since $\theta\gt\pi$, the minimizing curve is an arc that cannot be represented by a function. The minimizing curve that is closest to the graph of a function is the curve that joins $(0,0)$ and $(1,0)$ to the endpoints of $$ y=1-\frac\pi8+\sqrt{x-x^2}\tag{14} $$ which has a length of $$ 2+\frac\pi4=2.7853981633974483096\tag{15} $$ However, this curve is not the graph of a function. A Sequence of Approximations $$ f_n(x)=\frac1{c_n}\left(1-\frac\pi8+\sqrt{x-x^2}\right)\left(x-x^2\right)^{1/n}\tag{16} $$ where $$ c_n=\left(1-\frac\pi8\right)\frac{\Gamma\left(1+\frac1n\right)^2}{\Gamma\left(2+\frac2n\right)}+\frac{\Gamma\left(\frac32+\frac1n\right)^2}{\Gamma\left(3+\frac2n\right)}\tag{17} $$ As $n\to\infty$, the length of $f_n$ approaches $2+\frac\pi4$. At $n=100$, we get a length of $L=2.7857313936$, less than $\frac1{3000}$ above the minimum: At $n=1000$, we get a length of $L=2.7854017568$, less than $\frac1{250000}$ above the minimum. robjohn♦robjohn $\begingroup$ You have said it all! I would give you +4 for this if I could. $\endgroup$ – TonyK Feb 4 '15 at 10:48 $\begingroup$ This is such a sexy answer. $\endgroup$ – Daniel W. Farlow Feb 5 '15 at 3:42 $\begingroup$ You seem to have made a mistake, as the equation for curvature has an exponent of $\frac32$ on the denominator, not $\frac12$. That being said, constant curvature is the "true" minimal arc length (so the mistake is in the variations expression, not the conclusion). $\endgroup$ – Glen O Feb 6 '15 at 10:56 $\begingroup$ @GlenO: Ah, thanks. I miscomputed $\delta s$ and it should now be fixed. $\endgroup$ – robjohn♦ Feb 6 '15 at 11:17 The absolute least value you can get is a rectangle topped by a half circle (the circle has the best area to arc length ratio of any shape) with a total arc length of $2 \big(1 - \frac{\pi}{8}\big) + \frac{\pi}{2} \approx 2.78539$. If you use Fourier approximation, you can come arbitrarily close to this limit. (I assume the fun of this challenge is to find an arbitrarily "low-term" function.) xanthousphoenixxanthousphoenix $\begingroup$ Are you certain you can approximate this with a fourier transform? Fourier approximations tend to have "bunny ears" around corner points in graphs, which makes them fine for approximating area but not so fine for approximating arc length. $\endgroup$ – DanielV Jan 28 '15 at 5:56 $\begingroup$ You could try $C_n(x-x^2)^{1/n}\left(1-\frac\pi8+\sqrt{\frac14-(x-\frac12)^2}\right)$ with $n$ large ($C_n$ is a number a little larger than $1$ to make the area come out to $1$). $\endgroup$ – Jonas Meyer Jan 28 '15 at 6:39 $\begingroup$ @JonasMeyer: I honestly had not read your comment until this morning, although my approximations are precisely what you suggest here. The nice thing is that $C_n$ can be computed in terms of Beta functions. $\endgroup$ – robjohn♦ Feb 4 '15 at 16:40 Without a deeper thought or analysis, I thought it could be fun to look at parts of (translated) superellipses, and maybe make top 10 with it. And indeed it worked. Thus, I defined $g(x,n)=(1-|x|^n)^{1/n}$, and then $$ f(x,n)=g(2x-1,n) = (1-|2x-1|^n)^{1/n}. $$ Normalizing $c_n=1/\int_0^1 f(x,n)\,dx$ and then calculating the length of $c_n f(x,n)$, it looked like the optimum choice was $n=4$. The constant $c_4\approx 1.07871$. The arc length of $$ 1.07871(1-|2x-1|^4)^{1/4} $$ was numerically calculated to be $$ 2.81108, $$ which I leave as my contribution. The graph of $c_4f(x,4)$ is shown below: mickepmickep $\begingroup$ (+1) I've checked the length and I get $2.81108$ as well. This is between functions $1$ and $2$ in the list. $\endgroup$ – robjohn♦ Feb 4 '15 at 20:56 $\begingroup$ I just wrote a program in Mathematica 8 to compute arclength (though I see there is a built-in function in Mathematica 10 to do the same thing). It says that the graph of your function has $L=2.8110842164$ $\endgroup$ – robjohn♦ Feb 6 '15 at 17:55 A nice solution can be obtained by modifying the "exact" solution. The "exact" solution is $$ f(x) = \frac{8-\pi}8 + \sqrt{x(1-x)} $$ which has an arc length of $\frac{8+\pi}4$. As such, I propose a solution of the form $$ f(x) = \sqrt{x(1-x)}(1+g(x)) $$ where the "exact" solution uses $g(x)=(8-\pi)/(8\sqrt{x(1-x)})$. We want a solution similar to this, but with a finite value at $x=0$ and $x=1$. As such, I propose a simple modification. $$ f(x) = \sqrt{x(1-x)}\left(1+\frac{A}{\sqrt{(x+B)(1+B-x)}}\right) $$ Note that we recover the "exact" solution if $B=0$ and $A=\frac{8-\pi}8$. We can thus get arbitrarily close to this solution by selecting appropriate values for $A$ and $B$. Although a closed-form expression relating the two parameters isn't obvious, values can be chosen numerically. For example, for $B=0.0001$, we have $A\approx\frac{8-\pi}8+0.00058333971346\approx0.60788425801473$. For these, we have $$ \int_0^1 \sqrt{1+f'(x)^2}dx\approx 2.78567 \approx \frac{8+\pi}4 + 2.67\times10^{−4} $$ In this case, the expression works out to be $$ f(x)=\sqrt{x(1-x)}\left(1+\frac{0.60788425801473}{\sqrt{(x+0.0001)(1.0001-x)}}\right) $$ Note that this can also be expressed as $$ f(x)=\sqrt{x(1-x)}\left(1+\frac{0.60788425801473}{\sqrt{x(1-x)+0.00010001}}\right)\tag{$\dagger$} $$ Here is the graph of the $f(x)$ given in $(\dagger)$: robjohn♦ Glen OGlen O $\begingroup$ Using your $f$, I get a length of $L=2.7856654010$ $\endgroup$ – robjohn♦ Feb 6 '15 at 22:13 $\begingroup$ I see that your plot misses the lower part of the curve (below $y=0.6$). I noticed that in my plots, too, until I used ParametricPlot and specified PlotRange->All. Also, AspectRatio->Automatic gives a $1$-$1$ $x$-$y$ scaling. $\endgroup$ – robjohn♦ Feb 7 '15 at 0:49 $\begingroup$ @induktio - thanks for adding the graph. $\endgroup$ – Glen O Feb 7 '15 at 1:03 $\begingroup$ @robjohn - I rounded to match the values expressed in the question's top 10, in part because I didn't entirely trust the value I was getting from Maxima. Thanks for the higher-accuracy value. $\endgroup$ – Glen O Feb 7 '15 at 1:06 $\begingroup$ I wrote a program in Mathematica to recursively subdivide until a given error per vertex. Your function took $122037$ points and had a maximum error of $1.22\times10^{-10}$ (though the error is typically about $\frac14$ of that). $\endgroup$ – robjohn♦ Feb 7 '15 at 1:16 I would not be content without a proper derivation of Dido's problem of variational calculus with constraints of moving boundary considered. When properly done I expect the curvature would be proportional to the square or cube or some other function of $y$-coordinate. For time being proceeding purely on squared variation hypothesis for curvature as: $$ k_g = - y^2 / a^3, $$ where $a$ is a constant, I obtained the above stationary closed loop. Numerically adjusting constant $a$ and initial $y_i (a = 0.7925, y_i = 1.143)$, it is close to the results listed here. The constants are such that perturbation causes the loops to get either progressive or regressive. The area is not very accurately $1.0$ ($\sim 0.98$ only) satisfied, length is approximately $2.78$. Improvement of numerical accuracy possible, but proper theoretical basis is necessary. In this hypothetical case, hyper-Elliptic Integrals are involved. NarasimhamNarasimham $\begingroup$ Solving Dido's problem says that non-constrained (free-standing) pieces of the curve should have constant curvature (this can be done in a bit more generality, but the solution in my answer is good enough for this problem). Why do you say that the curvature should vary with the $y$-coordinate? $\endgroup$ – robjohn♦ Feb 7 '15 at 1:26 $\begingroup$ Curvature discontinuity at $ y = 1 - \pi/8 $ is ruled out. Since classical Dido needs constant profile curvature. Alternatively curvature is to be made variable, proportional to x or y wlog , starting with zero curvature @ y=0. The simplest I could think of is $ k_g = f(y) $ so that a single ODE characterizes the contour. $\endgroup$ – Narasimham Feb 11 '15 at 5:33 $\begingroup$ The constant curvature applies to unconstrained portions of the solution. Your solution is unconstrained over its entire length, yet does not have constant curvature. Therefore, this solution can be improved by adjusting its unconstrained portion. $\endgroup$ – robjohn♦ Feb 11 '15 at 9:58 An easy approach is to simply construct an ellipse with its upper half satisfying the above conditions. An ellipse is defined via $2$ numbers $a$ and $b$ which are each the half of the major and minor axis of the ellipse. Then all points $(x,y)$ which suffice the following equation are on the ellipse: $$ \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1 $$ Or to get the upper half of the eclipse as a function: $$ y = b \, \sqrt{\left(1 - \frac{x^2}{a^2}\right) } $$ The area $A$ of the complete ellipse is given via $A = \pi\,a\,b$ and therefore our first condition translates to: $$ \frac{1}{2}\,\pi\,a\,b = 1$$ Also, as we want that $f(0) = 0 = f(1)$, we have: $$ 2\,a = 1 $$ That already gives us $$ a = \frac{1}{2} \\ b = \frac{4}{\pi} $$ and therefore an ellipse with the correct size. However, this results in an ellipse which intersects the $x$-Axis at $x_1=-0.5$ and $x_2=0.5$. To meet our conditions, we move the ellipse $0.5$ to the right and get: $$ y = \frac{4}{\pi} \, \sqrt{1 - 4\,(x-0.5)^2 }\tag{$\dagger$} $$ Now we simply let Wolfram Alpha do the computation for the arc length. The result is $$ 2.919463, $$ and the graph in $(\dagger)$ appears below: mstrkrftmstrkrft $\begingroup$ This is mickep's answer with $n=2$. I assume he used $n=4$ because it gives a smaller length. $\endgroup$ – robjohn♦ Feb 6 '15 at 19:02 $\begingroup$ For extra precision, I computed the length of your curve to be $L=2.9194626435$ $\endgroup$ – robjohn♦ Feb 6 '15 at 21:18 Consider the following: $$f(x)=\begin{cases} cx & \text{ if } 0 \le x \le \frac{1}{c} \\ -cx+c & \text{ if } 1-c < x \le 1 \\ 1 & \text{ otherwise } \\ \end{cases}$$ If we take $c\to\infty$ we get that it is an arc length of 3. Cyclohexanol.Cyclohexanol. $\begingroup$ I was thinking about a non-continuous $f$ too. In that case, why not just $$f(x)=\begin{cases} 1 \,\,\,if \,\,0<x < 1\\ 0 \,\,\, else \end{cases}$$ $\endgroup$ – graydad Jan 28 '15 at 4:58 $\begingroup$ You can approximate this very closely with a continuous function (even a polynomial). $\endgroup$ – Ross Millikan Jan 28 '15 at 5:11 $\begingroup$ @MathNoob: I was responding to the graydad comment. $\endgroup$ – Ross Millikan Jan 28 '15 at 5:12 $\begingroup$ Using continuous piecewise linear functions, you can get arbitrarily close to $3$ by going up quick, then horizontal, then down quick. $\endgroup$ – Jonas Meyer Jan 28 '15 at 5:27 As has already been explained at least twice, the best functions follow this pattern: a continuous function $f$ with $f(0)=f(1)=0$ that approximates $y = h(x) = 1-\frac\pi8+\sqrt{x-x^2}$ for $0<x<1.$ I propose a family of functions for $n$ a positive integer, $$f_n(x) = \sqrt{x-x^2} + \left(1-\frac\pi8\right)g_n(x),$$ where $$g_n(x) = \left(1 + \frac{1}{2n}\right)\left(1-(1-2x)^{2n}\right).$$ Since $$\int_0^1 1-(1-2x)^{2n}\; dx = \frac{2n}{2n+1},$$ we have $\int_0^1 g_n(x)\;dx = 1,$ and therefore $\int_0^1 f_n(x)\; dx = 1.$ The path integral is more difficult to compute than the area integral, but $1-(1-2x)^{2n}$ takes on its maximum value, $1$, at $x=\frac12$. So if we set $h_n(x) = h(x) + \frac{1}{2n}\left(1-\frac\pi8\right)$ we ensure that $f(x) \leq h_n(x)$ for $0 \leq x \leq 1.$ I claim the path length is less than the length of the bounding curve consisting of the graph of $h_n(x)$ from $x=0$ to $x=1$ and the two segments joining $(0,0)$ to $(0,h_n(0))$ and $(1,h_n(1))$ to $(1,0)$. The length of that bounding path is $$2+\frac\pi4 + \frac1n\left(1-\frac\pi8\right) < 2+\frac\pi4 + \frac{0.607301}{n}.$$ Therefore if we pick, say $n = 1000000,$ the resulting path exceeds the theoretical minimum by less than $6.074 \times 10^{-7},$ which is less than one part in $4.5 \times 10^6.$ To within the accuracy possible in any visual graph I could present here, the graph of $f_n(x)$ for large $n$ is the same as the graph of every other near-theoretical-minimum solution: Alternatively, stealing an idea from robjohn, we have $$\int_0^1 (x-x^2)^{1/n} = B\left(1+\frac1n, 1+\frac1n\right) = \frac{\Gamma\left(1+\frac1n\right)^2}{\Gamma\left(2+\frac2n\right)},$$ so we can set $$g_n(x) = \frac{\Gamma\left(2+\frac2n\right)}{\Gamma\left(1+\frac1n\right)^2}(x-x^2)^{1/n}$$ and proceed as before. This $n$th-root approach seems to converge faster than my $2n$th-power approach. David KDavid K The answer is that you will need to have a constant curvature, which is the partial circle solution by robjohn. If you do want the curve within (0,1) then the rectangle + 1/2 circle solution by both rob and xan. Why is that? it is actually a physics problem. The solution is a shape of a membrane under pressure. Parametric function: $$x= \left\{\begin{matrix} 1,& 0 \leq s \leq h \\ {1 \over 2}+{1 \over 2}cos(2(s-h)), & h < s < h + {2 \over \pi}\\ 0,& h + {2 \over \pi} < s < 2h + {2 \over \pi} \\ \end{matrix}\right. $$ $$y= \left\{\begin{matrix} s,& 0 \leq s \leq h \\ h+{1 \over 2}sin(2(s-h)), & h < s < h + {2 \over \pi}\\ (2h+{\pi \over 2}-s), & h + {2 \over \pi} < s < 2h + {2 \over \pi} \\ \end{matrix}\right. $$ Area: $$1=\int_{0}^{1}{y}dx$$ $$1=\int_{h+{\pi \over 2}}^{h}[h+{1 \over 2}sin(2(s-h))]d[{1 \over 2}+{1 \over 2}cos(2(s-h))]$$ Take: $$\theta=2(s-h)$$ $$4=\int_{\pi}^{0}[2h+sin(\theta)]d(cos(\theta))=4h+{\pi \over 2}$$ $$h=1-{\pi \over 8}$$ $$Length=s_{max}=2h+{\pi \over 2}=2 + {\pi \over 4} = 2.785398163...$$ PdotWangPdotWang $\begingroup$ The requirement of function continuity is for second order derivative also, is it not? $\endgroup$ – Narasimham Feb 10 '15 at 19:52 $\begingroup$ Even if it is required, we can modify the minimum range of s in the two points. As the length of the modified range goes to 0, it would not affect the result but make it continue in any order of continuity. It is mathematically true, just the way to express the equation may be different. $\endgroup$ – PdotWang Feb 10 '15 at 20:11 $p_0 = (0, 0), p_i = (i/n, y_n), p_n = (1,0)$ $A = \frac{1}{2} \cdot \sum_{i=1}^n{(y_i + y_{i-1})} \cdot \frac{1}{n} = 1$ $L = \sum{|p_{i+1} - p_i|}$ $\frac{\partial A}{\partial y_i} = \frac{\partial }{\partial y_i} \frac{1}{2n} \cdot (y_i + y_{i-1} + y_{i+1} + y_i) = \frac{1}{n}$ $\frac{\partial L}{\partial y_i} = \frac{\partial }{\partial y_i} \bigg[\sqrt{\frac{1}{n^2} + (y_i - y_{i-1})^2} + \sqrt{\frac{1}{n^2} + (y_{i+1} - y_i)^2}\bigg] = $ $ = \frac{y_i - y_{i-1}}{\sqrt{\frac{1}{n^2} + (y_i - y_{i-1})^2}} + \frac{y_i - y_{i+1}}{\sqrt{\frac{1}{n^2} + (y_{i+1} - y_i)^2}} = \lambda / n$ Lagrange multiplier Does anyone know how make this a differential equation? $n \rightarrow \infty, 1/n \rightarrow dx$ $\frac{\delta_i}{\sqrt{(dx)^2 + (\delta_i)^2}} - \frac{\Delta_i}{\sqrt{(dx)^2 + (\Delta_i)^2}} = \lambda \cdot dx$ $\frac{\delta_i^2}{{(dx)^2 + (\delta_i)^2}} = \lambda^2 \cdot (dx)^2 + \frac{\Delta_i^2}{{(dx)^2 + (\Delta_i)^2}} + 2 \frac{\Delta_i}{\sqrt{(dx)^2 + (\Delta_i)^2}} \lambda \cdot dx $ $\bigg[\frac{\delta_i^2}{{(dx)^2 + (\delta_i)^2}} - \lambda^2 \cdot (dx)^2 - \frac{\Delta_i^2}{{(dx)^2 + (\Delta_i)^2}}\bigg]^2 = 4 \frac{\Delta_i^2}{{(dx)^2 + (\Delta_i)^2}} \lambda^2 \cdot (dx)^2 $ Vinícius FerrazVinícius Ferraz Not the answer you're looking for? Browse other questions tagged calculus contest-math recreational-mathematics big-list calculus-of-variations or ask your own question. Which is bigger: $(\pi+1)^{\pi+1}$ or $\pi^{\pi+2}$? Prove that if $\lim\limits_{x\to 0}f\big(x\big(\frac{1}{x}-\big\lfloor\frac{1}{x}\big\rfloor\big)\big)$, then $\lim\limits_{x\to 0}f(x)=0$ Cutting a pie into 2 unequal peices with a single cut, minimising its length. Show that the arc length integral is continuous in $C^1$ Calculate arc length given integral Two curvature formulas when equal arc-length Sign of arc length Minimum of $f(x) = \frac{1}{a}\cos^4 \frac{\pi x}{2} + a \sin^4 \frac{\pi x}{2} +\sin\pi x (b\sin\pi x-c)$ for $x\in [0,1]$? Second Variation of Arc Length Determine all real $x$ such that $\arccos{\frac{1-x^2}{1+x^2}}=-2\arctan{x}.$ Arc length(calculus) Prove the following limit equals $\pi$ Calculate the length of the closed curve $x^{2/3} + y^{2/3} = 4$
CommonCrawl
Search SpringerLink Automatic term identification for bibliometric mapping Nees Jan van Eck1,2, Ludo Waltman1,2, Ed C. M. Noyons2 & Reindert K. Buter2 Scientometrics volume 82, pages 581–596 (2010)Cite this article A term map is a map that visualizes the structure of a scientific field by showing the relations between important terms in the field. The terms shown in a term map are usually selected manually with the help of domain experts. Manual term selection has the disadvantages of being subjective and labor-intensive. To overcome these disadvantages, we propose a methodology for automatic term identification and we use this methodology to select the terms to be included in a term map. To evaluate the proposed methodology, we use it to construct a term map of the field of operations research. The quality of the map is assessed by a number of operations research experts. It turns out that in general the proposed methodology performs quite well. Working on a manuscript? Avoid the common mistakes Bibliometric mapping is a powerful tool for studying the structure and the dynamics of scientific fields. Researchers can utilize bibliometric maps to obtain a better understanding of the field in which they are working. In addition, bibliometric maps can provide valuable insights for science policy purposes (Noyons 1999, 2004). Various types of bibliometric maps can be distinguished, which each visualize the structure of a scientific field from a different point of view. Some maps, for example, show relations between authors or journals based on co-citation data. Other maps show relations between words or keywords based on co-occurrence data (e.g., Rip and Courtial 1984; Peters and Van Raan 1993; Kopcsa and Schiebel 1998; Noyons 1999; Ding et al. 2001). The latter maps are usually referred to as co-word maps. In this paper, we are concerned with maps that show relations between terms. We refer to these maps as term maps. By a term we mean a word or a phrase that refers to a domain-specific concept. Term maps are similar to co-word maps except that they may contain any type of term instead of only single-word terms or only keywords. When constructing a bibliometric map, one-first has to select the objects to be included in the map. In the case of a map that contains authors or journals, this is usually fairly easy. To select the important authors or journals in a field, one can usually simply rely on citation counts. In the case of a term map, things are not so easy. In most cases, it is quite difficult to select the important terms in a field. Selection of terms based on their frequency of occurrence in a corpus of documents typically yields many words and phrases with little or no domain-specific meaning. Inclusion of such words and phrases in a term map is highly undesirable for two reasons. First, these words and phrases divert attention from what is really important in the map. Second and even more problematic, these words and phrases may distort the entire structure shown in the map. Because there is no easy way to select the terms to be included in a term map, term selection is usually done manually based on expert judgment (e.g., Noyons 1999; Van Eck and Waltman 2007b). However, manual term selection has serious disadvantages as well. The most important disadvantage is that it involves a lot of subjectivity, which may introduce significant biases in a term map. Another disadvantage is that it can be very labor-intensive. In this paper, we try to overcome the problems associated with manual selection of the terms to be included in a term map. To do so, we propose a methodology that aims to automatically identify the terms that occur in a corpus of documents. Term selection using the proposed methodology requires less involvement of domain experts than manual term selection. Consequently, we expect term maps constructed using the proposed methodology to be more objective representations of scientific fields. An additional advantage of the proposed methodology is that it makes the process of term selection less labor-intensive. The general idea of the methodology that we propose can be explained briefly as follows. Given a corpus of documents, we first identify the main topics in the corpus. This is done using a technique called probabilistic latent semantic analysis (Hofmann 2001). Given the main topics, we then identify in the corpus the words and phrases that are strongly associated with only one or only a few topics. These words and phrases are selected as the terms to be included in a term map. An important property of the proposed methodology is that it identifies terms that are not only domain-specific but that also have a high discriminatory power within the domain of interest. This is important because terms with a high discriminatory power are essential for visualizing the structure of a scientific field. Suppose, for example, that we want to construct a term map of the field of statistics. sample and chi-square test are both statistical terms. However, sample is a quite general statistical term, while chi-square test is more specific and, consequently, more discriminatory. Because of the relatively high discriminatory power of chi-square test, inclusion of this term in a term map may help to reveal the structure of the field of statistics. Inclusion of sample, on the other hand, probably does not provide much additional insight into the structure of the field. Hence, to visualize the structure of a scientific field, terms with a high discriminatory power play an essential role. The organization of this paper is as follows. We first provide a brief overview of the literature on automatic term identification. After discussing the literature, we propose a new methodology for automatic term identification. We then experimentally evaluate the proposed methodology, focusing in particular on its performance in the context of bibliometric mapping. Evaluation is done by applying the proposed methodology to the field of operations research and by asking a number of experts in this field to assess the results that are obtained. We end this paper with a discussion of the conclusions of our research. Overview of the automatic term identification literature In this section, we provide a brief overview of the literature on automatic term identification (also known as automatic term recognition or automatic term extraction).Footnote 1 For extensive reviews of the literature, we refer to Kageura and Umino (1996), Cabré Castellví et al. (2001), Jacquemin (2001), and Pazienza et al. (2005). We note that there are almost no studies on automatic term identification in the context of bibliometric mapping. Exceptions are the work of Janssens et al. (2006), Noyons (1999), and Schneider (2006), in which automatic term identification receives some attention. In the literature discussed in the rest of this section, automatic term identification is studied for purposes other than bibliometric mapping. We first discuss the notions of unithood and termhood (for the original definitions of these notions, see Kageura and Umino 1996). We define unithood as the degree to which a phrase constitutes a semantic unit. Our idea of a semantic unit is similar to that of a collocation (Manning and Schütze 1999). Hence, a semantic unit is a phrase consisting of words that are conventionally used together. The meaning of the phrase typically cannot be fully predicted from the meaning of the individual words within the phrase. We define termhood as the degree to which a semantic unit represents a domain-specific concept. A semantic unit with a high termhood is a term. To illustrate the notions of unithood and termhood, suppose that we are interested in statistical terms. Consider the phrases many countries, United States, and probability density function. Clearly, United States and probability density function are semantic units, while many countries is not. Hence, the unithood of United States and probability density function is high, while the unithood of many countries is low. Because United States does not represent a statistical concept, it has a low termhood. probability density function, on the other hand, does represent a statistical concept and therefore has a high termhood. From this it follows that probability density function is a statistical term. In the literature, two types of approaches to automatic term identification are distinguished, linguistic approaches and statistical approaches. Linguistic approaches are mainly used to identify phrases that, based on their syntactic form, can serve as candidate terms. Statistical approaches are used to measure the unithood and termhood of phrases. In many cases, linguistic and statistical approaches are combined in a single hybrid approach. Most terms have the syntactic form of a noun phrase (Justeson and Katz 1995; Kageura and Umino 1996). Linguistic approaches to automatic term identification typically rely on this property. These approaches identify candidate terms using a linguistic filter that checks whether a sequence of words conforms to some syntactic pattern. Different researchers use different syntactic patterns for their linguistic filters (e.g., Bourigault 1992; Dagan and Church 1994; Daille et al. 1994; Justeson and Katz 1995; Frantzi et al. 2000). Statistical approaches to measure unithood are discussed extensively by Manning and Schütze (1999). The simplest approach uses frequency of occurrence as a measure of unithood (e.g., Dagan and Church 1994; Daille et al. 1994; Justeson and Katz 1995). More advanced approaches use measures based on, for example, (pointwise) mutual information (e.g., Church and Hanks 1990; Damerau 1993; Daille et al. 1994) or a likelihood ratio (e.g., Dunning 1993; Daille et al. 1994). Another statistical approach to measure unithood is the C-value (Frantzi et al. 2000). The NC-value (Frantzi et al. 2000) and the SNC-value (Maynard and Ananiadou 2000) are extensions of the C-value that measure not only unithood but also termhood. Other statistical approaches to measure termhood can be found in the work of, for example, Drouin (2003) and Matsuo and Ishizuka (2004). In the field of machine learning, an interesting statistical approach to measure both unithood and termhood is proposed by Wang et al. (2007). In this section, we propose a three-step methodology for automatic term identification. An overview of the proposed methodology is provided in Fig. 1. Consider some domain or some scientific field, and suppose that we want to identify terms that belong specifically to this domain or this field. Our methodology assumes the availability of a corpus that is partitioned into a number of segments, each of which is concerned with a particular topic or a particular combination of topics within the domain of interest. Such a corpus may for example consist of a large number of documents or abstracts. In the first step of our methodology, a linguistic filter is applied to the corpus in order to identify noun phrases. In the second step, the unithood of noun phrases is measured in order to identify semantic units. In the third and final step, the termhood of semantic units is measured in order to identify terms. Termhood is measured as the degree to which the occurrences of a semantic unit are biased towards one or more topics. Compared with alternative approaches to automatic term identification, such as the ones discussed in the previous section, the innovative aspect of our methodology mainly lies in the third step, that is, in the measurement of termhood. We now discuss the three steps of our methodology in more detail. Overview of the proposed methodology Step 1: Linguistic filter In the first step of our methodology, we use a linguistic filter to identify noun phrases. We first assign to each word occurrence in the corpus a part-of-speech tag, such as noun, verb, or adjective. The appropriate part-of-speech tag for a word occurrence is determined using a part-of-speech tagger developed by Schmid (1994, 1995). We use this tagger because it has a good performance and because it is freely available for research purposes.Footnote 2 In addition to a part-of-speech tag, the tagger also assigns a so-called lemma to each word occurrence in the corpus. The lemma assigned to a word occurrence is the root form (or the stem) of the word. The words function and functions, for example, both have function as their lemma. In all further stages of our methodology, we use the lemmatized corpus instead of the original corpus. In this way, differences between, for example, uppercase and lowercase letters and singular and plural nouns are ignored. After the corpus has been tagged and lemmatized, we apply a linguistic filter to it. The filter that we use identifies all word sequences that meet the following three criteria: The sequence consists of nouns and adjectives only. The sequence ends with a noun. The sequence occurs at least a certain number of times in the corpus (ten times in the experiment discussed later on in this paper). Assuming an English language corpus, the first two criteria ensure that all identified word sequences are noun phrases. Notice, however, that our filter does not identify all types of noun phrases. Noun phrases that contain a preposition, such as the phrase degree of freedom, are not identified (for a discussion of such noun phrases, see Justeson and Katz 1995). We emphasize that the choice of an appropriate linguistic filter depends on the language of the corpus. The filter that we use works well for the English language but may not be appropriate for other languages. For all noun phrases that are identified by our linguistic filter, the unithood is considered in the second step of our methodology. Step 2: Measuring unithood In the second step of our methodology, we measure the unithood of noun phrases. Unithood is only relevant for noun phrases consisting of more than one word. For such noun phrases, unithood determines whether they are regarded as semantic units. The main aim of the second step of our methodology is to get rid of noun phrases that start with uninteresting adjectives such as first, many, new, and some. The most common approach to measure unithood is to determine whether a phrase occurs more frequently than would be expected based on the frequency of occurrence of the individual words within the phrase. This is basically also the approach that we take. To measure the unithood of a noun phrase, we first count the number of occurrences of the phrase, the number of occurrences of the phrase without the first word, and the number of occurrences of the first word of the phrase. In a similar way as Dunning (1993), we then use a so-called likelihood ratio to compare the first number with the last two numbers. We interpret this likelihood ratio as a measure of the unithood of the phrase. In the end, we use a cutoff value to determine which noun phrases are regarded as semantic units and which are not. (In the experiment discussed later on in this paper, noun phrases are regarded as semantic units if the natural logarithm of their likelihood ratio is less than −30.) For all noun phrases that are regarded as semantic units (which includes all single-word noun phrases), the termhood is considered in the third step of our methodology. Step 3: Measuring termhood In the third step of our methodology, we measure the termhood of semantic units. As mentioned earlier, we assume that we have a corpus that is partitioned into a number of segments, each of which is concerned with a particular topic or a particular combination of topics within the domain of interest. A corpus segment may for example consist of a document or an abstract, or it may consist of the set of all documents or all abstracts that appeared in a journal during a certain period of time. We use the following mathematical notation. There are K semantic units of which we want to measure the termhood. These units are denoted by u 1,…,u K . The corpus is partitioned into I segments, which are denoted by s 1,…,s I . The number of occurrences of semantic unit u k in corpus segment s i is denoted by n ik . Finally, there are J topics to be distinguished. These topics are denoted by t 1,…,t J . The main idea of the third step of our methodology is to measure the termhood of a semantic unit as the degree to which the occurrences of the unit are biased towards one or more topics. We first discuss an approach that implements this idea in a very simple way. We assume that there is a one-to-one relationship between corpus segments and topics, that is, each corpus segment covers exactly one topic and each topic is covered by exactly one corpus segment. Under this assumption, the number of corpus segments equals the number of topics, so I = J. To measure the degree to which the occurrences of semantic unit u k , where k ∈ {1,…,K}, are biased towards one or more topics, we use two probability distributions, namely the distribution of semantic unit u k over the set of all topics and the distribution of all semantic units together over the set of all topics. These distributions are denoted by, respectively, P(t j | u k ) and P(t j ), where j ∈ {1,…, J}. Assuming that topic t j is covered by corpus segment s j , the distributions are given by $$ P(t_{j} |u_{k} ) = {\frac{{n_{jk} }}{{\sum\nolimits_{j^{\prime} = 1}^{J} {n_{j'k} } }}} $$ $$ P(t_{j} ) = {\frac{{\sum\nolimits_{k = 1}^{K} {n_{jk} } }}{{\sum\nolimits_{j^{\prime} = 1}^{J} {\sum\nolimits_{k = 1}^{K} {n_{j^{\prime}k} } } }}}. $$ The dissimilarity between the two distributions indicates the degree to which the occurrences of u k are biased towards one or more topics. We use the dissimilarity between the two distributions to measure the termhood of u k . For example, if the two distributions are identical, the occurrences of u k are unbiased and u k most probably does not represent a domain-specific concept. If, on the other hand, the two distributions are very dissimilar, the occurrences of u k are strongly biased and u k is very likely to represent a domain-specific concept. The dissimilarity between two probability distributions can be measured in many different ways. One may use, for example, the Kullback–Leibler divergence, the Jensen–Shannon divergence, or a chi-square value. We use a somewhat different measure. Based on this measure, the termhood of u k is calculated as $$ {\text{termhood}}\,(u_{k} ) = \sum\limits_{j = 1}^{J} {p_{j} \log p_{j} } , $$ where 0 log 0 is defined as 0 and where $$ p_{j} = {\frac{{{{P(t_{j} |u_{k} )} \mathord{\left/ {\vphantom {{P(t_{j} |u_{k} )} {P(t_{j} )}}} \right. \kern-\nulldelimiterspace} {P(t_{j} )}}}}{{\sum\nolimits_{j^{\prime} = 1}^{J} {{{P(t_{j^{\prime}} |u_{k} )} \mathord{\left/ {\vphantom {{P(t_{{j^{\prime}}} |u_{k} )} {P(t_{j^{\prime}})}}} \right. \kern-\nulldelimiterspace} {P(t_{j^{\prime}} )}}} }}}. $$ It follows from (4) that p 1,…,p J define a probability distribution over the set of all topics. In (3), termhood (u k ) is calculated as the negative entropy of this distribution. Notice that termhood (u k ) is maximal if P(t j | u k ) = 1 for some j and that it is minimal if P(t j | u k ) = P(t j ) for all j. In other words, termhood (u k ) is maximal if the occurrences of u k are completely biased towards a single topic, and termhood (u k ) is minimal if the occurrences of u k do not have a bias towards any topic. The approach discussed above relies on the assumption of a one-to-one relationship between corpus segments and topics. For most corpora, this assumption is probably not very realistic. For example, if each segment of a corpus consists of a single document or a single abstract, there will most likely be some segments that are concerned with more or less the same topic. Or the other way around, if each segment of a corpus consists of a set of documents or abstracts that all appeared in the same journal, there will most likely be some segments (particularly segments corresponding to multidisciplinary journals) that are concerned with more than one topic. Below, we extend our approach in such a way that it no longer relies on the assumption of a one-to-one relationship between corpus segments and topics. Identifying topics In order to allow for a many-to-many relationship between corpus segments and topics, we make use of probabilistic latent semantic analysis (PLSA) (Hofmann 2001). PLSA is a quite popular technique in machine learning, information retrieval, and related fields. It was originally introduced as a probabilistic model that relates occurrences of words in documents to so-called latent classes. In the present context, we are dealing with semantic units and corpus segments instead of words and documents, and we interpret the latent classes as topics. When using PLSA, we first have to determine an appropriate value for the number of topics J. This value is typically much smaller than both the number of corpus segments I and the number of semantic units K. In this paper, we manually choose a value for J. PLSA assumes that each occurrence of a semantic unit in a corpus segment is independently generated according to the following probabilistic process. First, a topic t is drawn from a probability distribution P(t j ), where j ∈ {1,…,J}. Next, given t, a corpus segment s and a semantic unit u are independently drawn from, respectively, the conditional probability distributions P(s i | t), where i ∈ {1,…,I}, and P(u k | t), where k ∈ {1,…,K}. This then results in the occurrence of u in s. It is clear that, according to the generative process assumed by PLSA, the probability of generating an occurrence of semantic unit u k in corpus segment s i equals $$ P(s_{i} ,u_{k} ) = \sum\limits_{j = 1}^{J} {P(t_{j} )P(s_{i}\,|\,t_{j} )P(u_{k}\,|\,t_{j} )} . $$ The probabilities P(t j ), P(s i | t j ), and P(u k | t j ), for i = 1,…,I, j = 1,…,J, and k = 1,…,K, are the parameters of PLSA. We estimate these parameters using data from the corpus. Estimation is based on the criterion of maximum likelihood. The log-likelihood function to be maximized is given by $$ L = \sum\limits_{i = 1}^{I} {\sum\limits_{k = 1}^{K} {n_{ik} \log P(s_{i} ,u_{k} )} } . $$ We use the EM algorithm discussed by Hofmann (1999, Sect. 3.2) to perform the maximization of this function.Footnote 3 After estimating the parameters of PLSA, we apply Bayes' theorem to obtain a probability distribution over the topics conditional on a semantic unit. This distribution is given by $$ P(t_{j}\,|\,u_{k} ) = {\frac{{P(t_{j} )P(u_{k}\,|\,t_{j} )}}{{\sum\nolimits_{j^{\prime} = 1}^{J} {P(t_{j^{\prime}} )P(u_{k}\,|\,t_{j^{\prime}} )} }}}. $$ In a similar way as discussed earlier, we use the dissimilarity between the distributions P(t j | u k ) and P(t j ) to measure the termhood of u k . In this case, however, P(t j | u k ) is given by (7) instead of (1) and P(t j ) follows from the estimated parameters of PLSA instead of being given by (2). We again use (3) and (4) to calculate the termhood of u k . Experimental evaluation In this section, we experimentally evaluate our methodology for automatic term identification. We focus in particular on the performance of our methodology in the context of bibliometric mapping. Application to the field of operations research We apply our methodology to the field of operations research (OR), also known as operational research. The OR field was chosen because some of us have some background in this field and because we have easy access to a number of OR experts who can help us with the evaluation of our results. We note that sometimes a distinction is made between OR on the one hand and management science on the other hand (e.g., Eto 2000, 2002). For our purpose, however, such a distinction is not important. In this paper, the term OR therefore also includes management science. We start with a discussion of how we put together our corpus. We first selected a number of OR journals. This was done based on the subject categories of Thomson Reuters. The OR field is covered by the category Operations Research & Management Science. Since we wanted to focus on the core of the field, we selected only a subset of the journals in this category. More specifically, a journal was selected if it belongs to the category Operations Research & Management Science and possibly also to the closely related category Management and if it does not belong to any other category. This yielded 15 journals, which are listed in the first column of Table 1. We used the database of the Centre for Science and Technology Studies, which is similar to the Web of Science database of Thomson Reuters, to retrieve all documents, except those without an abstract, that were published in the selected journals between 2001 and 2006. For each journal, the number of documents retrieved from the database is reported in the second column of Table 1. Of each of the documents retrieved, we included the title and the abstract in our corpus. Table 1 Overview of the selected journals After putting together the corpus, we applied our methodology for automatic term identification. In the first step of our methodology, the linguistic filter identified 2662 different noun phrases. In the second step, the unithood of these noun phrases was measured. 203 noun phrases turned out to have a rather low unithood and therefore could not be regarded as semantic units. Examples of such noun phrases are first problem, good use, and optimal cost. The other 2459 noun phrases had a sufficiently high unithood to be regarded as semantic units. In the third and final step of our methodology, the termhood of these semantic units was measured. To do so, each title-abstract pair in the corpus was treated as a separate corpus segment. For each combination of a semantic unit u k and a corpus segment s i , it was determined whether u k occurs in s i (n ik = 1) or not (n ik = 0). Topics were identified using PLSA. This required the choice of the number of topics J. Results for various numbers of topics were examined and compared. Based on our own knowledge of the OR field, we decided to work with J = 10 topics. The output of our methodology consisted of a list of 2459 semantic units together with their termhood values. For the interested reader, this list is available online.Footnote 4 Evaluation based on precision and recall The evaluation of a methodology for automatic term identification is a difficult issue. There is no generally accepted standard for how evaluation should be done. We refer to Pazienza et al. (2005) for a discussion of the various problems. In this paper, we evaluate our methodology in two ways. We first perform an evaluation based on the well-known notions of precision and recall. We then perform a second evaluation by constructing a term map and asking experts to assess the quality of this map. Since our methodology for automatic term identification is intended to be used for bibliometric mapping purposes, we are especially interested in the results of the second evaluation. We first discuss the evaluation of our methodology based on precision and recall. The main aim of this evaluation is to compare the performance of our methodology with the performance of two simple alternatives. One alternative is a variant of our methodology. This variant assumes a one-to-one relationship between corpus segments and topics, and it therefore does not make use of PLSA. The other alternative is a very simple one. It uses frequency of occurrence as a measure of termhood. In the context of automatic term identification, precision and recall are defined as follows. Precision is the number of correctly identified terms divided by the total number of identified terms. Recall is the number of correctly identified terms divided by the total number of correct terms. Unfortunately, because the total number of correct terms in the OR field is unknown, we could not calculate the true recall. This is a well-known problem in the context of automatic term identification (Pazienza et al. 2005). To circumvent this problem, we defined recall in a slightly different way, namely as the number of correctly identified terms divided by the total number of correct terms within the set of all semantic units identified in the second step of our methodology. Recall calculated according to this definition provides an upper bound on the true recall. However, even using this definition of recall, the calculation of precision and recall remained problematic. The problem was that it is very time-consuming to manually determine which of the 2459 semantic units identified in the second step of our methodology are correct terms and which are not. We solved this problem by estimating precision and recall based on a random sample of 250 semantic units. The first two authors of this paper, who both have some knowledge of the OR field, independently determined which of these 250 units are correct terms and which are not. Units on which the authors did not agree were discussed until agreement was reached. To identify terms, we used a cutoff value that determined which semantic units were regarded as terms and which were not. Semantic units were regarded as terms if their termhood value was greater than the cutoff value. Obviously, a lower cutoff value leads to a larger number of identified terms and, consequently, to a higher recall. However, a lower cutoff value usually also leads to a lower precision. Hence, there is a trade-off between precision and recall. By varying the cutoff value, the relation between precision and recall can be obtained. In Fig. 2, the graphs labeled PLSA and No PLSA show this relation for, respectively, our methodology and the variant of our methodology that does not make use of PLSA. The third graph in the figure shows the relation between precision and recall for the approach based on frequency of occurrence. It is clear from the figure that our methodology outperforms the two simple alternatives. Except for very low and very high levels of recall, our methodology always has a considerably higher precision than the variant of our methodology that does not make use of PLSA. The low precision of our methodology for very low levels of recall is based on a very small number of incorrectly identified terms and is therefore insignificant from a statistical point of view. The approach based on frequency of occurrence has a very bad performance. For almost all levels of recall, the precision of this approach is even lower than the precision that would have been obtained if terms had been identified at random. Unfortunately, there is no easy way to compare the precision/recall performance of our methodology with that of other approaches proposed in the literature. This is due to the lack of a generally accepted evaluation standard (Pazienza et al. 2005). We refer to Cabré Castellví et al. (2001) for an overview of some precision/recall results reported for other approaches. The relationship between precision and recall for our methodology and for two simple alternatives Evaluation using a term map We now discuss the second evaluation of our methodology for automatic term identification. This evaluation is performed using a term map. The evaluation therefore focuses specifically on the usefulness of our methodology for bibliometric mapping purposes. A term map is a map, usually in two dimensions, that shows the relations between important terms in a scientific field. Terms are located in a term map in such a way that the proximity of two terms reflects their relatedness as closely as possible. That is, the smaller the distance between two terms, the stronger their relation. The aim of a term map usually is to visualize the structure of a scientific field. In order to evaluate our methodology, we constructed a term map of the OR field. The terms to be included in the map were selected based on the output of our methodology. It turned out that, out of the 2459 semantic units identified in the second step of our methodology, 831 had the highest possible termhood value. This means that, according to our methodology, 831 semantic units are associated exclusively with a single topic within the OR field. We decided to select these 831 semantic units as the terms to be included in the term map. This yielded a coverage of 97.0%, which means that 97.0% of the title-abstract pairs in the corpus contain at least one of the 831 terms to be included in the term map. The coverage per journal is reported in the third column of Table 1. The term map of the OR field was constructed using a procedure similar to the one used in our earlier work (Van Eck and Waltman 2007b). This procedure relies on the association strength measure (Van Eck and Waltman 2009) to determine the relatedness of two terms, and it uses the VOS technique (Van Eck and Waltman 2007a) to determine the locations of terms in the map. Due to the large number of terms, the map that was obtained cannot be shown in this paper. However, a simplified version of the map is presented in Fig. 3. This version of the map only shows terms that do not overlap with other more important terms. The complete map showing all 831 terms is available online.Footnote 5 A special computer program called VOSviewer (Van Eck and Waltman in press) allows the map to be examined in full detail. VOSviewer uses colors to indicate the different topics that were identified using PLSA. Simplified version of the term map of the OR field The quality of the term map of the OR field was assessed by five experts. Two of them are assistant professor of OR, one is associate professor of OR, and two are full professor of OR. All experts are working at Erasmus University Rotterdam. We asked each expert to examine the online term map and to complete a questionnaire. The questionnaire consisted of one multiple-choice question and ten open-ended questions. The main results of the questionnaire are discussed below. The full results are available on request. In the multiple-choice question, we asked the experts to indicate on a five-point scale how well the term map visualizes the structure of the OR field. Four experts answered that the map visualizes the structure of the field quite well (the second highest answer on the five-point scale). The fifth expert answered that the map visualizes the structure of the field very well (the highest answer on the five-point scale). Hence, overall the experts were quite satisfied with the map. The experts could also easily explain the global structure of the map, and for them the topics shown in the map (indicated using colors) generally had an obvious interpretation. We also asked the experts whether the map showed anything unexpected to them. One expert answered that he had not expected scheduling related terms to be located at the boundary of the map. Two other experts turned out to be surprised by the prominent position of economics related terms such as consumer, price, pricing, and revenue. None of these three experts regarded the unexpected results as a weakness of the map. Instead, two experts stated that their own perception of their field may not have been correct. Hence, it seems that these experts may have learned something new from the map. The experts also indicated some weak points of the term map. Some of these points were related to the way in which the terms shown in the map were selected. Other points were of a more general nature. The most serious criticism on the results of the automatic term identification concerned the presence of a number of rather general terms in the map. Examples of such terms are claim, conclusion, finding, item, and research. There were three experts who criticized the presence of terms such as these. We agree with these experts that some of the terms shown in the map are too general. Although the number of such terms is not very large, we consider it is highly desirable to get rid of them. To achieve this, further improvement of our methodology for automatic term identification is necessary. We will come back to this below. Another point of criticism concerned the underrepresentation of certain topics in the term map. There were three experts who raised this issue. One expert felt that the topic of supply chain management is underrepresented in the map. Another expert stated that he had expected the topic of transportation to be more visible. The third expert believed that the topics of combinatorial optimization, revenue management, and transportation are underrepresented. It seems likely that in many cases the perceived underrepresentation of topics was not due to our methodology for automatic term identification but was instead caused by the way in which the corpus used by our methodology was put together. As discussed earlier, when we were putting together the corpus, we wanted to focus on the core of the OR field and we therefore only included documents from a relatively small number of journals. This may for example explain why the topic of transportation is not clearly visible in the map. Thomson Reuters has a subject category Transportation Science & Technology, and it may well be that much transportation related OR studies are published in journals that belong to this category (and possibly also to the category Operations Research & Management Science). The corpus that we put together does not cover these journals and hence may contain only a small portion of the transportation related OR studies. It is then not surprising that the topic of transportation is difficult to see in the map. The remaining issues raised by the experts are of a more general nature, and most likely these issues would also have been raised if the terms shown in the term map had been selected manually. One of the issues had to do with the character of the OR field. When asked to divide the OR field into a number of smaller subfields, most experts indicated that there are two natural ways to make such a division. On the one hand, a division can be made based on the methodology that is being used, such as decision theory, game theory, mathematical programming, or stochastic modeling. On the other hand, a division can be made based on the area of application, such as inventory control, production planning, supply chain management, or transportation. There were two experts who noted that the term map seems to mix up both divisions of the OR field. According to these experts, one part of the map is based on the methodology-oriented division of the field, while the other part is based on the application-oriented division. One of the experts stated that he would be interested to see an explicit separation of the methodology and application dimensions. A final issue, which was raised by two experts, had to do with the more detailed interpretation of the term map. The experts pointed out that sometimes closely related terms are not located very close to each other in the map. One of the experts gave the terms inventory and inventory cost as an example of this problem. In many cases, a problem such as this is probably caused by the limited size of the corpus that was used to construct the map. In other cases, the problem may be due to the inherent limitations of a two-dimensional representation. The best solution to this kind of problems seems to be not to show individual terms in a map but to only show topics (e.g., Noyons and Van Raan 1998; Noyons 1999). Topics can then be labeled using one or more representative terms. In this paper, we have addressed the question how the terms shown in a term map can be selected without relying extensively on the judgment of domain experts. Our main contribution consists of a methodology for automatic identification of terms in a corpus of documents. Using this methodology, the process of selecting the terms to be included in a term map can be automated for a large part, thereby making the process less labor-intensive and less dependent on expert judgment. Because less expert judgment is required, the process of term selection also involves less subjectivity. We therefore expect term maps constructed using our methodology to be more objective representations of scientific fields. We have evaluated our methodology for automatic term identification by applying it to the OR field. In general, we are quite satisfied with the results that we have obtained. The precision/recall results clearly indicate that our methodology outperformed two simple alternatives. In addition, the quality of the term map of the OR field constructed using our methodology was assessed quite positively by five experts in the field. However, the term map also revealed a shortcoming of our methodology, namely the incorrect identification of a number of general noun phrases as terms. We hope to remedy this shortcoming in future work. Finally, we would like to place the research presented in this paper in a broader perspective. As scientific fields tend to overlap more and more and disciplinary boundaries become more and more blurred, finding an expert who has a good overview of an entire domain becomes more and more difficult. This poses serious difficulties for any bibliometric method that relies on expert knowledge. Term mapping is one such method. Fortunately, advanced computational techniques from fields such as data mining, machine learning, statistics, and text mining may be used to take over certain tasks in bibliometric analysis that are traditionally performed by domain experts (for an overview of various computational techniques, see Leopold et al. 2004). The research presented in this paper can be seen as an elaboration of this idea in the context of term mapping. We acknowledge, however, that our research is only a first step towards fully automatic term mapping. To produce accurate term maps, the output of our methodology for automatic term identification still needs to be verified manually and some amount of expert knowledge is still required. In future work, we intend to take even more advantage of the possibilities offered by various kinds of computational techniques. Hopefully, this allows the dependence of term mapping on expert knowledge to be reduced even further. A more elaborate overview of the literature can be found in an earlier version of this paper (Van Eck et al. 2008). See http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/. A MATLAB implementation of this algorithm is available on request. See http://www.neesjanvaneck.nl/term_identification/. Bourigault, D. (1992). Surface grammatical analysis for the extraction of terminological noun phrases. In: Proceedings of the 14th conference on computational linguistics (pp. 977–981). Morristown, NJ: Association for Computational Linguistics. Cabré Castellví, M. T., Estopà Bagot, R., & Vivaldi Palatresi, J. (2001). Automatic term detection: A review of current systems. In: D. Bourigault, C. Jacquemin, & M.-C. L'Homme (Eds.), Recent advances in computational terminology (pp. 53–87). Amsterdam: John Benjamins. Church, K. W., & Hanks, P. (1990). Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1), 22–29. Dagan, I., & Church, K. (1994). TERMIGHT: Identifying and translating technical terminology. In: Proceedings of the 4th conference on applied natural language processing (pp. 34–40). Morristown, NJ: Association for Computational Linguistics. Daille, B., Gaussier, É., & Langé, J.-M. (1994), Towards automatic extraction of monolingual and bilingual terminology. In: Proceedings of the 15th conference on computational linguistics (pp. 515–521). Morristown, NJ: Association for Computational Linguistics. Damerau, F. J. (1993). Generating and evaluating domain-oriented multi-word terms from texts. Information Processing and Management, 29(4), 433–447. Ding, Y., Chowdhury, G. G., & Foo, S. (2001). Bibliometric cartography of information retrieval research by using co-word analysis. Information Processing and Management, 37(6), 817–842. Article MATH Google Scholar Drouin, P. (2003). Term extraction using non-technical corpora as a point of leverage. Terminology, 9(1), 99–115. Dunning, T. (1993). Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1), 61–74. Eto, H. (2000). Authorship and citation patterns in operational research journals in relation to competition and reform. Scientometrics, 47(1), 25–42. Eto, H. (2002). Authorship and citation patterns in Management Science in comparison with operational research. Scientometrics, 53(3), 337–349. Frantzi, K., Ananiadou, S., & Mima, H. (2000). Automatic recognition of multi-word terms: The C-value/NC-value method. International Journal of Digital Libraries, 3(2), 117–132. Hofmann, T. (1999). Probabilistic latent semantic analysis. In: K. B. Laskey & H. Prade (Eds.), Proceedings of the 15th conference on uncertainty in artificial intelligence (pp. 289–296). San Fransisco, CA: Morgan Kaufmann. Hofmann, T. (2001). Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42(1–2), 177–196. Jacquemin, C. (2001). Spotting and discovering terms through natural language processing. Cambridge, MA: MIT Press. Janssens, F., Leta, J., Glänzel, W., & De Moor, B. (2006). Towards mapping library and information science. Information Processing and Management, 42(6), 1614–1642. Justeson, J. S., & Katz, S. M. (1995). Technical terminology: Some linguistic properties and an algorithm for identification in text. Natural Language Engineering, 1(1), 9–27. Kageura, K., & Umino, B. (1996). Methods of automatic term recognition: A review. Terminology, 3(2), 259–289. Kopcsa, A., & Schiebel, E. (1998). Science and technology mapping: A new iteration model for representing multidimensional relationships. Journal of the American Society for Information Science, 49(1), 7–17. Leopold, E., May, M., & Paaß, G. (2004). Data mining and text mining for science & technology research. In H. F. Moed, W. Glänzel, & U. Schmoch (Eds.), Handbook of quantitative science and technology research (pp. 187–213). Dordrecht: Kluwer Academic Publishers. Manning, C. D., & Schütze, H. (1999). Foundations of statistical natural language processing. Cambridge: MIT Press. MATH Google Scholar Matsuo, Y., & Ishizuka, M. (2004). Keyword extraction from a single document using word co-occurrence statistical information. International Journal on Artificial Intelligence Tools, 13(1), 157–169. Maynard, D., & Ananiadou, S. (2000). Identifying terms by their family and friends. In: Proceedings of the 18th conference on computational linguistics (pp. 530–536). Morristown, NJ: Association for Computational Linguistics. Noyons, E. C. M. (1999). Bibliometric mapping as a science policy and research management tool. PhD thesis, Leiden University. Noyons, E. C. M. (2004). Science maps within a science policy context. In H. F. Moed, W. Glänzel, & U. Schmoch (Eds.), Handbook of quantitative science and technology research (pp. 237–255). Dordrecht: Kluwer Academic Publishers. Noyons, E. C. M., & Van Raan, A. F. J. (1998). Monitoring scientific developments from a dynamic perspective: Self-organized structuring to map neural network research. Journal of the American Society for Information Science, 49(1), 68–81. Pazienza, M. T., Pennacchiotti, M., & Zanzotto, F. M. (2005). Terminology extraction: An analysis of linguistic and statistical approaches. In: S. Sirmakessis (Ed.), Knowledge mining: Proceedings of the NEMIS 2004 final conference (pp. 255–279). Berlin Heidelberg: Springer. Peters, H. P. F., & Van Raan, A. F. J. (1993). Co-word-based science maps of chemical engineering. Part I: Representations by direct multidimensional scaling. Research Policy, 22(1), 23–45. Rip, A., & Courtial, J. P. (1984). Co-word maps of biotechnology: An example of cognitive scientometrics. Scientometrics, 6(6), 381–400. Schmid, H. (1994), Probabilistic part-of-speech tagging using decision trees. In: Proceedings of the international conference on new methods in language processing (pp. 44–49). Schmid, H. (1995), Improvements in part-of-speech tagging with an application to German. In: Proceedings of the ACL SIGDAT workshop (pp. 47–50). Schneider, J. W. (2006). Concept symbols revisited: Naming clusters by parsing and filtering of noun phrases from citation contexts of concept symbols. Scientometrics, 68(3), 573–593. Van Eck, N. J., & Waltman, L. (2007a). VOS: A new method for visualizing similarities between objects. In: H.-J. Lenz & R. Decker (Eds.), Advances in data analysis: Proceedings of the 30th annual conference of the German classification society (pp. 299–306). Berlin Heidelberg: Springer. Van Eck, N. J., & Waltman, L. (2007b). Bibliometric mapping of the computational intelligence field. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 15(5), 625–645. Van Eck, N. J., & Waltman, L. (2009). How to normalize co-occurrence data? An analysis of some well-known similarity measures. Journal of the American Society for Information Science and Technology, 60(8), 1635–1651. Van Eck, N. J., & Waltman, L. (in press). Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics. Van Eck, N. J., Waltman, L., Noyons, E. C. M., & Buter, R. K. (2008). Automatic term identification for bibliometric mapping. Technical Report ERS-2008-081-LIS, Erasmus University Rotterdam, Erasmus Research Institute of Management. Wang, X., McCallum, A., & Wei, X. (2007), Topical N-grams: Phrase and topic discovery, with an application to information retrieval. In: Proceedings of the 7th IEEE international conference on data mining (pp. 697–702). Washington, DC: IEEE Computer Society. We thank Rommert Dekker, Moritz Fleischmann, Dennis Huisman, Wilco van den Heuvel, and Albert Wagelmans for their help with the evaluation of the term map of the OR field. This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam, P.O. Box 1738, 3000 DR, Rotterdam, The Netherlands Nees Jan van Eck & Ludo Waltman Centre for Science and Technology Studies, Leiden University, Leiden, The Netherlands Nees Jan van Eck, Ludo Waltman, Ed C. M. Noyons & Reindert K. Buter Nees Jan van Eck Ludo Waltman Ed C. M. Noyons Reindert K. Buter Correspondence to Nees Jan van Eck. Open Access This is an open access article distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. van Eck, N.J., Waltman, L., Noyons, E.C.M. et al. Automatic term identification for bibliometric mapping. Scientometrics 82, 581–596 (2010). https://doi.org/10.1007/s11192-010-0173-0 Bibliometric mapping Term map Automatic term identification Probabilistic latent semantic analysis Over 10 million scientific documents at your fingertips Academic Edition Corporate Edition Not affiliated © 2023 Springer Nature Switzerland AG. Part of Springer Nature.
CommonCrawl
Environmental Systems Research Soil erosion risk assessment in the Chaleleka wetland watershed, Central Rift Valley of Ethiopia Kebede Wolka1, Habitamu Tadesse1, Efrem Garedew2 & Fantaw Yimer1 Environmental Systems Research volume 4, Article number: 5 (2015) Cite this article Increased flooding and newly formed rills and gullies were observed in the Cheleleka wetland watershed, over the past three to five years. These events are due to problems related to land use changes and are adversely affecting land productivity. This study was conducted to quantify, analyze and map soil erosion risk areas using the Revised Universal Soil Loss Equation. Only 13.6 percent of the study area has a soil loss value less than 10 ton per hectare per year with the remaining area experiencing a higher soil loss value. A large area, 53.6 percent of the watershed, is under severe to extremely severe soil loss (>45 ton per hectare per year). Another 17.3 percent of the study area has annual soil loss of 20–45 ton per hectare. A significantly large area of the Cheleleka wetland watershed has non-tolerable soil erosion that threatens annual crop production, land productivity, and hydrological functioning of the area. From the conservation perspective, a large proportion of the watershed needs immediate watershed management intervention. Soil erosion is a natural geological phenomenon resulting from the removal of soil particles by water or wind (Gitas et al. 2009). This natural process can be accelerated by human activities creating soil loss that exceeds the soil formation rate in a given area. Human activities that change land use from a comparatively higher form of permanent vegetation cover, to a state of lesser vegetation cover, have increased soil erosion (Cebecauer and Hofierka 2008). Several studies in Ethiopia have revealed that soil erosion has become an alarming problem (Wagayehu and Drake (2003); Admasu (2005); Bewket and Teferi (2009); Haile and Fetene (2012)) and it is the major factor affecting the sustainability of agricultural production. The leading factors causing erosion include: increased population pressure resulting in forest/woodland clearance for wood and smallholding agriculture; traditional agricultural practices; and declining land productivity. In the Cheleleka wetland watershed, large expansions of new land uses were made mostly through the displacement of the original forests and pasture land. For example, using remote sensing and Geographic Information System (GIS) methods, Gessesse (2007), estimated an alarming 82% forest decline in the area when comparing 1972 forest cover to 2000. In many cases, this conversion occurred without evaluating the land use capacity, thus exposing the new cultivated fields to a high level of degradation. Moreover, the erosion and sedimentation processes disrupt the hydrological balance in the study watershed. In contrast to other lakes in the Rift Valley of Ethiopia, the level of Lake Hawassa has been increasing while the Cheleleka wetland is unfortunately drying up. Thus the increasing discharge from the wetland through the 'TikurWuha' River is partially related to the activities of erosion and the sedimentation processes in the Cheleleka watershed (Gessesse 2007; Tenalem et al. (2007); Kebede et al. (2014)). Increased flood and erosion risks were investigated as part of the field work for this study (Figure 1). Medo, Burara, and Gotu-Onoma akebeles were found to be victims of erosion and flooding which damaged large areas of cropland, forming new rills and gullies, and damaged ditches along concrete and asphalt roads. Recently, the watershed stream discharge has increased, causing flooding (Kebede et al. 2014). Therefore, the extent and spatial pattern of the watershed's soil erosion needs to be assessed. This will assist the identification of erosion susceptible areas for planning and implementation of a watershed-based soil and water conservation plan. No soil erosion susceptibility analysis and map were available for this watershed. This highly populated, socio-economically, and hydrological relevant rural watershed, supplying significant water resources to watershed residents and other towns outside the watershed, is in need of assessment and management. Partial view of the study site ('a'shows erosion site and 'b' shows topography). A relevant approach needs to be followed in estimating the spatial variation of soil erosion. In various corners of the globe, scientists apply related approaches and models for estimating soil erosion and related processes. For instance, the Hairsine-Rose model and hysteresis has been applied for estimating soil erosion sediment sizes, water discharge and contamination possibility (Sender et al. 2011). Other methods and approaches such as the Morgan method (Ustun 2008), ImpelERO model (Shahbazi et al. 2010), CORINE methodology (Gupta and Uniyal (2012)), and SWAT model (Gebreyesus et al. (2013)) have been applied. Many other researchers use Revised/Universal Soil Loss Equation (RUSLE/USLE) (Zhao et al. (2013); Gaatib and Larabi (2014)). This USLE method is preferred to others as it is commonly applied with reliable accuracy at watershed scales when detailed input data are less available (Kheir et al. (2008); (Raissouni et al. (2012)). The RUSLE has capacity to estimate soil erosion by incorporating the environment, having complex and varying gradients (Tongqian et al. 2009). Although various studies have been conducted within and/or around the Cheleleka watershed (e.g. Gessesse (2007)) none of them have assessed the soil erosion problem using RUSLE. Therefore, this study was designed to quantify, analyze and map soil erosion risk areas within the Cheleleka watershed using the Revised Universal Soil Loss Equation (RUSLE) and GIS tools. Results and discussions The soil loss amount, severity and extents varied for the different parts of the sub watershed. About 13.6 percent of the study area has a soil loss value of less than 10 t ha−1 yr−1 (Figure 2 and Table 1), mainly along the flat to gentle slopes of the downstream area. Among the sub-watersheds, the downstream areas of the Wedesa, Gemesho, Shonkora and Werka Rivers were dominated by low soil loss values. Annual soil loss of Cheleleka wetland watershed. Table 1 Soil erosion severity and conservation priority class More than half of the watershed area (53.6 percent) is experiencing 'severe' to 'extremely severe' soil losses, representing 80.1 percent of the total soil loss from the watershed. This includes the 'Burara-Abaro-Wijigira' mountain chain, and the valleys of the 'Wodesa' River and its tributary, at the upstream part of the watershed (Figures 2 and 3). Soil loss severity classification. NB: Med = Medo;LL = Lower Lango; UL = Upper Lango; LWsh = Lower Wesha;UWsh = Upper Wesha; LWrk = Lower Werka; UWrk = Upper Werka; LHal = Lower Hallo; UHal = Upper Hallo; LShen = Lower Shenkora; UShen = Upper Shenkora; LGem = Lower Gemesho; MGem = Middle Gemesho;UGem = Upper Gemesho;LWed = Lower Wedesa; MWed = Middle Wedesa; UWed = Upper Wedesa. 'Moderate' soil loss (10–20 t ha−1 yr−1) is observed in 15.5 percent of the watershed, covering the sub-watersheds in Upper Wesha, upper Hallo, and lower Lango. The soil loss severity class of 'high to very high' (20–45 ton/ha/year) occurs in 17.3percent of the total study area. The soil loss observed in the downstream sections of the Werka, Shenkora, Gemesho and Wedesa areas is, to a certain extent, in the 'tolerable' range, (less than 10 t ha−1 yr−1). The dominant factor for the retarded soil loss in these areas is the gentle slope gradient that causes reduced runoff velocity. Furthermore, the protective nature of the land use/cover, predominantly the perennial crops such as sugarcane and 'inset' based agroforestry system, could also contribute to arresting soil movement in these areas. A related study in Romania reported tolerable soil loss for watershed areas with lower slope gradients and having good vegetation cover (Arghiuş and Arghiuş 2011). Owing to elevated land demand resulting from increasing population and comparatively suitable land for cultivation, most marginal land has been economically utilized. In this effort, trees and perennial species planted around rivers can reduce soil erosion at the river bank (Lee et al. 2003). The deep (roughly 50 cm) furrowing commonly practiced for growing sugarcane also contributed significantly to reducing soil erosion. At the upstream areas of Hallo and Wesha, a moderate soil loss rate was observed primarily due to topography. In these areas, soil erodibility and rainfall erosivity is high but good crop cover, especially inset, and low topographic relief, moderated soil loss rate. Even though rainfall is low, soil is less erodible and better soil conservation practices exist in the Medo area it experiences high soil erosion rates due to topography. High soil erosion upstream from Gemesho is also attributed to the topography, high rainfall and comparatively high soil erodibility. The watershed area for which the rate of observed soil loss is not tolerable by any standard (Morgan (2005)) is categorized as 'severe', 'very severe' and 'extremely severe' soil loss areas. Such soil loss rates were observed in land areas with comparatively high slope gradients and in areas where the original forest cover has been transformed to arable land and degraded bush land. A report by Raissouni et al. (2012) for Northern Morocco, agrees with this finding, indicating a greater soil loss risk is observed in land area with higher gradient and damaged terrain. During the field work, we noted that only a small portion of the study area (Burara-Medo-Abaro-Wijigira mountain chain and adjacent lands) had highly degraded forest/shrub land compromising the stability of the natural environment. A research project in Cuba reported that conversion from forest to arable land increases soil loss in mountainous watersheds (Schiettecatte et al. 2008), which are fragile environments. The tolerable soil loss in such circumstances is no more than 2 t ha−1 yr−1 (Hudson (1981)). Cebecauer and Hofierka (2008) and Starchi et al. (2013) also observed that land cover has a significant influence on soil erosion patterns in mountainous and hilly areas. A report from China (Luo et al. 2014) indicated that land with lower vegetation cover produced greater runoff and sediment yield implying the extent of soil erosion. Similarly, a Nigerian study by (Oruk et al. 2012) reported greater soil erosion in lands with poor vegetation cover. The areas susceptib'le to 'severe' to 'very severe' soil erosion have a high rate of agricultural and agroforestry uses dominated by inset and khat farming in the sub-watersheds of the Wesha, Werka, Shonkora, Hallo, Gemesho and Wedesa rivers. This appeared to be a more protective land use than annual cropping systems, which is common for the Upper Lango and Medo areas, where 'very severe' soil loss was observed. Annual cropping leaves the land exposed for off-season grazing, direct rainfall impact and runoff with the resultant soil loss. In the study area, the increasing population pressure is a serious problem that increases forest land conversion into arable land, leading to land degradation (Gessesse (2007)). Most of the watershed area, with a high LS-factor and low C-factor has 'high' to 'very severe' soil loss values. The combination of these important factors, along with erosive rainfall, contributed to a non-tolerable soil erosion problem. A similar study in China indicates, that in mountainous watersheds, vegetation cover is the most important factor in reducing soil erosion (Zhou et al. (2008); Tongqian et al. (2009). The China's Grain for Green Program attempted to increase land cover and this significantly reduced sediment yield that implies reduction in soil erosion (Zhou et al. 2015). Bewket and Teferi (2009) also observed soil degradation problems, due to erosion, in cultivated steep lands of the Blue Nile basin in Ethiopia. The soil erosion susceptibility and risk analysis in Morocco by Gaatib and Larabi (2014) agree with this study's results, in that, watershed areas with 'very low' to 'low' slope gradient showed 'low' to 'moderate' soil erosion susceptibility and risk. Using RUSLE in combination with GIS allowed analysis of erosion problems in the Cheleleka wetland watershed. Our findings revealed that the study area is facing forest transformation, which is very likely intensified by the demand for subsistence agricultural land. Most of the parts of this watershed have experienced intensive soil erosion behavior, which is beyond the tolerable soil loss level. This threatens the annual crop production and the productivity of the land impacting the local farmers' food security (Brevik, 2013; Pimentel and Burgess 2013). The erosion may also have off-site consequences in the wetlands and have the possibility to modifying its nature and function (Gleason RA et al. 2003). In addition, the risk of flooding in the middle and lower part of the watershed may become serious if the protective land use/cover quality is compromised. The risk can be aggravated by steep slopes, high rainfall and poor conservation practices. The increased coverage with khat and inset based agroforestry practices can be considered a positive step to minimize the already intensified soil erosion risk in the watershed. This clearly shows there is a need for immediate action and intervention to protect and sustainably use the natural environment. Integrated watershed management could be a process to direct the intervention and encourage local people to participate in the process. Maintaining and improving land cover and practicing site specific soil conservation activities are important in reducing soil erosion and its consequences. The current national watershed management campaign can contribute to the success of this recommendation. Within this watershed, the suggested intervention priority (Table 1) should be considered for management implementation. The Cheleleka wetland watershed The study was undertaken in the Cheleleka wetland watershed, a sub-watershed of Lake Hawassa Catchment that is partially found in the Central Rift Valley of Ethiopia, 263 km south of Addis Ababa. The study area was located in a watershed where a chain of mountains stretch out and occupy a wide flat marshy land called Cheleleka. Geographically, it is located between 38°37ǀ E to 38°42ǀ E and 7°02ǀ N to 7°07ǀ N (Figure 4). It covers an area of about 27068.94 ha with wide altitudinal ranges of 1600 to 2700 meters above sea level. Map of the study site. Within the watershed some remnant natural forests were identified, which are composed of tree species including: Afrocarpus falcatus, Prunus africana, Albizia gummifera, Aningeria adolfi-friederici, Cordia africana, and Croton macrostachus. Plantation woodlots with exotic tree species such as Eucalyptus species, Gravilea robusta, and Cuprusus lustanica were common. At homestead and farm boundaries, agro-forestry activities are commonly practiced. Based on Zerihun (1999), the population density of the area was predicted to be 870 persons per square kilometer in 2013, which is much higher than the regional average population density (141 person/sq. km, CSA (Central Statistical Authority of Ethiopia) (2007)). Among available methods for predicting and quantifying soil erosion risks, RUSLE is widely applied by adapting it to various situations. The Universal Soil Loss Equation (USLE) was initially developed for predicting soil erosion from small areas of cultivated land (Bewket and Teferi (2009)). However, remote sensing and Geographic Information Systems (GIS) are now being integrated into RUSLE, in addition to the soil data, to increase the risk assessment precision. Initially, the watershed was delineated by considering contour lines pattern on topographic maps that have been produced by Ethiopian Mapping Agency. The delineated map was verified using the GIS Digital Elevation Model (DEM). The watershed was classified into seven slope categories, by percent (0–2; 2–5; 5–8; 8–16; 16–30; 30–45; >45), for field data collection, particularly for soil sampling, land use/cover description, and for soil and water conservation support practice analysis. The systematic sampling points were uniformly distributed on Universal Transfer Mercator (UTM) grid points using GIS. These points were located in the field with the aid of Global Positioning System (GPS). The Universal Soil Loss Equation (USLE) technique estimates water-caused, soil erosion risk, for varying climatic, soil, and topographic conditions. Since its development, USLE has been continuously improved to more accurately estimate soil loss and to adapt to more situations in a wider range of geographic areas. The Revised Universal Soil Loss Equation (RUSLE) is widely used and is represented by the following general equation: $$ \mathrm{A}=\mathrm{RKLSCP} $$ Where A refers to soil loss in t ha−1 yr−1, R represents rainfall erosivity, K represents soil erodibility, L represent slope length, S is slope gradient, C is land cover/use management, and P is supporting conservation practices. The equation shows that RUSLE is the product of six factors. In this investigation, RUSLE was integrated with GIS to estimate soil loss by exercising appropriate techniques and procedures to acquire relevant data for each of the component parameters as described below. R-factor Rainfall regime importantly influences soil loss patterns (Fang et al. (2012)). The erosivity (R) factor quantifies the effect of raindrop impacts. The R-factor requires long term rainfall data that records the energy available to erode soil. Various equations have been adapted to compute the R-factor; one option is that the erosion index for this parameter is calculated from the total storm energy multiplied by maximum 30 minute intensity. For areas that lack long term records of rainfall characteristics, other techniques have been developed for various countries. Within and around the study watershed, there is no record for rainfall characteristics /intensity. Thus, the R factor was estimated using rainfall depth data from the meteorological stations within the watershed and those neighboring it. Data from forty years of recorded rainfall at Wondo Genet and four years recorded at the Watera Resa station were available for the study watershed. Meteorological stations at Kofele, Shashemene, and Awassa, which are neighbor to the study watershed, were also considered in this study. Rainfall data recorded at these stations were obtained from the meteorological office. Using mean annual rainfall from these stations, the amount of rainfall between the stations was interpolated using GIS. Thus, the R-factor was calculated based on the following equation (Bewket and Teferi (2009); Shiferaw (2011)), $$ \mathrm{R}=-8.12+0.562\mathrm{P} $$ Where R is rainfall erosivity and P is the mean annual rainfall (mm). This form of the equation for estimating R-factor is particularly used for areas lacking rainfall intensity and kinetic energy records, as is true for many studies performed in Ethiopia. The K-factor The K-factor, or soil erodibility factor, is a critically important component in predicting soil erosion (Oruk et al. (2012); Wang et al. (2013)). According to Wischmeier and Smith (1978), soil organic matter and permeability were integrated into the equation for K. $$ \mathrm{K}=\left[{\mathrm{2.1.10}}^4\ {\mathrm{M}}^{1.14}\left(12-\mathrm{O}\mathrm{M}\right)+3.25\ \left(\mathrm{S}-2\right)+2.5\left(\mathrm{P}-3\right)\right]/759 $$ Where OM is the soil organic matter (percent) and M is a particle size parameter, which is estimated by multiplying the silt percentage (particles of 0.002 to 0.1 mm in size) by 100–% clay. The S and P parameters describe soil structure and permeability respectively. When the available or collected data set lacks any of these soil properties alternative methods can be adapted. To determine soil erodibility (K factor), 76 soil samples were collected and analyzed. Sample plots were identified using a grid method at 1 km intervals both longitudinally and latitudinal (Figure 5). Sample plots were located in the field with the help of GPS. Soil samples were taken at each plot at a depth of 0–20 cm. The samples were air dried at room temperature and analyzed for organic matter utilizing the Wackley-Black Method (Tan (2005)) and for texture, using the hydrometric method (Tan 2005). Using organic matter and soil texture for K-factor estimation was adapted from previous studies (Shiferaw (2011); Pal and Samanta (2011)) as shown in Table 2 below and mapped in Figure 6b. Sampling plots for soil, vegetation description and conservation support practices analysis. Table 2 Determination of K-factor from soil organic matter and texture analysis RUSLE's factor maps of erosion control practices (a), soil erodibility (b), crop management (c), rainfall erosivity (d) and topography (e) . LS-factor Various approaches have been used to estimate the LS (topographic) factor. Previously, USLE was assumed to apply to small areas where measuring the slope gradient and length was possible and practical. Since this study covers an area of 27068.94 ha, it is difficult to measure each slope's length. Therefore, Digital Elevation Model (DEM) and GIS techniques were used to obtain both slope gradient (S) and slope length (L) (Bewket and Teferi (2009); Nekhay et al. (2009)). The LS- factor was calculated based on the following equation (Tessema 2011): $$ \mathrm{L}\mathrm{S}=\left({\uplambda}^{0.3}/22.1\right)*{\left(\mathrm{S}/9\right)}^{1.3} $$ Where λ signifies the flow length and S is slope in percent. Figure 6e indicates map of the LS factor. C-factor The C-factor shows land cover/vegetation management. It is the most important factor in reducing soil erosion activities especially in mountainous watersheds (Zhou et al. 2008; Beskow et al. 2009). The land cover may vary in type including forest, shrub land, agricultural crops, and others. Various techniques have been applied to estimate the C-factor. One option is to compute the Normalized Difference Vegetation Index (NDVI) from satellite images in a GIS environment. This is a widely applied technique to estimate vegetation growth and it was confirmed to produce optimal C factor value (Karaburun 2010). The difference in spectral reflectance between the Near Infrared (NIR) and Red (R) bands are used to calculate NDVI (Arekhi et al. 2012): $$ NDVI=\frac{NIR-R}{NIR+R} $$ The NDVI indicates photosynthetic capacity and when there is healthy vegetation with good photosynthetic ability, it is positive when the land is bare or vegetation is in the dormant period, the NDVI value is negative. In general, the NDVI ranges from −1 to 1. The relationship between the C-factor and NDVI can be established as follows (Arekhi et al. 2012): $$ C=\left(1-\frac{NDVI}{2}\right) $$ Figure 6c indicates map of the NDVI. P-factor During the soil sampling process, the existing land use/support practices around the sample plots were assessed and recorded. The values suggested for the P-factor are presented in Table 3 (Wischmeier and Smith 1978) (Figure 6a). Table 3 P-factor determination Eventually, the risk of soil loss was quantified for the watershed by multiplying the six component parameters, via GIS, and the extent of soil loss per hectare per year is shown. aKebele is the lowest government structure in the country. CSA: Central Statistical Authority DEM: GIS: Global positioing system NDVI: Normalized Difference Vegetation Index NIR: Near Infrared RUSLE: Revised Universal Soil Loss Equation USLE: Universal Soil Loss Equation UTM: Universal Transfer Mercator Admasu A (2005) Study of sediment yield from the Watershed of Angereb reservoir. MSc thesis, Department of Agricultural Engineering, Alemaya University, Ethiopia Arekhi S, Darvishi AB, Shabani A, Fathizad H, Ahamdyasbchin S (2012) Mapping soil erosion and sediment yield susceptibility using RUSLE, remote sensing and GIS (Case study: Cham Gardalan Watershed, Iran). Adv Environ Biol 6(1):109–124 Arghiuş C, Arghiuş V (2011) The quantitative estimation of the soil erosion using USLE type ROMSEM model: Case-study-the Codrului ridge and Piedmont (Romania). Carpathian J Earth Environ Sci 6(2):59–66 Beskow S, Mello CR, Norton LD, Curi N, Viola MR, Avanzi JC (2009) Soil erosion prediction in the Grande River Basin, Brazil using distributed modeling. Catena 79:49–59 Bewket W, Teferi E (2009) Assessment of soil erosion hazard and prioritization for treatment at the watershed level: case study in the Chemoga watershed, Blue Nile basin, Ethiopia. Land Degrad Dev 20:609–622 Brevik EC (2013) The potential impact of climate change on soil properties and processes and corresponding influence on food security. Agriculture 3(3):398–417 Cebecauer T, Hofierka J (2008) The consequences of land-cover changes on soil erosion distribution in Slovakia. Geomorphology 98:187–198 CSA (Central Statistical Authority of Ethiopia) (2007) Report of population and housing census. Addis Ababa, Ethiopia. Fang N, Shi Z, Li L, Guo Z, Liu Q, Ai L (2012) The effects of rainfall regimes and land use changes on runoff and soil loss in a small mountainous watershed. Catena 99:1–8 Gaatib R, Larabi A (2014) Integrated evaluation of soil erosion hazard and risk management in the Oued Beht watershed using remote sensing and GIS techniques: impacts on El Kansra Dam Siltation (Morocco). J Geogr Inf Syst 2014(6):677–689 Gebreyesus BT, Vlek PG, Lulseged T (2013) Application of SWAT model to assess erosion hotspot for sub-catchment management at Mai-Negus catchment in northern Ethiopia. East Afr J Sci Tech 2(2):97–123 Gessesse D (2007) Forest Decline in South Central Ethiopia: Extent, History, and Process. Ph.D. thesis, Stockholm University, Stocholm, Sweden Gitas IZ, Douros K, Minakou C, Silleos GN, Karydas CG (2009) Multi-temporal soil erosion risk assessment. In: Chalkidiki N (ed) Using a modified USLE raster model. EARSeLe Proceedings 8, 1/2009., pp 40–52 Gleason RA E Jr, Hubbard DE, Duffy WG (2003) Effects of sediment load on emergence of aquatic invertebrates and plants from wetland soil egg and seed banks. Wetlands 23(1):26–34 Gupta P, Uniyal S (2012) A case study of Ramgad watershed, Nainital for soil erosion risk assessment using CORINE methodology. Int J Eng Res Technol 1(10):1–7 Haile GW, Fetene M (2012) Assessment of soil erosion hazard in kilie catchment, east shoa, Ethiopia. Land Degrad Dev 23:293–306 Hudson NW (1981) Soil Conservation. Batsford, London Karaburun A (2010) Estimation of C factor for soil erosion modeling using NDVI in Buyukcekmece watershed. Ozean J Appl Sci 3(1):77–85 Kebede W, Tefera M, Alemayehu T, Habtamu T (2014) Impact of land cover change on water quality and stream flow in lake Hawassa watershed of Ethiopia. Agric Sci 5:647–659 Kheir RB, Abdallah C, Runnstrom M, Mart-Ensson U (2008) Designing erosion management plans in Lebanon using remote sensing, GIS and decision-tree modeling. Landsc Urban Plan 88(2–4):54–63 Lee KH, Isenhart TM, Schultz RC (2003) Sediment and nutrient removal in an established multi-species riparian buffer. J Soil Water Conserv 58(1):1–7 Luo Z, Deng L, Yan C (2014) Soil erosion under different plant cover types and its influencing factors in Napahai Catchment, Shangri-La County, Yunnan Province, China, International. J Sustain Dev World Ecol. doi:10.1080/13504509.2014.924448. Morgan RPC (2005) Soil Erosion and Conservation (3rd edn). Blackwell Science, Oxford Nekhay O, Arriaza M, Boerboom L (2009) Evaluation of soil erosion risk using Analytic Network Process and GIS: a case study from Spanish mountain olive plantations. J Environ Manage 90:3091–3104 Oruk EO, Eric NJ, Ogogo AU (2012) Influence of soil textural properties and land use cover type on soil erosion of a characteristic ultisols in Betem, Cross River Sate, Nigeria. J Sustain Dev 5(7):104–110 Pal B, Samanta S (2011) Estimation of soil loss using remote sensing and geographic information system techniques: Case study of Kaliaghai River basin, Purba & Paschim Medinipur District, West Bengal, India. Indian J Sci Technol 4(10):1202–1207 Pimentel D, Burgess M (2013) Soil erosion threatens food production. Agriculture 3(3):443–463 Raissouni A, Khali Issa L, El Arrim A, Maâtouk M, Passalacqua R (2012) GIS-based model to assess erosion sensitivity in Northern Morocco. Laou watershed case study. Int J Geosci 3:610–626 Schiettecatte W, D'hondt L, Cornelis WM, Acosta ML, Leal Z, Lauwers N, Almoza Y, Alonso GR, Díaz J, Ruíz M, Gabriels D (2008) Influence of land use on soil erosion risk in the Cuyaguateje watershed (Cuba). Catena 74:1–12 Sender GC, Zheng T, Heng P, Zhong Y, Barry DA (2011) Sustainable soil and water resources: modeling soil erosion and its impact on the environment. 19th International Congress on Modelling and Simulation, Perth, Australia, 12–16 December 2011. http://mssanz.org.au/modsim2011 Shahbazi F, Jafarzadeh AA, Rosa DD, Anaya-Romero M (2010) Soil erosion assessment and monitoring by using ImpelERO model in east Azerbaijan province, Iran. 19th World Congress of Soil Science, Soil Solutions for a Changing World. 1 – 6 August 2010, Brisbane, Australia Shiferaw A (2011) Estimating soil loss rates for soil conservation planning in the Borena woreda of south Wollo highlands, Ethiopia. J Sustain Dev Afr 13(3):87–106 Starchi S, Freppaz M, Godone D, Zanini E (2013) Assessing the susceptibility of alpine soils to erosion using soil physical and land indicators. Soil Use Manag 29(4):586–596 Tan HK (2005) Soil Sampling, preparation, and analysis. 2nd edition. Taylor & Francis. Tenalem A, Robert B, van Arno L, Yemane G, Dagnachew L, Japheth O (2007) Hydrodynamics of topographically closed lakes in the Ethio-Kenyan Rift: The case of lakes Hawassa and Naivasha. J Spat Hydrol 7(1):81–100 Tessema I (2011) Soil Erosion Risk Assessment with RUSLE and GIS in Dire Dam Watershed. Masters thesis, Environmental Science, Addis Ababa University, Ethiopia Tongqian Z, Bosu Y, Hua Z (2009) Assessment of the erosion control function of forest ecosystems based on GIS: a case study in Zhangjiajie National Forest Park, China. Int J Sustain Dev World Ecol 16(5):356–361 Ustun B (2008) Soil erosion modelling by using GIS & Remote Sensing: a case study, Ganos mountain. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Vol. XXXVII. Part B7, Beijing Wagayehu B, Drake L (2003) Soil and water conservation decision behavior of subsistence farmers in the Eastern Highlands of Ethiopia: a case study of the Hunde-Lafto area. Ecol Econ 46:437–451 Wang B, Zheng F, Römkens M, Darboux F (2013) Soil erodibility for water erosion: a perspective and Chinese experiences. Geomorphology 187:1–10 Wischmeier WH, Smith DD (1978) Predicting Rainfall Erosion Losses: A Guide to Conservation Planning. United States Department of Agriculture, Washington, p 58 Zerihun M (1999) Natural resource competition and Inter ethnic relations in Wondo Genet, South central Ethiopia. M.Sc. Thesis, Addis Ababa University, Addis Ababa Zhao W, Fu B, Qui Y (2013) An upscaling method for cover-management factor and its application in the loess Plateau of China. Int J Environ Res Public Health 10:4752–4766 Zhou P, Luukkanen O, Tokola T, Nieminen J (2008) Effect of vegetation cover on soil erosion in a mountainous watershed. Catena 75:319–325 Zhou ZC, Gan ZT, Shangguan ZP, Dong ZB (2015) China's Grain for Green Program has reduced soil erosion in the upper reaches of the Yangtze River and the middle reaches of the Yellow River. Int J Sustain Dev World Ecol 16(4):234–239 Hawassa University, Wondo Genet College of Forestry and Natural Resources provided financial support for field data collection and soil laboratory analysis. The authors thank Tatum Branaman and Bob Sturtevant for language editing. The authors thank anonymous reviewers for constructive comments. Hawassa University, Wondo Genet College of Forestry and Natural Resources, School of Natural Resources and Environmental Studies, P.O.Box 128, Shashemene, Ethiopia Kebede Wolka, Habitamu Tadesse & Fantaw Yimer Hawassa University, Wondo Genet College of Forestry and Natural Resources, School of Forestry, P.O.Box 128, Shashemene, Ethiopia Efrem Garedew Kebede Wolka Habitamu Tadesse Fantaw Yimer Correspondence to Kebede Wolka. KW: participated in designing the research idea, field data collection, data analysis, interpretation and report writing; HT: participated in field data collection, modeling of the data in GIS; EG: contributed in field data collection, structuring the report; FY: participated in designing the method and structuring the report. All authors read and approved the final manuscript. Kebede Wolka: Assistant professor at Hawassa University, Wondo Genet College of Forestry and Natural Resources. He teaches and undertakes research on soil science, soil erosion, soil and water conservation and watershed management. He has published 8 articles mainly in international journals. Habtamu Tadesse: lecturer at Hawassa University, Wondo Genet College of Forestry and Natural Resources. He teaches and undertakes research on GIS. He has published one article with previous author and other colleagues. Efrem Garedew (PhD): specializes on land use change, land evaluation and livelihood. Fantaw Yimer (PhD): Associate professor in soil science, Hawassa University, Wondo Genet College of Forestry. He has published many articles in international journals. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Wolka, K., Tadesse, H., Garedew, E. et al. Soil erosion risk assessment in the Chaleleka wetland watershed, Central Rift Valley of Ethiopia. Environ Syst Res 4, 5 (2015). https://doi.org/10.1186/s40068-015-0030-5 Land use change
CommonCrawl
arrow_back What is the dot product of $\mathbf{u}$ a column matrix and $\mathbf{v}$ a column matrix? asked in Mathematics Jan 13 8 views What is the dot product of $\mathbf{u}$ a column matrix and $\mathbf{v}$ a column matrix? Share your questions and answers with your friends. Facebook Twitter LinkedIn Reddit Email Vk.com The dot product of $\mathbf{u}$ a column matrix and $\mathbf{v}$ a column matrix is: \mathbf{u} \cdot \mathbf{v}=\mathbf{u}^{T} \mathbf{v}=\mathbf{v}^{T} \mathbf{u} \begin{aligned} &\mathbf{u}^{T} \mathbf{v}=\left[\begin{array}{lll} 1 & -3 & 5 \end{array}\right]\left[\begin{array}{l} 5 \\ \end{array}\right]=-7 \\ &\mathbf{v}^{T} \mathbf{u}=\left[\begin{array}{lll} 5 & 4 & 0 \end{array}\right]\left[\begin{array}{r} -3 \\ \end{array}\right]=-7 \end{aligned} answered Jan 13 Let $\mathbf{u}$ be a vector in $R^{100}$ whose $i$ th component is $i$, and let $\mathbf{v}$ be the vector in $R^{100}$ whose $i$ th component is $1 /(i+1)$. Find the dot product of $\mathbf{u}$ and $\mathbf{v}$. 0 answers 101 views Let $\mathbf{u}$ be a vector in $R^{100}$ whose $i$ th component is $i$, and let $\mathbf{v}$ be the vector in $R^{100}$ whose $i$ th component is $1 /(i+1)$. Find the dot product of $\mathbf{u}$ and $\mathbf{v}$.Let $\mathbf{u}$ be a vector in $R^{100}$ whose $i$ th component is $i$, and let $\mathbf{v}$ be the vector in $R^{100}$ whose $i$ th component is $1 ... Let $a, b$ and $c$ be vectors in $\mathrm{R}^{3}, \times$ denotes the cross product, . the dot product. Which of the following is false? Let $a, b$ and $c$ be vectors in $\mathrm{R}^{3}, \times$ denotes the cross product, . the dot product. Which of the following is false?Let $a, b$ and $c$ be vectors in $\mathrm{R}^{3}, \times$ denotes the cross product, . the dot product. Which of the following is false? A. $-a \time ... What is a column matrix? What is a column matrix?What is a column matrix? ... Find the dot product of $u = <-5,-6,-2>$ and $v = <-3,10,8>$. Are $u$ and $v$ orthogonal? Find the dot product of $u = <-5,-6,-2>$ and $v = <-3,10,8>$. Are $u$ and $v$ orthogonal?Find the dot product of $u = &lt;-5,-6,-2&gt;$ and $v = &lt;-3,10,8&gt;$. Are $u$ and $v$ orthogonal? ... asked in Mathematics Apr 12, 2021
CommonCrawl
Convert a quantum Phase Oracle into a Probability Oracle Asked 1 year, 10 months ago Suppose we have an oracle $O_f$ that given an initial state $|x\rangle$ maps it into the following state: $$ O_f : |x\rangle \mapsto e^{if(x)} |x\rangle $$ Now, assuming that $f(x) \in [0,1]$, is it possible to construct a quantum circuit $O_p$ such that: $$ O_p : |x\rangle \otimes |0\rangle \mapsto |x\rangle \otimes (\sqrt{f(x)} |0\rangle + \sqrt{1-f(x)} |1\rangle) $$ using $O_f$ ? If you can suggest some references, i would appreciate it. Thank you very much. algorithm oracles glS♦ As10_95As10_95 $\begingroup$ It should be possible see exercise 8.3.2 in Kaye, Mosca, and LaFlamme's "An introduction to quantum computing". $\endgroup$ – Condo As a bare minimum, you would need access to a controlled version of your oracle. This cannot be created from the oracle itself (I'm sure there's already an SE question about this part, but cannot immediately lay my hands on it). A typical construction would allow you to create (Hadamard - controlled oracle - Hadamard) would create an output $$ \cos\frac{f(x)}{2}|0\rangle+i\sin\frac{f(x)}{2}|1\rangle, $$ which is obviously not what you're asking for. There might be some simple modifications that let you approximate what you're after. To get what you're actually asking for, I suspect you have to do some quite sophisticated stuff. Essentially, perform phase estimation to estimate the value of $f(x)$ onto a second register, and use that register as a control to produce the state you want, with an accuracy defined by the size of the register. $\begingroup$ Is it clear that you cannot do it without a controlled version of the oracle? $\endgroup$ – smapers $\begingroup$ @smapers The problem is that if it acts on $|x\rangle$, it's only ever a global phase, which cannot have any observable influence. You need to be able to access it in superposition, which means a controlled-oracle with the control in superposition (since the target has to be the state $|x\rangle$). $\endgroup$ – DaftWullie $\begingroup$ Thanks, makes sense! $\endgroup$ $\begingroup$ @DaftWullie, first of all, thank you very much for your answer. The whole point of this question is because i want to avoid phase estimation, due to the extensive use of powers of the control unitaries C-U ,fourier transform and the probabilistic nature of the algorithm. I think that the controlled version of the oracle is not a problem, however producing $\sqrt{f(x)}$ is quite challenging, if possible at all. $\endgroup$ – As10_95 $\begingroup$ Well, my intuition, at least, suggests that you can't. What you can reasonably get at is some linear function of $e^{if(x)}$. The amplitudes you're asking for require a high-order expansion to get any reasonable accuracy, so you'd need to extensively use high powers of c-U. $\endgroup$ Given controlled access to the phase oracle, this is possible with surprisingly small overhead by avoiding phase estimation altogether. The technique you are after relies on applying "quantum singular value transformations" to objects that are known as "block encodings", and it was invented by Gilyén et al. in 2018. The idea was originally introduced in this paper, Appendix B, which builds on techniques from this paper. Alternatively, you can have a look at this master's thesis, Circuit 6.2.5. It appears that there is a slight error in the statement of the latter reference, as the action of $Q_f$ in the box referred to should actually be: $$Q_f : |x\rangle \otimes |0\rangle^{\otimes 3} \mapsto |x\rangle \otimes \left(\sqrt{\frac12 + \frac14f(x)}|0\rangle^{\otimes 3} + \sqrt{\frac12 - \frac14f(x)}|\psi(x)\rangle|1\rangle\right).$$ Similarly, the action of $Q_2$ further down in the box should be: $$Q_2 : |0\rangle^{\otimes 3} \otimes |x\rangle \mapsto \left(\frac12\sqrt{\frac12 + \frac14f(x)}|0\rangle^{\otimes 3} + \sqrt{\frac78 - \frac{1}{16}f(x)}|1\rangle|\phi(x)\rangle\right)|x\rangle.$$ All the rest should be correct as stated. Keep in mind that the operation you wish to implement, i.e., the probability oracle of $f$, makes little sense whenever $f$ takes negative values. Moreover, the square root that appears in the probability oracle behaves erratically close to $0$, so it makes sense to assume that the function values of $f$ are bounded away from $0$. Gilyén et al. overcome this by assuming that the values of $f$ are contained in $(\delta,1-\delta)$. The latter reference essentially does the same thing, but overcomes it by assuming that $|f| \leq 1/2$ and implementing the probability oracle of $\frac12 + \frac14f(x)$. As a final remark, note that the conversion you are after up to norm error $\varepsilon$ takes $O(\log(1/\varepsilon)^2)$ queries to the phase oracle, which is surprisingly little compared to the number of queries $O(1/\varepsilon)$ you would need if you used phase estimation as an intermediate step. An explanation can be given along the following lines: phase estimation gives you a binary representation of the function value $f(x)$, which you subsequently postprocess to implement the probability oracle. This is a difficult task, as it requires learning the value of $f(x)$ in the process (as you could measure after phase estimation to get a binary value of $f(x)$). The new techniques circumvent writing down such a binary representation of $f(x)$. This is why I like to call the new technique an instance of analog computation, and I refer to subroutines that give you binary representations, like phase estimation, as instances of digital computation. arriopolisarriopolis Not the answer you're looking for? Browse other questions tagged algorithm oracles or ask your own question. Preparing a quantum state from a classical probability distribution Does the oracle in Grover's algorithm need to contain information about the entirety of the database? Bernstein–Vazirani problem in book as exercise Grover's algorithm – DES circuit as oracle? Hamming with prefix oracle How to transform e.g., a 4 by 4 unitary matrix into a quantum circuit Speed up in Bernstein-Vazirani algorithm and Gottesman-Knill theorem What is the point of Grover's algorithm, really?
CommonCrawl
depth first search time complexity , {\displaystyle N(v)} [12]:189, A depth-first search ordering (not necessarily the lexicographic one), can be computed by a randomized parallel algorithm in the complexity class RNC. − Perhaps the following simpler example will make this clear: In each execution of A, line 1 of B is executed $n$ times, and B itself is executed $n$ times. ν For general graphs, replacing the stack of the iterative depth-first search implementation with a queue would also produce a breadth-first search algorithm, although a somewhat nonstandard one. O ≤ {\displaystyle \sigma } ; If the graph is represented as adjacency list:. To avoid processing a node more than once, use a boolean visited array. … Here's some pseudo-code to analyze. When did PicklistEntry label become null? Our mission is to provide a free, world-class education to anyone, anywhere. Sometimes tree edges, edges which belong to the spanning tree itself, are classified separately from forward edges. 1 Unlike BFS, a DFS algorithm traverses a tree or graph from the parent vertex down to its children and grandchildren vertices in … For example, analyzing networks, mapping routes, and scheduling are graph problems. And then it concluded that the total complexity of DFS() is O(V + E). Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data structures. In Depth Limited Search, we first set a constraint on how deep (or how far from root) will we go. The algorithm starts at the root (top) node of a tree and goes as far as it can down a given branch (path), then backtracks until it finds an unexplored path, and then explores it. v ) Thus the possible preorderings are A B D C and A C D B, while the possible postorderings are D B C A and D C B A, and the possible reverse postorderings are A C B D and A B C D. Reverse postordering produces a topological sorting of any directed acyclic graph. Depth-First Search. … Performing the same search without remembering previously visited nodes results in visiting nodes in the order A, B, D, F, E, A, B, D, F, E, etc. The time complexity of the algorithm is given by O (n*logn). v Breadth first search (BFS) algorithm also starts at the root of the Tree (or some arbitrary node of a graph), but unlike DFS it explores the neighbor nodes first, before moving to the next level neighbors. The depth-first algorithm is attributed to Charles Pierre Tremaux, a 19th century French mathematician. Note that it visits the not visited vertex. i As Depth Limited Search (DLS) is important for IDDFS, let us take time to understand it first. Now we'll modify it to perform breadth-first search (BFS). The time and space analysis of DFS differs according to its application area. is said to be a DFS ordering (with source Let me direct you towards our, Could you give the exact quote of the text you don't understand? It is not currently accepting answers. Computer Science Stack Exchange is a question and answer site for students, researchers and practitioners of computer science. Introduction to Depth Limited Search. j For most algorithms boolean classification unvisited / visitedis quite enough, but we show general case here. Next lesson. ,[4] linear in the size of the graph. The vertex set of G is denoted V(G),or just Vif there is no ambiguity. Further learning. This algorithm will traverse the shortest path first in the queue. 1 , there exists a neighbor Performing DFS upto a certain allowed depth is called Depth Limited Search (DLS). Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. , ) V forever, caught in the A, B, D, F, E cycle and never reaching C or G. Iterative deepening is one technique to avoid this infinite loop and would reach all nodes. When an appropriate depth limit is not known a priori, iterative deepening depth-first search applies DFS repeatedly with a sequence of increasing limits. ( 0 A graph G consists of two types of elements:vertices and edges.Each edge has two endpoints, which belong to the vertex set.We say that the edge connects(or joins) these two vertices. For such applications, DFS also lends itself much better to heuristic methods for choosing a likely-looking branch. A convenient description of a depth-first search of a graph is in terms of a spanning tree of the vertices reached during the search. , if such a {\displaystyle O=(v_{1},\dots ,v_{n})} O v How does the title "Revenge of the Sith" suit the plot? {\displaystyle v_{i}} The two variants of Best First Search are Greedy Best First Search and A* Best First Search. When we reach the dead-end, we step back one vertex and visit the other vertex if it exists. Can Depth-first search (DFS) with alphabetical traversal of neighbors be run in O(|V|+|E|) time? We didn't mention it at the time, but reachable_nodes performs a depth-first search (DFS). This ordering is also useful in control flow analysis as it often represents a natural linearization of the control flows. (),: 5 where is the branching factor and is the depth of the goal. Yuval sir@ i have got a doubt.In your provided Example algorithm, what if $A$ is called $n$ times which implies $O(n^{2})$.sir please help me out ! Your question is a very basic one. Depth-first search (DFS) is an algorithm that visits all edges in a graph G that belong to the same connected component as a vertex v. Algorithm DFS(G, v) if v is already visited return Mark v as visited. < v v The algorithm starts at the root (top) node of a tree and goes as far as it can down a given branch (path), then backtracks until it finds an unexplored path, and then explores it. is a vertex based technique for finding a shortest path in graph.. {\displaystyle V} The unbounded tree problem happens to appear in the depth-first search algorithm, and it can be fixed by imposing a boundary or a limit to the depth of the search domain. Each possible solution is called a node. Depth First Search (DFS) The DFS algorithm is a recursive algorithm that uses the idea of backtracking. Time complexity of Depth First Search [closed], MAINTENANCE WARNING: Possible downtime early morning Dec 2, 4, and 9 UTC…, "Question closed" notifications experiment results and graduation, Algorithm that finds the number of simple paths from $s$ to $t$ in $G$, Time Complexity for Creating a Graph from a File. Based on this spanning tree, the edges of the original graph can be divided into three classes: forward edges, which point from a node of the tree to one of its descendants, back edges, which point from a node to one of its ancestors, and cross edges, which do neither. He assumes you are familiar with the idea. is a DFS ordering if, for all v ∖ Here is another example, in which an array $T[1\ldots n]$ is involved. w ∖ Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. Performing DFS upto a certain allowed depth is called Depth Limited Search (DLS). By using our site, you acknowledge that you have read and understand our Cookie Policy, Privacy Policy, and our Terms of Service. as a binary tree. … is a vertex based technique for finding a shortest path in graph.. {\displaystyle \sigma } Adrian Sampson shows how to develop depth-first search (dfs) and breadth-first search (bfs). It uses a Queue data structure which follows first in first out. The book is counting the number of times each line is executed throughout the entire execution of a call of DFS, rather than the number of times it is executed in each call of the subroutine DFS-VISIT. V Note that repeat visits in the form of backtracking to a node, to check if it has still unvisited neighbors, are included here (even if it is found to have none). < v Let's define N as the total number of nodes. in the worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices. | {\displaystyle 0} The Greedy BFS algorithm selects the path which appears to be the best, it can be known as the combination of depth-first search and breadth-first search. ) = … Depth limited search is the new search algorithm for uninformed search. Time Complexity The time complexity of both DFS and BFS traversal is O(N + M) where N is number of vertices and M is number of edges in the graph. V {\displaystyle V} 1 In theoretical computer science, DFS is typically used to traverse an entire graph, and takes time $${\displaystyle O(|V|+|E|)}$$, linear in the size of the graph. site design / logo © 2020 Stack Exchange Inc; user contributions licensed under cc by-sa. E {\displaystyle v_{m}} v ( In the artificial intelligence mode of analysis, with a branching factor greater than one, iterative deepening increases the running time by only a constant factor over the case in which the correct depth limit is known due to the geometric growth of the number of nodes per level. σ be a list of distinct elements of It involves exhaustive searches of all the nodes by going ahead, if possible, else by backtracking. v I added another example which is more similar to what happens in DFS. Depth First search (DFS) is an algorithm for traversing or searching tree or graph data structures. < of An edge between vertices u and v is written as {u, v}.The edge set of G is denoted E(G),or just Eif there is no ambiguity. be the ordering computed by the standard recursive DFS algorithm. In general, the time complexity of a depth-first search to depth d is O(ed). Running a $\Theta(f(n))$ procedure $g(n)$ times takes time $\Theta(f(n)g(n))$. [7], Another possible implementation of iterative depth-first search uses a stack of iterators of the list of neighbors of a node, instead of a stack of nodes. In these applications it also uses space n NB. | The algorithm does this until the entire graph has been explored. DFS Time Complexity- The total running time for Depth First Search is θ (V+E). Let us understand DLS, by performing DLS on the above example. ) Finding 2-(edge or vertex)-connected components. DFS is a search algorithm to traverse through a tree. Is it possible to compute time complexity of Depth First Search (recursive version) which is O(E+V) using a recurrence relation? , A version of depth-first search was investigated in the 19th century by French mathematician Charles Pierre Trémaux[1] as a strategy for solving mazes.[2][3]. < If a node is asolution to the problem, then it is called a goalnode. w σ ) How can a hard drive provide a host device with file/directory listings when the drive isn't spinning? , saving the first and second 2 minutes of a wmv video in Ubuntu Terminal, Why does C9 sound so good resolving to D major 7. … V ≤ Note that depth-limited search does not explore the entire graph, but just the part … is the set of neighbors of ) ) , The time complexity of an algorithm is an estimate, how fast it works depending on the size of input data. i I'll show the actual algorithm below. ν How easy it is to actually track another person credit card? {\displaystyle v_{i}} ( < 1 Tree Edge- A tree edge is an edge that is included in the DFS tree. This is because the program has never ended when re-visiting. Another drawback, however, to depth-first search is … = is it possible to determine using a single depth-first search, in O(V+E) time, whether a directed graph is singly connected? , let The disadvantage of this algorithm is that it requires a lot of memory space because it has to store each level of nodes for the next one. In BFS, one vertex is selected at a time when it is visited and marked then its adjacent are visited and stored in … rev 2020.11.30.38081, The best answers are voted up and rise to the top, Computer Science Stack Exchange works best with JavaScript enabled, Start here for a quick overview of the site, Detailed answers to any questions you might have, Discuss the workings and policies of this site, Learn more about Stack Overflow the company, Learn more about hiring developers or posting ads with us, Welcome to Computer Science! {\displaystyle v_{1}} v − Removing an experience because of a company's fraud. Types of Edges in DFS- After a DFS traversal of any graph G, all its edges can be put in one of the following 4 classes- Tree Edge; Back Edge; Forward Edge; Cross Edge . The procedure COUNT counts the number of 1s in the input array T. Even though ADVANCE could be called up to $n$ times by COUNT and the worst-case running time of ADVANCE is $O(n)$, lines 1–2 of ADVANCE run at most $n$ times, and so the overall running time is $O(n)$ rather than $O(n^2)$. For example, when searching the directed graph below beginning at node A, the sequence of traversals is either A B D B A C A or A C D C A B A (choosing to first visit B or C from A is up to the algorithm). 1 It … n be the greatest {\displaystyle \sigma =(v_{1},\dots ,v_{n})} E i Both algorithms are used to traverse a graph, "visiting" each of its nodes in an orderly fashion. Challenge: Implement breadth-first search. i i {\displaystyle i} 7. Stack Exchange Network Stack Exchange network consists of 176 Q&A communities including Stack Overflow , the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. [13] As of 1997, it remained unknown whether a depth-first traversal could be constructed by a deterministic parallel algorithm, in the complexity class NC. is maximal. Since the space used by depth-first search grows only as the log of the time required, the algorithm is time-bound rather than space-bound in practice. ∖ is the vertex Algorithmic Complexity Big-O However, incomplete DFS, similarly to incomplete BFS, is biased towards nodes of high degree. k v How to effectively defeat an alien "infection". j Understanding Depth First Search. ( m Variants of Best First Search. N such that It only takes a minute to sign up. Depth first search (DFS) is an algorithm for traversing or searching tree or graph data structures. {\displaystyle G=(V,E)} otherwise. Can Spiritomb be encountered without a Nintendo Online account? The time complexity for breadth first search is b d where b (branching factor) is the average number of child nodes for any given node and d is depth. 3 $\begingroup$ Closed. , {\displaystyle w\in V\setminus \{v_{1},\dots ,v_{i-1}\}} The unbounded tree problem happens to appear in the depth-first search algorithm, and it can be fixed by imposing a boundary or a limit to the depth of the search domain. An enumeration of the vertices of a graph is said to be a DFS ordering if it is the possible output of the application of DFS to this graph. m {\displaystyle n} ( One starts at the root (selecting some arbitrary node as the root in the case of a graph) and explores as far as possible along each branch before backtracking. In such cases, search is only performed to a limited depth; due to limited resources, such as memory or disk space, one typically does not use data structures to keep track of the set of all previously visited vertices. 1 i In these applications it also uses space $${\displaystyle O(|V|)}$$ in the worst case to store the stack of vertices on the current search path as well as the set of already-visited vertices. O CLRS Exercise 24.3-4 - Confirm Output of a Program Claiming to Implement Dijkstra's Algorithm, Time complexity of removing a vertex from a graph represented as collection of adjacency lists, How to migrate data from MacBook Pro to new iPad Air. [14], Cormen, Thomas H., Charles E. Leiserson, and Ronald L. Rivest. v a depth-first search starting at A, assuming that the left edges in the shown graph are chosen before right edges, and assuming the search remembers previously visited nodes and will not repeat them (since this is a small graph), will visit the nodes in the following order: A, B, D, F, E, C, G. The edges traversed in this search form a Trémaux tree, a structure with important applications in graph theory. Basically, it repeatedly visits the neighbor of the given vertex. For applications of DFS in relation to specific domains, such as searching for solutions in artificial intelligence or web-crawling, the graph to be traversed is often either too large to visit in its entirety or infinite (DFS may suffer from non-termination). . V If a person is dressed up as non-human, and is killed by someone who sincerely believes the victim was not human, who is responsible? V Complexity Analysis of Depth First Search Time Complexity. ) Sort by: Top Voted. 1. The recursive implementation will visit the nodes from the example graph in the following order: A, B, D, F, E, C, G. The non-recursive implementation will visit the nodes as: A, E, F, B, D, C, G. The non-recursive implementation is similar to breadth-first search but differs from it in two ways: If G is a tree, replacing the queue of the breadth-first search algorithm with a stack will yield a depth-first search algorithm. Viewed 25k times 7. The enumeration ∈ Reading time: 15 minutes | Coding time: 5 minutes. G Different topologically sorted order based on DFS vertex ordering. Note that priority queue is implemented using Min(or Max) Heap, and insert and remove operations take O(log n) time. , Search algorithms Depth-first search. Depth First Traversal (or Search) for a graph is similar to Depth First Traversal of a tree.The only catch here is, unlike trees, graphs may contain cycles, a node may be visited twice. , He also figures out the time complexity of these algorithms. = | V ( Depth limited search is the new search algorithm for uninformed search. Depth First search (DFS) is an algorithm for traversing or searching tree or graph data structures. The Depth first search (DFS) algorithm starts at the root of the Tree (or some arbitrary node for a graph) and explores as far as possible along each branch before backtracking. John Reif considered the complexity of computing the lexicographic depth-first search ordering, given a graph and a source. σ , let v Depth-first search (DFS) is an algorithm for searching a graph or tree data structure. { 1 worst case time complexity for Best First Search is O(n * Log n) where n is number of nodes. In worst case, we may have to visit all nodes before we reach goal. This page talks about the time complexity (there is space complexity too - please look yourself).. As for graphs - their size is usually described by two numbers - number of vertices $|V|$ and number of edges $|E|$. When search is performed to a limited depth, the time is still linear in terms of the number of expanded vertices and edges (although this number is not the same as the size of the entire graph because some vertices may be searched more than once and others not at all) but the space complexity of this variant of DFS is only proportional to the depth limit, and as a result, is much smaller than the space needed for searching to the same depth using breadth-first search. This is the currently selected item. {\displaystyle v_{i}\in N(v_{k})\setminus N(v_{j})} {\displaystyle 1 Qsc Kw181 Deep Mode, Simplified World Map Vector, Justin's Peanut Butter Packets, Average Rent In Simi Valley, Ca, Edexcel As Pure Maths Past Papers, Tiger Initiative Website, Chinese Long Bean Recipe, Msi Gf75 Thin 9sc 278 Manual, Properties Of Lime Pdf, depth first search time complexity 2020
CommonCrawl
Rigged configurations Rigged configurations were introduced by Kirillov and Reshetikhin [KR88]. One of the main applications is to compute Kostka–Foulkes polynomials in an efficient manner. We shall now introduce the terminology to define these objects, by following the introduction in [DLT94]. Let $\lambda$ and $\mu$ be partitions of $n.$ A matrix of type $(\lambda,\mu)$ is a matrix $M=(m_{ij})$ with a finite number of non-zero in $\setZ,$ such that \[ \sum_{j\geq 1} m_{ij} = \lambda_j \qquad \sum_{i\geq 1} m_{ij} = \mu'_j. \] We also associate matrices $P$ and $Q$ with $M,$ via \[ P_{ij} = \sum_{k \leq j } \left( m_{i,k} - m_{i+1,k} \right) \qquad Q_{ij} = \sum_{k \geq i+1 } \left( m_{k,j} - m_{k,j+1} \right). \] The matrix $M$ is admissible if and only if the entries in $P$ and $Q$ are non-negative. The numbers $P_{ij}$ for which $Q_{ij}\gt 0$ are called the vacancy numbers of $M.$ An admissible matrix can be represented via a sequence of partitions, $\nuvec = (\nu^0,\nu^1,\dotsc),$ such that \[ (\nu^i)'_j = \sum_{k \geq i+1} m_{kj}. \] Note that $\nu^0 = \mu$ and that $\lambda_i = |\nu^i|-|\nu^{i-1}|$ for $i \geq 1.$ Each entry $Q_{ij}$ gives the multiplicity of occurrence of the value $j$ as a part of the partition $\nu^i.$ A rigging of a configuration is defined as follows. For each $i,$ $j,$ an integer partition $I^i_j$ is assigned to each $j$ which appear as a part of $\nu^i,$ in such a way that the largest part of $I^i_j$ does not exceed $P_{ij},$ and the length of $I^i_j$ does not exceed $Q_{ij},$ which is also the number of parts of size $j$ in $\nu^i.$ The parts in the partitions $I^i_j$ are traditionally written in the leftmost box of the part in the Young diagram associated with $\nu^i.$ Example (Rigged configuration). Consider the following matrix $M,$ and the associated $P$ and $Q.$ The vacancy numbers in $P$ have been marked in bold. \[ M= \begin{pmatrix} 3 & 2 & 1 & -1 & 1 & 1 \\ 2 & 1 & 1 & 1 & 0 & 1 \\ 2 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 \end{pmatrix} \] \[ P= \begin{pmatrix} \mathbf{1} & 2 & \mathbf{2} & \mathbf{0} & 1 & \mathbf{1} \\ \mathbf{0} & 0 & \mathbf{0} & \mathbf{0} & \mathbf{0} & 1 \\ 1 & 1 & \mathbf{1} & 1 & \mathbf{0} & 0 \\ 0 & 0 & \mathbf{0} & 1 & 2 & 2 \\ 1 & 2 & 3 & 3 & 3 & 3 \\ \end{pmatrix} \; Q= \begin{pmatrix} 2 & 0 & 1 & 2 & 0 & 1 \\ 1 & 0 & 1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \end{pmatrix} \] We have that the type $(\lambda,\mu)$ of $M$ is $(7 6 5 5 3, 6 6 3 3 3 2 1 1 1)$ and the associated $(\nu^i)_{i=0}^4$ is given by \[ \nu^0= 663332111,\quad \nu^1 = 6 4 4 3 1 1 ,\quad \nu^2 = 5 4 3 1 ,\quad \nu^3= 53 ,\quad \nu^4 = 3. \] A rigged configuration for this $\nuvec$ is given by the following decorated sequence of Young diagrams. $\,$ $ 1$ $ 1$ $ 0$ $ 1\,$ $ 1$ $ 0\,$ The vacancy numbers have been written to the right of the parts (parts of the same size have the same vacancy). The shaded boxes is an example of a rigged configuration. Each such entry must not exceed the vacancy of that part. Furthermore, in diagram $i,$ for sequence of block of parts of the same size $j,$ the numbers in the shaded boxes are weakly decreasing, so that they form an integer partition $I^i_j.$ The pair $(\nuvec, I)$ is called a rigged configuration. The set of rigged configurations of type $(\lambda,\mu)$ are in bijection with $\SSYT(\lambda,\mu).$ Given a configuration $\nuvec,$ there are $\prod_{i,j \geq 1} \binom{P_{ij} + Q_{ij}}{Q_{ij}}$ different rigged configurations. The charge of a configuration $\nuvec$ is \[ \charge(\nuvec) = \sum_{i,j} \binom{ m_{ij} }{2}, \] and the charge of a rigged configuration is defined as \[ \charge(\nuvec,I) = \charge(\nuvec) + \sum_{i,j} I^i_j. \] Theorem (See [DLT94]). The Kostka–Foulkes polynomial $K_{\lambda\mu}(q)$ is equal to \[ K_{\lambda\mu}(q) = \sum_{\nu} \prod_{i,j \geq 1} q^{\charge(\nu)} \qbinom{P_{ij} + Q_{ij}}{Q_{ij}}_q. \] [DLT94] Jacques Désarménien, Bernard Leclerc and Jean-Yves Thibon. Hall-Littlewood functions and Kostka–Foulkes polynomials in representation theory. Séminaire Lotharingien de Combinatoire [electronic only], 32:38, 1994. [KR88] A. N. Kirillov and N. Yu. Reshetikhin. The Bethe ansatz and the combinatorics of Young tableaux. Journal of Mathematical Sciences, 41:925–955, 1988.
CommonCrawl
Dynamical systems with cross-sections by Dean A. Neumann PDF The problem of classifying dynamical systems (flows) with global cross-sections in terms of the associated diffeomorphisms of the cross-sections is considered. Suppose that, for $i = 1,2,{\phi _i}$ is a ${C^r}$ flow $(r \geqslant 0)$ on the ${C^r}$ manifold ${M_i}$ that admits a global cross-section ${S_i} \subseteq {M_i}$ with associated diffeomorphism ('first return map') ${d_i}$. If rank $({H_1}({M_1};{\mathbf {Z}})) = 1$, then $({M_1},{\phi _1})$ is ${C^s}$ equivalent $(s \leqslant r)$ to $({M_2},{\phi _2})$ if and only if ${d_1}$ is ${C^s}$ conjugate to ${d_2}$. If rank $({H_1}({M_1};{\mathbf {Z}})) \ne 1$ and ${\phi _1}$ has a periodic orbit, then there are infinitely many global cross-sections ${T_i} \subseteq {M_1}$ of ${\phi _1}$, such that the associated diffeomorphisms are pairwise nonconjugate. Ralph Abraham and Joel Robbin, Transversal mappings and flows, W. A. Benjamin, Inc., New York-Amsterdam, 1967. An appendix by Al Kelley. MR 0240836 W. Browder and J. Levine, Fibering manifolds over a circle, Comment. Math. Helv. 40 (1966), 153–160. MR 195104, DOI 10.1007/BF02564368 F. B. Fuller, The existence of periodic points, Ann. of Math. (2) 57 (1953), 229–230. MR 52764, DOI 10.2307/1969856 Sze-tsen Hu, On singular homology in differentiable spaces, Ann. of Math. (2) 50 (1949), 266–269. MR 30198, DOI 10.2307/1969451 Gikô Ikegami, On classification of dynamical systems with cross-sections, Osaka Math. J. 6 (1969), 419–433. MR 266224 Gikô Ikegami, Flow equivalence of diffeomorphisms. I, II, Osaka J. Math. 8 (1971), 49–69; ibid. 8 (1971), 71–76. MR 287577 William S. Massey, Algebraic topology: An introduction, Harcourt, Brace & World, Inc., New York, 1967. MR 0211390 Ivan Kupka, Contribution à la théorie des champs génériques, Contributions to Differential Equations 2 (1963), 457–484 (French). MR 165536 Retrieve articles in Proceedings of the American Mathematical Society with MSC: 58F99, 57D50 Retrieve articles in all journals with MSC: 58F99, 57D50 MSC: Primary 58F99; Secondary 57D50
CommonCrawl
Cost of illness of ischemic heart disease in Japan: a time trend and future projections Toshiharu Gochi1, Kunichika Matsumoto1,2, Rebeka Amin1, Takefumi Kitazawa2, Kanako Seto2 & Tomonori Hasegawa1,2 Ischemic heart disease (IHD/ICD10: I20-I25) is the second leading cause of deaths in Japan and accounts for 40% of deaths due to heart diseases. This study aimed to calculate the economic burden of IHD using the cost of illness (COI) method and to identify key factors that drive the change of the economic burden of IHD. We calculated the cost of illness (COI) every 3 years from 1996 to 2014 using governmental statistics. We then predicted the COI for every 3 years starting from 2017 up to 2029 using the fixed and variable model estimations. Only the estimated future population was used as a variable in the fixed model estimation. By contrast, variable model estimation considered the time trend of health-related indicators over the past 18 years. We derived the COI from the sum of direct and indirect costs (morbidity and mortality). The past estimation of COI slightly increased from 1493.8 billion yen in 1996 to 1708.3 billion yen in 2014. Future forecasts indicated that it would decrease from 1619.0 billion yen in 2017 to 1220.5 billion yen in 2029. The past estimation showed that the COI of IHD increased; in the mixed model, the COI was predicted to decrease with the continuing trend of health-related indicators. The COI of IHD in the future projection showed that, although the average age of death increased by social aging, the influence of the number of deaths and mortality cost decreased. Ischemic heart disease (IHD/ICD10: I20-I25) is the second leading cause of death in Japan and accounts for 40% of deaths due to heart diseases. Recently, the number of deaths (NDy) by IHD has been almost stable; however, the average age of death has increased [1]. In 2014, 73,862 individuals (men 41,902; women 31,960) died because of IHD in Japan. The influence of the social burden in IHD is expected to change with aging of the Japanese population, because IHD has a high mortality rate in the older age group. The objectives of this study were to estimate the social burden of IHD using the cost of illness (COI) method and to predict future changes. The COI method has been widely used since the 1960s after being introduced by Rice et al. as a method that can economically evaluate disease burden [2,3,4,5,6,7,8,9,10,11,12]. As the disease was already studied by another group, the results of which were similar to those evaluated above, the calculation method used in this study is similar to that used in our previous studies [13,14,15,16,17,18,19]. We estimated the economic burden of major forms of cancer from 1996 and projected the future economic burden [13,14,15,16,17]. These analyses showed that social aging and increase in the average age of death had an impact on the decreased economic burden by devaluing human capital. However, there were no previous studies concerning the economic burden of IHD, despite it being a significant cause of deaths. This study estimated the economic burden of IHD on time trends and determined the effects of change and the aging of the burden that IHD poses on society by surveying the change predicted in the future. This COI study used a top–down method to estimate resource consumption using governmental aggregated data. COI was estimated using the incidence-based approach. The calculation method used in this study was the same as that in our previous studies investigating COI [13,14,15,16,17,18,19]. COI was estimated as the sum of direct cost (DC) and indirect cost (IC), with IC divided into morbidity cost (MbC) and mortality cost (MtC) as follows: $$ \mathrm{COI}=\mathrm{DC}+\mathrm{MbC}+\mathrm{MtC}. $$ DC is a medical cost directly related to the disease and includes costs associated with treatment, hospitalization, laboratory investigations, and drugs. Furthermore, DC comprises hospitalization cost (HC) and outpatient cost (OC). DC was calculated using the following equation: $$ \mathrm{DC}=\mathrm{HC}+\mathrm{OC}=\mathrm{iCd}\times \mathrm{THD}+\mathrm{oCd}\times \mathrm{TOVy}. $$ HC was determined by multiplying the inpatient cost per day (iCd) with the total person-days of hospitalization (THD). OC was determined by multiplying the OC per day (oCd) with the total person-days of outpatient visits (TOVy). Here, we calculated the annual medical costs from the total medical expenses using the "Survey of National Medical Care Insurance Services" [20]. MbC is the opportunity cost lost resulting from hospitalization and visit to hospitals. We calculated MbC using the following equation: $$ \mathrm{MbC}=\mathrm{TOVy}\times \mathrm{LVd}/2+\mathrm{THD}\times \mathrm{LVd}. $$ LVd is the 1-day labor value per person. We calculated TOVy and THD in 5-year age groups based on the "Patient Survey" [21] conducted every 3 years by the Japanese government. We determined the labor values in the 5-year age groups based on data from "Basic Survey on Wage Structure" [22], "Labor Force Survey" [23], "Estimates of Monetary Valuation of Unpaid Work" and "Evaluations of Domestic Labor" [24]. We determined MbC by assuming a 1-day labor value loss per day at the hospital and a half-day labor value loss per outpatient visit. We calculated LVd and THD as follows: $$ \mathrm{LVd}=\left(\mathrm{Iy}+\mathrm{ULVy}\right)/365 $$ $$ \mathrm{THD}=\mathrm{HPy}\times \mathrm{ALOS} $$ where Iy is the annual income per person, ULVy is the annual monetary valuation of unpaid work per person, HPy is the annual number of hospitalized patients, and ALOS is the average length of hospital stay. MtC is measured as the loss of human capital (human capital method), which was calculated using the following equation: $$ \mathrm{MtC}=\mathrm{NDy}\times \mathrm{LVl} $$ where NDy is the number of deaths and LVl is the lifetime labor value per person. We calculated NDy of IHD for each 5-year age group based on "Vital Statistics of the Ministry of Health, Labor and Welfare" [1]. We calculated LVl by summing the present value of the potential future income should the patient survives. By estimating and projecting the COI values, the rate of contribution of DC, MbC, and MtC to the overall COI variation was calculated as follows: $$ \frac{{\mathrm{Cost}}_t^i-{\mathrm{Cost}}_0^i}{{\mathrm{COI}}_t-{\mathrm{COI}}_0} $$ \( {\mathrm{Cost}}_t^i \): DC, MbC, and MtC costs at year t \( {\mathrm{Cost}}_0^i \): DC, MbC, and MtC costs at the baseline year COI t : COI at year t COI0: COI at the baseline year With regard to the potential future labor value, we conducted a sensitivity analysis for the discount rate. Thus, the base case discount rate was 3%, and our analyses included a discount rate of 0–5%. We used data from "Population Estimates" [25] published by the Ministry of Internal Affairs and Communications for 1996–2014. The data for 2017–2029 were drawn from the "Population Statistics of Japan" [26] published by the National Institute of Population and Social Security Research. We used the projection based on medium fertility and medium mortality assumptions. The year 2014 was selected as the benchmark for the 1-day labor value by sex and 5-year age groups. Two methods were used for the future projection of MbC and MtC: the "fixed" and "variable" methods. We estimated the rate of change of four health-related indicators over the past 18 years, namely, the mortality rate, per capita outpatient visits, per capita outpatient hospitalizations, and ALOS. The "fixed" method fixed the health-related indicators of each age group at the 2014 level and changed only the future population and age structure. The "variable" method accounted for changes in health-related indicators in addition to population and age structure. Regarding variable model estimation, we estimated linear (linear model) and logarithmic regression for upward trends or exponential regression for downward trends (exponential/logarithmic model) by changing health-related indicators. We then assessed the mixed model estimation by adopting a higher approximation of the decision coefficient every 5 years. Theoretically, the mixed model was the most valid, and the fixed model could be considered the reference. With regard to ALOS, we adopted the value of 2.7 days from Norway, which was obtained from OECD Health Data 2014 (Statistics and Indicators), because it was the lowest value among all 34 OECD countries (2014) for IHD. This "2.7 days" was applied for age groups that were calculated to be < 2.7 days from the estimated value of 2017–2029. The study protocol was approved by the Ethics Committee of the Toho University School of Medicine (reference number: A16019). Past estimation of COI COI increased by an average of 14.4% from 1996 to 2014. DC and MtC increased by 18.1 and 15.0%, respectively, whereas MbC decreased by 18.8% from 1996 to 2014. The contribution rates of DC, MbC, and MtC were 42.7, − 6.1, and 63.4%, respectively. NDy increased by 2.8%, but the proportion of NDy in persons aged ≥ 65 years and the average age of death increased from 1996 to 2014. As for the mortality rate, decreases were observed in almost all age groups. The proportion of MtC of persons aged ≥ 65 years, MtC per capita, which was calculated by dividing MtC by NDy, increased by 11.9% from 1996 to 2014 (Table 1). Table 1 The time trend of cost of illness (COI) of ischemic heart disease Regarding the calculation of DC from 1996 to 2014, THD and TOVy decreased, whereas iCd and oCd increased. The rates of change for THD and TOVy were − 66.1 and − 44.0%, respectively, whereas those for iCd and oCd were 440.9 and 143.1%, respectively. Future projection of COI Fixed model COI tended to increase from 2017 to 2026. It was projected to increase by 11.8% from 2014 to 2029. DC, MbC, and MtC also increased in the same period. The contribution rates of DC, MbC, and MtC were 64.5, 1.6, and 21.9% from 2014 to 2029, respectively. NDy was also predicted to increase, and the increase rate was 36.6% from 2014 to 2029. The proportion of NDy in persons aged ≥ 65 years increased, accounting for 90.7% of NDy due to IHD in 2029, and the average age of death was also predicted to increase from 77.5 years in 2014 to 80.1 years in 2029. The proportion of MtC of persons aged ≥ 65 years was predicted to be 55.6% in 2029, but the MtC per capita was projected to decrease by 22.0% from 2014 to 2029 (Table 2, Fig. 1). Table 2 Future prediction of cost of illness (COI) of ischemic heart disease Cost of illness (COI) projection with cost elements and projection of average age of death Linear model and exponential/logarithmic model The linear model indicated that COI tended to decrease from 2017 to 2026. It was predicted to decrease by 44.7% from 2014 to 2029, and the contribution rates of DC, MbC, and MtC were 30.4, 2.2, and 67.4%, respectively. The linear model in the exponential/logarithmic projection indicated that COI tended to decrease from 2017 to 2026. It was predicted to decrease by 29.5% from 2014 to 2029, and the contribution rates of DC, MbC, and MtC were 49.1, 1.4, and 49.5%, respectively. Both types of projection predicted a decrease in NDy, DC, MbC, and MtC. All projected values were lower than those of the fixed model. Mixed model The mixed model was estimated by combinations of models of higher coefficients and was considered the most valid model in this study. This mixed model indicated that COI tended to decrease from 2017 to 2026. It was predicted to decrease by 28.6% from 2014 to 2029, and the contribution rates of DC, MbC, and MtC were 44.9, 2.9, and 52.3%, respectively. Regarding the calculation of DC from 2017 to 2029, THD and TOVy decreased, whereas iCd and oCd increased. The rates of change for THD, TOVy, iCd, and oCd were − 51.0, − 10.9, 149.5, and 103.9%, respectively. NDy was also predicted to decrease by 17.4% from 2014 to 2029, but no change was observed in the proportion of NDy of persons aged ≥ 65 years. The average age of death was 78.9 years in 2029, which slightly increased from 77.5 years in 2014. The proportion of MtC of persons aged ≥ 65 years was predicted to be 45.1% in 2029, and the MtC per capita was projected to decrease by 8.5% from 2014 to 2029. Sensitivity analysis by discount rate The results of the sensitivity analysis for the past estimation and mixed model projection are shown in Fig. 2. The change in the discount rate from 0 to 5% did not influence the trends observed in COI, and MtC remained the highest contributor to the COI. Sensitivity analysis discount rate varied from 0 to 5%; trend of COI remained the same The past estimation showed that a slight increase was found in the COI of IHD. DC had an unevenly increasing trend from 1996 to 2014. MbC showed decrease after peaking in 1999. MtC also had a tendency to increase until 2011, but decreased in 2014. DC was influenced by THD, TOVy, iCd, and oCd. THD and TOVy decreased from 1996 to 2014. However, iCd and oCd remarkably increased, and the effect of this increase exceeded the effect of THD and TOVy decrease. The development of new therapies for IHD, such as drug and intra-vascular interventions, has been speculated to influence the increase in the unit cost. MbC was calculated by multiplying TOVy/THD with LVd, and the main factor of the decrease of MbC was the decrease of ALOS. Recently, ALOS of all age groups have shown a remarkable decrease. Health policies encouraging shorter ALOS were introduced, including DPC/PDPS (Diagnosis Procedure Combination/Per-Diem Payment System), which is a reimbursement system introduced in 2003, where hospitals can receive more money with shorter length of stay, and admission criteria based on severity and needs for acute care hospitals in 2008 [27, 28]. MtC showed a tendency to increase. MtC was calculated by multiplying NDy with LVl (mean human capital value). In case of IHD, both NDy and LVl showed increasing tendencies until 2014. MtC was considered to be influenced by change in mortality rate, social aging (i.e., increasing number of elderly people), and change in LVl. At first, the reduction in mortality rate decreases NDy. Previously, a decrease in the mortality rate was observed in the elderly group (Table 1). If population structure, relative population of each age group, was stable, the NDy would decrease. In contrast, social aging has the effect of increasing overall NDy by increasing the population of elderly people with high mortality rate. Therefore, the increase in NDy in the past estimation has been speculated to occur because social aging effect exceeded the effect of the decrease in mortality rate. The increase in the average age of death by social aging also has the effect of reducing LVl at the time of death because the LVl of elderly people is low. In contrast, the increase in LVl of each sex and age group has the effect of increasing MtC. LVl showed remarkable increase in the elderly in particular from 1996 to 2014. Particularly, an increase of > 36.0% was found in the 70–74-year age group of both sexes, which was considered to be the result of the increase in labor force participation rate in the elderly and the increase in unpaid work, such as care for the spouse [21, 29]. The average age of death increased from 1996 to 2014, but the effect of reducing the LVl was speculated to be limited because the average age of death was already high in 1996, and the rate of increase was small. In other words, we can infer that the MtC increased because the effect of the increase in LVl in the elderly exceeded the effect of increase in the average age of death. Under the influence of these results, COI has been speculated to increase from 1996 to 2014. Future projection of COI using the fixed model indicated that the COI of IHD would increase. However, if the trend of health-related indicators (mortality rate, per capita outpatient visits/hospitalizations, and ALOS) continues, COI will decrease in both linear and exponential/logarithmic models. The mixed model, which we consider to be the most credible model, also predicted that COI will show a decreasing tendency. In the future, the proportion of the population in the ≥ 65 year age group will increase further, and NDy due to IHD will increase, but LVl at the time of death (value of human capital) will decrease because of the higher average age of death. In the fixed model, NDy, DC, MbC, and MtC were predicted to increase. The effect of the decrease in per capita MtC was considered to offset the effect of the increase in NDy. In contrast, the mixed model showed that COI would decrease. As for the unit cost, the increase was estimated by the past trends; however, a remarkable decrease was predicted in THD and TOVy from 2017 to 2029. Therefore, it was estimated that DC will decrease. NDy and MtC decreased, the average age of death increased, and the per capita MtC decreased. Previous studies have suggested that COI of major cancers decreased because of devaluation of human capital caused by the increase in the average age of death [13,14,15,16,17]. However, the average age of death by IHD was already high in 2014, and it is predicted to increase gradually. Therefore, COI of IHD was projected to be influenced mainly by the decrease in MtC due to the decrease in NDy. The effect of the decrease in the human capital value was expected to be small. This study has several limitations. First, the COI method used here did not take the quality of the medical treatment provided or patients' quality of life into consideration. Therefore, it does not examine the cost effectiveness of individual medical management. However, it is still useful because it enables future estimations and to regard the impacts of the aging population. Second, our model could not catch the effect of LVl increase on future MtC because our model fixed LVl in 2014. Our past estimation showed that LVl of the elderly was increasing, and it increased the MtC, which might underestimate our future projection. However, projecting precise LVl per person classified by age groups is difficult. We cannot assume a permanent increase of labor force participation rate and unpaid work of the elderly, and these effects on future LVl are considered to be restrictive. In addition, the study period was relatively short, and dramatic changes occurred in the health care system during this period. However, the change among the different methods for determining projections was small, and therefore, the projections are likely to be accurate for the near future. The COI of IHD was estimated using Japanese government statistics. The past estimation showed that the COI of IHD increased. DC increased because of the increasing unit costs of developing new IHD therapies, MbC decreased because of the decrease in ALOS, and MtC increased because of social aging. In the mixed model, which we believe has the highest degree of relevance in future projection, the COI of IHD was predicted to decrease from 2017 to 2029 if the trend of health-related indicators (mortality rate, per capita outpatient visits/hospitalizations, and ALOS) continues. Possible factors contributing the change in the COI of IHD include aging of the population, DC, MtC, and NDy, and the impact of the aging of the population was considered the most significant. ALOS: Average length of hospital stay COI: Cost of illness COI0 : COI at the baseline year COI t : COI at year t \( {\mathrm{Cost}}_0^i \) : DC, MbC, and MtC costs at the baseline year \( {\mathrm{Cost}}_t^i \) : DC, MbC, and MtC costs at year t Direct cost Hospitalization cost HPy: Number of annual hospitalized patients iCd: Inpatient cost per day Iy: Annual income per person LVd: One-day labor value per person Lifetime labor value per person MbC: Morbidity cost MtC: Mortality cost NDy: Number of deaths OC: Outpatient cost oCd: OC per day THD: Total person-days of hospitalization TOVy: Total person-days of outpatient visits ULVy: Annual monetary valuation of unpaid work per person Minister of Health, Labour and Welfare. Vital statistics. http://www.mhlw.go.jp/toukei/saikin/hw/jinkou/kakutei14/index.html. Accessed 1 Oct 2017. Rice DP. Estimating the cost of illness. Am J Public Health Nations Health. 1967;7(3):424–40. Rice DP, Hodgson TA. The value of human life revisited. Am J Public Health. 1982;72:536–8. Article PubMed PubMed Central CAS Google Scholar Crum GE, Rice DP, Hodgson TA. The priceless value of human life. Am J Public Health. 1982;72:536–8. Rice DP, Hodgson TA, Kopstein AN. The economic costs of illness, a replication and update. Health Care Financ Rev. 1985;7:61–80. PubMed PubMed Central CAS Google Scholar Rice DP. Cost-of-illness studies, fact or fiction? Lancet. 1994;344:1519–20. Article PubMed CAS Google Scholar Rice DP. Cost-of-illness studies, what is good about them? Inj Prev. 2000;6:177–9. Tarricone R. Cost-of-illness analysis. What room in health economics? Health Policy. 2006;77:51–63. Hodgson TA, Meiners MR. Cost-of-illness methodology, a guide to current practices and procedures. Milbank Mem Fund Q Health Soc. 1982;60:429–62. Ceilleachair AO, Costello L, Finn C, Timmons A, Fitzpatrick P, Kapur K, et al. Inter-relationships between the economic and emotional consequences of colorectal cancer for patients and their families, a qualitative study. BMC Gastroenterol. 2012;12:62. Bradley CJ, Lansdorp-Vogelaar I, Yabroff KR, Dahman B, Mariotto A, Feuer EJ, et al. Productivity savings from colorectal cancer prevention and control strategies. Am J Prev Med. 2011;41(2):e5–14. Torres US, Almeida TE, Netinho JG. Increasing hospital admission rates and economic burden for colorectal cancer in Brazil, 1996-2008. Rev Panam Salud Publica. 2010;28(4):244–8. Matsumoto K, Haga K, Hanaoka S, Kitazawa T, Hasegawa T. Costs of illness for major cancers in Japan. JJHM. 2012;13(1):2–6. Haga K, Matsumoto K, Kitazawa T, Seto K, Fujita S, Hasegawa T. Cost of illness of the stomach cancer in Japan—a time trend and future projections. BMC Health Serv Res. 2013;13:283. Gochi T, Matsumoto K, Haga K, Kitazawa T, Seto K, Fujita S, et al. Estimation of cost of illness (COI) for colon cancer. JJHM. 2015;16(1):2–7. Matsumoto K, Haga K, Kitazawa T, Seto K, Fujita S, Hasegawa T. Cost of illness of breast cancer in Japan-trends and future projections. BMC Res Notes. 2015;8:539. Hayata E, Seto K, Kitazawa T, Matsumoto K, Morita M, Hasegawa T. Informing health policy in Japan: a mixed-model estimation to compare the cost of illness of cervical cancer and endometrial cancer. J Obstet Gynaecol Res. 2016;42(4):446–56. Kitazawa T, Matsumoto K, Fujita S, Seto K, Wu Y, Hirao T, et al. Cost of illness of the non-alcoholic liver cirrhosis in Japan: a time trend analysis and future projections. Hepatol Res. 2018;48(2):176–83. Hanaoka S, Matsumoto K, Kitazawa T, Seto K, Fujita S, Hasegawa T. Cost of illness of the dementia in Japan, a time trend analysis based on Japanese governmental statistics. JJHM. 2016;17(1):8–13. Minister of health, Labour and Welfare. Statistics of medical care activities in public health insurances. http://www.mhlw.go.jp/toukei/list/26-19c.html. Accessed 1 Oct 2017. Minister of Health, Labour and Welfare. Patient survey. http://www.mhlw.go.jp/toukei/list/10-20.html. Accessed 1 Oct 2017. Ministry of Health, Labour and Welfare. Basic survey on wage structure. http://www.mhlw.go.jp/toukei/list/chinginkouzou.html. Accessed 31 Jan 2018. Statistics Bureau, Ministry of Internal Affairs and Communications. Labor force survey. http://www.stat.go.jp/data/roudou/index.html. Accesesed 31 Jan 2018. Economic and Social Research Institute, Cabinet Office, Government of Japan. Satellite account. http://www.esri.cao.go.jp/jp/sna/sonota/satellite/satellite_top.html. Accessed 31 Jan 2018. Statistics Bureau, Ministry of Internal Affairs and Communications. Population Estimation. http://www.stat.go.jp/data/jinsui/index.html. Accessed 30 June 2017. National Institute of Population and Social Security Research. Population Projections for Japan. http://www.ipss.go.jp/syoushika/tohkei/suikei07/suikei.html. Accessed 30 June 2017. Wang K, Li P, Chen L, Kato K, Kobayashi M, Yamauchi K. Impact of the Japanese diagnosis procedure combination-based payment system in Japan. J Med Syst. 2010;34(1):95–100. Hironori Hamada, Miho Sekimoto, Yuichi Imanaka, (2012) Effects of the per diem prospective payment system with DRG-like grouping system (DPC/PDPS) on resource usage and healthcare quality in Japan. Health Policy 107 (2-3):194–201. Economic Planning Agency (existing Ministry of Economy, Trade and Industry) Economic Research Institute. Is a price of your housework how much? The report about the money evaluation of the gratuitous labor. The Ministry of Finance Printing Bureau (existing National Printing Bureau); 1997. The authors would like to thank Enago (www.enago.jp) for the English language review. The datasets used the Japanese governmental statistics. Department of Social Medicine, Toho University Graduate School of Medicine, 5-21-16 Omori-nishi, Ota-ku, Tokyo, 143-8540, Japan Toshiharu Gochi, Kunichika Matsumoto, Rebeka Amin & Tomonori Hasegawa Department of Social Medicine, Toho University School of Medicine, 5-21-16 Omori-nishi, Ota-ku, Tokyo, 143-8540, Japan Kunichika Matsumoto, Takefumi Kitazawa, Kanako Seto & Tomonori Hasegawa Toshiharu Gochi Kunichika Matsumoto Rebeka Amin Takefumi Kitazawa Kanako Seto Tomonori Hasegawa KM, TK, KS, and TH contributed to the study concept, design, and data acquisition. TG and KM were responsible for the analysis and interpretation of data and the drafting of the manuscript. RA assisted in the manuscript preparation. All authors read and approved the final manuscript. Correspondence to Tomonori Hasegawa. Gochi, T., Matsumoto, K., Amin, R. et al. Cost of illness of ischemic heart disease in Japan: a time trend and future projections. Environ Health Prev Med 23, 21 (2018). https://doi.org/10.1186/s12199-018-0708-1 Ischemic heart disease
CommonCrawl
npj quantum information Biphoton shaping with cascaded entangled-photon sources Arash Riazi ORCID: orcid.org/0000-0003-2404-30161, Changjia Chen1, Eric Y. Zhu1, Alexey V. Gladyshev ORCID: orcid.org/0000-0002-1188-20912, Peter G. Kazansky3, J. E. Sipe4 & Li Qian1 npj Quantum Information volume 5, Article number: 77 (2019) Cite this article Fibre optics and optical communications Quantum information Quantum optics Qubits Quantum entanglement is an integral part of quantum optics and has been exploited in areas such as computation, cryptography and metrology. The entanglement between photons can be present in various degrees of freedom (DOFs), and even the simplest bi-partite systems can occupy a large Hilbert space. Therefore, it is desirable to exploit this multi-dimensional space for various quantum applications by fully controlling the properties of the entangled photons in multiple DOFs. While current entangled-photon sources are capable of generating entanglement in one or more DOFs, there is currently a lack of practical techniques that can shape and control the entanglement properties in multiple DOFs. Here we show that cascading two or more entangled-photon sources with tunable linear media in between allows us to generate photon-pairs whose entanglement properties can be tailored and shaped in the frequency and polarisation domains. We first develop a quantum mechanical model to study the quantum state generated from the cascade structure with special considerations paid to the effects of pump temporal coherence, linear dispersion, and in-structure polarisation transformation applied between the entangled-photon sources. We then experimentally generate photon-pairs with tunable entanglement properties by manipulating the dispersion and birefringence properties of the linear medium placed in between two entangled-photon sources. This is done in an all-fibre, phase stable, and alignment-free configuration. Our results show that the cascade structure offers a great deal of flexibility in tuning the properties of entangled photons in multiple DOFs, opening up a new avenue in engineering quantum light sources. Entanglement is an essential resource in quantum optics and can be exploited for quantum information processing1,2,3,4,5 and the study of fundamental physics.6,7,8 New developments in quantum optics aim to generate entangled photons whose properties in various degrees of freedom (DOFs) can be tailored and controlled. Frequency and polarisation of photons are robust DOFs often used in practical applications. As a result, a vast number of protocols and platforms have already been developed to exploit these two DOFs; for example, the spectrum of entangled photons (biphotons) has been exploited for scalable quantum information processing1,2,9,10,11 and large alphabet quantum key distribution;12 the ability to generate various biphoton polarisation states has also been recognised as a useful resource13,14,15,16 for tests of local realism6,7 and complementarity in physics.8,17 In light of this, we can envision that the ability to tailor and shape the entanglement properties of biphotons in both frequency and polarisation DOFs would allow us to increase the amount of information that can be encoded into a biphoton state,5 enabling a variety of new applications in quantum optics. In order to achieve this goal, we first need spectral and polarisation shaping techniques for biphotons that are compatible with each other; these techniques should also be implementable in an integrated and scalable fashion. So far, various techniques of biphoton spectral shaping have been demonstrated with spatial light modulators,18,19 spectral filtering,20,21,22 and tailoring the phase-matching structure of the nonlinear medium itself.23,24 However, these techniques either introduce undesirable loss due to coupling and filtering,20,21,22 or impose considerable complications in the precise fabrication of the nonlinear structure.23,24 Additionally, some of these techniques18,19 cannot yet be integrated with waveguide-based biphoton sources, and therefore cannot take advantage of the greater mode confinement. Various techniques to shape the polarization state of biphotons have also been demonstrated, typically through a combination of biphoton interference,14,15,16 unitary polarisation transformation,14,15 decoherence,14,16 and spatial mode selection.16 However, these techniques have all been implemented using free-space setups and cannot be integrated with waveguide-based biphoton sources in a single platform. Moreover, precise beam alignment, spatial filtering, and phase stabilization are required for these techniques, which make them difficult to implement in integrated photonics. Finally, these techniques have not been shown to be simultaneously compatible with spectral tailoring. In fact, no practical approach to shape the biphotons simultaneously in both the spectral and polarization domains has been demonstrated. In this paper, we demonstrate a technique that can shape biphoton states in both the frequency and polarisation domains by cascading two fibre-based entangled-photon sources25,26 with a linear medium placed in between, which we refer to as the middle section (see Fig. 1a). Our cascade structure, which is essentially a nonlinear interferometer,27 can be pumped either with a long- or short-coherence-time laser, with each option providing a specific functionality for shaping the properties of biphotons. The spectrum and polarisation state of the biphotons generated from the cascade structure can be tailored by altering the dispersion and birefringence of the linear middle section. The all-fibre common-path configuration used here eliminates major issues in biphoton shaping, such as the requirement for beam alignment, coupling and filtering loss, and phase stabilization. More importantly, spectral and polarisation shaping techniques are now compatible with each other and can be simultaneously implemented in such a structure. a A two-segment cascade structure made up of two second-order nonlinear media of the lengths L1 and L2; the pump coherent packets with V polarisation, \(\left| {\{ \alpha _{{\cal{L}},V}\} } \right\rangle\), enter the structure at z = 0. Depending on the transformation in the middle section \(\hat R_n\), the polarisation states of the downconverted photon pairs at the output of the cascade structure could be a superposition of all four states \(( {\left| {\mathrm{HH}} \right\rangle _{\omega _A,\omega _B},\left| {\mathrm{HV}} \right\rangle _{\omega _A,\omega _B},\left| {\mathrm{VH}} \right\rangle _{\omega _A,\omega _B},\left| {\mathrm{VV}} \right\rangle _{\omega _A,\omega _B}})\). b The quasi-monochromatic model of the pump field49; the pump consists of coherent packets, each of which has a length of \(\Delta _{\cal{L}}\). The initial phase of the electric field within each packet changes randomly from packet to packet.50 It is worth mentioning that the cascade structure we use here belongs to a more general class known as SU(1,1) nonlinear interferometer.27,28 The high-gain regime of these interferometers has been extensively studied27,28,29,30,31 and utilised32,33,34 to obtain the Heisenberg limit in phase measurement. On the other hand, the spontaneous regime of these interferometers has also been studied both theoretically,28,35,36,37 and experimentally38,39,40,41 to investigate more abstract concepts such as "induced coherence" effect.38,42,43 These studies have since found their applications in measuring absorption,44 refractive index,45 and dispersion46 of linear media. Our work, however, differs from previous studies in that we utilise cascaded biphoton sources (a nonlinear interferometer) in two DOFs (frequency and polarisation) to generate biphotons with tunable entanglement properties in spectral and polarisation domains. Furthermore, the quantum mechanical treatment of the cascade structure we present here is more comprehensive as it takes into account the collective effects of the pump temporal coherence, the chromatic dispersion in the structure, and the polarisation transformations on the biphoton state generated from the cascade structure. Finally, our formulation can be generalised to other waveguide-based entangled-photon sources, including those in integrated photonics. The organization of the paper is as follows: We first present a quantum mechanical model for the biphoton state at the output of the cascade structure, taking into account (1) the temporal coherence of the pump (referred to below simply as the pump coherence), (2) the dispersion properties of both linear and nonlinear segments, and (3) the polarisation transformation applied in the linear middle section of the cascaded structure. We then use our model to study the spectrum and polarisation state of the biphotons under various pump coherence conditions and polarisation transformations caused by the middle (linear) section. Finally, using two periodically-poled silica fibres26 (PPSFs) as biphoton sources, we experimentally demonstrate: (1) the ability to generate biphotons with modified spectra, for various pump coherence conditions and the linear properties of the middle section; and (2) the ability to generate various biphoton polarisation states with properties such as tunable degree of polarisation entanglement. The cascade structure and theoretical framework The general two-segment cascade structure is shown in Fig. 1a. Two identical second-order nonlinear segments are connected via a "middle section" consisting of a linear optical medium, and an inline polarisation controller (PC). By using the nonlinear waveguide scattering theory presented earlier,47,48 we model biphoton generation in our cascade structure. For our formulation, we consider only type-II SPDC phase-matching; however, it can be trivially generalised to other SPDC phase-matchings as well. We define horizontal (H), and vertical (V) polarisations according to the principal axes (polarisation eigenmodes) of the nonlinear segments. Note that due to the polarisation transformation in the middle section, light polarised along one of the principal axes in the first nonlinear segment will not generally be polarised along the same principal axes of the second nonlinear segment. We model the pump in Fig. 1a as a quasi-monochromatic field with a finite coherence time of τC. In our model, the pump field is a succession of coherent packets49,50 (see Fig. 1b) in which the electric field oscillates with a constant angular frequency of \(\bar \omega _p\) (see Supplement, Section 2); the initial phase of the electric field within each packet is assumed to be constant, however, it is statistically distributed for each packet.49,50 The generalised creation operator for the \({\cal{L}}^{{\mathrm{th}}}\) pump packet with polarisation S is denoted by \(\hat A_{{\cal{L}},S}^{\dagger} = {\int {\mathrm{d}} k_Pf_{\cal{L}}^ {\ast} \left( {k_P} \right)\hat a^{\dagger} _{PSk_P}}\), where \(f_{\cal{L}}\left( {k_P} \right)\) includes the spectral behaviour of that packet (see Supplement, Section 2) and is normalised according to \({\int} {\mathrm{d}} k_P\left| {f_{\cal{L}}\left( {k_P} \right)} \right|^2 = 1\). The quantum state of each pump packet incident on the structure is taken to be a coherent state in vertical polarisation and can be written as \(\left| {\alpha _{{\cal{L}},V}} \right\rangle = e^{\alpha _{\cal{L}}\hat A_{{\cal{L}},V}^\dagger - h.c.}\left| {\mathrm{vac}} \right\rangle\), where \(\left| {\alpha _{\cal{L}}} \right|^2\) is the average photon number inside the \({\cal{L}}^{{\mathrm{th}}}\) pump packet. Since the field operators of different packets commute due to \(\left[ {\hat A_{{\cal{L}}{\mathrm{,V}}},\hat A_{{\cal{L}}^{\prime} {\mathrm{,V}}}^{\dagger} } \right] = \delta _{{\cal{L}},{\cal{L}}^{\prime} }\) (see Supplement, Section 2), we can write down the quantum state of the pump at the input of the cascade structure as: $$\left| {\left\{ {\alpha _{{\cal{L}},V}} \right\}} \right\rangle = e^{\sum\limits_{\cal{L}} \left( {\alpha _{\cal{L}}\hat A_{{\cal{L}},V}^{\dagger} - h.c.} \right)}\left| {\mathrm{vac}} \right\rangle .$$ In the weak conversion limit with negligible probability of multi-pair generation, the quantum state of the downconverted light for an individual nonlinear segment with a length of L1 can be described as: $$\begin{array}{*{20}{l}} {\left|{{\Psi}_{\mathrm{seg}}} \right\rangle } \hfill & = \hfill & {\left| {\mathrm{vac}} \right\rangle _{\omega _{A},\omega _{B}} + \sum\limits_{\cal{L}} {\alpha _{\cal{L}}} {\int} {\mathrm{d}} \omega _{P}{\mathrm{d}}\omega _{A}{\mathrm{d}}\omega _{B}{\cal{A}}_{0}^{\mathrm{type} \,\mathrm{II}}(\omega _{P},\omega _{A},\omega _{B})f_{\cal{L}}(\omega _{P})\delta \left( {\omega _{P} - \omega _{A} - \omega _{B}} \right)} \hfill \\ {} \hfill & {} \hfill & { \times \left[ {L_{1}{\mathrm{sinc}}\left( {\frac{{{\mathrm{\Delta }}k_{\mathrm{VHV}}^{\left( 1 \right)}L_{1}}}{2}} \right)\hat a_{AH}^{\dagger} \hat a_{BV}^{\dagger} + e^{i\Lambda (\omega _{A},\omega _{B})}L_{1}{\mathrm{sinc}}\left( {\frac{{\Delta k_{\mathrm{VVH}}^{(1)}L_1}}{2}} \right)\hat a_{AV}^{\dagger} \hat a_{BH}^{\dagger} } \right]\left| {\mathrm{vac}} \right\rangle _{\omega _{A},\omega _{B}},} \hfill \end{array}$$ where the sum is over all the pump packets, and ωP, ωA, and ωB are the angular frequencies of the pump, signal, and idler fields, respectively; \(\hat a_{AS^{\prime} }^{\dagger} \left( {\hat a_{BS^{\prime\prime} }^{\dagger} } \right)\) is the creation operator of the signal (idler) mode with S′(S″) polarisation; the quantity \({\cal{A}}_0(\omega _P,\omega _A,\omega _B)\) includes the nonlinear susceptibilities and other phase factors (see Supplement, Section 5); \(\Delta k_{SS^{\prime} S^{\prime\prime} }^{(m)} = k_{PS}^{(m)}(\omega _{P}) - k_{AS^{\prime} }^{(m)}(\omega _{A}) - k_{BS^{\prime\prime} }^{(m)}(\omega _{B}) - k_{\mathrm{QPM}}\), where the first and the second subscripts of the wavevector refer to the field and its polarisation, respectively, while the superscript m refers to the nonlinear media; kQPM is the quasi-phase-matching wavevector of the nonlinear medium. Note that \(\Delta k_{\mathrm{VHV}}^{(1)}L_1\) [Eq. (2)] in general is different from \(\Delta k_{\mathrm{VVH}}^{(1)}L_1\); however, due to the slowly-varying nature of the sinc function with respect to frequency [in comparison with other phase factors in Eq. (2)] and small difference between \(\Delta k_{\mathrm{VHV}}^{(1)}L_1\) and \(\Delta k_{\mathrm{VVH}}^{(1)}L_1\) over the phase-matching (full width at half maximum) bandwidth of the signal and idler, we have \({\mathrm{sinc}}\left( {\frac{{{\mathrm{\Delta }}k_{\mathrm{VHV}}^{(1)}L_1}}{2}} \right) \approx {\mathrm{sinc}}\left( {\frac{{\Delta k_{\mathrm{VVH}}^{(1)}L_1}}{2}} \right)\). Finally, the quantity Λ(ωA, ωB) [related to group birefringence, see ref. 51] is defined as: $$\Lambda (\omega _A,\omega _B) = \left[k_{AV}^{(1)}(\omega _A) - k_{AH}^{(1)}(\omega _A) + k_{BH}^{(1)}(\omega _B) - k_{BV}^{(1)}(\omega _B)\right]\frac{{L_1}}{2}.$$ Henceforth, we shall drop the angular frequency notation for the wavevector k(ω); for \({\cal{A}}_0^{\mathrm{type}\,\mathrm{II}}(\omega _P,\omega _A,\omega _B)\) and δ(ωP − ωA − ωB), we simply write \({\cal{A}}_0^{\mathrm{type}\,\mathrm{II}}\), and δ, respectively. Note that the SPDC emission within the PPSF [see Eq. (2)] results in both signal and idler photons travelling in one spatial mode. However, we can distinguish the two based on their angular frequencies; photons with angular frequencies greater than \(\frac{{\bar \omega _p}}{2}\) are called signal; otherwise, they are called idler. Note that our definition of signal and idler best describes cases with narrowband pump [e.g. continuous wave(cw) pump]. We also remark that for most second-order nonlinear media, the biphoton state generated from type-II SPDC is not polarisation-entangled due to the walk-off caused by the frequency-dependent factor \(e^{{\mathrm{i}}\Lambda (\omega _A,\omega _B)}\) in Eq. (2) [see ref. 51]; however, because of the unique dispersive properties of poled-fibre [Λ(ωA, ωB) ≪ 1, see ref. 51], type-II SPDC in PPSFs allows for the direct generation of polarisation-entangled photon-pairs.26,51,52 Since we are using PSSF as our nonlinear medium, whenever type-II SPDC is involved, the biphoton state is polarisation-entangled. The state of the generated biphotons in the cascade structure Now we consider a cascade of two identical nonlinear segments pumped for type-II SPDC; the two nonlinear segments are connected via a linear medium (with a length of L0), by which we shape the spectrum and polarisation state of the biphotons generated from the cascade structure. We derive the quantum state of the biphotons by employing several assumptions: (1) The collective transformation of the middle section in Jones space can be modelled by two consecutive transformations: A phase accumulation \(e^{{\mathrm{i}}k_n^{(0)}L_0}\hat I\), where \(\hat I\) is a 2 × 2 identity matrix, and a unitary polarisation transformation \(\hat U_n = \left( {\begin{array}{*{20}{c}} {U_{1n}} & {U_{2n}} \\ {U_{3n}} & {U_{4n}} \end{array}} \right)\) [see ref. 53], where the subscripts n = P, A, and B. Accordingly, the collective transformation of the middle section (see Fig. 1a) becomes: \(\hat R_n = e^{ik_n^{(0)}L_0}\left( {\begin{array}{*{20}{c}} {U_{1n}} & {U_{2n}} \\ {U_{3n}} & {U_{4n}} \end{array}} \right)\) (see Supplement, Section 3); (2) The middle section is assumed to have a weak wavelength-dependent birefringence such that \(\hat R_A = \hat R_B \ne \hat R_P\). In other words, the signal and idler are assumed to undergo the same polarisation transformation, while the pump does not necessarily do so; (3) While the presence of the middle section may result in the pump polarisation having both H and V components when entering the second nonlinear segment, due to the phase-matching constraint (wavelength) for type-II SPDC, the H component of the pump will not contribute to other SPDC types (such as type-0) in the second segment (see Supplement, Section 5.1). So the effect of the middle section is to merely transform the polarisation state of the biphotons that could be generated in the first segment. Under these assumptions, the quantum state of the biphotons at the output of the cascade structure can be written as a linear superposition of all possible biphoton states, $$\begin{array}{*{20}{l}} {\left| {\psi _{\mathrm{CAS}}} \right\rangle } \hfill & = \hfill & {\left| {\mathrm{vac}} \right\rangle _{\omega _A,\omega _B} + \mathop {\sum}\limits_{\cal{L}} {\left\{ {{\int} {\mathrm{d}} } \right.} \omega _A{\mathrm{d}}\omega _B\phi _{{\cal{L}},HH}\left( {\omega _A,\omega _B} \right)\hat a_{\mathrm{AH}}^\dagger \hat a_{\mathrm{BH}}^\dagger } \hfill \\ {} \hfill & {} \hfill & { + {\int} {\mathrm{d}} \omega _A{\mathrm{d}}\omega _B\phi _{{\cal{L}},{\mathrm{HV}}}\left( {\omega _A,\omega _B} \right)\hat a_{\mathrm{AH}}^\dagger \hat a_{\mathrm{BV}}^\dagger + {\int} {\mathrm{d}} \omega _A{\mathrm{d}}\omega _B\phi _{{\cal{L}},{\mathrm{VH}}}(\omega _A,\omega _B)\hat a_{\mathrm{AV}}^\dagger \hat a_{\mathrm{BH}}^\dagger } \hfill \\ {} \hfill & {} \hfill & {\left. { + {\int} {\mathrm{d}} \omega _A{\mathrm{d}}\omega _B\phi _{{\cal{L}},VV}(\omega _A,\omega _B)\hat a_{\mathrm{AV}}^\dagger \hat a_{\mathrm{BV}}^\dagger } \right\}\left| {\mathrm{vac}} \right\rangle _{\omega _A,\omega _B},} \hfill \end{array}$$ where \(\phi_{{\cal{L}},S^{\prime}S^{\prime\prime}}(\omega_{A},\omega_{B})\) is the biphoton wavefunction, corresponding to the \({\cal{L}}^{{\mathrm{th}}}\) pump packet, which can be determined by the Hamiltonian treatment of the cascade structure (see Supplement, Section 4–6). Henceforth neglecting the vacuum contribution, we can write the quantum state of the biphotons generated from the cascade structure as: $$\begin{array}{*{20}{l}} {\left| {\psi _{\mathrm{CAS}}} \right\rangle } \hfill & = \hfill & {\mathop {\sum}\limits_{\cal{L}} {\alpha _{\cal{L}}} {\int} {\mathrm{d}} \omega _{P}{\mathrm{d}}\omega _{A}{\mathrm{d}}\omega _{B}{\cal{A}}_{0}^{\mathrm{type}\,\mathrm{II}}f_{\cal{L}}(\omega _{P})L_{1}{\mathrm{sinc}}\left( {\frac{{\Delta k_{\mathrm{VHV}}^{(1)}L_{1}}}{2}} \right)\delta } \hfill \\ {} \hfill & {} \hfill & { \times \left\{ { - e^{{\mathrm{i}}\Gamma _{i}}\left[ {\left( {U_{4A}U_{3B}} \right)^ {\ast} \,+\, e^{{\mathrm{i}}\Lambda }\left( {U_{4B}U_{3A}} \right)^ {\ast} } \right]\left| {\mathrm{HH}} \right\rangle _{\omega _{A},\omega _{B}}} \right.} \hfill \\ {} \hfill & {} \hfill & +{ \left[ {\left( {U_{4A}U_{1B}} \right)^ {\ast} \,+\, e^{{\mathrm{i}}\Lambda }\left( {U_{3A}U_{2B}} \right)^ {\ast} \,+\, U_{4p}e^{{\mathrm{i}}(\Delta k^{(0)}L_{0} + \Delta k_{\mathrm{VHV}}^{(1)}L_{1})}} \right]\left| {\mathrm{HV}} \right\rangle _{\omega _A,\omega _B}} \hfill \\ {} \hfill & {} \hfill & +\,{e^{2{\mathrm{i}}\Lambda }\left[ {\left( {U_{2A}U_{3B}} \right)^ {\ast} \,+\, e^{{\mathrm{i}}\Lambda }\left( {U_{1A}U_{4B}} \right)^ {\ast} \,+\, e^{ - i\Lambda }U_{4p}e^{i(\Delta k^{(0)}L_{0} + \Delta k_{\mathrm{VHV}}^{(1)}L_1)}} \right]\left| {\mathrm{VH}} \right\rangle _{\omega _{A},\omega _{B}}} \hfill \\ {} \hfill & {} \hfill & {\left. {- e^{{\mathrm{i}}\Gamma _{s}}\left[ {\left( {U_{2A}U_{1B}} \right)^ {\ast} \,+\, e^{{\mathrm{i}}\Lambda }\left( {U_{2B}U_{1A}} \right)^ {\ast} } \right]\left| {\mathrm{VV}} \right\rangle \omega _{A},\omega _{B}} \right\},} \hfill \end{array}$$ where we assumed the two nonlinear media have the same length (L1 = L2) and identical dispersion properties [i.e. \(k_{H(V)}^{(1)}(\omega ) = k_{H(V)}^{(2)}(\omega )\)]; here \(\Gamma _{A(B)} = (k_{A(B)V}^{(1)} - k_{A(B)H}^{(1)})L_1\), and \(\Delta k^{(0)}L_0 = (k_P^{(0)} - k_A^{(0)} - k_B^{(0)})L_0\) is the phase introduced by the middle section. Note that due to the polarisation transformation in the middle section, \(\phi _{{\cal{L}},HH}(\omega _A,\omega _B)\) and \(\phi _{{\cal{L}},VV}(\omega _A,\omega _B)\) are now nonzero and the extra biphoton polarisation states \(\left| {\mathrm{HH}} \right\rangle _{\omega _A,\omega _B}\) and \(\left| {\mathrm{VV}} \right\rangle _{\omega _A,\omega _B}\) appear at the output of the cascade structure. Moreover, \(\phi _{{\cal{L}},HV}(\omega _A,\omega _B)\) and \(\phi _{{\cal{L}},VH}(\omega _A,\omega _B)\) [the prefactors of \(\left| {\mathrm{HV}} \right\rangle _{\omega _A,\omega _B}\) and \(\left| {\mathrm{VH}} \right\rangle _{\omega _A,\omega _B}\) in Eq. (5)] now contain contributions from both nonlinear segments, which eventually leads to interference between the biphoton amplitudes from the two nonlinear segments. Biphoton spectrum In this section, we assume there is no polarisation transformation in the middle section [i.e. \(\hat R_n = e^{{\mathrm{i}}k_n^{(0)}L_0}\left( {\begin{array}{*{20}{c}} 1 & 0 \\ 0 & 1 \end{array}} \right)\) with n = A, B], and only focus on the spectrum of the biphotons generated from the cascade structure in that limit. For this case, if we assume the nonlinear media have Λ(ωA, ωB) ≪ 1 [see Eq. (6)], the quantum state of the biphotons generated from the cascade structure becomes: $$\begin{array}{*{20}{l}} {\left| {\psi _{\mathrm{CAS}}} \right\rangle } \hfill & = \hfill & {\mathop {\sum}\limits_{\cal{L}} {\alpha _{\cal{L}}} {\int} {\mathrm{d}} \omega _{P}{\mathrm{d}}\omega _{A}{\mathrm{d}}\omega _{B}{\cal{A}}_{0}^{\mathrm{type}\,\mathrm{II}}f_{\cal{L}}(\omega _{P})L_{1}{\mathrm{sinc}}\left( {\frac{{\Delta k_{\mathrm{VHV}}^{(1)}L_{1}}}{2}} \right)\delta } \hfill \\ {} \hfill & {} \hfill & { \times \left( {1 + U_{4P}e^{{\mathrm{i}}(\Delta k^{(0)}L_{0} + \Delta k_{\mathrm{VHV}}^{(1)}L_{1})}} \right)\left\{ {\left| {\mathrm{HV}} \right\rangle _{\omega _{A},\omega _{B}} \,+\, \left| {\mathrm{VH}} \right\rangle _{\omega _{A},\omega _{B}}} \right\}.} \hfill \end{array}$$ Here U4P is the fourth element of the transformation matrix \(\hat U_P\), which we write as \(\left| {U_{4P}} \right|e^{{\mathrm{i}}\phi _P}\). We now study the relative emission spectrum of the biphotons by expanding the total biphoton brightness Btot = 〈ψCAS|ψCAS〉. As the expression for Btot involves the statistical phases of different pump packets, we first average Btot over the ensemble of sequences of the pump packets according to $$\begin{array}{*{20}{l}} {\left\langle {{\mathrm{B}}_{{\mathrm{tot}}}} \right\rangle _{avg}} \hfill & = \hfill & {{\int} {\mathrm{d}} \omega _{P}{\mathrm{d}}\omega _{A}{\mathrm{d}}\omega _{B}\left\langle {\mathop {\sum}\limits_{\cal{L}} {\left| {\alpha _{\cal{L}}f_{\cal{L}}(\omega _P)} \right|^{2}} } \right\rangle _{avg}\left| {{\cal{A}}_{0}^{\mathrm{type}\,\mathrm{II}}L_1\,{\mathrm{sinc}}\left( {\frac{{\Delta k_{\mathrm{VHV}}^{(1)}L_1}}{2}} \right)} \right|^{2}\delta } \hfill \\ {} \hfill & {} \hfill & { \times \left| {1 + U_{4P}e^{{\mathrm{i}}(\Delta k^{(0)}L_{0} + \Delta k_{\mathrm{VHV}}^{(1)}L_1)}} \right|^{2}} \hfill \end{array}$$ We next replace the ensemble average \(\left\langle {\mathop {\sum}\limits_{\cal{L}} {\left| {\alpha _{\cal{L}}f_{\cal{L}}(\omega _P)} \right|^2} } \right\rangle _{avg}\) with |α|2|f(ωP)|2, where |α|2 is the number of photons in the entire pump packets (see Supplement, Section 8); |f(ωP)|2 is now the spectral lineshape of the pump, which is assumed to be Lorentzian.49 Note that the integral of the form \({\int} {\mathrm{d}} \omega _{P}e^{i(\Delta k^{(0)}L_{0} + \Delta k_{\mathrm{VHV}}^{(1)}L_1)}\left| {f(\omega _P)} \right|^2\) that appears in Eq. (7) is related to the first-order coherence of the pump, g(1)(τ) [see ref. 49]. Given that, we can re-write Eq. (7) as: $$\begin{array}{*{20}{l}} {\left\langle {{\mathrm{B}}_{{\mathrm{tot}}}} \right\rangle _{\mathrm{avg}}} \hfill & = \hfill & {{\int} {\mathrm{d}} \omega _{P}{\mathrm{d}}\omega _{A}{\mathrm{d}}\omega _{B}\left| {{\cal{A}}_0^{{\mathrm{type}}\,{\mathrm{II}}}\alpha f(\omega _{P})L_{1}\,{\mathrm{sinc}}\left( {\frac{{\Delta k_{\mathrm{VHV}}^{(1)}L_1}}{2}} \right)} \right|^{2}\delta } \hfill \\ {} \hfill & {} \hfill & { \times \left\{ {1 + \left| {U_{4P}} \right|^{2} + 2\left| {U_{4P}} \right|e^{\frac{{ - \left| {\Delta \tau _{0} + \Delta \tau _{1}} \right|}}{{\tau _{C}}}}\,{\mathrm{cos}}\,({\Delta}k^{(0)}L_{0} + \Delta k_{\mathrm{VHV}}^{(1)}L_{1} + \Phi _{P})} \right\},} \hfill \end{array}$$ where the factor \(e^{\frac{{ - |{\mathrm{\Delta }}\tau _0 + {\mathrm{\Delta }}\tau _1|}}{{\tau _C}}}\) appears in Eq. (8) as a result of first-order coherence function of a pump field with coherence time of τc (see Supplement, Section 8); Δτ0 (Δτ1) is the group delay difference between pump and biphotons in the middle section (first nonlinear medium), which can be expressed as: $$\Delta \tau _{0/1} = \tau _{0/1,P} - \frac{1}{2}\left( {\tau _{0/1,A} + \tau _{0/1,B}} \right) = \left( {\left. {\frac{{\mathrm{d}}k^{(0/1)}}{{\mathrm{d}\omega }}} \right|_{\bar \omega _p} \,-\, \left. {\frac{{\mathrm{d}}k^{(0/1)}}{{\mathrm{d}\omega }}} \right|_{\frac{{\bar \omega _p}}{2}}} \right)L_{0/1},$$ where \(\left. {\frac{{\mathrm{d}}k^{(0/1)}}{{\mathrm{d}\omega }}} \right|_{\omega^ \prime }\) is the first-order dispersion of the middle section/PPSF at frequency ω′. The factor \(e^{\frac{{ - |{\mathrm{\Delta }}\tau _0 + {\mathrm{\Delta }}\tau _1|}}{{\tau _C}}}\) in Eq. (8) determines to what extent the biphoton amplitudes from two nonlinear segments interfere. Note that the integrand in Eq. (8) corresponds to the biphoton spectrum (or joint spectral intensity). In the following subsections, we study the biphoton spectrum under two different pump coherence conditions. Biphoton spectrum: coherent cascading When |Δτ0 + Δτ1| ≤ τC, the pump field remains coherent throughout both nonlinear segments; we call this mode of operation "coherent cascading". Here the biphoton amplitudes from the two different nonlinear segments interfere with each other, resulting in fringes in the biphoton spectrum (see Supplement, Section 8). Adding more nonlinear segments (Fig. 2a) results in more interference terms, which gives us greater flexibility in shaping the biphoton spectrum. As an example, we could generate biphotons with discrete frequency modes (in the form of a frequency comb20,22) by cascading three PPSFs whose spectra are initially continuous (Fig. 2b, c). Note that the spacing between the frequency modes in Fig. 2c can be controlled by tailoring the dispersion of the middle section, without resorting to any spectral filtering or modification of the nonlinear media. It is also worth mentioning that since we are utilising type-II SPDC and using PPSFs as our nonlinear media [Λ(ωA, ωB) ≪ 1, Eq. (3)], biphotons generated from the cascade structure (Fig. 2c) are also entangled in the polarisation DOF as well.26,51 a A generalised cascade structure consisting of N nonlinear segments; \(\hat U_n^{(1)}\),\(\hat U_n^{(2)}\), and \(\hat U_n^{(N - 1)}\) are the polarisation transformation matrices of the PCs in the middle sections. The lengths of the nonlinear media and middle sections are denoted by Li and L0,i, respectively. The emission spectrum of the biphotons generated from b a 20 cm PPSF, and c a cascade structure consisting of three identical 20 cm-long PPSFs connected with two 6 m-long SMF28TM; The subset shows discretization of the frequency modes When |Δτ1| ≤ τC and the middle section has no dispersion (equivalent to L0 = 0), the coherent cascade of two nonlinear segments becomes equivalent to a longer biphoton source with the total nonlinear interaction length of LNL = L1 + L2 = 2L1. In this case, the brightness of the biphotons generated in the cascade structure increases by a factor of 23/2 (scaling with \(L_{{\mathrm{NL}}}^{3/2}\), see ref. 48) with respect to the individual nonlinear segment, while the emission bandwidth [now determined by \({\mathrm{sinc}}^2\left( {\Delta k_{\mathrm{VHV}}^{(1)}L_1} \right)\)] is reduced by a factor of 21/2 (scaling with \(L_{{\mathrm{NL}}}^{ - 1/2}\), see ref. 48) with respect to each of individual nonlinear segment. Note that both of the scaling factors mentioned here generally applies for degenerate SPDC processes in which the signal and idler have the same polarisation. However, as the group birefringence of PPSF is negligible51 over the bandwidth of the downconverted photons (see Supplement, Section 8.2), the scaling factors mentioned above also apply for the type-II SPDC phase-matching in the case of PPSF. Figure 3 shows the brightness (3a) and the emission bandwidth (3b) of N identical PPSFs that are coherently cascaded. As can be seen in Fig. 3b, coherent cascade of multiple nonlinear segments (equivalent to increasing the length of the nonlinear medium) reduces the emission bandwidth of the biphotons, which is particularly undesirable for broadband biphoton sources; however, we will show in the following that this issue can be overcome through incoherent cascade of multiple nonlinear segments. Trade-off between the brightness and the emission bandwidth of the biphotons generated from the cascade structure with coherent and incoherent pumping of N identical PPSFs. a Brightness scales linearly with respect to N (or LNL) for incoherent cascading, while it scales by factor of N3/2 (or \(L_{{\mathrm{NL}}}^{3/2}\)) for coherent cascading. b The emission bandwidth is independent of N (or LNL) for incoherent cascading, while it decreases with a factor of N−1/2 (or \(L_{{\mathrm{NL}}}^{ - 1/2}\)) for coherent cascading Biphoton spectrum: incoherent cascading When |Δτ1| ≤ τC ≪ |Δτ0|, which we refer to as "incoherent cascading", the factor \(e^{\frac{{ - \left| {\Delta \tau _{0} + \Delta \tau _1} \right|}}{{\tau _C}}}\) in Eq. (8) vanishes and the biphoton brightness simplifies to: $$\left\langle {B_{{\mathrm{tot}}}} \right\rangle _{\mathrm{avg}} = {\int} {\mathrm{d}} \omega _{P}{\mathrm{d}}\omega _{A}{\mathrm{d}}\omega _{B}\left| {{\cal{A}}_{0}^{\mathrm{type}\, \mathrm{II}}\alpha f(\omega _{P})L_{1}{\mathrm{sinc}}\left( {\frac{{{\mathrm{\Delta }}k_{\mathrm{VHV}}^{(1)}L_1}}{2}} \right)} \right|^{2}\delta \,\times\, \left\{ {1 + \left| {U_{4P}} \right|^{2}} \right\}.$$ With such a low coherence pump, the biphoton amplitudes from the two nonlinear segments will not interfere, and the brightness of the biphotons becomes twice that of an individual nonlinear segment as |U4P| → 1. More generally, the brightness increases linearly with respect to the total nonlinear interaction length in the cascade structure. On the other hand, the emission bandwidth of the biphotons [determined by \({\mathrm{sinc}}^{2}\left( {\frac{{\Delta k_{\mathrm{VHV}}^{(1)}L_1}}{2}} \right)\)] remains the same as the emission bandwidth of an individual segment. As the number of identical cascaded nonlinear segments N increases, the emission bandwidth remains constant, while the brightness increases linearly (Fig. 3). This suggests that with incoherent cascading, we can arbitrarily increase the brightness of the biphoton source without sacrificing the emission bandwidth of the biphotons which is in contrast to coherent cascading, where increasing the total nonlinear interaction length LNL was accompanied by a reduction in the emission bandwidth. Biphoton polarisation state In the following subsections, we study the effect of cascading on the degree of polarisation entanglement (quantified by concurrence54) and the polarisation state of the biphotons generated from the cascade structure. Although our approach can be applied to all SPDC phase-matching processes, in the interest of brevity, we discuss only the cases where type-II SPDC process occurs in both nonlinear media. As in previous cases, we account for the collective effects of the pump coherence, dispersion of the linear and nonlinear media, and the polarisation transformation applied in the middle section. We model the unitary transformation of the middle section for signal and idler by a general unitary matrix of the form $$\hat U_{A(B)}\left( {\theta ,\phi _1,\phi _2} \right) = \left( {\begin{array}{*{20}{c}} {e^{{\mathrm{i}}\phi _1}{\mathrm{cos}}\,\theta } & { - e^{{\mathrm{i}}\phi _2}{\mathrm{sin}}\,\theta } \\ {e^{ - {\mathrm{i}}\phi _2}{\mathrm{sin}}\,\theta } & {e^{ - {\mathrm{i}}\phi _1}{\mathrm{cos}}\,\theta } \end{array}} \right),$$ where θ is the angle of the polarisation rotation; note that ϕ1 and ϕ2 are the phase parameters, which define an arbitrary polarisation transformation; note that, ϕ1 and ϕ2 physically correspond to the birefringence introduced by the optical elements in the middle section, such as the PC. The collective transformation matrix of the middle section then becomes \(\hat R_n = e^{{\mathrm{i}}k_{0n}L_0}\hat U_n\left( {\theta ,\phi _1,\phi _2} \right)\). Given the polarisation transformation, we can use Eq. (5) and form the density matrix \(\hat \rho = \left\langle {{\mathrm{B}}_{{\mathrm{tot}}}} \right\rangle _{\mathrm{avg}}^{ - 1}\left| {\psi _{{\mathrm{CAS}}}} \right\rangle \left\langle {\psi _{{\mathrm{CAS}}}} \right|\) (so that \({\mathrm{Tr}}\left( {\hat \rho } \right) = 1\)) to characterise the polarisation state of the biphotons generated from the cascade structure. Degree of polarisation entanglement In order to study the effect of cascading on the degree of polarisation entanglement, we limit ourselves to transformation of the form \(\hat U_n\left( {\theta = \phi _1 = \phi _2 = 0} \right)\). The ensemble-averaged density matrix of the biphoton state in polarisation bases \(( {\left| {\mathrm{HH}} \right\rangle _{\omega _A,\omega _B},\left| {\mathrm{HV}} \right\rangle _{\omega _A,\omega _B},\left| {\mathrm{VH}} \right\rangle _{\omega _A,\omega _B},\left| {\mathrm{VV}} \right\rangle _{\omega _A,\omega _B}})\) can be written as: $$\begin{array}{*{20}{l}} {\left\langle {\hat \rho _{{\mathrm{\theta }} = \phi _1 = \phi _2 = 0}^{\mathrm{type}\,\mathrm{II}}} \right\rangle _{\mathrm{avg}}} \hfill & = \hfill & {\left\langle {{\mathrm{B}}_{{\mathrm{tot}}}} \right\rangle _{\mathrm{avg}}^{ - 1}{\int} {\mathrm{d}} \omega _P{\mathrm{d}}\omega _A{\mathrm{d}}\omega _B\left| {{\cal{A}}_0^{\mathrm{type}\, \mathrm{II}}\alpha f(\omega _P)L_1{\mathrm{sinc}}\left( {\frac{{\Delta k_{\mathrm{VHV}}^{(1)}L_1}}{2}} \right)} \right|^2\delta } \hfill \\ {} \hfill & {} \hfill & { \times \left\{ {1 + \left| {U_{4P}} \right|^2 + 2\left| {U_{4P}} \right|e^{\frac{{ - |\Delta \tau _0 + \Delta \tau _1|}}{{\tau _C}}}\cos \left( {\Delta k^{(0)}L_0 + \Delta k_{\mathrm{VHV}}^{(1)}L_1 + \Phi _P} \right)} \right\}{\times}\left( {\begin{array}{*{20}{c}} 0 & 0 & 0 & 0 \\ 0 & 1 & {e^{ - {\mathrm{i}}2\Lambda }} & 0 \\ 0 & {e^{{\mathrm{i}}2\Lambda }} & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}} \right).} \hfill \end{array}$$ When \({\int} {\mathrm{d}} \omega _A\mathrm{d}\omega _Be^{{\mathrm{i2}}\Lambda } = 0\), the density matrix for both pump coherence conditions has zero concurrence.54 This is due to the walk-off between the two biphoton polarisation states \(\left| {\mathrm{HV}} \right\rangle _{\omega _A,\omega _B}\) and \(\left| {\mathrm{VH}} \right\rangle _{\omega _A,\omega _B}\), which is introduced by the nonlinear segments. The walk-off in the cascade structure can usually be compensated by placing a birefringent element in the path of biphotons. However, it is more desirable to use nonlinear media with Λ ≪ 1 (such as poled-fibres51), especially when dealing with complex configurations55 consisting of multiple cascaded nonlinear segments. The use of such nonlinear media (Λ ≪ 1) in the cascade structure also allows us to, for example, preserve polarisation entanglement (if present) and simultaneously perform spectral shaping, similar to what we mentioned in previous sections. Shaping the polarisation state of the biphotons Now we study the role of polarisation transformation in shaping the polarisation state of the biphotons generated from the cascade structure. We consider a polarisation rotation of the form \(\hat U_n\left( {\theta = \frac{{\mathrm{\pi }}}{4},\phi _1 = \phi _2 = 0} \right)\), for which the density matrix of the biphoton state becomes: $$\begin{array}{*{20}{l}} {\left\langle {\hat \rho _{\theta \,=\, \frac{{\mathrm{\pi }}}{4},\,\phi _1 \,=\, \phi _2 \,=\, 0}^{\mathrm{type}\,\mathrm{II}}} \right\rangle _{\mathrm{avg}}} \hfill & = \hfill & {\left\langle {{\mathrm{B}}_{{\mathrm{tot}}}} \right\rangle _{\mathrm{avg}}^{ - 1}{\int} {\mathrm{d}} \omega _P{\mathrm{d}}\omega _A{\mathrm{d}}\omega _B\left| {{\cal{A}}_0^{\mathrm{type}\,\mathrm{II}}\alpha f(\omega _P)L_1{\mathrm{sinc}}\left( {\frac{{\Delta k_{\mathrm{VHV}}^{(1)}L_1}}{2}} \right)} \right|^2\delta } \hfill \\ {} \hfill & {} \hfill & { \times \left( {\begin{array}{*{20}{c}} 1 & { - \rho _1^ \ast } & { - \rho _1^ \ast } & { - 1} \\ { - \rho _1} & 1 & 1 & {\rho _2} \\ { - \rho _1} & 1 & 1 & {\rho _2} \\ { - 1} & {\rho _2^ \ast } & {\rho _2^ \ast } & 1 \end{array}} \right),} \hfill \end{array}$$ $$\begin{array}{l}\rho _1 = \left| {U_{4P}} \right|e^{\frac{{ - \left| {{\mathrm{\Delta }}\tau _0 \,+\, {\mathrm{\Delta }}\tau _1} \right|}}{{\tau _C}}}e^{{\mathrm{i}}(\Delta k^{(0)}L_0 \,+\, \Delta k_{\mathrm{VHV}}^{(1)}L_1 \,+\, \Phi _P \,-\, \Gamma _B)},\\ \rho _2 = \left| {U_{4P}} \right|e^{\frac{{ - \left| {\Delta \tau _0 \,+\, \Delta \tau _1} \right|}}{{\tau _C}}}e^{{\mathrm{i}}(\Delta k^{(0)}L_0 \,+\, \Delta k_{\mathrm{VHV}}^{(1)}L_1 \,+\, \Phi _P \,-\, \Gamma _A)}.\end{array}$$ Note that we have assumed the nonlinear segments satisfy Λ ≪ 1.For coherent cascading (|Δτ0 + Δτ1| ≪ τC), the matrix in Eq. (13) is wavelength-dependent due to ρ1 and ρ2 elements [see Eq. (14)]. In fact, it can be shown that for a small wavelength range of the signal and idler photons, the concurrence varies between 0 and 1 (see Fig. 4a). Note that the signal and idler photons are still spectrally correlated, while the degree of polarisation entanglement varies with respect to signal and idler wavelengths. The effect shown in Fig. 4a is a direct consequence of simultaneously manipulating the dispersion and birefringence of the middle section. a Concurrence as a function of signal and idler wavelengths for coherent cascading of two 30-cm-long PPSFs connected via a 3-m-long SMF28TM (used as the middle section). The polarisation transformation is set to \(\hat U_n\left( {\theta = \frac{\pi }{4},\phi _1 = \phi _2 = 0} \right)\). For certain signal (idler) wavelengths, such as those denoted by blue (red) strips closer to the degeneracy point, the concurrence is 1, while for the adjacent strips, the concurrence is 0. b Concurrence as a function of the angle of the polarisation rotation in the middle section (θ) for incoherent cascading; the designated red circle corresponds to \(\hat U_n\left( {\theta = \frac{\pi }{4},\phi _1 = \phi _2 = 0} \right)\) For incoherent cascading (|Δτ1| ≤ τC ≪ |Δτ0|), on the other hand, ρ1, ρ2 → 0 in Eq. (13) and the density matrix has now zero concurrence for the entire signal and idler wavelength range. Here the variation of the concurrence is a result of polarisation rotation, and mixing of the two maximally polarisation-entangled biphoton states. In fact, it can be shown that by varying θ in \(\hat U_n\left( {\theta ,\phi _1 = \phi _2 = 0} \right)\), we can vary the value of the concurrence between 0 and 1, obtaining a biphoton state with arbitrary degree of polarisation entanglement (see Fig. 4b). Note that here we only considered type-II SPDC for both pump coherence conditions; however, in practice, two biphoton sources with differing SPDC phase-matchings can be combined within our cascade structure to generate an arbitrary biphoton polarisation state. A cw tunable 780-nm external-cavity diode-laser (ECDL, Toptica DL PRO) with a coherence time of τC ≈ 3 μs (coherence length of LC ≈ 1 km) is used as a pump for coherent cascading. For incoherent cascading, we either decrease the time-averaged pump coherence by modifying the external cavity or separately pump the two PPSFs while the biphotons still travel in a common path (see Methods section). The pump power is adjusted for a pair generation rate of approximately 106 pairs s−1, for which the probability of multi-pair generation is so small that it can be ignored. To demonstrate biphoton shaping in the spectral and polarisation domains, we add 5 m of SMF28TM alongside an inline polarisation controller (PC2 in the inset of Fig. 5) to manipulate the dispersion and birefringence of the middle section. Experimental setup. The source under test, illustrated in the inset, is pumped by a tunable cw diode laser. a A standard L-C band wavelength-division multiplexer (WDM) is used to separate signal (shorter wavelength) and idler (longer wavelength) photons into two different fibres for coincidence measurement. b For spectral measurements, a dispersive medium (20 km Corning SMF28TM), and a beam splitter (BS) are used as a fibre spectrometer that extracts the biphoton spectrum. The overall detection time jitter is ~200 ps, based on which we select our coincidence time window to be 256 ps for spectral measurements. The nominal dispersion-length product of the 20 km fibre spool is 340 ps nm−1, which gives a spectral resolution of 0.75 nm in our measurement. c For the QST experiment, two sets of HP 8169 A polarisation analyzers (PAs) are used, each of which includes a quarter waveplate (QWP), a half waveplate (HWP), and a polarizer (POL) For our proof-of-principle demonstration, three types of measurements are performed on each individual PPSF sample, as well as the cascade structure as a whole (Fig. 5): (1) Measurement of the biphoton spectrum to observe the spectral interference, and to obtain the emission bandwidth of the biphotons; (2) a coincidence measurement to quantify the biphoton brightness; and (3) quantum state tomography56 (QST) to study the polarisation state of the biphotons under various polarisation transformations. The detection apparatus consists of two single photon detectors (SPDs, IDQ ID220), and a time-interval analyzer (TIA, Hydraharp 400). The biphoton spectrum is measured with an in-house fibre spectrometer57 (Fig. 5b). The spectral resolution of our spectrometer is approximately 0.75 nm (100 GHz), limited primarily by the time jitter of our detectors. For our setup, we choose two similar PPSFs by using the approach mentioned in the Methods section. The type-0 and type-II SPDC emission spectra of the two PPSFs are shown in Fig. 6. We observe the spectral overlap (~80 nm) is large around the degeneracy point, which allows us to obtain interference [predicted in Eq. (8)] in the spectrum of the biphotons generated from the cascade structure. Emission spectrum of each PPSF for a type-0 and b type-II SPDC; the emission spectra of the two segments are similar and overlap over a large bandwidth of ~80 nm Biphoton spectral properties: coherent cascading We cascade the two PPSFs and perform type-0 and type-II SPDC, depending on the pump wavelength used (Fig. 7). The emission spectra of the biphotons at the output of the cascade structure are shown in Fig. 7a, b. Note that we have not yet applied any polarisation rotation in the middle section [\(\hat U_n\left( {\theta = 0,\phi _1,\phi _2} \right)\), see Eq. (11)]. The spectral interference fringes appear in the biphoton spectrum due to the dispersion of a 5-m-long SMF28TM in the middle section, which connects the two fibre-pigtailed PPSFs. The less-than-unity fringe visibility is mainly due to the spectral and brightness discrepancy between the two PPSFs. In our case, the fringe visibility of type-0 SPDC (~86%) is almost similar to what is observed for type-II SPDC (~81%) around the degeneracy wavelength, where the emission spectra of the two PPSFs are well matched (see Fig. 6). However, as we go further from the degeneracy wavelength, the discrepancy in the spectra of the two PPSFs increases (especially for type-II), and the fringe visibility of type-0 and type-II drops to ~80% and ~50%, respectively. a The emission spectrum of the biphotons generated from the cascade structure for type-0 SPDC (pumped at 780.05 nm) when |Δτ1 + Δτ0| ≪ τC (coherent cascade), and |Δτ1|≤τC ≪ |Δτ0| (incoherent cascade, red trace). b The emission spectrum of the biphotons generated from the cascade structure for type-II SPDC, when coherently pumped at 782.05 nm; the inset shows the effect of the dispersion in the middle section on discretization of the frequency components. c Simulation result of the biphoton spectrum for type-0 SPDC cascade; the cascade structure consists of two identical PPSFs, each of length 25 cm (= L1 = L2), connected by an SMF28TM patchcord of length L0 = 5 m To highlight the effect of the pump coherence on the biphoton interference, we now decrease the time-averaged coherence of the pump (see Methods) and measure the spectrum of the biphotons at the output of the cascade structure for the type-0 SPDC process. We use type-0 since the emission spectra of the two PPSFs are similar, so that the initial assumption of identical emission spectra of the two nonlinear sources holds true. The fringe visibility disappears in Fig. 7a (red trace), and the biphoton spectrum is now just an incoherent sum of the two individual PPSF spectra; this is in a good agreement with our simulation result shown in Fig. 7c. Biphoton spectral properties: incoherent cascading In this section, we study the brightness and emission spectrum of the biphotons generated from the cascade structure under incoherent pumping. To quantify the biphoton brightness [〈Btot〉avg in Eq. (10)], we measure the equivalent quantity, the coincidence rate of the biphotons (see Supplement, Section 8), for each individual sample as well as the cascade structure. We chose type-0 SPDC (pumped at 780.05 nm) since the emission spectra of the two PPSFs largely overlap (see Fig. 6a), allowing us to observe the variation in the emission bandwidth of the biphotons. We first pump each PPSF and measure the coincidence rates with respect to the pump power. Taking into account the effect of loss for the pump and the signal (idler) fields, the expected coincidence rate for the cascade structure becomes: $$R_{\exp } = \eta _{A(B),2}^2R_{\mathrm{PPSF1}} + \eta _{P,1}R_{\mathrm{PPSF2}},$$ where ηA(B),2 is the transmission of the signal (idler) field from the output of the PPSF1 to the output of the PPSF2, and ηP,1 is the transmission of the pump field from the input of PPSF1 to the input of PPSF2; RPPSF1 and RPPSF2 are the coincidence rate of the first and the second PPSF, respectively. We then use separate pumping technique (see Methods) to ensure incoherent cascading and measure the coincidence rate of the biphotons generated from the cascade structure and compare it with Rexp in Eq. (15). Note that the polarisation transformation in the middle section is set to \(\hat U_n\left( {\theta = 0,\phi _1,\phi _2} \right)\) [see Eq. (11)] during our measurements. The result in Fig. 8a shows that for incoherent cascading, the brightness increases additively, and therefore scales linearly with the total nonlinear interaction length. a Coincidence rates are plotted as a function of pump power for type-0 SPDC. Symbols are measured data, while the solid and dashed lines are linear fits to the data points. For PPSF1, the displayed data points are the measured coincidence rates corrected by \(\eta _{{\mathrm{A}}({\mathrm{B}}),2}^2\). For PPSF2, the displayed data points are the measured coincidence rates corrected by ηP,1. The error bars are so small that they cannot be shown in the figure. b Type-0 SPDC spectra of the individual segment, and the cascaded. Due to nearly equal contribution of each PPSF sample in the output, the biphoton spectrum at the output of the cascade structure is the average of the two PPSF spectra, and shows no bandwidth reduction We then measure the emission spectrum of the biphotons generated from the cascade structure. As can be seen in Fig. 8b, the emission spectrum is the arithmetic mean of the two individual PPSF's spectra due to almost equal contribution of the two PPSFs at the output of the cascade structure. The result in Fig. 8b also shows no bandwidth reduction, which indicates that the emission bandwidth in incoherent cascading becomes independent of total nonlinear interaction length inside the cascade structure. Biphoton polarisation state: incoherent cascading We now study the degree of polarisation entanglement and the polarisation state of the biphotons generated from the cascade structure by considering two specific transformations: (1) \(\hat U_n\left( {\theta = 0,\phi _1,\phi _2} \right)\), and (2) \(\hat U_n\left( {\theta = \frac{{\mathrm{\pi }}}{4},\phi _1,\phi _2} \right)\). We first characterize the polarisation state of the biphotons generated from each PPSF when pumped for type-II SPDC at 782.05 nm. Results in Fig. 9a, b show that both PPSFs generate biphoton states with a high concurrence, and high fidelity to triplet state |ψ+〉 (see also Table 1). Real and imaginary part of the output density matrix of a PPSF1, b PPSF2 and cascade structure corresponding to the polarisation transformation (c), \(\hat U_n\left( {\theta = \phi _1 = \phi _2 = 0} \right)\) (d), and \(\hat U_n\left( {\theta = \phi _1 = \frac{\pi }{4},\phi _2 = 0} \right)\) in the middle section. Note that the relative contributions of the two samples at the output are set to be similar (48% from PPSF1 and 52% from PPSF2) by adjusting the pump power for each one of them Table 1 Fidelity, concurrence, and purity [Tr(\(\hat \rho ^2\))] of the biphoton state measured in polarisation basis The setup for cascading is similar to that of Fig. 5, except that we separately pump the two PPSFs (see Methods) to ensure incoherent cascading. This method of pumping helps us to precisely control the pairwise contributions of each PPSF segment in the final quantum state and at the same time enables us to shape the polarisation state of the biphotons. We now change the settings of PC2 (see Fig. 5) so that there would be no polarisation rotation in the middle section [\(\hat U_n\left( {\theta = 0,\phi _1,\phi _2} \right)\), see Eq. (11)] and then measure the biphoton state again. It can be seen from Fig. 9c that the measured density matrix corresponds to a highly polarisation-entangled state. Note that due to the negligible value of Λ for the PPSFs, no walk-off is introduced between \(\left| {\mathrm{HV}} \right\rangle _{\omega _A,\omega _B}\) and \(\left| {\mathrm{VH}} \right\rangle _{\omega _A,\omega _B}\), and the degree of the polarisation entanglement remains unchanged after cascading. By applying a polarisation rotation of \(\theta = \frac{\pi }{4}\) in the middle section \(\left[ {\hat U_n\left( {\theta = \phi _1 = \frac{{\mathrm{\pi }}}{4},\phi _2 = 0} \right)} \right]\), the density matrix of the output state changes into the one shown in Fig. 9d. The concurrence drops to ~0.1 despite both PPSF segments individually generating high-concurrence polarisation-entangled biphotons. This value of the concurrence is consistent with the one predicted in Fig. 4b and suggests that, for \(0 \le \theta \le \frac{{\mathrm{\pi }}}{4}\), we can arbitrary tune the concurrence between 0 and 1. Note that here we have only considered type-II SPDC with two polarisation transformations; however, by applying various transformations \(\hat U_n\left( {\theta ,\phi _1,\phi _2} \right)\) with PC2 in the middle section and utilizing different SPDC phase-matchings, one can generate any biphoton polarisation states. We have shown here that cascading biphoton sources in a common-path configuration can be used as a versatile tool to simultaneously tailor the frequency and polarisation DOFs of entangled photons. In this strategy, the pump coherence plays a major role in obtaining various biphoton states. With a long-coherence pump, the entire cascade structure can be considered as one unified source, capable of generating biphotons with tunable spectral properties58,59; in fact, one can obtain various biphoton spectra (Fig. 7) simply by engineering the dispersion of the linear medium in the middle section. For example, by cascading counter-propagating path-entangled biphoton sources,60 one could obtain biphoton frequency combs (similar to the works reported earlier1,22) of constant spacing whose free spectral range can be tuned by manipulating the dispersion of the middle section; note that, this can be done without any dispersion modification of the nonlinear medium. Since our spectral and polarisation shaping techniques share the same configuration, we can simultaneously control biphotons in both DOFs. For example, by coherently pumping the cascade structure and manipulating the dispersion and birefringence property of the middle section, we can generate biphotons whose degree of polarisation entanglement is frequency-dependent (Fig. 4a); this new effect, arising from the interplay between coherence and entanglement, directly links the entanglement existing in the polarisation DOF to the frequency DOF of biphotons. With incoherent pumping, the effects arising from biphoton-biphoton amplitude interference disappear, and the final state will become an incoherent mixture of the individual states generated from each nonlinear segment (see Figs. 8 and 9). The immediate application would be the ability to increase the brightness of the biphoton sources (at the expense of greater noise) by increasing the total nonlinear interaction length without sacrificing the emission bandwidth of the generated biphotons (see Figs. 3 and 8). In addition, the incoherent cascade scheme allows us to generate arbitrary biphoton polarisation states,13,14 and also control the degree of polarisation entanglement of biphotons (see Figs. 4b and 9d). We remark that our configuration greatly simplifies the schemes previously used for generating arbitrary biphoton polarisation state13,14 and removes the requirement for phase stabilization and pump coherence due to its common-path configuration. It is worth mentioning that using linear and nonlinear materials with negligible group birefringence (such as poled-fibres51) and small dispersion is of great importance in the cascade strategy as no walk-off between different biphoton polarisation states is essentially introduced. This feature allows us to preserve polarisation entanglement (if present), or generate arbitrary polarisation states13,14 without the need for complex walk-off compensation schemes. This can also be beneficial for the configurations recently proposed for generating multi-photon entanglement,55 which often involve complex scheme of multiple cascaded nonlinear media. In addition, nonlinear media with small dispersion generate broadband biphotons and with the simple spectral shaping technique presented here, they can serve as versatile quantum sources for various quantum information processing applications.1,2 The technique presented in this work can also be generalised to all other waveguide-based photon-pair sources, including those in integrated photonics devices, and one can use the effect of biphoton-biphoton amplitude interference to tune the properties of entangled photons, not only in the frequency and polarisation DOFs, but also in other DOFs such as path and orbital angular momentum as well. From this perspective, the cascade strategy can be invaluable for generating a host of entangled-photon states that could be useful for quantum information processing, quantum sensing, and the study of the foundations of quantum mechanics. Choosing PPSF samples The SHG spectrum of several PPSF samples are examined with a cw tunable laser (Agilent 8164A) in the 1550–1565 nm wavelength range.61 Depending on the input polarization of the fundamental lightwave, the type-0 or type-II phase-matching can be observed.61 Two PPSFs whose SHG peaks and SPDC spectra are well-matched are then selected as nonlinear segments for our cascade structure. Fibre spectrometer The biphotons generated in the cascade structure are sent to a dispersive medium (20 km of Corning SMF-28), which maps their wavelengths onto the arrival time at the single photon detectors.57 After time-tagged detection with single photon detectors, the spectrum of the biphotons can be recovered by translating the time delays into wavelength.46,57 The minimum resolution of our spectrometer depends primarily on the timing jitter of the single photon detectors.46 As the overall detection time-jitter is ~200 ps, we chose our coincidence time window to be 256 ps for these measurements. Based on this coincidence window and the nominal dispersion-length product of the 20 km fibre spool, which is 340 ps nm−1, we can obtain spectral resolution of ~0.75 nm with our spectrometer. Incoherent pumping schemes Depending upon suitability, one of two following methods is used to achieve incoherent pumping: (1) We decrease the effective coherence time of the pump laser by periodically modulating the cavity length of ECDL (the 780 nm laser) so that its time-averaged linewidth (as measured by a Fabry-Perot spectrometer) increases, effectively reducing the pump temporal coherence; (2) We pump the two PPSFs separately while the biphotons still travel in a common path. Since the two pump fields reaching the PPSF segments travel in two different and unstabilized fibre paths, no coherence is preserved between the two fields, guaranteeing incoherent cascading. Data are available from the authors upon reasonable request. Kues, M. et al. On-chip generation of high-dimensional entangled quantum states and their coherent control. Nature 546, 622–626 (2017). Reimer, C. et al. High-dimensional one-way quantum processing implemented on d-level cluster states. Nat. Phys. 15, 148–153 (2018). Walther, P. et al. Experimental one-way quantum computation. Nature 434, 169–176 (2005). Jennewein, T., Simon, C., Weihs, G., Weinfurter, H. & Zeilinger, A. Quantum cryptography with entangled photons. Phys. Rev. Lett. 84, 4729 (1999). Barreiro, J. T., Langford, N. K., Peters, N. A. & Kwiat, P. G. Generation of hyperentangled photon pairs. Phys. Rev. Lett. 95, 260501 (2005). Hardy, L. Nonlocality for two particles without inequalities for almost all entangled states. Phys. Rev. Lett. 71, 1665 (1993). Shalm, L. K. et al. Strong loophole-free test of local realism. Phys. Rev. Lett. 115, 250402 (2015). Qian, X.-F., Vamivakas, A. N. & Eberly, J. H. Entanglement limits duality and vice versa. Optica 5, 942–947 (2018). Lukens, J. M. & Lougovski, P. Frequency-encoded photonic qubits for scalable quantum information processing. Optica 4, 8–16 (2017). Lu, H.-H. et al. Quantum interference and correlation control of frequency-bin qubits. Optica 5, 1455–1460 (2018). Lu, H.-H. et al. A controlled-NOT gate for frequency-bin qubits. npj Quantum Inf. 5, 24 (2019). Zhong, T. et al. Photon-efficient quantum key distribution using time–energy entanglement with high-dimensional encoding. New. J. Phys. 17, 022002 (2016). White, A. G., James, D. F. V., Eberhard, P. H. & Kwiat, P. G. Nonmaximally entangled states: production, characterization, and utilization. Phys. Rev. Lett. 83, 3103 (1999). Peters, N. A. et al. Maximally entangled mixed states: creation and concentration. Phys. Rev. Lett. 92, 133601 (2004). Wei, T.-C. et al. Synthesizing arbitrary two-photon polarization mixed states. Phys. Rev. A 71, 032329 (2005). Cinelli, C., Di Nepi, G., De Martini, F., Barbieri, M. & Mataloni, P. Parametric source of two-photon states with a tunable degree of entanglement and mixing: experimental preparation of Werner states and maximally entangled mixed states. Phys. Rev. A 70, 022321 (2004). Jaeger, G., Shimony, A. & Vaidman, L. Two interferometric complementarities. Phys. Rev. A 51, 51–54 (1995). Pe'er, A., Dayan, B., Friesem, A. A. & Silberberg, Y. Temporal shaping of entangled photons. Phys. Rev. Lett. 94, 073601 (2005). Lu, H.-H., Odele, O. D., Leaird, D. E. & Weiner, A. M. Arbitrary shaping of biphoton correlations using near-field frequency-to-time maapping. Opt. Lett. 43, 743–746 (2018). Lu, Y. J., Campbell, R. L. & Ou, Z. Y. Mode-Locked two-photon states. Phys. Rev. Lett. 91, 163602 (2003). Olislager, L. et al. Frequency-bin entangled photons. Phys. Rev. A 82, 013804 (2010). Xie, Z. et al. Harnessing high-dimensional hyperentanglement through a biphoton frequency comb. Nat. Photonics 9, 536–542 (2015). Nasr, M. B. et al. Ultrabroadband biphotons generated via chirped quasi-phase-matched optical parametric down-conversion. Phys. Rev. Lett. 100, 183601 (2008). Brańczyk, A. M., Fedrizzi, A., Stace, T. M., Ralph, T. C. & White, A. G. Engineered optical nonlinearity for quantum light sources. Opt. Express 19, 55–65 (2011). Bonfrate, G., Pruneri, V., Kazansky, P. G., Tapster, P. & Rarity, J. G. Parametric fluorescence in periodically poled silica fibers. Appl. Phys. Lett. 75, 2356 (1999). Zhu, E. Y. et al. Direct generation of polarization-entangled photon pairs in a poled fiber. Phys. Rev. Lett. 108, 213902–213905 (2012). Chekhova, M. V. & Ou, Z. Y. Nonlinear interferometers in quantum optics. Adv. Opt. Photonics 8, 108–155 (2016). Yurke, B., McCall, S. L. & Klaud, J. R. SU(2) and SU(1,1) interferometers. Phys. Rev. A 33, 4033 (1986). Kong, J. et al. Experimental investigation of the visibility dependence in a nonlinear interferometer using parametric amplifiers. Appl. Phys. Lett. 102, 011130 (2013). Vered, R. Z., Shakhed, Y., Ben-Or, Y., Rosenbluh, M. & Pe'er, A. Classical-to-quantum transition with broadband four-wave mixing. Phys. Rev. Lett. 114, 063902 (2015). Shaked, Y., Pomerantz, R., Vered, R. Z. & Pe'er, A. Observing the nonclassical nature of ultra-broadband bi-photons at ultrafast speed. New J. Phys. 16, 053012 (2014). Jing, J., Liu, Z., Ou, Z. Y. & Zhang, W. Realization of a nonlinear interferometer with parametric amplifiers. Appl. Phys. Lett. 99, 011110 (2011). Lemieux, S. et al. Engineering the frequency spectrum of bright squeezed vacuum via group velocity dispersion in an SU(1,1) interferometer. Phys. Rev. Lett. 117, 183601 (2016). Manceau, M., Leuchs, G., Khalili, F. & Chekhova, M. V. Detection loss tolerant supersensitive phase measurement with an SU(1,1) interferometer. Phys. Rev. Lett. 119, 223604 (2017). Klyshko, D. N. Ramsey interference in two-photon parametric scattering. JETP 77, 222 (1993). Milonni, P. W., Fearn, H. & Zeilinger, A. Theory of two-photon down-conversion in the presence of mirrors. Phys. Rev. A 53, 4556 (1995). Burlakov, A. V. et al. Interference effects in spontaneous two-photon parametric scattering from two macroscopic regions. Phys. Rev. A 56, 3214–3225 (1997). Zou, X. Y., Wang, L. J. & Mandel, L. Induced coherence and indistinguishability in optical interference. Phys. Rev. Lett. 67, 318 (1991). Ou, Z. Y., Wang, L. J., Zou, X. Y. & Mandel, L. Evidence for phase memory in two-photon down-conversion through entanglement with the vacuum. Phys. Rev. A 41, 566 (1990). Herzog, T. J., Rarity, J. G., Weinfurter, H. & Zeilinger, A. Frustrated two-photon creation via interference. Phys. Rev. Lett. 72, 629 (1994). Herzog, T. J., Kwiat, P. G., Weinfurter, H. & Zeilinger, A. Complementarity and the quantum eraser. Phys. Rev. Lett. 75, 3034 (1995). Wang, I. J., Zou, X. Y. & Mandel, L. Induced coherence without induced emission. Phys. Rev. A 44, 4614 (1991). Lemos, G. B. et al. Quantum imaging with undetected photons. Nature 512, 409–412 (2014). Kulik, S. P. et al. Two-photon interference in the presence of absorption. JETP 98, 31–38 (2004). Kalashnikov, D. A., Petrova, A., Kulik, S. & Krivitsky, L. A. Infrared Spectroscopy with visible light. Nat. Photonics 10, 98–102 (2016). Riazi, A. et al. Alignment-free dispersion measurement with interfering biphotons. Opt. Lett. 44, 1484–1487 (2019). Liscidini, M., Helt, L. G. & Sipe, J. E. Asymptotic fields for a Hamiltoninan treatment of nonlinear electromagnetic phenomena. Phys. Rev. A 85, 013833 (2012). Helt, L. G., Liscidini, M. & Sipe, J. E. How does it scale? Comparing quantum and classical nonlinear optical processes in integrated devices. J. Opt. Soc. Am. B 29, 2199–2212 (2012). Loudon, R. The Quantum Theory of Light. (Oxford University Press, New York, 2000). Hertel, I. V. & Schulz, C. Atoms, Molecules and Optical Physics 2. (Springer, Berlin, 2015). Chen, C. et al. Compensation-free broadband entangled photon pair sources. Opt. Express 25, 22667–22678 (2017). Chen, C. et al. Turn-key diode-pumped all-fiber broadband polarization-entangled photon source. OSA Continuum 1, 981–987 (2018). Yariv, A. & Yeh, P. Photonics. (Oxford University Press, New York, 2007). Wootters, W. K. Entanglement of formation of an arbitrary state of two qubits. Phys. Rev. Lett. 80, 2245–2248 (1998). Krenn, M., Hochrainer, A., Lahiri, M. & Zeilinger, A. Entanglement by path identity. Phys. Rev. Lett. 118, 080401 (2017). James, D. F. V., Kwiat, P. G., Munro, W. J. & White, A. G. Measurement of qubits. Phys. Rev. Lett. 64, 0523121-15 (2001). Zhu, E. Y., Corbari, C., Kazansky, P. G. & Qian, L. Self-calibrating fiber spectrometer for the measurement of broadband down-converted photon pairs. arXiv:1505.01226v1 (2015). Riazi, A. et al. Quantum interferometry through cascading broadband entanglement sources, presented at Conference on Lasers and Electro-Optics (CLEO), 1–2 (San Jose, CA, 2018). Su, J. et al. Versatile and precise quantum state engineering by using nonlinear interferometers. Opt. Express 27, 20479–20492 (2019). Saravi, S., Pertsch, T. & Setzpfandt, F. Generation of counterpropagating path-entangled photon pairs in a single periodic waveguide. Phys. Rev. Lett. 118, 183603 (2017). Zhu, E. Y. et al. Measurement of χ(2) symmetry in a poled fiber. Opt. Lett. 35, 1530–1532 (2010). The authors would like to thanks the anonymous reviewer for the comments regarding the pump coherence. This work is supported by Natural Sciences and Engineering Research Council of Canada (NSERC), grant no. (RGPIN-2014-06425, RGPIN-2014-05, RGPAS 462021-2014). Natural Sciences and Engineering Research Council of Canada (RGPIN-2014-06425, RGPIN-2014-05, RGPAS 462021-2014). Department of Electrical and Computer Engineering, University of Toronto, 10 King's College Rd., Toronto, M5S 3G4, Canada Arash Riazi , Changjia Chen , Eric Y. Zhu & Li Qian Fibre Optics Research Center, Russian Academy of Sciences, 38 Vavilov Street, 119333, Moscow, Russia Alexey V. Gladyshev Optoelectronics Research Centre, University of Southampton, Southampton, SO17 1BJ, UK Peter G. Kazansky Department of Physics, University of Toronto, 60 St. George St., Toronto, M5S 1A7, Canada J. E. Sipe Search for Arash Riazi in: Search for Changjia Chen in: Search for Eric Y. Zhu in: Search for Alexey V. Gladyshev in: Search for Peter G. Kazansky in: Search for J. E. Sipe in: Search for Li Qian in: A.R. and L.Q. devised the experiment. J.E.S. and A.R. developed the theoretical framework for the cascade structure. A.R. performed the experiment, with help from E.Y.Z and C.C. Fabrication of the nonlinear devices was performed by A.V.G. and P.G.K. All authors contributed to the writing of the manuscript. Correspondence to Arash Riazi. The authors declare no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. Riazi, A., Chen, C., Zhu, E.Y. et al. Biphoton shaping with cascaded entangled-photon sources. npj Quantum Inf 5, 77 (2019) doi:10.1038/s41534-019-0188-1 npj Quantum Information menu For Authors & Referees
CommonCrawl
The influence of radiotherapy techniques on the plan quality and on the risk of secondary tumors in patients with pituitary adenoma Marius Treutwein ORCID: orcid.org/0000-0002-6538-99801, Felix Steger1, Rainer Loeschel2, Oliver Koelbl1 & Barbara Dobler1 BMC Cancer volume 20, Article number: 88 (2020) Cite this article This planning study compares different radiotherapy techniques for patients with pituitary adenoma, including flatness filter free mode (FFF), concerning plan quality and secondary malignancies for potentially young patients. The flatness filter has been described as main source of photon scatter. Eleven patients with pituitary adenoma were included. An Elekta Synergy™ linac was used in the treatment planning system Oncentra® and for the measurements. 3D plans, IMRT, and VMAT plans and non-coplanar varieties were considered. The plan quality was evaluated regarding homogeneity, conformity, delivery time and dose to the organs at risk. The secondary malignancy risk was calculated from dose volume data and from measured dose to the periphery using different models for carcinoma and sarcoma risk. The homogeneity and conformity were nearly unchanged with and without flattening filter, neither was the delivery time found substantively different. VMAT plans were more homogenous, conformal and faster in delivery than IMRT plans. The secondary cancer risk was reduced with FFF both in the treated region and in the periphery. VMAT plans resulted in a higher secondary brain cancer risk than IMRT plans, but the risk for secondary peripheral cancer was reduced. Secondary sarcoma risk plays a minor role. No advantage was found for non-coplanar techniques. The FFF delivery times were not shortened due to additional monitor units needed and technical limitations. The risk for secondary brain cancer seems to depend on the irradiated volume. Secondary sarcoma risk is much smaller than carcinoma risk in accordance to the results of the atomic bomb survivors. The reduction of the peripheral dose and resulting secondary malignancy risk for FFF is statistically significant. However, it is negligible in comparison to the risk in the treated region. Treatments with FFF can reduce secondary malignancy risk while retaining similar quality as with flattening filter and should be preferred. VMAT plans show the best plan quality combined with lowest peripheral secondary malignancy risk, but highest level of second brain cancer risk. Taking this into account VMAT FFF seems the most advantageous technique for the treatment of pituitary adenomas with the given equipment. Radiotherapy of pituitary adenomas is often applied as postoperative therapy of tumors that cannot be removed completely. For over one decade patients with pituitary adenoma have been treated primarily with two parallel opposed fields or a three field technique [1,2,3,4,5,6,7]. Only a few recent publications describe the application of intensity modulated radiotherapy (IMRT) or volumetric modulated arc (VMAT) technique in this region [8,9,10], none of them mentioning the flatness filter free (FFF) mode. FFF is applicable in combination with fluence modulating techniques like IMRT or VMAT. In this mode a considerably higher dose rate is achieved by omitting the flatness filter. This planning study compares different plans for patients with pituitary adenoma. The plans were optimized using both modes: flattened beam (FB) and FFF. The flatness filter has been described as the main source of photon scatter in the treatment head [11]. Additional peripheral dose resulting from this contribution has been confirmed in some publications referring to other entities [12,13,14,15,16,17,18] and increases the risk for secondary malignancies. Pituitary adenomas represent about 10% of all intracranial tumors [19]. The incidence increases over the years, starting at about an age of 10 years and has a possible decline in high age [20]. Therefore, the risk for secondary malignancies should be considered. The aim of this study is to evaluate statistically significant differences for FB and FFF plans regarding the plan quality and the risk for secondary malignancies in the treated region and the periphery. Plans with two and three fixed beams were taken as reference. Data sets of 11 patients (five female, six male) with pituitary adenoma were used for this retrospective planning study. All patients have got a cranial X-ray CT scan in supine positioning with the head in neutral position. The head was immobilized using thermoplastic mask systems. The CT scans were fused with the pretherapeutic cranial MRI (contrast enhanced T1-weighted sequence). GTV, CTV and PTV as well as organs at risk were delineated using the treatment planning system (TPS) Oncentra® External Beam v4.5 (Nucletron®, an Elekta company) on all axial slices. The GTV included the macroscopic (residual) tumor volume. The CTV was based on GTV extended for the resection cavity in postoperative cases. The PTV was defined by CTV plus an isotropic margin of 3–5 mm, depending on setup error and reproducibility of positioning. We used a linear accelerator (linac) of type Elekta Synergy™ with Agility™ head (Elekta AB, Stockholm, Sweden) operated by the desktop software Integrity 3.1 and record and verify system Mosaiq 2.50. The head is equipped with 80 interdigitating leaf pairs, projecting a leaf width of 5 mm to the isocenter. For all plans 6 MV photons were used. The beam quality of both modes FB and FFF has been shown to be equivalent for this machine type [21, 22]. The linac offers a maximum dose rate of 550 MU per minute in FB mode and 1700 MU per minute in FFF. The planning was performed with TPS Oncentra® External Beam using the collapsed cone algorithm. A grid size of 2 mm has been chosen. Variable gantry speed with a set maximum value of 6.0 degree per second is supported by the software. The variable dose rate was set to a minimum value of 20 MU per minute. A dynamic and static minimum leaf gap of 1.0 cm had to be observed. The optimizer module in Oncentra® used the step-and-shoot algorithm for IMRT optimizations [23]. This module has been developed by RaySearch Laboratories (Stockholm, Sweden) and therefore has the same roots as the SmartArc module in Pinnacle3 TPS (Philips, Amsterdam, Netherlands) and the proprietary development RayArc module in RayStation TPS. The objectives for the PTV were set to a minimum dose of 49.4 Gy and a maximum dose of 51.4 Gy in 28 fractions, aiming for a fraction dose of 1.8 Gy. A uniform dose objective to 50.4 Gy was added to improve the homogeneity. Further objectives were set for the following organs at risk (OAR) (Table 1) according to [24,25,26]: brain, brainstem, chiasm, both lenses, bulbs, lacrimal glands, and parotids. Additionally, the surrounding dose fall-off objective has been applied to shape the dose gradient from the PTV into the normal tissue [27]. It supports an improvement of the conformity. The same set of dose volume objectives has been used for all plans to get comparable results [12, 18, 28, 29]. The aim of this set was to keep the risk for the OARs on an acceptable level, but to leave freedom for the optimizer to achieve good conformity and homogeneity. In both modes (FB and FFF) IMRT plans with nine equispaced coplanar fields were generated; in a second variant a tenth non-coplanar field was added. Similarly two different VMAT plans were optimized: one single arc rotation (182°-178°), and the second with an added half rotation in the sagittal patient plane (0°-180°). Table 1 Treatment planning objectives The average dose to the PTV was set to 100% after the optimization process. An average dose value in a range of 50.4 Gy ± 0.8 Gy - which represents an interval of about 1.5% around the target value – was accepted in the sense of a dose prescription according to ICRU 83 [30]. No rescaling has been performed as this would have affected the dose to the normal tissue and organs at risk which are also part of the optimization process [31]. Plan evaluation The following parameters were evaluated: the average dose to the PTV DavPTV, homogeneity index HI, and the conformity index CI. For HI the definition of ICRU report 83 [30] was used: HI = (D2%PTV – D98%PTV)/DavPTV with D2%PTV and D98%PTV as dose to 2 and 98% of the PTV. CI was defined according to Paddick [32]: CI = TV49.4 Gy2/(V49.4 Gy x VPTV). TV49.4 Gy is the volume within the PTV which receives at least 49.4 Gy, V49.4 Gy is the volume enclosed by the corresponding isodose within the complete patient contour, and VPTV is the volume of the PTV. For all plans the observance of the objectives for the OAR was investigated and regarded as criterion of acceptability. The evaluation of the peripheral dose (PD) was performed using the upper part of a male Alderson phantom (RSD Inc., Long Beach, CA, USA) (Fig. 1). Two slabs of the phantom were replaced by copies of PA material with bores for ionization chambers. The first chamber in a distance of 16.3 cm from isocenter corresponded to the position of the thyroid gland, the second in a distance of 30.3 cm in the upper thoracic region corresponded to an esophageal position. The dose to these points, PDthyr and PDesoph has been measured with chambers type M30016 (0.3 cm3) and M23331 (1.0 cm3), respectively, both connected to Unidos dosimeters (PTW, Freiburg, Germany). It is reasonable to assume an uncertainty of 5% for these measurements considering the statistical uncertainty, the positioning inaccuracy and the calibration of the detector for megavoltage beam quality [33]. Setup of the Alderson phantom for the peripheral dose measurements. The ionization chambers are inserted in the white slabs. The arrow indicates the isocenter plane The plan verifications were accomplished with the SRS MapCHECK™ array and StereoPHAN™ phantom equipment (Fig. 2) in combination with the SNS Patient™ software version 8.1 (Sun Nuclear Corporation, Melbourne, FL, USA). The array is a rather new developed device. Its size is 77 × 77 mm2, the diodes have a spacing of 2.47 mm. The active detector area is 0.48 × 0.48 mm2. The signals are sampled with a frequency of 20 Hz. Gantry angles are analyzed by the angular dependency of two opposing detector planes. Therefore, no gantry angle sensor is necessary. The phantom has already been used with other modular inserts for films [34] and ionization chambers. The system is especially designed for small volumes. Setup for the plan verification with SRS MapCHECK™ array inserted in the StereoPHAN™ phantom We applied a hybrid plan verification as described for other systems [35,36,37] transferring the patient plan unchanged to the phantom and performing a dose calculation on the phantom with a dose grid of 1 mm. A CT scan of the phantom has been offered by the manufacturer, slices in 1 mm distance; a uniform relative mass density of 1.2 has been assigned to the external structure as recommended. The array was positioned horizontally in the isocenter plane. The software assists a cross-calibration procedure. Offered shifts of the measured profiles relative to the calculated ones for best coincidence were accepted, but were clearly smaller than 1.0 mm. The measured and calculated dose distributions were evaluated by the gamma index [38] with a dose tolerance of 3% referring to the maximum dose and a distance to agreement of 3 mm. The area of the evaluation was confined to dose values above 10% of the dose maximum [39]. As an additional plan quality parameter the delivery time (DT) was recorded from pressing the beam on button until the last beam off. Secondary malignancy risk The calculations for the risk of secondary malignancies use the models presented by Schneider et al. [40]. Their work combines the data of the Japanese A-bomb survivors [41] and secondary cancer data of Hodgkin's patients from a Western population [42]. The excess absolute risk (EAR) describes the risk of malignancy incidence after irradiation. It is expressed as absolute difference of the number of malignancies in comparison to an untreated control group. Commonly it is given per 10.000 persons per year. It is a function of the dose d, the sex s, the age at exposure e and the attained age a. $$ EAR\left(d,s,e,a\right)=\mu \left(s,e,a\right)\times f(d) $$ In the present work gender averaged values are used. Therefore, the parameter s can be neglected. Using the tables given in [40], the EAR can be calculated from the dose volume data from the TPS for different organs of volume VT: $$ {EAR}^{org}=\frac{1}{V_T}\sum \limits_iV\left({D}_i\right)\times {\beta}_{EAR}\times RED\left({D}_i\right)\times \mu \left(e,a\right) $$ The summation is performed over all voxels of the organ with dose entry Di. μ can be used to calculate the risk for different ages (e: age at exposition, a: attained age): $$ \mu\ \left(e,a\right)=\mathit{\exp}\left(\ {\gamma}_e\left(e-30\right)+{\gamma}_a\ \ln \left(\frac{a}{70}\right)\right) $$ The parameters γe and γa were derived by Preston et al. [41]. For our calculations we have chosen e = 35 years, which is close to the mean age given by Yamanaka et al. [43] (37 years) and corresponds to the mean age minus one standard deviation by Jiang et al. [44]. The attained age was set to a = 70 years. βEAR is the initial slope, the risk equivalent dose RED the dose dependent part, for which Schneider et al. present different models for carcinoma induction: The mechanistic model which considers cell killing and fractionation effects The bell-shaped dose response model which neglects any repopulation or repair effect The plateau model with full repopulation or repair The mechanistic model is given by the form $$ RED(D)=\frac{e^{-\alpha \hbox{'}D}}{\alpha \hbox{'}R}\ \left(1-2R+{R}^2{e}^{\alpha \hbox{'}D}-{\left(1-R\right)}^2{e}^{\frac{\alpha \hbox{'}R}{1-R}D}\right) $$ assuming a fractionated treatment schedule of single fractions with dose d up to a total dose D. α' has been derived from the linear quadratic model, assuming α/β = 3 Gy for all tissues. $$ \alpha \hbox{'}=\alpha +\beta d $$ R is the repopulation and repair parameter and equals 0 for no and 1 for full repair or repopulation. The bell-shaped model is got in the limit of R to 0: $$ RED(D)=D\ {e}^{-\alpha \hbox{'}D} $$ In the limit of R to 1 the plateau model is described: $$ RED(D)=\left(1-{e}^{-\alpha \hbox{'}D}\right)/\alpha \hbox{'} $$ The authors emphasize that there is only little knowledge yet about the dose-response relationships in the investigated dose range [40]. The data could not be fitted by all models for all organs, not even the most complex mechanistic model. Therefore, we performed our calculations for all three models. At last we also applied the model for sarcoma induction of bone and soft tissue. The formula is quite similar to the mechanistic model for carcinoma induction with one additional term: $$ RED(D)=\frac{e^{-\alpha \hbox{'}D}}{\alpha \hbox{'}R}\ \left(1-2R+{R}^2{e}^{\alpha \hbox{'}D}-{\left(1-R\right)}^2{e}^{\frac{\alpha \hbox{'}R}{1-R}D}-\alpha \hbox{'} RD\right) $$ We confined our calculations to an intermediate repopulation and repair effect with R = 0.5. βEAR, γe, and γa are given in Table 2 for the investigated organs. Table 2 Initial slope and age modifying parameters for EAR calculation [40] The two peripheral dose points were situated in the low dose region. It has been shown by Preston et al. [41] that for these points up to a total dose of 2 Gy the simple linear model is applicable. $$ {EAR}^{org}={\beta}_{EAR}\times D\times \mu \left(e,a\right) $$ The factors βEAR for the selected OAR were taken from this publication and applied according [40]. They are given per 10.000 persons per year and Gy as 1.2 for the thyroid gland and 0.58 for the esophagus. For the esophagus no age dependency has been found. The age correction factor μ(e,a) for secondary cancer risk in the thyroid gland was calculated with γe and γa from Table 2. For this calculation the measured point dose has been taken representative for the whole organ. An a priori power analysis has been performed to determine the sample size using the software G*Power version 3.1.9.2 [45, 46]. We set α = 0.05, power (1-β) = 0.8, and the effect size to 1.0. The Wilcoxon signed-rank test for paired samples was chosen as statistical test, as it does not require a normal distribution of the variables. Taking various parent distributions into account (normal, Laplace, logistic) the maximum sample size of N = 11 for a normal distribution was selected. IBM® SPSS® Statistics v23 (IBM Corporation, Armonk, NY, USA) was applied to perform the Wilcoxon test with a significance level of 0.05. The Bonferroni-Holm method was used to control the maximum experimentwise error rate for multiple testing [47]. Five variables were considered in this process: HI, CI, DT, the sum of all calculated EAR from the dose volume histograms EARsumplan, and the sum of the EAR calculated from the PD measurements EARsumPD. Differences in the application of VMAT and IMRT were investigated secondary with a significance level of 0.05 without corrections for multiple testing. Plan quality Nearly all plans ended the optimization process with an average dose in the PTV in the required interval as described in the section "treatment planning" in material and methods. Only one plan (VMAT FB) failed 0.2 Gy below the required minimum value. All other treatment plan objectives have been met in all plans except of the chiasm. This objective has slightly been violated by nearly all plans as the chiasm has been part of the PTV. The maximum value of D2%Chiasm was 1.7% above the limit in Table 1. Fig. 3 demonstrates the dose distribution of all calculated plans in the isocenter plane for one representative patient. Isodose distributions in the isocenter plane for one representative patient. The first row shows the classical 2 and 3 field techniques, below in the left column plans with FB mode, in the right column the FFF plans. From 2nd row to bottom: coplanar IMRT, non-coplanar IMRT, coplanar VMAT, non-coplanar VMAT. PTV in red and brain stem in cyan are made visible Fig. 4 shows the indexes HI and CI. No statistically significant difference has been found for FB and FFF. HI is equivalent for the classical 2Fd and 3Fd techniques and all IMRT plans, but significantly improved for all VMAT plans. CI is lowest for 2Fd plans, shows all IMRT variants on one level and all VMAT variants again improved on a significantly higher level. HI (above) and CI (below) as boxplots for all plans: The boxes indicate the inner quartiles, the whiskers the outer quartiles. Outliers and extreme values are indicated by circles and asterisks In Fig. 5 the delivery time DT is presented for the coplanar plans. For IMRT the difference of FB and FFF is not significant, for VMAT the FFF mode needs statistically significant more time. However, the difference is small (7 s). VMAT takes about the same time as two opposing fields (2Fd) and is faster than 3Fd and takes less than half the time of IMRT. The non-coplanar plans have not been evaluated in detail. Sample measurements have shown that for our local conditions nearly 2 mins additional time are required to enter the treatment room and arrange gantry and table position. Delivery time for the different plan groups from pressing the start button to the last beam off Fig. 6 illustrates the EAR for secondary brain cancer depending on the technique and mode for the mechanistic model. The other two models are not shown in the figure to gain more clarity. For fluence modulating techniques the values for the bell-shaped model were about 10% higher, for the plateau model about 8% higher than for the mechanistic model. For the 2Fd and the 3Fd technique the differences between the models were up to 19%, but again the bell-shaped model above the plateau model. The reduction of EAR by application of FFF instead of FB is statistically significant. Although the difference between both groups is very small, the significance of the statistics can be explained by the pairing of the samples in the Wilcoxon test: for all 11 pairings, the value for FFF was lower than for FB. The lowest risk is achieved by the simplest technique (2Fd). The differences between the three models are small. The non-coplanar techniques IMRT10 and VMAT2 create a higher risk than the coplanar techniques. EAR for secondary brain cancer calculated by the mechanistic model, dependent on the different techniques. FFF (blue) is statistically significant lower than FB (yellow) Fig. 7 a and b show the EAR for secondary sarcoma. The risk is one magnitude smaller than for secondary cancer. It is very similar for all techniques. There is nearly no difference between FB and FFF. a. EAR for secondary bone sarcoma dependent on the different techniques. b. EAR for secondary sarcoma of soft tissue dependent on the different techniques The EAR for secondary cancer in the periphery (thyroid and esophagus) has been derived directly from the PD measurements. In both organs we took the point dose as representative for the whole organ for our calculations as a simple approach. Sample measurements for the non-coplanar techniques resulted in much higher doses with factors from 6 to over 100 and were aborted. The EAR at the esophagus was one magnitude smaller than at the thyroid. For the sake of simplicity we added up both values. The corresponding boxplots are given in Fig. 8. The risk has been reduced statistically significant by the application of FFF mode. VMAT caused the lowest dose and therefore the lowest risk of all techniques. Summed EAR for secondary carcinoma at the peripheral organs thyroid and esophagus. FFF (blue) is statistically significant lower than FB (yellow) Plan verification The previous results of the plan quality and the secondary malignancy risk demonstrated no benefit for the non-coplanar techniques. Therefore, we verified the coplanar plans only. All plans except of one IMRT plan fulfilled the gamma value acceptance criteria. That means that 95% of the pixels were within the tolerance level mentioned in the material and methods section. No rescaling of the dose to the PTV has been performed after the optimization as described in the section material and methods. The failure of reaching the required dose interval for the PTV of one plan could easily be corrected in clinical routine by a rescaling of the MU. The remaining treatment plan objectives would also have been observed after rescaling, except of the chiasm. Choosing other weights for the corresponding objectives of PTV and chiasm might have avoided this little deviation but would probably have impaired the values for HI and CI. Goldsmith et al. [48] assume a tolerance dose of 54 Gy for radiation-induced optic neuropathy when applied in fractions of 1.8 Gy. Therefore, our values were found acceptable. An improved CI is an indicator of a conformal high dose area and therefore of reduced risk for adverse reactions in the organs at risk and unspecified tissue. Consideration and observance of the dose volume objectives is a main criterion for the clinical acceptability of a treatment plan. The dose limits were taken from published recommendations and resulting risks for the patient were regarded tolerable. No further ranking has been derived from actually achieved mean dose values below the limits. However, in individual cases, e.g. pretreated patients, further dose reduction in a selected organ at risk might be accomplished by another treatment technique, setting, or mode. The results regarding the plan quality cannot simply be generalized to other equipment or tumors. Dobler et al. found no difference in the application of IMRT or VMAT, FB or FFF for patients with hypopharynx/larynx carcinoma [15]. Alvarez-Moret et al. stated in their study about pediatric patients with ependymoma comparable HI and CI for VMAT FB and FFF, but IMRT FFF superior to IMRT FB [12]. Similarly were the results in an investigation about patients with localized prostate cancer [18]. The 2Fd and 3Fd plans were generated using a 5 mm margin around the PTV. A larger margin probably would improve the HI but also decrease the CI and increase the volume of high dose to unspecified tissue and organs at risk. The reduction of delivery time in the application of VMAT compared to IMRT was also found in the same order for other entities [12, 18, 49], using the same equipment. Most comparisons of FB and FFF in these studies and also [29] found shorter or statistically not significantly different DTs for FFF. Treutwein et al. [18] discussed that additional MU are required to compensate the profile fall off which also takes additional time. And the potential higher dose rate cannot be exploited between all control points, because the gantry speed and also the speed of the collimating elements are limiting factors [15]. In the present case these influences lead to a slight increase in the DT which is of no clinical importance. This result confirms that findings for other entities cannot simply be transferred to all applications. This has also been stated by Dobler et al. [15] who could not confirm all advantages found for the re-irradiation of spinal column metastasis [49] in their study about the treatment of hypopharynx and larynx carcinoma with the same equipment. It might be surprising that VMAT can compete with 2Fd technique regarding the DT. However, there is one switching off and on the beam less and there is no inactive rotation time needed. Schneider et al. [40] presented plots of the EAR as a function of the dose. The plot for the brain and central nervous system shows all three models close together. Although these plots end at a maximum dose of 40 Gy, it seems natural that our results which are based on this study, have also similar results for the three models. Fractionation and recovery show only little influence in the EAR. On the other hand there seems to be a dependency on the irradiated volume. The risk increases from 2Fd over 3Fd, IMRT9, up to the non-coplanar VMAT2 technique. The excess risk of sarcoma has been found one magnitude smaller than for carcinoma. This is in accordance to the data of Preston et al. [41] of the A-bomb survivors. Schneider et al. [40] concluded from data of radiotherapy patients that the risk might be at comparable magnitude for therapeutic doses. This has not been confirmed for our conditions. It has been described in the background section that other researchers found reduced PD when applying FFF instead of FB which has been explicated by the missing photon scatter from the flatness filter. Most of these works confined to the documentation of the measured dose. In the present work we calculated the resulting EAR. To our knowledge, similar calculations have only been performed by Murray et al. [17] for a small sample of three patients with early prostate cancer and Alvarez-Moret et al. [12] in a study of pediatric patients with ependymoma. Both confirmed a slightly lower EAR for FFF. The difference between FB and FFF in our study is statistically significant. However, comparing the scales in Figs. 6 and 8 we find the risk at the periphery some magnitude smaller than in the treated region. The magnitude of this ratio is in accordance with the outcomes of Murray et al. The calculated risks are based on mathematical models and are not directly derived from clinical results. Furthermore, they represent only a part of the risks to which patients with pituitary adenomas are exposed in radiotherapy. Therefore, our results can only support a decision for a specific technique but other factors must be considered. The successful plan verifications show that there is no technical problem in the application of FB and FFF, IMRT and VMAT plans. This will probably also be true for the non-coplanar techniques. A hybrid plan verification with original couch angles would not have been possible with our equipment: To avoid the irradiation of the array electronics the patient table could not have been turned to the original angle. As we could not demonstrate any benefit for non-coplanar plans for our standard setup we passed on their verification. However, in individual cases, e.g. pretreated persons, their application can be reasonable. It has been shown that in the treatment of pituitary adenomas plans which use the FFF mode are of equal quality as FB plans regarding the homogeneity, the conformity, and the dose to the organs at risk. FFF plans are superior in the respect of secondary malignancy induction. VMAT is the fastest advanced technique, on the same level as opposing fields. Non-coplanar techniques showed no benefit for the investigated parameters but need much more time for couch rotation. Opposing fields cause the lowest secondary brain cancer risk but have the lowest conformity. For most patients we regard VMAT the better choice than IMRT due to slightly improved HI and CI and clearly shortened treatment times. We regard the risk for secondary malignancies as a minor effect which is of the same magnitude for all techniques and modes. However, it can be used as a subordinated criterion with reduced risk using FFF. Taking all results into account coplanar VMAT FFF seems the most preferable technique for the treatment of pituitary adenomas with the given equipment. 2Fd: Two opposing fields Three field technique Conformity index CTV: Clinical target volume DT: EAR: Excess absolute risk Flattened beam (with flatness filter) FFF: Flattness filter free GTV : Gross target volume Homogeneity index IMRT: Intensity modulated radiotherapy MU: Monitor units Peripheral dose PTV: Planning target volume Risk equivalent dose VMAT: Volumetric modulated arc therapy van den Bergh ACM, van den Berg G, Schoorl MA, Sluiter WJ, van der Vliet AM, Hoving EW, et al. Immediate postoperative radiotherapy in residual nonfunctioning pituitary adenoma: beneficial effect on local control without additional negative impact on pituitary function and life expectancy. Int J Radiat Oncol Biol Phys. 2007;67:863–9. https://doi.org/10.1016/j.ijrobp.2006.09.049. Erridge SC, Conkey DS, Stockton D, Strachan MWJ, Statham PFX, Whittle IR, et al. Radiotherapy for pituitary adenomas: long-term efficacy and toxicity. Radiother Oncol. 2009;93:597–601. https://doi.org/10.1016/j.radonc.2009.09.011 Estrada J, Boronat M, Mielgo M, Magallon R, Millan I, Diez S, et al. The long-term outcome of pituitary irradiation after unsuccessful transsphenoidal surgery in Cushing's disease. N Engl J Med. 1997;336:172–7. https://doi.org/10.1056/NEJM199701163360303. Jallad RS, Musolino NR, Salgado LR, Bronstein MD. Treatment of acromegaly: is there still a place for radiotherapy? Pituitary. 2007;10:53–9. https://doi.org/10.1007/s11102-007-0002-5. Jenkins PJ, Bates P, Carson MN, Stewart PM, Wass JAH. Conventional pituitary irradiation is effective in lowering serum growth hormone and insulin-like growth factor-I in patients with acromegaly. J Clin Endocrinol Metab. 2006;91:1239–45. https://doi.org/10.1210/jc.2005-1616. Minniti G, Osti M, Jaffrain-Rea ML, Esposito V, Cantore G, Maurizi ER. Long-term follow-up results of postoperative radiation therapy for Cushing's disease. J Neuro-Oncol. 2007;84:79–84. https://doi.org/10.1007/s11060-007-9344-0. Snead FE, Amdur RJ, Morris CG, Mendenhall WM. Long-term outcomes of radiotherapy for pituitary adenomas. Int J Radiat Oncol Biol Phys. 2008;71:994–8. https://doi.org/10.1016/j.ijrobp.2007.11.057. Chae S-M, Lee KW, Son SH. Dosimetric impact of multileaf collimator leaf width according to sophisticated grade of technique in the IMRT and VMAT planning for pituitary adenoma lesion. Oncotarget. 2016;7:78119–26. https://doi.org/10.18632/oncotarget.12974. Lamiman K, Wong KK, Tamrazi B, Nosrati JD, Olch A, Chang EL, Kiehna EN. A quantitative analysis of craniopharyngioma cyst expansion during and after radiation therapy and surgical implications. Neurosurg Focus. 2016;41:E15. https://doi.org/10.3171/2016.9.FOCUS16298. Minniti G, Osti MF, Niyazi M. Target delineation and optimal radiosurgical dose for pituitary tumors. Radiat Oncol. 2016;11:1–14. https://doi.org/10.1186/s13014-016-0710-y. Georg D, Knöös T, McClean B. Current status and future perspective of flattening filter free photon beams. Med Phys. 2011;38:1280–93. https://doi.org/10.1118/1.3554643. Alvarez Moret J, Obermeier T, Pohl F, Loeschel R, Koelbl O, Dobler B. Second cancer risk after radiation therapy of ependymoma using the flattening filter free irradiation mode of a linear accelerator. J Appl Clin Med Phys. 2018;19:632–9. https://doi.org/10.1002/acm2.12438. Bell K, Dzierma Y, Palm J, Nuesken F, Licht N, Rube C. mARC prostate treatment planning with Varian eclipse for flat vs. FFF beams Phys Med. 2016;32:474–8. https://doi.org/10.1016/j.ejmp.2016.02.011. Dobler B, Maier J, Knott B, Maerz M, Loeschel R, Koelbl O. Second cancer risk after simultaneous integrated boost radiation therapy of right sided breast cancer with and without flattening filter. Strahlenther Onkol. 2016. https://doi.org/10.1007/s00066-016-1025-5. Dobler B, Obermeier T, Hautmann MG, Khemissi A, Koelbl O. Simultaneous integrated boost therapy of carcinoma of the hypopharynx/larynx with and without flattening filter - a treatment planning and dosimetry study. Radiat Oncol. 2017;12:114. https://doi.org/10.1186/s13014-017-0850-8. Maier J, Knott B, Maerz M, Loeschel R, Koelbl O, Dobler B. Simultaneous integrated boost (SIB) radiation therapy of right sided breast cancer with and without flattening filter - a treatment planning study. Radiat Oncol. 2016;11:111. https://doi.org/10.1186/s13014-016-0687-6. Murray LJ, Thompson CM, Lilley J, Cosgrove V, Franks K, Sebag-Montefiore D, Henry AM. Radiation-induced second primary cancer risks from modern external beam radiotherapy for early prostate cancer: impact of stereotactic ablative radiotherapy (SABR), volumetric modulated arc therapy (VMAT) and flattening filter free (FFF) radiotherapy. Phys Med Biol. 2015;60:1237–57. https://doi.org/10.1088/0031-9155/60/3/1237. Treutwein M, Hipp M, Koelbl O, Dobler B. Volumetric-modulated arc therapy and intensity-modulated radiation therapy treatment planning for prostate cancer with flattened beam and flattening filter free linear accelerators. J Appl Clin Med Phys. 2017;18:307–14. https://doi.org/10.1002/acm2.12168. Petersenn S, Lüdecke DK, Fahlbusch R, Renner U, Buchfelder M, Wowra B, Saeger W. Therapie von Hypophysentumoren. Dtsch Arztebl Int. 2006;103:A-474. McDowell BD, Wallace RB, Carnahan RM, Chrischilles EA, Lynch CF, Schlechte JA. Demographic differences in incidence for pituitary adenoma. Pituitary. 2011;14:23–30. https://doi.org/10.1007/s11102-010-0253-4. Paynter D, Weston SJ, Cosgrove VP, Evans JA, Thwaites DI. Beam characteristics of energy-matched flattening filter free beams. Med Phys. 2014;41:52103. https://doi.org/10.1118/1.4871615. Treutwein M, Härtl PM, Gröger C, Katsilieri Z, Dobler B. Linac twins in radiotherapy. In: Nenoi M, editor. Evolution of: Ionizing Radiation Research. London: InTech; 2015. https://doi.org/10.5772/60427. Treutwein M, Hipp M, Kölbl O, Bogner L. IMRT of prostate cancer. Strahlenther Onkol. 2009;185:379–83. https://doi.org/10.1007/s00066-009-1950-7. Morales-Paliza MA, Coffey CW, Ding GX. Evaluation of the dynamic conformal arc therapy in comparison to intensity-modulated radiation therapy in prostate, brain, head-and-neck and spine tumors. J Appl Clin Med Phys. 2011;12:5–19. RTOG. RTOG | Clinical Trials | Study Number 0225. https://www.rtog.org/ClinicalTrials/ProtocolTable/StudyDetails.aspx?study=0225. Accessed 6 Nov 2018. Scoccianti S, Detti B, Gadda D, Greto D, Furfaro I, Meacci F, et al. Organs at risk in the brain and their dose-constraints in adults and in children: a radiation oncologist's guide for delineation in everyday practice. Radiother Oncol. 2015;114:230–8. https://doi.org/10.1016/j.radonc.2015.01.016. Treutwein M, Hipp M, Kölbl O, Dobler B. Ein neuer Typ von Dose-Volume-Objectives bei der VMAT-Planung von Patienten mit Prostatakarzinom: Surrounding Dose Fall-Off. In: Georg D, editor. Wien; 2011. p. 126–127. Bzdusek K, Friberger H, Eriksson K, Hardemark B, Robinson D, Kaus M. Development and evaluation of an efficient approach to volumetric arc therapy planning. Med Phys. 2009;36:2328–39. https://doi.org/10.1118/1.3132234. Sun W-Z, Chen L, Yang X, Wang B, Deng X-W, Huang X-Y. Comparison of treatment plan quality of VMAT for esophageal carcinoma with: flattening filter beam versus flattening filter free beam. J Cancer. 2018;9:3263–8. https://doi.org/10.7150/jca.26044. ICRU. Prescribing, recording, and reporting photon-beam intensity-modulated radiation therapy (IMRT): contents. J ICRU. 2010;10:1–106. Williams G, Tobler M, Leavitt D. Pitfalls in normalization for intensity-modulated radiation therapy planning. Med Dosim. 2005;30:194–200. https://doi.org/10.1016/j.meddos.2005.06.003. Paddick I. A simple scoring ratio to index the conformity of radiosurgical treatment plans. Technical note. J Neurosurg. 2000;93(Suppl 3):219–22. https://doi.org/10.3171/jns.2000.93.supplement. Kragl G, Baier F, Lutz S, Albrich D, Dalaryd M, Kroupa B, et al. Flattening filter free beams in SBRT and IMRT: dosimetric assessment of peripheral doses. Z Med Phys. 2011;21:91–101. https://doi.org/10.1016/j.zemedi.2010.07.003. Sarkar V, Huang L, Huang Y-HJ, Szegedi MW, Rassiah-Szegedi P, Zhao H, Salter BJ. Head to head comparison of two commercial phantoms used for SRS QA. J Radiosurgery and SBRT. 2016;4:213–23. Bogner L, Scherer J, Treutwein M, Hartmann M, Gum F, Amediek A. Verification of IMRT: techniques and problems. Strahlenther Onkol. 2004;180:340–50. https://doi.org/10.1007/s00066-004-1219-0. Dobler B, Streck N, Klein E, Loeschel R, Haertl P, Koelbl O. Hybrid plan verification for intensity-modulated radiation therapy (IMRT) using the 2D ionization chamber array I'mRT MatriXX - a feasability study. Phys Med Biol. 2010;55:N39–55. van Esch A, Basta K, Evrard M, Ghislain M, Sergent F, Huyskens DP. The Octavius1500 2D ion chamber array and its associated phantoms: Dosimetric characterization of a new prototype. Med Phys. 2014;41:91708. https://doi.org/10.1118/1.4892178. Low DA, Harms WB, Mutic S, Purdy JA. A technique for the quantitative evaluation of dose distributions. Med Phys. 1998;25:656–61. https://doi.org/10.1118/1.598248. Ezzell GA, Burmeister JW, Dogan N, LoSasso TJ, Mechalakos JG, Mihailidis D, et al. IMRT commissioning: multiple institution planning and dosimetry comparisons, a report from AAPM task group 119. Med Phys. 2009;36:5359–73. https://doi.org/10.1118/1.3238104. Schneider U, Sumila M, Robotka J. Site-specific dose-response relationships for cancer induction from the combined Japanese A-bomb and Hodgkin cohorts for doses relevant to radiotherapy. Theor Biol Med Model. 2011;8:1–27. https://doi.org/10.1186/1742-4682-8-27. Preston DL, Ron E, Tokuoka S, Funamoto S, Nishi N, Soda M, et al. Solid cancer incidence in atomic bomb survivors: 1958-1998. Radiat Res. 2007;168:1–64. https://doi.org/10.1667/RR0763.1. Dores GM, Metayer C, Curtis RE, Lynch CF, Clarke EA, Glimelius B, et al. Second malignant neoplasms among long-term survivors of Hodgkin's disease: a population-based evaluation over 25 years. J Clin Oncol. 2002;20:3484–94. https://doi.org/10.1200/JCO.2002.09.038. Yamanaka R, Abe E, Sato T, Hayano A, Takashima Y. Secondary intracranial tumors following radiotherapy for pituitary adenomas: a systematic review. Cancers (Basel). 2017. https://doi.org/10.3390/cancers9080103. Jiang X, Liu Z, Huang X, Wang H, Zhao H. Extended endoscopic endonasal approach for pituitary adenoma: a single-center experience of 171 patients. Chin Neurosurg Jl. 2017;3:613. https://doi.org/10.1186/s41016-017-0080-9. Faul F, Erdfelder E, Lang A-G, Buchner A. G*power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007;39:175–91. https://doi.org/10.3758/BF03193146. Faul F, Erdfelder E, Buchner A, Lang A-G. Statistical power analyses using G*power 3.1: tests for correlation and regression analyses. Behav Res Methods. 2009;41:1149–60. https://doi.org/10.3758/BRM.41.4.1149. Bender R, Lange S. Adjusting for multiple testing--when and how? J Clin Epidemiol. 2001;54:343–9. Goldsmith BJ, Rosenthal SA, Wara WM, Larson DA. Optic neuropathy after irradiation of meningioma. Radiology. 1992;185:71–6. https://doi.org/10.1148/radiology.185.1.1523337. Dobler B, Khemissi A, Obermeier T, Hautmann MG, Katsilieri Z, Kölbl O. Re-irradiating spinal column metastases using IMRT and VMAT with and without flattening filter - a treatment planning study. Radiat Oncol. 2016;11:1280. https://doi.org/10.1186/s13014-016-0603-0. SunNuclear Corporation provided phantom, array and software for the plan verification. Availability of data and material The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. This study was funded by the Bavarian State Ministry of the Environment and Consumer Protection. The funder did not influence the design of the study, the collection, analysis and interpretation of the data, nor writing the manuscript. Department for radiotherapy, Regensburg University Medical Center, Regensburg, Germany Marius Treutwein, Felix Steger, Oliver Koelbl & Barbara Dobler Faculty of computer science and mathematics, Ostbayerische Technische Hochschule, Regensburg, Germany Rainer Loeschel Marius Treutwein Felix Steger Oliver Koelbl Barbara Dobler MT, OK, and BD developed the concept of the study. MT performed the RTPS planning, the measurements, and the evaluation and drafted the manuscript. FS selected the patients, contoured the regions of interest, was responsible for the clinical evaluation of the treatment plans, and drafted the patients section in material and methods. RL developed the software for the calculation of secondary malignancy risk. OK and BD helped to draft the manuscript. All authors read and approved the final manuscript. Correspondence to Marius Treutwein. Written informed consent was gained from the patients. The survey was approved by the Ethical Committee of the University of Regensburg. Treutwein, M., Steger, F., Loeschel, R. et al. The influence of radiotherapy techniques on the plan quality and on the risk of secondary tumors in patients with pituitary adenoma. BMC Cancer 20, 88 (2020). https://doi.org/10.1186/s12885-020-6535-y Pituitary adenoma Secondary malignoma risk Flattening filter free Medical and radiation oncology
CommonCrawl
Maps between short exact sequences Suppose I have a short exact sequence of modules $0\rightarrow A \rightarrow B \rightarrow C \rightarrow 0$. Let $A' \subseteq A$ and $C'\subseteq C$ be submodules and suppose I have a short exact sequence $0\rightarrow A'\rightarrow B'\rightarrow C'\rightarrow 0$. Then I have a diagram $$ \newcommand{\ra}[1]{\!\!\!\!\!\!\!\!\!\!\!\!\xrightarrow{\quad#1\quad}\!\!\!\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.} % \begin{array}{lllllllll} 0 & \ra{} & A' & \ra{} & B' & \ra{} & C' & \ra{} & 0 \\ & & \da{} & & & & \da{} & & \\ 0 & \ra{} & A & \ra{} & B & \ra{} & C & \ra{} & 0 & \\ \end{array} $$ I know in general there is no map $B' \rightarrow B$ that would make the diagram commute, but are there sufficient conditions for such a map to exist? abstract-algebra commutative-algebra modules representation-theory RogerRoger In general, given a diagram $$\begin{array}{cccccccccc} \varepsilon'\colon & 0 & \to & A' & \to & B' & \to & C' & \to & 0\\ &&& \phantom{\alpha}\downarrow\alpha&&&& \phantom{\gamma}\downarrow\gamma\\ \varepsilon\colon & 0 & \to & A & \to & B & \to & C & \to & 0 \end{array}$$ you can fill it in with a map $\beta\colon B'\to B$ if and only if the pushout $\alpha\varepsilon'$ equals the pullback $\varepsilon\gamma$ as extensions in $\mathrm{Ext}^1(C',A)$. Note that this doesn't required $\alpha$ and $\gamma$ to be monomorphisms. Note that, in your situation, the pullback yields a submodule $B_2\leq B$ (the preimage of $C'$). Also, if there is such a map, then $B'$ would necessarily be (isomorphic to) a submodule of $B$. Andrew HuberyAndrew Hubery $\begingroup$ If $alpha$ is the identity map, then how is the consequence? $\endgroup$ – user29422 Mar 24 '20 at 18:47 $\begingroup$ If $\alpha$ is the identity, then this becomes: there exists $\beta$ making the diagram commute if and only if $\varepsilon'$ equals the pullback $\varepsilon\gamma$. $\endgroup$ – Andrew Hubery Mar 24 '20 at 22:22 $\begingroup$ Oh I see. Thank you very much! $\endgroup$ – user29422 Mar 25 '20 at 4:34 Not the answer you're looking for? Browse other questions tagged abstract-algebra commutative-algebra modules representation-theory or ask your own question. How to prove surjectivity part of Short Five Lemma for short exact sequences. Short Exact Sequence of R-Modules & Chains About Exact Sequences Diagram with exact sequences of modules If 2 out of 3 maps between short exact sequences is zero, is the third also zero? Doubt in snake lemma Is the diagram commutative
CommonCrawl
The humoral immune response is essential for successful vaccine protection against paratuberculosis in sheep Hannah B. Pooley ORCID: orcid.org/0000-0003-2238-39061, Douglas J. Begg1, Karren M. Plain1, Richard J. Whittington1, Auriol C. Purdie1 na1 & Kumudika de Silva1 na1 The role played by the humoral immune response in animals vaccinated against a mycobacterial disease such as paratuberculosis, is not well understood. Sheep vaccinated against Mycobacterium avium subsp. paratuberculosis (MAP) can still become infected and in some cases succumb to clinical disease. The strength and location of the humoral immune response following vaccination could contribute to the ability of sheep to clear MAP infection. We examined the peripheral antibody response along with the localised humoral response at the site of paratuberculosis infection, the ileum, to better understand how this contributes to MAP infection of sheep following vaccination and exposure. Through assessing MAP specific serum IgG1 and IgG levels we show that the timing and strength of the humoral immune response directly relates to prevention of infection following vaccination. Vaccinated sheep that subsequently became infected had significantly reduced levels of MAP specific serum IgG1 early after vaccination. In contrast, vaccinated sheep that did not subsequently become infected had significantly elevated MAP specific serum IgG1 following vaccination. Furthermore, at 12 months post MAP exposure, vaccinated and subsequently uninfected sheep had downregulated expression of genes related to the humoral response in contrast to vaccinated infected sheep where expression levels were upregulated. The timing and strength of the humoral immune response following vaccination against paratuberculosis in sheep directly relates to subsequent infection status. An initial strong IgG1 response following vaccination was crucial to prevent infection. Additionally, vaccinated uninfected sheep were able to modulate that response following apparent MAP clearance, unlike vaccinated infected animals where there was apparent dysregulation of the humoral response, which is associated with progression to clinical disease. Humoral immunity is believed to play a role in the protective response against intracellular mycobacterial pathogens, such as Mycobacterium avium subsp. paratuberculosis (MAP), the causative agent of paratuberculosis in ruminants [1, 2]. The pathogenesis of paratuberculosis was traditionally characterised by an increase in the humoral response measured through antibody production at the end stages of disease [3, 4]. The switch from an initial dominant cell mediated immune (CMI) response to a humoral response is often thought to signify a breakdown of disease control by the host and progression to clinical disease [5]. While it is undisputed that Interferon gamma (IFNγ) production is essential for overcoming mycobacterial infection [6], the pattern of a protective immune response to MAP infection is actually not so clear [7], with some studies showing MAP-specific antibody responses in sheep as early as two weeks post exposure (wpe) [8] and occurring at the same time as an IFNγ response [9]. The protection provided by vaccines against mycobacterial pathogens, particularly MAP, is often incomplete [10,11,12]. Sheep and cattle vaccinated against MAP have reduced incidence of clinical disease and faecal shedding, however commercial vaccines fail to prevent infection in all animals [13,14,15]. Understanding the mechanisms behind how some vaccinated animals successfully clear infection when others do not, would allow development of new vaccines to specifically target a protective immune response in all animals. Traditional markers to assess vaccine efficacy such as IFNγ and total antibody response in isolation are not able to differentiate between animals protected by vaccination and those that are not [16]. Therefore, there is a need to explore alternate or additional markers of vaccine protection to truly understand a protective vaccine response. To this point, most work on correlates of vaccine-induced protection against paratuberculosis has focused on the CMI response, however recent studies have suggested a role for B cells as well [16, 17]. B cells are pivotal in the activation and modulation of both CMI and humoral immune responses [18, 19]. B cells function as antigen presenting cells but also produce antibodies enabling immune complexes that can regulate the function of effector cells such as macrophages [17, 20,21,22,23]. In ruminants, the proliferative capacity of peripheral B cells is reduced in animals where vaccination fails to provide protection against MAP [16]. This response was noted as early as 13 weeks' post MAP exposure. Additionally, a study by Begg and Griffin [24], found significantly higher percentages of B cells in the gut of vaccinated animals that survived MAP challenge compared to diseased animals. Therefore, although peripheral B cells may be functionally impaired, the humoral response at the site of infection might be more important to vaccine-induced protection. The activity and survival of B cells at the site of infection has been examined in relation to disease progression for mycobacterial infections, but not in response to vaccination. B cells in the tissue can be activated by several different mechanisms, including direct antigen contact, ligation of the CD40 receptor by T cells and binding of B cell activating factor (BAFF) [17]. CD40 ligand (CD40L) deficiency predisposes humans to opportunistic infections by intracellular bacteria [25] and can be correlated with severe tuberculosis in macaques [26]. In contrast, the expression of BAFF by circulating populations of CD4+ T cells is associated with active tuberculosis [27]. Other indicators of the humoral immune response, including B cell surface markers (CD81), cytokines that promote B cell survival (MIFF) and transcription factors (JUN), have all been implicated in the progression or prevention of mycobacterial infections [28,29,30]. These apparent contradictions suggest that detailed examination of B cell functionality and the humoral response at the site of infection is required. Furthermore, the ability to understand whether the humoral response does provide vaccine-elicited protection against MAP requires an understanding of the host's ability to successfully mount this response at the site of infection. We hypothesise that differences in B cell functionality may be correlated with infection status following exposure to MAP in vaccinated sheep. Therefore, we characterised the role of humoral immunity in sheep that were vaccinated and then became infected compared to vaccinated sheep that did not become infected. We assessed MAP-specific IgG1 and IgG levels and compared these with other aspects of B cell functionality, such as cell survival, differentiation, activation and receptor signalling, by gene expression in the gut tissues of these animals. Animal trial Infection outcome in vaccinated and non-vaccinated sheep at the conclusion of the animal trial was determined by culture of live MAP from the intestine (Table 1.). In the vaccinated animals exposed to MAP, two were found to be infected with live bacteria at 52 wpe and 18 had no culturable MAP at this time. In contrast, in the non-vaccinated sheep, 10 were found to be infected with MAP and 10 were uninfected at 52 wpe. Table 1 Numbers of animals included in IgG1 and tissue gene expression analyses Vaccinated and non-vaccinated infected sheep shed significantly more MAP in the faeces from 35 weeks post MAP exposure till the end of the trial, compared to both uninfected and control animals (p < 0.05) (Fig. 1). Quantity of MAP DNA shed in the faeces. Gudair™ vaccinated (vac) (6 weeks prior to MAP exposure) and non-vaccinated (non-vac) sheep were either exposed or left unexposed (control) to MAP. MAP exposed sheep were grouped based on infection status (infected and uninfected), determined by tissue culture at necropsy. The quantity (pg) of MAP DNA shed in the faeces was determined by direct faecal PCR at 3 timepoints throughout the trial. * denotes groups significantly different to all other groups not marked with an asterisk (p < 0.05) Serum antibody Vaccination significantly enhanced MAP-specific IgG1 levels in serum of sheep (p < 0.001) (Fig. 2a). In the vaccinated exposed animals, serum MAP-specific IgG1 was significantly higher in the cohort found to be uninfected at necropsy, compared to the infected animals. This polarised response was first evident prior to MAP exposure, as early as 1-month post vaccination (p < 0.05). Serum MAP-specific IgG1 peaked in the vaccinated uninfected animals at 19 wpe and then tended to wane until the final sampling at 52 wpe. In contrast, the vaccinated infected animals had significantly lower levels of MAP-specific IgG1 (p < 0.05). The IgG1 response over time was also dissimilar to the vaccinated uninfected sheep, with the vaccinated infected animal's MAP-specific IgG1 peaking at 11 wpe, decreased at 19 weeks and then increasing until the final sampling at 52 wpe. MAP-specific serum antibody response in sheep. Gudair™ vaccinated (vac) (6 weeks prior to MAP exposure) and non-vaccinated (non-vac) sheep were either exposed or left unexposed (control) to MAP. MAP exposed sheep were grouped based on infection status (infected and uninfected), determined by tissue culture at necropsy. MAP-specific IgG1 (a) and MAP-specific IgG (b) levels in serum were determined by ELISA. Data presented are the predicted mean and standard error, from the linear mixed model analysis. The arrow indicates the timepoint of vaccination. The animal number in each group is denoted in the legend Non-vaccinated animals had very low levels of MAP-specific IgG1 in serum compared to vaccinates. There were no significant differences between the non-vaccinated controls and the two non-vaccinated exposed groups (infected or uninfected) at any time point. However, in the non-vaccinated infected group, similar to the vaccinated infected sheep, MAP-specific IgG1 levels in serum tended to increase at the final sampling time point. A similar pattern to the serum MAP-specific IgG1 response was also seen in the MAP-specific IgG data (Fig. 2b). Vaccinated animals had significantly greater MAP-specific IgG levels than non-vaccinates (p < 0.001). As with the MAP-specific IgG1, vaccinated uninfected sheep had higher MAP-specific IgG levels, which remained high throughout the animal trial. Interestingly, the vaccinated infected sheep had very low MAP-specific IgG levels that were not significantly different from the non-vaccinated sheep, until 11 wpe. At 19 wpe, MAP-specific IgG levels began to increase in these vaccinated infected animals, with a rapid increase from 27 to 35 wpe. The polarised pre-exposure responses seen in the vaccinated infected and uninfected sheep was mimicked in the IgG1 response of vaccinated control animals. The vaccinated control sheep could be differentiated into high and low IgG1 responder groups at the pre-exposure time point (Fig. 3.). MAP specific-IgG1 serum antibody levels in vaccinated sheep prior to MAP exposure. MAP-specific serum IgG1 levels in vaccinated unexposed sheep (black lines, numbers are individual sheep identifiers) were examined at pre-vaccination and pre-exposure (4 weeks post-vaccination, 2 weeks prior to MAP exposure) time points. The group average values for the vaccinated exposed sheep, categorised as vaccinated infected (tissue culture positive) (n = 2) and vaccinated uninfected (tissue culture negative) (n = 18) are also shown (dashed grey lines) Gene expression in the ileum Gene expression analysis of sheep ileal tissue was conducted to assess functionality of B cells at the site of predilection for MAP infection. Fold changes were determined in comparison to a baseline group established from specific research questions (Table 2). This method of analysis was adopted to ensure that the correct controls (baseline groups) were being used to understand the impacts of vaccination, exposure and the spectrum of disease. Table 2 Analysis matrix for sheep B cell-related gene expression in intestinal tissue for specific research questions In relation to research question 1, gene expression responses were divergent in vaccinated and non-vaccinated animals that were uninfected at necropsy (Table 3). The majority of genes were down-regulated in the vaccinated uninfected animals, apart from Lyn and NFIL3. In contrast, in the non-vaccinated uninfected animals only 4 of the 11 genes examined were down-regulated. This dissimilarity of response was also seen in the vaccinated infected and the non-vaccinated infected animals, where only ERG1 and GRB2 were down regulated in the non-vaccinated infected, but 4 of the 11 genes were down regulated in the vaccinated infected. Interestingly, vaccinated uninfected animals had significantly decreased expression of CD84 (fold change 0.112, CI 0.021–0.605, p < 0.05) and BAFF (fold change 0.06, CI 0.004–0.89, p < 0.05) compared to non-vaccinated infected sheep. These same trends were seen in MIF expression, with an increase in expression in non-vaccinated infected animals and decreased expression in vaccinated controls and vaccinated uninfected sheep. Table 3 Gene expression changes (fold change) in the ileum of the different treatment groups compared to non-vaccinated control sheep In research question 2, only the vaccinated animals were examined due to their dissimilarity in expression responses from non-vaccinated sheep. MAP exposure in vaccinated sheep was associated with upregulation of the majority of the genes of interest (Table 4). CD84 was the only differentially regulated gene, where vaccinated uninfected sheep had decreased expression compared to vaccinated infected sheep. The differences between vaccinated infected and uninfected sheep were further explored in research question 3. The majority of the genes examined were up-regulated in the vaccinated infected sheep compared to the vaccinated uninfected sheep. This increased expression was strongest for CD40LG, BAFF, MIF, CD84 and NFIL3 (Table 5). In contrast, both EGR1 and Lck tended to have reduced expression in the vaccinated infected animals compared to the vaccinated uninfected sheep. Table 4 Gene expression changes (fold change) in the ileum of the vaccinated infected and uninfected sheep compared to vaccinated control sheep Table 5 Gene expression changes (fold change) in the ileum of sheep to answer specific research questions In research question 4, the differences in gene expression between non-vaccinated and vaccinated sheep that were infected at necropsy was explored in more detail. The majority of the genes of interest were down regulated in the vaccinated infected animals compared to the non-vaccinated infected sheep (Q4 Table 2). This decreased expression was especially evident in BAFF, CD84 and Lck (Table 5). The exact role of B cells and antibodies in host immunity against intracellular pathogens has been a long-debated topic. We show here that B cell functionality is important in vaccine-induced clearance of infection in animals exposed to MAP. A rapid and potent IgG1 antibody response was seen in all uninfected vaccinated sheep. Furthermore, at 12 months post exposure, these uninfected animals had a dampening of the humoral response at the site of paratuberculosis predilection, in contrast to the late switch to a humoral dominated response often associated with the transition to clinical paratuberculosis. On the other hand, vaccinated infected sheep had a slower IgG1 response to vaccination and had up regulated expression of genes related to the humoral immune response in the ileum at 12 months post exposure. The serum MAP-specific IgG1 response in vaccinated uninfected sheep was elevated compared to the vaccinated controls and vaccinated infected sheep. This significant difference was already evident prior to MAP exposure suggesting a failure of the host to effectively mount an early IgG1 response post vaccination. The route of entry of MAP into the host macrophage can impact the ability of the host cell to kill it [31, 32]. Opsonised bacteria are more likely to interact with the Fc receptor (FcR) on the surface of monocytes and macrophages [33]. Phagocytosis of opsonised MAP via the FcR increases trafficking of intracellular bacteria to the lysosome and increases killing [33,34,35,36]. IgG1 has the highest affinity for the Fc receptor out of all IgG subclasses [37], and it stands to reason that animals that have a high IgG1 response to vaccination would be better equipped to eliminate MAP from the gut [19, 38]. The polarised IgG1 response in the vaccinated control animals, suggests that MAP specific IgG1 levels after vaccination, even in animals without MAP exposure, could be a useful predictor of vaccine efficacy. The initial low level of serum MAP-specific IgG1 in vaccinated infected sheep, followed by a sharp increase, could signify the change from a Th1 to Th2 mediated response. The switch from a Th1 to Th2 immune response in paratuberculosis has been associated with progression to clinical disease [5]. It is possible that the lack of an early antibody response in these animals aids early tissue invasion and leads to persistent infection, as animals with an inadequate IgG1 response appeared to be incapable of eliminating or controlling MAP. A similar late increase was also seen in the non-vaccinated infected animals, although to a much smaller magnitude. In stark contrast to these two groups, the vaccinated control and vaccinated uninfected animals appeared to have a gradual reduction in MAP-specific IgG1 in the serum towards the end of the trial. This reduction could signify a waning of the initial strong antibody response, either due to clearance of bacteria and decreased chronic antigenic stimulation, or the return to a more balanced immune state. The reduction of antibody levels, and presumably B cell response, could also lead to reductions in B cell related immunopathologies associated with an excessive response [39]. In the non-vaccinated sheep, there was no significant difference in levels of either IgG or IgG1 between the control, infected or uninfected animals at any sampling point. Low levels of MAP-specific IgG and IgG1 antibody are not unexpected in MAP exposed animals during the early stages of disease [4]. The differences seen between the vaccinated and non-vaccinated uninfected animals in terms of MAP-specific IgG1 could be the result of differences in the stimulus to induce protective immune responses. The immune response generated by a vaccine such as Gudair® is much stronger than that generated by natural exposure to MAP, due to continual antigen presentation via depot formation and the immune enhancing properties of the adjuvant [40]. The comparison of gene expression changes in the gut of sheep from multiple treatment groups created a dilemma as to what makes the best baseline group to determine fold change. As such, several different research questions were presented to explore the effects of vaccination, exposure to MAP and infection status on the expression of B cell related genes in the gut of ruminants. Initially, to examine the impacts of both vaccination and infection outcome, the non-vaccinated control sheep were used as the baseline (Q1 Table 2). Vaccination alone decreased B cell-related gene expression and a similar pattern was found in vaccinated animals that resisted or recovered from infection (vaccinated uninfected). Previous work has suggested that early loss of B cell functionality could contribute to vaccine non-response [16], this lack of B cell activation in the current study could be due to samples being taken at a much later time point as well as differences in sample type (blood vs ileal tissue). There was an up-regulation of genes in ileal tissues in infected animals (both vaccinated and non-vaccinated) compared to healthy controls. Overall the results from this study suggests that a B cell response in the later stages of disease, at the site of infection, is not beneficial. Similarly, to the serum antibody results, a strong initial B cell mediated response could be beneficial, whilst a late response signifies progression to disease. The pattern of reduced expression in the vaccinated uninfected animals is likely to be due to clearance of MAP and subsequent return to an inactive or homeostatic immune state, which would be a response similar to the vaccinated unexposed animals where there is also no MAP to activate expression of B cell related genes. In vaccinated animals, increased expression of CD40 ligand (CD40LG), JUN and B cell activating factor (BAFF) were related to infection status (Q2 Table 2). Engagement of CD40LG with the CD40 receptor on B cells is required for initiation and maintenance of the humoral immune response [41]. In the initial stages of humoral immunity after exposure, CD40/CD40LG signalling is required for the generation of high titres of class switched, high affinity antibodies [41]. During the progression of the immune response, signalling through these receptors encourages the development of memory B cells [39, 41]. The ligation of CD40 has been associated with production of AP-1 early response transcription factor, which is a heterodimer of cFOS and cJUN [42]. Overexpression of cJUN has been associated with prevention of apoptosis, highly increased proliferation and even immortalisation of B cells [43]. BAFF also plays a pivotal role in promoting the survival of plasmablasts, especially after CD40/CD40LG activation [39]. Proliferation and class switch triggered by CD40/CD40LG and then the prevention of apoptosis by BAFF and AP-1 would push the host's immune response to one that is Th2 dominated [44]. The reduced expression of this gene in the vaccinated uninfected animals suggests that, at this timepoint, the humoral response in these animals is being down regulated or suppressed. Interestingly, only Lck and EGR1 were down-regulated in the vaccinated infected sheep compared to the vaccinated uninfected animals, although this was not statistically significant. The expression of both of these genes influences B cell maturation, differentiation and antigen receptor signalling [45, 46]. Host tyrosine kinases, like Lck, are key host molecules utilised by intracellular pathogens to prevent killing [47]. In agreement with our work, the suppression of members of the Src family in chicken B cells renders them largely unresponsive to stimulation through the antigen receptor [45]. EGR1 expression is induced upon B cell antigen receptor signalling and accelerates B cell maturation [46]. The dysfunction of the B cell antigen receptor as a possible result of Lck gene down regulation could result in the reduced expression of EGR1. Therefore, it is possible that either upon vaccination or following MAP exposure B cell maturation and differentiation has been reduced through the supressed expression of these two genes, which is likely to diminish the memory response and could contribute to MAP persistence. Although statistically significant differences were seen in this study, only a small number of the vaccinated exposed animals were infected at necropsy (n = 2). To validate the findings of this research, a larger number of vaccinated infected animals should be examined. In conclusion, B cell responses were shown to be important to vaccine-mediated immune protection. A strong initial B cell response, characterised by MAP-specific IgG1 levels in serum, was seen in vaccinated sheep that cleared infection. Furthermore, this response appears to be toned down or tightly regulated towards the later stages of infection to prevent the dominance of the humoral response that likely marks the progression to clinical disease. The unique insight into the mechanisms behind vaccine immunity provided by this study will allow vaccine development to promote a strong initial humoral response and could possibly contribute to genetic selection for vaccine response in the future. Fifty Merino sheep were purchased from a farm participating in the Australian Market Assurance scheme for Paratuberculosis. The wether lambs aged 4 months were sourced from a flock in Armidale, New South Wales (NSW), an area that has no prior history of Johne's disease (JD). Absence of MAP infection was confirmed through extensive whole flock faecal tests and serum antibody ELISA [48]. On arrival at the university farm in Camden, NSW, sheep were randomly allocated into 4 treatment groups, Gudair™ vaccinated MAP unexposed (n = 5) and exposed (n = 20), Non-vaccinated MAP unexposed (n = 5) and exposed (n = 20). Sheep were managed under conventional Australian sheep farming conditions by grazing in open paddocks on unimproved pasture. During the trial all animals were examined daily and were weighed monthly. Faecal samples were collected from the rectum and blood samples via jugular venepuncture of all animals prior to inoculation and then every 3–4 months; collection order was non-purposive by selecting animals as they entered the handling yards. Blood and faecal samples were collected in sheep handling yards within the university farm land, and undertaken at a time of day when the weather was mild (between 15 and 25 °C) to reduce stress to the animals. Vaccination with Gudair® was performed in accordance with the manufacturer's instructions and at 6 weeks post-vaccination a cohort were moved to quarantine paddocks and inoculated with MAP (Telford 9.2). The inoculation consisted of 3 oral doses (within a 4-week period) as described by Begg et al. [48] with a total of 9.25 × 108 viable MAP organisms. A non-vaccinated cohort was similarly exposed to MAP at the same time. Unexposed animals, vaccinated and non-vaccinated, were kept in separate paddocks to their exposed counterparts. At the conclusion of the trial (52wpe), all animals were euthanised using an intravenous injection of barbiturate (Lethabarb™) at 0.5 ml/kg bodyweight. The disease status of all MAP exposed animals was then categorised based on liquid culture of MAP from gut tissues collected at necropsy, as described previously [49,50,51]. Animals with positive tissue cultures were classified as infected and those with negative results were classified as uninfected. A smaller subset of animals was used for gene expression examination in gut tissue (Table 1). Serum antibody levels MAP specific IgG1 ELISA ELISA plates (Nunc Maxisorb) were coated with 5 μg/ml MAP 316v, Protoplasmic antigen A (PPA) or heat-killed M. phlei. The ELISA plates were machine washed 5 times (Tecan, Austria) using wash buffer (phosphate buffered saline with 0.05% v/v Tween 20). Diluted serum (1/100) was added in duplicate to each antigen. Plates were incubated at 37 °C for 1 h, washed as described above and anti-IgG1 antibody (AbD Serotec, MCA 2440) was added. After a 1-h incubation at 37 °C, plates were machine-washed 5 times. Goat anti-mouse HRP (Dako P0447) was then added to each well and incubated at 37 °C for 30 min. Plates were washed as described earlier and TMB substrate (Pierce) added prior to incubating at room temperature in the dark for 20 min, the reaction was stopped with 2 M sulphuric acid and plates were read at 450 nm. A single batch of positive and negative controls were included on each plate to standardise the ELISA. The positive serum controls were sourced from a sheep with high MAP specific serum antibody levels, as identified by the commercial IDEXX ELISA and the IgG1 ELISA. The negative control was serum from a sheep consistently test-negative for MAP-specific antibodies, as determined by the IDEXX ELISA and the IgG1 ELISA. The MAP 316v antigen-specific IgG1 response was calculated using the following formula: $$ SP\%=\frac{OD_{sample}-{OD}_{PPA\ negative\ control}}{OD_{PPA\ positive\ control}-{OD}_{PPA\ negative\ control}}\times 100 $$ M. phlei was included as a mycobacterial cross-reactivity control to ensure that responses seen in the IgG1 ELISA were MAP specific. PPA was included in the plate as a second MAP specific antigen and the ratio of PPA and 316 V response (1:0.8) was used to monitor positive control performance. Furthermore, the stronger PPA response was utilised to calculate SP%. MAP specific IgG ELISA The commercial IDEXX Pourquier ELISA (Idexx Laboratories, Australia) was used to determine MAP-specific serum IgG antibody levels. The ELISA was performed based on the manufacturer's instructions. Faecal MAP detection A high throughput direct faecal PCR was used to quantify the amount of MAP shed in the faeces of all sheep, as previously described [52]. Gene expression at the site of infection Gene selection The genes to be examined were selected from a normalised and statistically analysed data set generated from previous microarray gene expression analysis (Affymetrix GeneChip) on sheep peripheral blood mononuclear cells (PBMC). This data set was generated from two sheep experimental infection trials that compared differential gene expression in animals vaccinated with Gudair™ and experimentally exposed to MAP compared to non-vaccinated MAP-exposed animals. Samples for gene expression analysis were taken at 13 wpe and 18 animals were used (9 vaccinated infected and 9 vaccinated uninfected). The raw data was normalised using the RMA (Robust Multichip Averaging) algorithm and significant differences were determined with ANOVA [53]. The data set was then examined using Ingenuity Pathway Analysis (IPA) software (version 01–01, Qiagen Bioinformatics). IPA was used to search for genes related to B cell functionality, survival, receptor signalling, migration and interaction with other immune cells within the data set. The overlay function was then used to examine the gene expression fold change in the microarray data set in relation to vaccination and disease outcome. Genes for qPCR analysis for the current study (Table 6) were then selected based on a fold change of greater than +/− 1.5 in PBMC. Table 6 Selected genes and primers used for gene expression analysis of intestinal tissue Tissue sections At the conclusion of the trial (52 wpe), all animals were necropsied. The gastrointestinal tract was removed and 3–4 cm sections of the mid to terminal ileum were excised. The sections were frozen at − 80 °C prior to RNA extraction. RNA extraction and quality and quantity assessment RNA extraction was performed using RNAzol® RT (Merc) per the manufacturer's instructions. The quality and quantity of purified RNA was assessed using a NanoDrop® ND-1000 UV-Vis Spectrophotometer (Thermo Scientific, Wilmington, DE), using the Nucleic Acid module. The absorbance at 260 nm was used to determine the RNA concentration where an A260 nm reading of 1.0 is equivalent to 40 μg/mL of RNA. Purity was characterised as a A260/A280 ratio between 1.8 and 2 and a A260/A230 ratio between 1.4 to 2.2. To remove contaminating genomic DNA and increase RNA purity, the samples were DNase treated and ethanol precipitated following extraction. cDNA generation cDNA was generated from RNA using the iScript™ cDNA Synthesis kit (Bio-Rad) per the manufacturer's instructions, diluted 1/10 and stored at − 80 °C until required. Primer selection and validation Forward and reverse primers (Table 6) were designed specifically for the gene regions of interest using online software Primer 3 [54] and checked for specificity using a BLAST search. As genes were selected based on expression levels in PBMC, gene expression in intestinal tissues was confirmed using cDNA generated from a paratuberculosis infected sheep. Three additional housekeeping genes were assessed with geNorm analysis in the qBASE plus analysis software (Biogazelle) [55]. This analysis identified the most stable reference; for subsequent analyses one reference gene was used based on the geNorm analysis (Table 5). qPCR and gene expression level analysis qPCR was performed using an Mx3000P Real-time PCR system (Stratagene, Agilent) using the SensiMix™ SYBR® kit (Bioline). Assays were prepared in 96 well plates and included duplicates of each sample. Reaction volumes of 25 μl (including 10 μl of target cDNA at a 1/10 dilution) were prepared and amplified under the following conditions: 95 °C for 10 min, then 40 cycles of 95 °C for 20 s, 56 °C for 30 s and 72 °C for 30 s, with fluorescence acquisition at the end of each annealing step. The specificity of the reaction was confirmed using melting curve analysis. Standard curves were performed on each plate for each primer set. Data collected from the quantitative reverse transcription (qRT)-PCR were analysed using qBASE plus analysis software (Biogazelle) utilising a modified Comparative Ct (ΔΔCt) method [56]. Fold changes were determined in comparison to pre-selected baseline group (Table 2) and the biological significance was set at a change of +/− 1.5 fold. Restricted maximum likelihood (REML) in a linear mixed model (Genstat 16th edition; VSN International Ltd., Hemel Hempstead, United Kingdom) was used to analyse the MAP-specific IgG1, IgG serum ELISA results (S/P %) and MAP DNA quantity in the faeces. Sheep were grouped based on treatment coupled with infection status (vaccinated control, vaccinated infected, vaccinated uninfected, non-vaccinated control, non-vaccinated infected or non-vaccinated uninfected), which along with sampling time point was included as a fixed effect in the model. Animal ear tag number was included as a random effect. When the REML analysis was significant, post-hoc tests to determine the significant differences between pairs of predicted means using the Fisher's Least Significant Difference procedure were performed. The datasets used and/or analysed during the current study available from the corresponding author on reasonable request. BAFF: B cell activating factor CD40L: CD40 ligand CMI: Cell mediated immunity Enzyme linked immunosorbent assay FcR: IFNγ: Interferon gamma IPA: LcK: Lymphocyte specific protein kinase Mycobacterium avium subspecies paratuberculosis MIF: Macrophage migration inhibitory factor PCR: PPA: Protoplasmic antigen A REML: Restricted maximum likelihood wpe: Weeks post exposure Achkar JM, Prados-Rosales R. Updates on antibody functions in Mycobacterium tuberculosis infection and their relevance for developing a vaccine against tuberculosis. Curr Opin Immunol. 2018;53:30–7. Ganusov VV, Klinkenberg D, Bakker D, Koets AP. Evaluating contribution of the cellular and humoral immune responses to the control of shedding of Mycobacterium avium spp. paratuberculosis in cattle. Vet Res. 2015;46:62. Stabel JR. Host responses to Mycobacterium avium subsp. paratuberculosis: a complex arsenal. Anim Health Res Rev. 2006;7(1–2):61–70. Sweeney RW. Pathogenesis of Paratuberculosis. Vet Clin N Am Food Anim Pract. 2011;27(3):537–46. Stabel J. Transitions in immune responses to Mycobacterium paratuberculosis. Vet Microbiol. 2000;77(3-4):465-73. Flynn JL, Chan J, Triebold KJ, Dalton DK, Stewart TA, Bloom BR. An essential role for interferon γ in resistance to Mycobacterium tuberculosis infection. J Exp Med. 1993;178(6):2249–54. de Silva K, Plain K, Purdie A, Begg D, Whittington R. Defining resilience to mycobacterial disease: characteristics of survivors of ovine paratuberculosis. Vet Immunol Immunopathol. 2018;195:56–64. Waters WR, Bannantine JP, Miller JM, Palmer MV, Stabel JR, Jones DE, et al. Early induction of humoral and cellular immune responses during experimental Mycobacterium avium subsp. paratuberculosis infection of calves. Infect Immun. 2003;71(9):5130–8. Begg DJ, de Silva K, Carter N, Plain KM, Purdie A, Whittington RJ. Does a Th1 over Th2 dominancy really exist in the early stages of Mycobacterium avium subspecies paratuberculosis infections? Immunobiology. 2011;216(7):840–6. Eppleston J, Reddacliff L, Windsor P, Links I, Whittington R. Preliminary observations on the prevalence of sheep shedding Mycobacterium avium subsp paratuberculosis after 3 years of a vaccination program for ovine Johne's disease. Aust Vet J. 2005;83(10):637–8. Windsor PA. Understanding the efficacy of vaccination in controlling ovine paratuberculosis. Small Rumin Res. 2013;110(2–3):161–4. Kaufmann SHE. Future vaccination strategies against tuberculosis: thinking outside the box. Immunity. 2010;33(4):567–77. Bannantine JP, Hines ME, Bermudez LE, Talaat AM, Sreevatsan S, Stabel JR, et al. Rational framework for evaluating the next generation of vaccines against mycobacterium avium subspecies paratuberculosis. Front Cell Infect Microbiol. 2014;4:126. Reddacliff L, Eppleston J, Windsor P, Whittington R, Jones S. Efficacy of a killed vaccine for the control of paratuberculosis in Australian sheep flocks. Vet Microbiol. 2006;115(1–3):77–90. Kalis CHJ, Hesselink JW, Barkema HW, Collins MT. Use of long-term vaccination with a killed vaccine to prevent fecal shedding of Mycobacterium avium subsp paratuberculosis in dairy herds. Am J Vet Res. 2001;62(2):270–4. de Silva K, Plain KM, Begg DJ, Purdie AC, Whittington RJ. CD4(+) T-cells, gammadelta T-cells and B-cells are associated with lack of vaccine protection in Mycobacterium avium subspecies paratuberculosis infection. Vaccine. 2015;33(1):149–55. Kozakiewicz L, Phuah J, Flynn J, Chan J. The role of B cells and humoral immunity in mycobacterium tuberculosis infection. Adv Exp Med Biol. 2013;783:225–50. Coussens PM. Model for immune responses to Mycobacterium avium subspecies paratuberculosis in cattle. Infect Immun. 2004;72(6):3089–96. Achkar JM, Chan J, Casadevall A. B cells and antibodies in the defense against Mycobacterium tuberculosis infection. Immunol Rev. 2015;264(1):167–81. Elkins KL, Bosio CM, Rhinehart-Jones TR. Importance of B cells, but not specific antibodies, in primary and secondary protective immunity to the intracellular bacterium Francisella tularensis live vaccine strain. Infect Immun. 1999;67(11):6002–7. Linton PJ, Harbertson J, Bradley LM. A critical role for B cells in the development of memory CD4 cells. J Immunol. 2000;165(10):5558–65. Shen H, Whitmire JK, Fan X, Shedlock DJ, Kaech SM, Ahmed R. A specific role for B cells in the generation of CD8 T cell memory by recombinant listeria monocytogenes. J Immunol. 2003;170(3):1443–51. Whitmire JK, Asano MS, Kaech SM, Sarkar S, Hannum LG, Shlomchik MJ, et al. Requirement of B cells for generating CD4+ T cell memory. J Immunol. 2009;182(4):1868–76. Begg DJ, Griffin JF. Vaccination of sheep against M. paratuberculosis: immune parameters and protective efficacy. Vaccine. 2005;23(42):4999–5008. Cabral-Marques O, Ramos RN, Schimke LF, Khan TA, Amaral EP, Barbosa Bomfim CC, et al. Human CD40 ligand deficiency dysregulates the macrophage transcriptome causing functional defects that are improved by exogenous IFN-γ. J Allergy Clin Immunol. 2016;139(3):900-912. Roodgar M, Ross CT, Tarara R, Lowenstine L, Dandekar S, Smith DG. Gene expression and TB pathogenesis in rhesus macaques: TR4, CD40, CD40L, FAS (CD95), and TNF are host genetic markers in peripheral blood mononuclear cells that are associated with severity of TB lesions. Infect Genet Evol. 2015;36:396–409. Liu K, Zhang Y, Hu S, Yu Y, Yang Q, Jin D, et al. Increased levels of BAFF and APRIL related to human active pulmonary tuberculosis. PLoS One. 2012;7(6):e38429. Gore Y, Starlets D, Maharshak N, Becker-Herman S, Kaneyuki U, Leng L, et al. Macrophage migration inhibitory factor induces B cell survival by activation of a CD74-CD44 receptor complex. J Biol Chem. 2008;283(5):2784–92. Lutay N, Håkansson G, Alaridah N, Hallgren O, Westergren-Thorsson G, Godaly G. Mycobacteria bypass mucosal NF-kB signalling to induce an epithelial anti-inflammatory IL-22 and IL-10 response. PLoS One. 2014;9(1):e86466. Raju B, Hoshino Y, Belitskaya-Lévy I, Dawson R, Ress S, Gold JA, et al. Gene expression profiles of bronchoalveolar cells in pulmonary TB. Tuberculosis. 2008;88(1):39–51. Souza CD, Evanson OA, Weiss DJ. Role of cell membrane receptors in the suppression of monocyte anti-microbial activity against Mycobacterium avium subsp. paratuberculosis. Microb Pathog. 2008;44(3):215–23. Maglione PJ, Xu J, Casadevall A, Chan J. Fcγ receptors regulate immune activation and susceptibility during Mycobacterium tuberculosis infection. J Immunol. 2008;180(5):3329–38. Joller N, Weber SS, Müller AJ, Spörri R, Selchow P, Sander P, et al. Antibodies protect against intracellular bacteria by fc receptor-mediated lysosomal targeting. Proc Natl Acad Sci U S A. 2010;107(47):20441–6. Jolly A, Colavecchia SB, Fernandez B, Fernandez E, Mundo SL. Antibodies induced by Lipoarabinomannan in bovines: characterization and effects on the interaction between Mycobacterium Avium Subsp. Paratuberculosis and macrophages in vitro. Vet Med Int. 2011;2011:258479. Jolly A, Lompardía S, Hajos SE, Mundo SL. Evidence of a pro-apoptotic effect of specific antibodies in a bovine macrophage model of infection with Mycobacterium avium subsp. paratuberculosis. Vet Immunol Immunopathol. 2016;169:47–53. Armstrong JA, D'Arcy Hart P. Phagosome lysosome interactions in cultured macrophages infected with virulent tubercle bacilli. Reversal of the usual nonfusion pattern and observations on bacterial survival. J Exp Med. 1975;142(1):1–16. Collins AM, Jackson KJL. A temporal model of human IgE and IgG antibody function. Front Immunol. 2013;4:235. Everman JL, Bermudez LE. Antibodies against invasive phenotype-specific antigens increase Mycobacterium avium subspecies paratuberculosis translocation across a polarized epithelial cell model and enhance killing by bovine macrophages. Front Cell Infect Microbiol. 2015;5:58. Avery DT, Kalled SL, Ellyard JI, Ambrose C, Bixler SA, Thien M, et al. BAFF selectively enhances the survival of plasmablasts generated from human memory B cells. J Clin Investig. 2003;112(2):286–97. Awate S, Babiuk LA, Mutwiri G. Mechanisms of action of adjuvants. Front Immunol. 2013;4:114. Elgueta R, Benson MJ, De Vries VC, Wasiuk A, Guo Y, Noelle RJ. Molecular mechanism and function of CD40/CD40L engagement in the immune system. Immunol Rev. 2009;229(1):152–72. Francis DA, Karras JG, Ke X, Sen R, Rothstein TL. Induction of the transcription factors NF-κB, AP-1 and NF-AT during B cell stimulation through the CD40 receptor. Int Immunol. 1995;7(2):151–61. Schreiber M, Kolbus A, Piu F, Szabowski A, Möhle-Steinlein U, Tian J, et al. Control of cell cycle progression by c-Jun is p53 dependent. Genes Dev. 1999;13(5):607–19. Khalifeh MS, Stabel JR. Clinical disease upregulates expression of CD40 and CD40 ligand on peripheral blood mononuclear cells from cattle naturally infected with Mycobacterium avium subsp. paratuberculosis. Clin Vaccine Immunol. 2013;20(8):1274–82. Sefton BM, Taddie JA. Role of tyrosine kinases in lymphocyte activation. Curr Opin Immunol. 1994;6(3):372–9. Dinkel A, Warnatz K, Ledermann B, Rolink A, Zipfel PF, Bürki K, et al. The transcription factor early growth response 1 (Egr-1) advances differentiation of pre-B and immature B cells. J Exp Med. 1998;188(12):2215–24. Karim AF, Chandra P, Chopra A, Siddiqui Z, Bhaskar A, Singh A, et al. Express path analysis identifies a tyrosine kinase Src-centric network regulating divergent host responses to Mycobacterium tuberculosis infection. J Biol Chem. 2011;286(46):40307–19. Begg DJ, de Silva K, Di Fiore L, Taylor DL, Bower K, Zhong L, et al. Experimental infection model for Johne's disease using a lyophilised, pure culture, seedstock of Mycobacterium avium subspecies paratuberculosis. Vet Microbiol. 2010;141(3–4):301–11. Whittington R, Whittington A, Waldron A. Development and validation of a liquid medium (M7H9C) for routine culture of Mycobacterium avium subsp. paratuberculosis to replace modified Bactec 12B. J Clin. 2013;51(12):3993-4000. Plain KM, Waldron AM, Begg DJ, de Silva K, Purdie AC, Whittington RJ. Efficient, validated method for detection of mycobacterial growth in liquid culture media by use of bead beating, magnetic-particle-based nucleic acid isolation, and quantitative PCR. J Clin Microbiol. 2015;53(4):1121–8. Whittington RJ, Begg DJ, de Silva K, Purdie AC, Dhand NK, Plain KM. Case definition terminology for paratuberculosis (Johne's disease). BMC Vet Res. 2017;13(1):328. High-Throughput Direct Fecal PCR Assay for Detection of Mycobacterium avium subsp. paratuberculosis in Sheep and Cattle, (2014). Irizarry RA, Hobbs B, Collin F, Beazer-Barclay YD, Antonellis KJ, Scherf U, et al. Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics. 2003;4(2):249–64. Rozen S, Skaletsky H. Primer3 on the WWW for general users and for biologist programmers. Methods Mol Biol. 2000;132:365–86. Vandesompele J, De Preter K, Pattyn F, Poppe B, Van Roy N, De Paepe A, et al. Accurate normalization of real-time quantitative RT-PCR data by geometric averaging of multiple internal control genes. Genome Biol. 2002;3(7):research0034.1. Livak KJ, Schmittgen TD. Analysis of relative gene expression data using real-time quantitative PCR and the 2-ΔΔCT method. Methods. 2001;25(4):402–8. The authors would like to thank Anan Waldron, Ann-Michele Whittington, Nicole Carter and Rebecca Maurer for laboratory support. Additionally, we would like to thank Craig Kristo, James Dalton and Nobel Toribio for animal husbandry and sample collection assistance. This work was supported by Meat and Livestock Australia and by the Cattle Council of Australia, Sheep Meat Council of Australia, and Wool Producers Australia through Animal Health Australia (Grant no. P.PSH.0576) and an Australian Postgraduate Award and scholarship from Meat and Livestock Australia (HP). The funding body had no role in the design of the study, collection, analysis, and interpretation of data or in the writing of this manuscript. Auriol C. Purdie and Kumudika de Silva contributed equally to this work. Sydney School of Veterinary Science, Faculty of Science, The University of Sydney, Private Bag 4003, Narellan, Camden, NSW, 2567, Australia Hannah B. Pooley, Douglas J. Begg, Karren M. Plain, Richard J. Whittington, Auriol C. Purdie & Kumudika de Silva Hannah B. Pooley Douglas J. Begg Karren M. Plain Richard J. Whittington Auriol C. Purdie Kumudika de Silva Conceptualization, HP, DB, KP, RW, AP, and KdS; Formal Analysis, HP; Investigation, HP; Data Curation, AP; Methodology, DB, HP; Writing-Original Draft Preparation, HP; Writing-Review & Editing, HP, KP, KdS, DB, RW and AP; Supervision, KdS; Funding Acquisition, RW and HP. All authors have read and approved the publication of this manuscript. Correspondence to Hannah B. Pooley. The experimental trial was approved by the University of Sydney Animal Ethics Committee (AEC) (ref no 6064) and conducted in compliance with the Animal Research Act 1985, Animal Research Regulation 2010 and the Australian code for the care and use of animals for scientific purposes 8th edition 2013 (the Code). Pooley, H.B., Begg, D.J., Plain, K.M. et al. The humoral immune response is essential for successful vaccine protection against paratuberculosis in sheep. BMC Vet Res 15, 223 (2019). https://doi.org/10.1186/s12917-019-1972-z Paratuberculosis Humoral immunity Ileum
CommonCrawl
APPENDIX I. (To Chapters III, IV, V) The Proof that every Equation has a Root Let \[Z = P(z) = \alpha_{0} z^{n} + \alpha_{1} z^{n-1} + \dots + \alpha_{n}\] be a polynomial in \(z\), with real or complex coefficients. We can represent the values of \(z\) and \(Z\) by points in two planes, which we may call the \(z\)-plane and the \(Z\)-plane respectively. It is evident that if \(z\) describes a closed path \(\gamma\) in the \(z\)-plane, then \(Z\) describes a corresponding closed path \(\Gamma\) in the \(Z\)-plane. We shall assume for the present that the path \(\Gamma\) does not pass through the origin. To any value of \(Z\) correspond an infinity of values of \(\operatorname{am} Z\), differing by multiples of \(2\pi\), and each of these values varies continuously as \(Z\) describes \(\Gamma\).1 We can select a particular value of \(\operatorname{am} Z\) corresponding to each point of \(\Gamma\), by first selecting a particular value corresponding to the initial value of \(Z\), and then following the continuous variation of this value as \(Z\) moves along \(\Gamma\). We shall, in the argument which follows, use the phrase 'the amplitude of \(Z\)' and the formula \(\operatorname{am} Z\) to denote the particular value of the amplitude of \(Z\) thus selected. Thus \(\operatorname{am} Z\) denotes a one-valued and continuous function of \(X\) and \(Y\), the real and imaginary parts of \(Z\). When \(Z\), after describing \(\Gamma\), returns to its original position, its amplitude may be the same as before, as will certainly be the case if \(\Gamma\) does not enclose the origin, like path (a) in Fig. B, or it may differ from its original value by any multiple of \(2\pi\). Thus if its path is like (b) in Fig. B, winding once round the origin in the positive direction, then its amplitude will have increased by \(2\pi\). These remarks apply, not merely to \(\Gamma\), but to any closed contour in the \(Z\)-plane which does not pass through the origin. Associated with any such contour there is a number which we may call 'the increment of \(\operatorname{am} Z\) when \(Z\) describes the contour', a number independent of the initial choice of a particular value of the amplitude of \(Z\). We shall now prove that if the amplitude of \(Z\) is not the same when \(Z\) returns to its original position, then the path of \(z\) must contain inside or on it at least one point at which \(Z = 0\). We can divide \(\gamma\) into a number of smaller contours by drawing parallels to the axes at a distance \(\delta_{1}\) from one another, as in Fig. C.2 If there is, on the boundary of any one of these contours, a point at which \(Z = 0\), what we wish to prove is already established. We may therefore suppose that this is not the case. Then the increment of \(\operatorname{am} Z\), when \(z\) describes \(\gamma\), is equal to the sum of all the increments of \(\operatorname{am} Z\) obtained by supposing \(z\) to describe each of these smaller contours separately in the same sense as \(\gamma\). For if \(z\) describes each of the smaller contours in turn, in the same sense, it will ultimately (see Fig. D) have described the boundary of \(\gamma\) once, and each part of each of the dividing parallels twice and in opposite directions. Thus \(PQ\) will have been described twice, once from \(P\) to \(Q\) and once from \(Q\) to \(P\). As \(z\) moves from \(P\) to \(Q\), \(\operatorname{am} Z\) varies continuously, since \(Z\) does not pass through the origin; and if the increment of \(\operatorname{am} Z\) is in this case \(\theta\), then its increment when \(z\) moves from \(Q\) to \(P\) is \(-\theta\); so that, when we add up the increments of \(\operatorname{am} Z\) due to the description of the various parts of the smaller contours, all cancel one another, save the increments due to the description of parts of \(\gamma\) itself. Hence, if \(\operatorname{am} Z\) is changed when \(z\) describes \(\gamma\), there must be at least one of the smaller contours, say \(\gamma_{1}\), such that \(\operatorname{am} Z\) is changed when \(z\) describes \(\gamma_{1}\). This contour may be a square whose sides are parts of the auxiliary parallels, or may be composed of parts of these parallels and parts of the boundary of \(\gamma\). In any case every point of the contour lies in or on the boundary of a square \(\Delta_{1}\) whose sides are parts of the auxiliary parallels and of length \(\delta_{1}\). We can now further subdivide \(\gamma_{1}\) by the help of parallels to the axes at a smaller distance \(\delta_{2}\) from one another, and we can find a contour \(\gamma_{2}\), entirely included in a square \(\Delta_{2}\), of side \(\delta_{2}\) and itself included in \(\Delta_{1}\) such that \(\operatorname{am} Z\) is changed when \(z\) describes the contour. Now let us take an infinite sequence of decreasing numbers \(\delta_{1}\), \(\delta_{2}\), …, \(\delta_{m}\), …, whose limit is zero.3 By repeating the argument used above, we can determine a series of squares \(\Delta_{1}\), \(\Delta_{2}\), …, \(\Delta_{m}\), … and a series of contours \(\gamma_{1}\), \(\gamma_{2}\), …, \(\gamma_{m}\), … such that (i) \(\Delta_{m+1}\) lies entirely inside \(\Delta_{m}\), (ii) \(\gamma_{m}\) lies entirely inside \(\Delta_{m}\), (iii) \(\operatorname{am} Z\) is changed when \(z\) describes \(\gamma_{m}\). If \((x_{m}, y_{m})\) and \((x_{m} + \delta_{m}, y_{m} + \delta_{m})\) are the lower left-hand and upper right-hand corners of \(\Delta_{m}\), it is clear that \(x_{1}\), \(x_{2}\), …, \(x_{m}\), … is an increasing and \(x_{1} + \delta_{1}\), \(x_{2} + \delta_{2}\), …, \(x_{m} + \delta_{m}\), … a decreasing sequence, and that they have a common limit \(x_{0}\). Similarly \(y_{m}\) and \(y_{m} + \delta_{m}\) have a common limit \(y_{0}\), and \((x_{0}, y_{0})\) is the one and only point situated inside every square \(\Delta_{m}\). However small \(\delta\) may be, we can draw a square which includes \((x_{0}, y_{0})\), and whose sides are parallel to the axes and of length \(\delta\), and inside this square a closed contour such that \(\operatorname{am} Z\) is changed when \(z\) describes the contour. It can now be shown that \[P(x_{0} + iy_{0}) = 0.\] For suppose that \(P(x_{0} + iy_{0}) = a\), where \(|a| = \rho > 0\). Since \(P(x + iy)\) is a continuous function of \(x\) and \(y\), we can draw a square whose centre is \((x_{0}, y_{0})\) and whose sides are parallel to the axes, and which is such that \[|P(x + iy) – P(x_{0} + iy_{0})| < \tfrac{1}{2}\rho\] at all points \(x + iy\) inside the square or on its boundary. At all such points \[P(x + iy) = a + \phi,\] where \(|\phi| < \frac{1}{2}\rho\). Now let us take any closed contour lying entirely inside this square. As \(z\) describes this contour, \(Z = a + \phi\) also describes a closed contour. But the latter contour evidently lies inside the circle whose centre is \(a\) and whose radius is \(\frac{1}{2}\rho\), and this circle does not include the origin. Hence the amplitude of \(Z\) is unchanged. But this contradicts what was proved above, viz. that inside each square \(\Delta_{m}\) we can find a closed contour the description of which by \(z\) changes \(\operatorname{am} Z\) Hence \(P(x_{0} + iy_{0}) = 0\). All that remains is to show that we can always find some contour such that \(\operatorname{am} Z\) is changed when \(z\) describes \(\gamma\). Now \[Z = a_{0} z^{n} \left(1 + \frac{a_{1}}{a_{0}z} + \frac{a_{2}}{a_{0} z^{2}} + \dots + \frac{a_{n}}{a_{0} z^{n}}\right).\] We can choose \(R\) so that \[\frac{|a_{1}|}{|a_{0}| R} + \frac{|a_{2}|}{|a_{0}| R^{2}} + \dots + \frac{|a_{n}|}{|a_{0}| R^{n}} < \delta,\] where \(\delta\) is any positive number, however small; and then, if \(\gamma\) is the circle whose centre is the origin and whose radius is \(R\), we have \[Z = a_{0} z^{n} (1 + \rho),\] where \(|\rho| < \delta\), at all points on \(\gamma\). We can then show, by an argument similar to that used above, that \(\operatorname{am}(1 + \rho)\) is unchanged as \(z\) describes \(\gamma\) in the positive sense, while \(\operatorname{am} z^{n}\) on the other hand is increased by \(2n\pi\). Hence \(\operatorname{am} Z\) is increased by \(2n\pi\), and the proof that \(Z = 0\) has a root is completed. We have assumed throughout the argument that neither \(\Gamma\), nor any of the smaller contours into which it is resolved, passes through the origin. This assumption is obviously legitimate, for to suppose the contrary, at any stage of the argument, is to admit the truth of the theorem. We leave it as an exercise to the reader to infer, from the discussion which precedes and that of § 43, that when \(z\) describes any contour \(\gamma\) in the positive sense the increment of \(\operatorname{am} Z\) is \(2k\pi\), where \(k\) is the number of roots of \(Z = 0\) inside \(\gamma\), multiple roots being counted multiply. There is another proof, proceeding on different lines, which is often given. It depends, however, on an extension to functions of two or more variables of the results of § 102 et seq. We define, precisely on the lines of § 102, the upper and lower bounds of a function \(f(x, y)\), for all pairs of values of \(x\) and \(y\) corresponding to any point of any region in the plane of \((x, y)\) bounded by a closed curve. And we can prove, much as in § 102, that a continuous function \(f(x, y)\) attains its upper and lower bounds in any such region. Now \[|Z| = |P(x + iy)|\] is a positive and continuous function of \(x\) and \(y\). If \(m\) is its lower bound for points on and inside \(\gamma\), then there must be a point \(z_{0}\) for which \(|Z| = m\), and this must be the least value assumed by \(|Z|\). If \(m = 0\), then \(P(z_{0}) = 0\), and we have proved what we want. We may therefore suppose that \(m > 0\). The point \(z_{0}\) must lie either inside or on the boundary of \(\gamma\): but if \(\gamma\) is a circle whose centre is the origin, and whose radius \(R\) is large enough, then the last hypothesis is untenable, since \(|P(z)| \to \infty\) as \(|z| \to \infty\). We may therefore suppose that \(z_{0}\) lies inside \(\gamma\). If we put \(z = z_{0} + \zeta\), and rearrange \(P(z)\) according to powers of \(\zeta\), we obtain \[P(z) = P(z_{0}) + A_{1}\zeta + A_{2}\zeta^{2} + \dots + A_{n}\zeta^{n},\] say. Let \(A_{k}\) be the first of the coefficients which does not vanish, and let \(|A_{k}| = \mu\), \(|\zeta| = \rho\). We can choose \(\rho\) so small that \[|A_{k+1}|\rho + |A_{k+2}|\rho^{2} + \dots + |A_{n}|\rho^{n-k} < \tfrac{1}{2}\mu.\] Then \[|P(z) – P(z_{0}) – A_{k}\zeta^{k}| < \tfrac{1}{2}\mu\rho^{k},\] and \[|P(z)| < |P(z_{0} + A_{k}\zeta^{k}| + \tfrac{1}{2}\mu\rho^{k}.\] Now suppose that \(z\) moves round the circle whose centre is \(z_{0}\) and radius \(\rho\). Then \[P(z_{0}) + A_{k}\zeta^{k}\] moves \(k\) times round the circle whose centre is \(P(z_{0})\) and radius \(|A_{k}\zeta^{k}| = \mu\rho^{k}\), and passes \(k\) times through the point in which this circle is intersected by the line joining \(P(z_{0})\) to the origin. Hence there are \(k\) points on the circle described by \(z\) at which \(|P(z_{0}) + A_{k}\zeta^{k}| = |P(z_{0})| – \mu\rho^{k}\) and so \[|P(z)| < |P(z_{0})| – \mu\rho^{k} + \tfrac{1}{2}\mu\rho^{k} = m – \tfrac{1}{2}\mu\rho^{k} < m;\] and this contradicts the hypothesis that \(m\) is the lower bound of \(|P(z)|\). It follows that \(m\) must be zero and that \(P(z_{0}) = 0\). EXAMPLES ON APPENDIX I 1. Show that the number of roots of \(f(z) = 0\) which lie within a closed contour which does not pass through any root is equal to the increment of \[\{\log f(z)\}/2\pi i\] when \(z\) describes the contour. 2. Show that if \(R\) is any number such that \[\frac{|a_{1}|}{R} + \frac{|a_{2}|}{R^{2}} + \dots + \frac{|a_{n}|}{R^{n}} < 1,\] then all the roots of \(z^{n} + a_{1}z^{n-1} + \dots + a_{n} = 0\) are in absolute value less than \(R\). In particular show that all the roots of \(z^{5} – 13z -7 = 0\) are in absolute value less than \(2\frac{1}{67}\). 3. Determine the numbers of the roots of the equation \(z^{2p} + az + b = 0\) where \(a\) and \(b\) are real and \(p\) odd, which have their real parts positive and negative. Show that if \(a > 0\), \(b > 0\) then the numbers are \(p – 1\) and \(p + 1\); if \(a < 0\), \(b > 0\) they are \(p + 1\) and \(p – 1\); and if \(b < 0\) they are \(p\) and \(p\). Discuss the particular cases in which \(a = 0\) or \(b = 0\). Verify the results when \(p = 1\). [Trace the variation of \(\operatorname{am}(z^{2p} + az + b)\) as \(z\) describes the contour formed by a large semicircle whose centre is the origin and whose radius is \(R\), and the part of the imaginary axis intercepted by the semicircle.] 4. Consider similarly the equations \[z^{4q} + az + b = 0,\quad z^{4q-1} + az + b = 0,\quad z^{4q+1} + az + b = 0.\] 5. Show that if \(\alpha\) and \(\beta\) are real then the numbers of the roots of the equation \(z^{2n} + \alpha^{2} z^{2n-1} + \beta^{2} = 0\) which have their real parts positive and negative are \(n – 1\) and \(n + 1\), or \(n\) and \(n\), according as \(n\) is odd or even. 6. Show that when \(z\) moves along the straight line joining the points \(z = z_{1}\), \(z = z_{2}\), from a point near \(z_{1}\) to a point near \(z_{2}\), the increment of \[\operatorname{am} \left(\frac{1}{z – z_{1}} + \frac{1}{z – z_{2}}\right)\] is nearly equal to \(\pi\). 7. A contour enclosing the three points \(z = z_{1}\), \(z = z_{2}\), \(z = z_{3}\) is defined by parts of the sides of the triangle formed by \(z_{1}\), \(z_{2}\), \(z_{3}\), and the parts exterior to the triangle of three small circles with their centres at those points. Show that when \(z\) describes the contour the increment of \[\operatorname{am} \left(\frac{1}{z – z_{1}} + \frac{1}{z – z_{2}} + \frac{1}{z – z_{3}}\right)\] is equal to \(-2\pi\). 8. Prove that a closed oval path which surrounds all the roots of a cubic equation \(f(z) = 0\) also surrounds those of the derived equation \(f'(z) = 0\). [Use the equation \[f'(z) = f(z) \left( \frac{1}{z – z_{1}} + \frac{1}{z – z_{2}} + \frac{1}{z – z_{3}} \right),\] where \(z_{1}\), \(z_{2}\), \(z_{3}\) are the roots of \(f(z) = 0\), and the result of Ex. 7.] 9. Show that the roots of \(f'(z) = 0\) are the foci of the ellipse which touches the sides of the triangle \((z_{1}, z_{2}, z_{3})\) at their middle points. [For a proof see Cesàro's Elementares Lehrbuch der algebraischen Analysis, p. 352.] 10. Extend the result of Ex. 8 to equations of any degree. 11. If \(f(z)\) and \(\phi(z)\) are two polynomials in \(z\), and \(\gamma\) is a contour which does not pass through any root of \(f(z)\), and \(|\phi(z)| < |f(z)|\) at all points on \(\gamma\), then the numbers of the roots of the equations \[f(z) = 0,\quad f(z) + \phi(z) = 0\] which lie inside \(\gamma\) are the same. 12. Show that the equations \[e^{z} = az,\quad e^{z} = az^{2},\quad e^{z} = az^{3},\] where \(a > e\), have respectively (i) one positive root (ii) one positive and one negative root and (iii) one positive and two complex roots within the circle \(|z| = 1\). It is here that we assume that \(\Gamma\) does not pass through the origin.↩︎ There is no difficulty in giving a definite rule for the construction of these parallels: the most obvious course is to draw all the lines \(x = k\delta_{1}\), \(y = k\delta_{1}\), where \(k\) is an integer positive or negative.↩︎ We may, , take \(\delta_{m} = \delta_{1}/2^{m-1}\).↩︎ $\leftarrow$ Chapter X Main Page Appendix II $\rightarrow$
CommonCrawl
Bishop, Errett (3) Foguel, S. R. (3) Putnam, C. R. (3) Yood, Bertram (3) Baxter, W. E. (2) Pacific Journal of Mathematics (71) Subject: 46.00 Publication Title: Pacific Journal of Mathematics On rings with proper involution. Baxter, W. E. Pacific Journal of Mathematics Volume 27, Number 1 (1968), 1-12. More by W. E. Baxter Projections in ${\cal L}_{1}$ and ${\cal L}_{\infty}$-spaces. McCarthy, C. A. and Tzafriri, L. Pacific Journal of Mathematics Volume 26, Number 3 (1968), 529-546. More by C. A. McCarthy More by L. Tzafriri Extreme points and dimension theory. Peck, N. T. More by N. T. Peck Complex inversion for the generalized convolution transformation. Pandey, J. N. and Zemanian, A. H. More by J. N. Pandey More by A. H. Zemanian Integral kernel for one-part function spaces. Bear, H. S. and Walsh, Bertram More by H. S. Bear More by Bertram Walsh Unitary invariants for nests. Erdos, J. A. More by J. A. Erdos Tensor products of group algebras. Gelbaum, Bernard R. More by Bernard R. Gelbaum On the Stone-Weierstrass approximation theorem for valued fields. Cantor, David G. More by David G. Cantor Symmetry in group algebras of discrete groups. Bonic, Robert A. Pacific Journal of Mathematics Volume 11, Number 1 (1961), 73-94. More by Robert A. Bonic Weak compactness and separate continuity. Glicksberg, Irving More by Irving Glicksberg
CommonCrawl
Hybridization of Na in [Na(H2O)6]+ In the complex ion $\ce{[Na(H2O)6]+}$, the sodium cation forms 6 coordinate bonds with water ligands. Typically this octahedral form is associated with $\mathrm{sp^3d^2}$ hybridization as far as I know, but in the case of sodium the $d$ orbitals aren't readily available, so how can this be described as $\mathrm{sp^3d^2}$ hybridization? I would guess that its hybridization is $\mathrm{s^2p^4}$? On further thought I think the d orbitals probably are accessible and that it is indeed $\mathrm{sp^3d^2}$ but this has a high hybridisation energy and so we dont see $\ce{[Na(H2O)6]+}$ in large amounts instead we see the ion solvated by partial charges and electrostatic rather than dative covalent interactions water aqueous-solution ions hybridization GoodsGoods $\begingroup$ Hybridisation is the wrong concept here and for most other coordination complexes. I am sorry I cannot answer right now, molecular orbital theory will give the answer though. I am confident it won't take long here. $\endgroup$ – Martin - マーチン♦ May 16 '15 at 15:55 $\begingroup$ $s^2p^4$ is definitely wrong. The 2s (or 4s) and 2p (or 4p) orbitals are waaay to far removed in energy to be involved in hybridisation. As Martin said though, hybridisation is a poor model for this type of bonding interaction. $\endgroup$ – bon Apr 20 '16 at 9:11 Solvation is the process by which a species is dissolved in a solvent. The most classic example of this is when a metal ion is dissolved in water. The electronegative oxygens in the water molecules are attracted electrostatically to the positive charge on the metal ion. A solvation shell of water molecules result. According to this article, which written about the hydration of alkali metal ions: In spite of many conducted studies the knowledge of the structures and bonding properties of the hydrated alkali metal ions in aqueous solution is scarce and deviating. Also according to another letter, which reports research done on the stability of the different possible forms of the ion, there is not a consensus on only one form. This letter discusses three possible forms of the ion all in the form of $n_1 +n_2$, where $n_1$ is the number of molecules in the first shell of hydration and $n_2$ is the number of molecules in the second shell of hydration. The forms discussed are: demonstrating that the water molecules do not necessarily have to be coordinately bonded to the metal itself, but in fact that the second shell of hydration could include water molecules interacting with the water molecules in the first shell of hydration. In short the hydration of sodium (and the other alakali metals) is not well understood yet, though it is under study. ringoringo Not the answer you're looking for? Browse other questions tagged water aqueous-solution ions hybridization or ask your own question. Why Does Coordination of Metal Ions Happen Anyways? Hybridization sp3 and sp2 Why does sp2 hybridization imply adjacent sp2 hybridization? sp2 hybridization Group V elements Bonding of the complex when both weak and strong field ligand are present Hybridization molecule
CommonCrawl
Recent Lecture Videos Public Goods, from Biofilms to Societies. Simon Levin Mathematical ecology: A century of progress, and challenges for the... Geometricity and Galois actions on fundamental groups Daniel Litt Recent Lecture Notes Landing a Faculty Position Phillip Loewen Living the Good Life at a Liberal Arts College Gizem Karaali Landing an Industry Position Math Outreach: Many Needs, Many Ways Melania Alvarez PIMS, University of Manitoba PIMS-UManitoba Distinguished Lecture Math Outreach: Many Needs, Many Ways I will present a wide range of math educational outreach activities involving elementary, middle and secondary level students. By expecting children to succeed, introducing new and exciting ways to teach mathematics, and promoting role models, the Pacific Institute for the Mathematical Sciences (PIMS) is making a significant difference in the way students view science and technology and their own mathematical ability. These activities are designed to transform the way students look at mathematics and empower them to see themselves as fully capable of succeeding at math. I will describe PIMS outreach programs specifically designed for First Nations schools in British Columbia. We have also developed a variety of programs to support teachers, in particular I will describe our must recent one: a 4-week Summer School for Elementary School Teachers. At this camp, teachers work with mathematicians, educators and math specialists to increase their mathematical knowledge and capability and boost their confidence, as well as foster a positive attitude towards learning mathematics. WORKSHOPS: 1) Bar Model Workshop (Friday, October 27, 1:00 - 2:15 pm) The main purpose of this workshop is to show how the Bar Model method can be used not only as a problem solving technique, but also to develop in students a deeper understanding of fundamental concepts in mathematics. 2) Kindergarten to Grade 2: An important foundation for success (Friday, October 27, 2:30 - 3:45 pm) What students learn during these early years will make a significant difference in how they approach mathematics later on. Research is showing that mathematics knowledge and skills are the most important predictors not only for later math achievement but also for achievement in other content areas (Claessens A. and Engel M., 2013). This workshop presents connection of various concepts and ideas, teaching sequencing, and hands-on fun activities for retention. See the event webpage for more information on Two Weeks in Vancouver - A Summer School for Women in Math. Women in Math 2016 Mon, Aug 15, 2016 - Thu, Aug 25, 2016 Juggling Mathematics & Magic Ronald Graham PIMS, University of Calgary Louise and Richard K. Guy Lecture Series The popular Richard & Louise Guy lecture series celebrates the joy of discovery and wonder in mathematics for everyone. Indeed, the lecture series was a 90th birthday present from Louise Guy to Richard in recognition of his love of mathematics and his desire to share his passion with the world. Richard Guy is the author of over 100 publications including works in combinatorial game theory, number theory and graph theory. He strives to make mathematics accessible to all. Dr. Ronald Graham, Chief Scientist at the California Institute for Telecommunications and Information Technology and the Irwin and Joan Jacobs Professor in Computer Science at UC San Diego. Dr. Ronald Graham, Chief Scientist at the California Institute for Telecommunications and Information Technology and the Irwin and Joan Jacobs Professor in Computer Science at UC San Diego, will the present the lecture, Juggling Mathematics & Magic. Dr. Graham's talk will demonstrate some of the surprising connections between the mystery of magic, the art of juggling, and the some interesting ideas from mathematics. Ronald Graham, the Irwin and Joan Jacobs Professor in Computer Science and Engineering at UC San Diego (and an accomplished trampolinist and juggler), demonstrates some of the surprising connections between the mystery of magic, the art of juggling, and some interesting ideas from mathematics. The lecture is intended for a general audience. A topological look at the vector (cross) product in three dimensions Peter Zvengrowski Sat, May 9, 2015 PIMS, University of Lethbridge Alberta Mathematics Dialog The vector product (or cross product) of two vectors in 3-dimensional real space $\mathbb{R}^3$ is a standard item covered in most every text in calculus, advanced calculus, and vector calculus, as well as in many physics and linear algebra texts. Most of these texts add a remark (or "warning") that this vector product is available only in 3-dimensional space. In this talk we shall start with some of the early history, in the nineteenth century, of the vector product, and in particular its relation to quaternions. Then we shall show that in fact the 3-dimensional vector product is notthe only one, indeed the Swiss mathematician Beno Eckmann (a frequent visitor to Alberta) discovered a vector product in 7-dimensional space in 1942. Further- more, by about 1960 deep advances in topology implied that there were no further vector products in any other dimension. We shall also, following Eckmann, talk about the generalization to r-fold vector products for $r\geq 1$ (the familiar vector product is a 2-fold vector product), and give the complete results for which dimensions n and for which $r$ these can exist. In the above work it is clear that the spheres $S^3$, $S^7$ play a special role (as well as their "little cousin" $S^1$). In the last part of the talk we will briefly discuss how these special spheres also play a major part in the recent solution of the Kervaire conjecture by Hill, Hopkins, and Ravenel, as well as their relation to the author's own research on the span of smooth manifolds. It's All in the Follow Through – what research in math education says ... and doesn't say Rob Craigen We'll be examining a few classic cases of how educational research has been handled that explain a lot about how we got where we are in public school math education today. Robustness of Design: A Survey Doug Wiens Fri, May 8, 2015 When an experiment is conducted for purposes which include fitting a particular model to the data, then the 'optimal' experimental design is highly dependent upon the model assumptions - linearity of the response function, independence and homoscedasticity of the errors, etc. When these assumptions are violated the design can be far from optimal, and so a more robust approach is called for. We should seek a design which behaves reasonably well over a large class of plausible models. I will review the progress which has been made on such problems, in a variety of experimental and modelling scenarios - prediction, extrapolation, discrimination, survey sampling, dose-response, etc Measurement, Mathematics and Information Technology M. Ram Murty In this talk, we will highlight the importance of measurement, discuss what can and cannot be measured. Focusing on the measurement of position, importance, and shape, we illustrate by discussing the mathematics behind, GPS, Google and laser surgery. The talk will be accessible to a wide audience. A Triangle has Eight Vertices (but only one center) Richard Guy Quadration regards a triangle as an orthocentric quadrangle. Twinning is an involution between orthocentres and circumcentres. Together with variations of Conway's Extraversion, these give rise to symmetric sets of points, lines and circles. There are eight vertices, which are also both orthocentres and circumcentres. Twelve edges share six midpoints, which, with six diagonal points, lie on the 50-point circle, better known as the 9-point circle. There are 32 circles which touch three edges and also touch the 50-point circle. 32 Gergonne points, when joined to their respective touch-centres, give sets of four segments which concur in eight deLongchamps points, which, with the eight centroids, form two harmonic ranges with the ortho- and circum-centres on each of the four Euler lines. Corresponding points on the eight circumcircles generate pairs of parallel Simson-Wallace lines, each containing six feet of perpendiculars. In three symmetrical positions these coincide, with twelve feet on one line. In the three orthogonal positions they are pairs of parallel tangents to the 50-point circle, forming the Steiner Star of David. This three-symmetry is shared with the 144 Morley triangles which are all homothetic. Time does not allow investigation of the 256 Malfatti configurations, whose 256 radpoints probably lie in fours on 64 guylines, eight through each of the eight vertices. Native American Mathematics Edward Doolittle One sometimes hears that the indigenous peoples of the Americas are for some reason not predisposed to be able to do mathematics. This belief is surprising, since the mathematical traditions of the Western Hemisphere prior to European contact were already rich and extensive. This talk will focus on some of those traditions, primarily Central American but with some information about mathematical traditions in Algonkian cultures such as the Blackfoot. Almost all of this talk will be accessible to any interested listener, with perhaps five minutes in the middle using a small amount of very elementary number theory. Along the way any listener who has ever eaten an 18 Rabbits granola bar will learn why doing so celebrates indigenous mathematics. ABOUT THE RICHARD AND LOUISE GUY LECTURE SERIES: The Richard & Louise Guy lecture series celebrates the joy of discovery and wonder in mathematics for everyone. Indeed, the lecture series was a 90th birthday present from Louise Guy to Richard in recognition of his love of mathematics and his desire to share his passion with the world. Richard Guy is the author of over 100 publications including works in combinatorial game theory, number theory and graph theory. He strives to make mathematics accessible to all. The other contributions to the lecture series have been made by Elwyn Berlekamp (2006), John Conway (2007), Richard Nowakowski (2008), William Pulleyblank (2009), Erik Demaine (2010), Noam Elkies (2011), Ravi Vakil (2012) and Carl Pomerance (2013). Emerging Aboriginal Scholars Debra Martel University of British Columbia, Vancouver, Canada Emerging Aboriginal Scholars Summer Camp This five week summer camp is for students currently attending grades 9 to 12. The main purpose of this camp is to help students with their academics and for them to get work experience at the university. Students take 90 minutes of math and English every day and three days a week they will be working with a faculty member in the area of their choice. Students will get $100 a week for 7.5 hours of work experience. The summer camp takes place at UBC, and students will take classes at PIMS and the Long House. Last year we had students working with the nuclear accelerator, and working at labs in the physics and chemistry departments, among other opportunities. For more information on the program see Emerging Aboriginal Scholars Program. 2019 PIMS Workshop on Arithmetic Topology 2019 Workshop on Mathematical Sciences and Clean Energy Applications Visualising data with ggplot2 Hadley Wickham Univalence as a New Principle of Logic Steve Awodey Approximating Functions in High Dimensions
CommonCrawl
An excursion into particle physics and cosmology for non-science students Home A Little Bit About This On-line Text Lesson 1. Once Upon a Time... Lesson 2. What Can We Know, and How? Lesson 3. Mathematics, The M Word Lesson 4. Motion, Getting From Here to There Lesson 5. The Big Mo, Force and Momentum Lesson 6. Collisions, Banging Things Together Lesson 7. Energy, It Just Keeps Going and Going Lesson 8. Early Cosmology Lesson 9. The Cosmologies of Galileo and Newton Lesson 10. Charge It! Charges and Magnets Lesson 11. Faraday's Experiments and An Extraordinary Idea Lesson 12. Faraday's E&M Fields Lesson 13. Maxwell's Fields Annotate me Distributed with an MIT license. QS&BB Quarks, Spacetime, and the Big Bang Lesson 3. Mathematics, The M Word When I imagine a triangle, even though such a figure may exist nowhere in the world except in my thought, indeed may never have existed, there is nonetheless a certain nature or form, or particular essence, of this figure that is immutable and eternal, which I did not invent, and which in no way depends on my mind. René Descartes (1596-1650), Meditations on First Philosophy (1641) This lesson is constructed like most. We'll start with some goals, move smoothly into a short biography of an important character in the story, and then to the content. This one's a little different. I'm going to use some mathematics in QS&BB and it's important for you to understand why and gain confidence that like Goldilocks, we'll do just enough. This lesson is a little more review and reference-like than the rest. Some of you may have seen much of the content of this lesson. I'd point you to the Descartes, M Word, and Functions, the QS&BB Way sections. A Little Bit of Descartes Descartes' Algebra-fication of Geometry… Descartes' Philosophy: New Knowledge Just By Thinking? The M Word A Tiny Bit Of Algebra Functions, the QS&BB Way A Gentle Review 1: Skills You'll Frequently Need Some Algebra Practice The Powers That Be: Exponents The Big 10: "Powers Of 10," That Is The Big and the Small of QS&BB: Sizes in the Universe Big Stuff Geometry, curves…Formulas From Your Past? A Gentle Review 2: Skills for Only a Few Times Log-Log and Semi-Log Plots Unit Conversions Approximating Functions Formulas From Your Past That Might Only Be Referenced Informally What to Remember from Lesson 3? Powers of 10. Goals of this lesson: I'd like you to Understand: Simple one-variable algebra. Exponential notation. Scientific notation. Unit conversion. Graphical vector addition and subtraction. I'd like you to Appreciate: The approximation of complicated functions in an expansion. I'd like you to become Familiar With: Aspects of Descartes' life. The importance of Descartes' merging of algebra and geometry. The 16th and 17th centuries hosted a proliferation of pre-scientific and scientific "Fathers of" figures: Galileo Galilei, the Father of Physics; Nicholas Copernicus and Johannes Kepler, arguably the Fathers of Astrophysics; and Tycho Brahe, the Father of Astronomy. That leaves out some lesser-known, but influential dads-of, like Roger Bacon, Frances Bacon (no relation), and Walter Gilbert, all of whom share paternity as Fathers of Experimentation. But the Granddaddy…um…Father of them all was René Descartes (1596-1650), often referred to as the Father of Western Philosophy and a Father of Mathematics, if not a favorite Uncle of Physics. If you've ever plotted a point in a coordinate system, you've paid homage to Descartes. Frankly, if you've ever plotted a function, you've paid homage to Descartes. If you've ever looked at a rainbow? Yes. Him again. If you ever felt that the mind and the body are perhaps two different things, then you're paying homage to Descartes and if you were taught to be skeptical of authority and to work things out for yourself? Descartes. But above all—for us—René Descartes was the Father of analytic geometry. He was born in 1596 in a little French village called, Descartes—what are the odds? (Okay. That came later.) By this time Galileo was a professor in Padua inventing physics and Caravaggio was in Rome inventing the Baroque. Across the Channel Shakespeare was in London inventing theater and Elizabeth had cracked the Royal Glass Ceiling and was reinventing moderate political rule. This was a time of discovery when intellectuals began to think for themselves. This is the beginning of the end of the suffocating domination of Aristotle. René Descartes 1596-1650 René was sent to a prominent Jesuit school at the age of 10 and a decade later emerged with his mandated law degree. Apart from his success in school, his most remarkable learned skill was his lifelong manner of studying: often ill, he was allowed to spend his mornings in bed, a habit he retained until the last year of his life. There's a story there. His school required physical fitness and in spite of his health, he became a proficient swordsman and soldier—wearing a sword throughout his life as befitting a "gentleman." For a while he was essentially a soldier of fortune, alternating between raucous partying in Paris with friends and combat assignments (a Catholic, fighting with the Dutch Protestants) in various of the innumerable Thirty Years War armies. Somewhere in that period Descartes became serious and decided that he had important things to say. He wrote a handful of unpublished tracts and became well-known through a steady correspondence with European intellectuals. By 1628 he began to suspect that his ideas were not going to sit well in Catholic France ( confirmed for him when Galileo was censured in 1633) and so he moved to Holland where he lived for more than 20 years. He'd been playing with mathematics during his playboy-soldier period and little did he know, he found he was a mathematical genius, solving problems that others couldn't. He enrolled himself as a "mature student" in Leiden and devoted himself to mathematics. By 1637, he changed the landscape forever. …or geometri-fication of algebra! Descartes brought geometry and algebra together for the first time. The fledgling field of algebra ("al-jabr" from the Arabic, "reunion of broken parts" ) was slowly creeping into European circles…along with the decimal point (Galileo had neither) and solutions of some kinds of polynomial equations were appearing. The notation was clumsy. So geometry held on as king of mathematics. What Descartes did was link the solutions of geometry problems—which would have been done with rule-obsessive construction of proofs—to solutions using symbols. He did this work in a small book called Le Géométrie (The Geometry), which he published in 1637, the same year he published his philosophical blockbuster, Discourse on Method. In it he instituted a number of conventions which we use today. For example, he reserved the letters of the beginning of the alphabet $a, b, c,…$ for things that are constants or which represent fixed lines. An important strategic approach was to assume that the solution of a mathematical problem may be unknown, but can still be found and he reserved the last letters of the alphabet $x, y, z…$ to stand for unknown quantities—variables. He further introduced the compact notation of exponents to describe how many times a constant or a variable is multiplied by itself, $x^2$ for example. The early translators of al-jabr to, um, algebra considered equations in two unknown variables like $y = \text{some combination of } x$'s to be unsolvable. But Descartes linked one variable, say $y$ to the other as points on a curve that related them through an algebraic equation—what came to be called a function. He called one of those variable's domain the abscissa and the other, the ordinate. The use of perpendicular axes, which we call $x$ and $y$, stems from Descartes' inspiration which is why they're called Cartesian Coordinates. Mathematicians picked up on these ideas and extended them into the directions that we know and love. One of those was John Wallis (1616-1703), the most important Cambridge influence on Isaac Newton. The rigor of the mathematical deductive method stayed with him and became a new kind of philosophy that he called "analytic." Famously, he convinced himself that he had deduced a method to truth: whatever cannot be logically doubted, is true. The clue was that when you mentally and relentlessly doubted something and can't go any further, then that idea has become "clear and distinct." True, for him. Using this method, he decided that this demonstrated that his mind exists and that he, a thinker, is thinking these things and therefore he exists. So by using a mathematical-like deductive path, he believed that he had made an important discovery—a proof of his existence. This is his famous bumper sticker conclusion called forever "the cogito": Cogito ergo sum, I think, therefore, I am. But that's not what he wrote in Meditations on First Philosophy. This is closer: "So after considering everything very thoroughly, I must finally conclude that this proposition, I am, I exist, is necessarily true whenever it is put forward by me or conceived in my mind." Big bumper. But you know how legends go. Descartes believed that when an idea was incapable of being doubted, then you can believe it. But the Bible and pretty much all that Aristotle wrote and with it the post-Thomas Aquinas Church teachings were off-limits to doubt. In fact those sources were themselves the only authority used to determine truth and falsity. Descartes pretty much changed that in philosophy. That's why he felt safer in the Netherlands. Not only was he questioning Authority, he'd been inspired by Galileo's telescopic discoveries and became a committed Copernican. In 1633 he was completely spooked by the Galileo's troubles with the Inquisition and so he stayed put. Eventually, the Pope did ban his writings 12 years after he died. This is the philosophy of Rationalism which he is the king—the discovery of knowledge through pure thought. Rationalism has been in direct philosophical conflict with the philosophy of Empiricism— and as you'll see, often physics is caught in the middle. Rationalism is in the spirit of Plato, but unlike Descartes, the Greek gave up on the sensible world as simply a bad copy of the Real World, which is one of Ideas…"out there" somewhere. By contrast, by asserting that mind and matter were both existent realms, Descartes decided that one could understand the universe by blending thinking (mind) with observing (body). We physicists take some inspiration through Descartes' approach. Theoretical physicists are often motivated to gain knowledge through thought, always deploying mathematics—so maybe thought and paper. Experimental physicists sometimes claim that knowledge can only be obtained through observation (and in modern form, experiment). Most of us are of the latter devotion, but can sometimes be amazed at how often smart physicists by just thinking can lead to new knowledge of the world. We'll meet many of these folks. It's sometimes a strange way to make a living. One of Descartes' application of his Rationalism was assigning the posulates in order to do his physics---a mechanical cause for all phenomena. So he populated the universe with invisible balls moving in vortices (to move the planets) and little left-handed screw-type objects to cause magnetism, to name two. Science without such prior causes was unacceptable to him and led to an eventual dispute between the "Cartesian" French and continental scientists and the "Newtonian" British scientists...since Newton decided that such "hypotheses" were unnecessary. We'll visit this in a bit. After a public dispute—even in the Netherlands—Descartes began to imagine that his time among the Dutch was coming to a close. Queen Christina of Sweden, was an admirer and an intellectual and she invited Descartes to Stockholm to work in her court and to instruct her. After multiple refusals, not being a monarch to whom "no" is an easy answer, she sent a ship to Amsterdam to pick him up. He eventually accepted the position which was the beginning of his end. The Queen required his presence at 4 AM for lessons. This, from the fellow who had spent every morning of his life in bed until noon! He caught a serious respiratory infection and died on February 11th, 1650 at the age of only 53. We moderns owe an enormous debt to this soldier-philosopher-mathematician. Both for what he said that was useful and for what he said that was nonsense, but which stimulated a productive reaction. I think that there is a direct line from every QS&BB lesson that goes right back to René Descartes. I promise that the math of QS&BB will not be hard and we'll get through it together. In this lesson I'll develop most of the tools that we'll return to repeatedly: simple algebra, some familiar geometry, exponents, and powers of ten. Wait. Why use mathematics in a book for non-science people? I'm not a math person! Glad you asked. Two reasons. First, there is a direct connection between a mathematical description of a phenomenon and nature itself. As I said, we don't know why that's the case and the argument about whether mathematics is "discovered" or "invented" is endless. Second, it's much more economical than using words. Finally, it's a little deductive engine for many of our purposes. You can "discover" things by manipulating the symbols…things that will further explain the physics. I guess I lied. That's three reasons. Oh. There's no such thing as a "math person" at the level we'll be using math! I had a decision to make in designing a set of lessons about physics for non-experts: use no mathematics or use some. Let me show you what I decided, and why. But first, here's my guide to the use of mathematics in QS&BB: We'll use mathematics as a language to be "actively read" and a part of the narrative. But you'll not have to derive things on your own from scratch. Wait. What's "active reading"? Glad you asked. It means reading with your pencil moving. When you see this suggestive symbol: You don't have a notebook? Please get one for the full QS&BB experience ;) Our algebraic experience here will involve some simple solutions to simple equations. I'll need the occasional square root and the occasional exponent, but no trigonometry or solving simultaneous equations and certainly no calculus. I'll refer to vectors, but you'll not need to do even two-dimensional vector-component calculations. What's not to like? Some units, introduced prematurely: In this brief defense I'll use some units that we'll learn more about: a kilogram is a unit of mass, a Newton is a unit of force – an apple has a weight on earth of about 1 Newton, and of course a meter is a unit of length of about 3 feet. If I'd chosen to avoid all mathematics in QS&BB then I think something important would be missing. To learn about QS&BB ideas would be like learning how to paint but ignoring a particular color…where "red" should be, you'd insert a tiny note saying that "red should be here." I'm convinced that absorbing a simple equation, which stands for something in the world, is a cognitively different experience from reading its symbols in a sentence. An example of the power in symbols Later we'll learn the most fantastic model of motion that Isaac Newton invented—his Universal law of Gravitation. It explained the moon's orbit around the earth, the planets' motions around the sun, and still guides spacecraft through the solar system today. I could just tell you about it, or I could write it as an equation…a model. Let's compare two extreme approaches: I'll write out the content of the Gravitation rule in an English paragraph and in its algebraic form. Then we'll compare. Unlike in Fight Club, let's talk about this battle: In this corner: Newton's Gravitational law as a paragraph "The force of attraction experienced by two masses on one another is directly proportional to the product of those two masses and inversely proportional to the square of the distances that separate their centers. The constant of proportionality is called the Gravitational Constant which is $0.0000000000667408\; \text{m}^3 \text{kg}^{-1}\text{s}^{-2}.$" There. A perfectly good, if not moving, literary description of Newton's rule. Lots of words, but it's complete and it's accurate. But it's also inefficient and worse, it's… lifeless. Let's contrast this with the mathematical opponent: And in this corner: Newton's law of Gravitation in symbols: $F$ stands for the force of gravitation, $m$ and $M$ stand for two masses, $R$ is the distance between them, and $G$ is a number…that tiny number in the paragraph. I claim that in addition to the obvious efficiency of the symbolic, compact notation…there's physics buried inside of an equation that's not in an English sentence. For example, here's a perfectly good interesting question about gravitation: Sun and Earth and Moon What is the approximate force of attraction that the moon feels from the Sun compared with the force of attraction that the moon feels from the earth? The paragraph-representation is not immediately helpful—it just sits there. But the symbol-equation-representation is very easily manipulated to answer a question of it. Twist it around and it's ready to tell you something new. We could answer the question by forming the ratio of the two situations. Here's just the answer, postponing the actual solution to the lesson on gravity: Putting in the values for masses and distances, you'd find that the moon feels the Sun almost twice as much as it feels the earth. That information was buried inside of the symbolic representation…but not in the paragraph. Here's another question that the paragraph can't deal with. construct an experiment Suppose Myrtle wants to study Newton's law of gravitation by measuring the force that one mass ($m$) has for another mass ($M$). She would set them up at some distance and measure the force between them. Simple. Let's pretend that she owns a scale that measures force in Newton's and the smallest force it can measure is 1 N. Her lab only has a 1 meter space in which to set up her apparatus. Finally, Myrtle has a single mass at her disposal and it's 1 kg…that's $m$. So she needs to choose the other, $M$, in order to carry out her experiment. What does it need to be? With some even-sided manipulation she can isolate $M$ in the equation and calculate what she needs: Putting in the values from the question and she finds that the mass required is 14,992,000,000 kg. About the mass of 200 aircraft carriers. Myrtle needs to do a different experiment. But…she didn't know that until she let the original equation tell the story its way, which she encouraged through some simply symbol manipulation. The symbolic approach has the agility to tell us how to decide about the experiment. The paragraph just sat there. Watching. Neither of these solutions came from difficult algebra, but look what they uncovered! They're alive! There's physical insight to be gained by looking at a function that describes—or maybe is?—nature. The language of physics is mathematics, uttered Galileo a long time ago (although he said that the language of the universe is mathematics). Well, he was right. And we have no idea why that seems to reliably be the case! "The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research…" This is from the last paragraph of a notorious and actually, delightful essay written by the physicist Eugene Wigner in 1960. The title is The Unreasonable Effectiveness of Mathematics in the Natural Sciences where he tries to dig into the strange relationship between the physical world and mathematics. It's famous: ask Mr Google to find "Unreasonable Effectiveness" and you'll get 150,000 references to Wigner's essay. I'll go further and suggest that a mathematical model cognitively---you know, in your brain---functions similarly to how a painting represents the object of its representative landscape. Functional MRI investigations suggest that mathematicians and physicists appreciate beauty in mathematics by heating up the same region of the brain that's stimulated by someone appreciating a work of art. So this is deep stuff. I hope you see my point: The English paragraph and the succinct mathematical function on the surface do the same job. However, that "miracle" of the connection between the universe and mathematics is really only apparent when we make full use of the manipulative features of symbols in an equation where we can uncover new things. Functions rule One of the remarkable consequences of the mathematization of physics that began with Descartes is that we've come to expect that our descriptions of the universe will be in the language of mathematical functions. Do you remember what a function is? The fancy definition of a function can be pretty involved, but you do know about function machines and I'll remind you how. When I was a senior in college, finishing my electrical engineering degree, our department had a visitor from the Hewlett Packard Company. It was either Bill Hewlett or Dave Packard, I can't remember which. But he promised to do away with the slide rule that we all carried around with us everywhere and showed us a brand new product: a portable scientific calculator, that they called the electronic slide rule. This was 1972 and he showed us the first HP calculator, the HP-35. Needless to say, I couldn't afford it—it cost $400 then— but later in graduate school I bought my first scientific calculator, the HP-25, pictured here, along with that slide rule that I carried for four years. Left: the venerable HP-25 programable (!) scientific calculator. Right: a slide rule used for all calculations until the early 1970's. It was not programmable (although it was wireless) Today I've got more processing power in my watch then I had in that calculator. But I'll bet you've got something like it…calculators are nothing but electronic function machines. In fact, this is the arithmetic circuit inside of that original calculator: The AMI 1820-1523 Arithmetic, Control Timing processor: the heart of a function machine. Adapted for my silly purposes: the tabs at the blue arrows are actually connecting the processor to the keyboard. That's how data got in. The calculator guts shows what a function does: if you enter data through the keypad—a value of $x$—and hit the appropriate button, the display shows the value of the function. So if the function was the formula $f(x) = x^2$ and if I keyed in "4" and pushed the $x^2$ button, the display would read "16," the value of $f(4)$ for that particular function. Notice that it doesn't give you more than one result, and that's a requirement of a function: one result. Your algebra teacher would have called the inputs (e.g., x,y,...) the independent variables, which would have been members of the function's "Domain," and the output [e.g., $f (x,y,...)$ or often $y$)] the dependent variable, which would have been inside the "Range." That's all a function is: a little mathematical machine that reports a single result for one or more inputs according to some rule. For us, functions can be represented by a formula, an algorithm, a table, or a graph. In all cases, it's one or more variables $x$ or $x \; \& \; y…$ or $x \; \& y \; \& z…$ in, a rule about what happens to them, and one numerical result out. Nature seems to live by functions and since we're all about nature, we'll need to use functions. Our way. Turn the crank, QS&BB style We'll accomplish a lot without much mathematical effort, I promise you. In fact, we'll find that the raw mathematics of modern physics is simple. What's hard is the conceptual visualization. Don't fear the math! Your adversary will be your hard-wired common sense! Some physics models are simple and we can easily deal with them in their "raw" mathematical form…and some are really quite complicated. With a little help from Mr Descartes or some whimsy, we'll manage. So three ways we'll work with mathematical models. 1. We'll write them and manipulate them for real. Can you solve the equation: $y = ax$ for $x$? You'll be surprised just how often an equation of that form will describe sophisticated aspects of the world. 2. Plot them. If I want to deal with a model that uses sophisticated mathematics quantitatively, instead of working with the formulas…I'll just show you the functions are plotted. Here's an example, a really important model in physics which we'll refer to a number of times. The function is messy and I don't want to lose track of the physics by getting you bogged down in evaluating the it. This is the intensity of a heated object as a function of wavelength of the radiation for different temperatures. You'd have no problem telling me the relative intensities of say the blue and the red curves—4000 degrees and 6000 degrees—at a wavelength of 1 micron, $1 \times 10^{-6}$ m. Right? Much easier than plugging into Mr Planck's formula. 3. Represent with a circuit. A silly circuit. If we're talking about a really complicated model, perhaps one with many formulas and lots of high-brow calculus or even worse, I'll show you a cartoon and ask you to take my word for it. Remember in your past you might have heard someone use the phrase, "turn the crank…" that's sort of old language for plugging in some numbers and doing some laborious calculation…. We'll not do that. I'll just show you this figure, properly filled out for the circumstance at hand: The evaluation of a candidate physical model (which is a function or a set of functions) requires some inputs [the equation(s), maybe some data, some core beliefs (other trusted models),and some plan…a strategy]. The result is a prediction which would almost always be a prediction for some measurable quantity. What will be important here is that we all agree on the assumptions in the model, the tools used to solve it, and any data that might be a part of the calculation of interest. You'd have no problem imagining a computer solving something or running some (game?) simulation and that's all this fake circuit represents. Something in, some instructions, and something out. We move on. You might find it useful to brush off some previous skills that we'll need to make progress. I have in mind simple algebra, exponents, powers of 10, and some simple geometry formulas. I'll show you some examples of the level of algebra we'll need, but let's first pause and salute the most important thing about algebra: The Fairness Doctrine of Algebra: If you do something to one side of an equation, you must do the same thing to the other side. Words to live by. Armed with this, here we go. You'll be surprised how simple this will be. Our appetite for algebraic complexity will be very modest. We'll not encounter formulas that are much more complicated than these: (Were you writing? I'll wait.) Can you do each of these? Then you're good: that's about all that you'll need to remember of algebra. Just remember the rule. Then…it's merely a game—a puzzle to solve. Once in a while, we'll need to multiply or divide terms that have eyponents. There are simple rules for this, but let's figure them out by hand…so to speak. The first thing to remember about exponents is that in a term like $y^n$, a positive integer $n$ tells you how many times you must multiply $y$ by itself. So: $y^1 = y .$ Here, there's just one $y$, so: $y^1 = y.$ An Example: Exponents together The Question: What is $x^2x^1x^4$ ? $x^2x^1x^4=x^{2+1+4}=x^{7}$ Interactive: 3.1; 2018.10.15; 17:06 Exponents, you do it One more time, but different. Another rule recalling that a negative power means "one over…": $x^{-n} = \frac{1}{x^n}.$ If the same rule for adding exponents works—and it does—then we can multiply factors with powers by keeping track of the positive and negative signs of the exponents. So here's an easy one: $\frac{x\times x \times x}{x\times x} = x$ which you quickly get by crossing out two $x$'s in the numerator and the denominator leaving you with one left over. Or, by using the powers and the rule: Interactive: 3.2; 2018.10.17; 14:31 What is $x^{−2}x^1x^4$? One more thing. The powers don't have to be integers. Perhaps you'll remember that square roots can be written: That's it. Now we have everything we need to turn numbers into sizes of…stuff. One of the more difficult things for us to get our heads around will be the sizes of things, the speeds of things, and the masses of things that fill the pages of QS&BB. Lots and lots of zeros for a large or small number means: lots of mistakes and a hopelessness for the relative magnitudes of one big or small thing compared to another. Big and small numbers are really difficult to process for all of us. I have no idea how much bigger is the Milky Way Galaxy (950,000,000,000,000,000,000 meters) than the diameter of Jupiter (143,000,000 meters). It all blends together. Wait. That sounds pretty grim. Glad you asked. But wait. There's a solution: the beauty of "10" or lots of "10's." Exponential notation…using our power rules and the number 10. It's easy. A number expressed in exponential notation as: Let's think about this in two parts. First, the 10-power part. The rules above work for 10 just like any number, so $10^n$ is shorthand for the number that you get when you multiply 10 by itself $n$ times. This has benefits because of the features of 10-multiples, that we count in base-10, and now you can just count zeros. So for example: The power counts the zeros, or more specifically, the position to the right of the decimal point from 1. So if you have any number, you can multiply it by the 10-power part and have a compact way of representing big and small numbers. The second thing is the number in front that multiplies the power of 10. It's called the "mantissa" and that's all it is…a number. The powers of 10 come with handy nicknames that imply a particular amount… like "kilo-gram," meaning 1,000 grams. You already know many of them. Here are more powers of 10 than you ever want to know: power of ten septillionth yocto- y 0.000000000000000000000001 $10^{-24}$ sextillionth zepto- z 0.000000000000000000001 $10^{-21}$ quintillionth atto- a 0.000000000000000001 $10^{-18}$ quadrillionth femto- f 0.000000000000001 $10^{-15}$ trillionth pico- p 0.000000000001 $10^{-12}$ billionth nano- n 0.000000001 $10^{-9}$ millionth micro- $\mu$ 0.000001 $10^{-6}$ thousandth milli- m 0.001 $10^{-3}$ hundredth centi- c 0.01 $10^{-2}$ tenth deci- d 0.1 $10^{-1}$ one 1 $10^{0}$ ten deca- da 10 $10^{1}$ hundred hecto- h 100 $10^{2}$ thousand kilo- k 1,000 $10^{3}$ million mega- M 1,000,000 $10^{6}$ billion giga- G 1,000,000,000 $10^{9}$ trillion tera- T 1,000,000,000,000 $10^{12}$ quadrillion peta- P 1,000,000,000,000,000 $10^{15}$ quintillion exa- E 1,000,000,000,000,000,000 $10^{18}$ sextillion zetta- Z 1,000,000,000,000,000,000,000 $10^{21}$ septillion yotta- Y 1,000,000,000,000,000,000,000,000 $10^{24}$ You'll notice that "google" is not listed. There's a story there. When the Google.com inventors were trying to come up with a name that would represent an enormous number of searches, someone suggested the name "googol,"" which is the nickname for $10^{100}$, which is indeed a lot. Well, when they went searching to see if anyone had registered the name, one of them misspelled it "google" and so that's what the company became. You're welcome. A "feel" for sizes is pretty much limited to our puny human experiences. I can probably estimate a length of about 10 feet. So I might be able to approximate the hight of a building, for example. Or compare the distances of two plots of land. But much more than that, I'm out of in-grained tools. Well, this is "particle physics" and "cosmology"…the smallest items and the largest ones in the whole universe. So the prefixes in the list above? We'll need many of them. Here's a list of "things" from normal to, well, extreme. Here is a ranked list of big and small things with approximate sizes, along with the nicknames that we use. We'll span these enormous distance ranges and eventually Tera-this and pico-that will just roll off your tongue. African elephant, 4 m Height of a six story hotel, 30 m, $3.0 \times 10^1$ m Statue of Liberty, 90 m, $9.0 \times 10^1$ m Height of Great Pyramid of Giza, 140 m, $1.4 \times 10^2$ m Eiffel Tower, 300 m, $3.0 \times 10^2$ m Mount Rushmore 1700 m, $1.7 \times 10^3$ m, 1.7 km District of Columbia, 16,000 m square, $16.0 \times 10^3$ m, or $1.6 \times 10^4$ m Texas, East to West, 1,244,000 m, $1.244 \times 10^6$ m, 1244 km, or 1.244 mega-m Pluto, 2,300,000 m diameter, $2.3 \times 10^6$ m Moon, 3,500,000 m diameter, $3.5 \times 10^6$ m Earth, 12,800,000 m diameter, $12.8 \times 10^6$ m, or $1.28 \times 10^7$ m Jupiter, 143,000,000 m diameter, $143.0 \times 10^6$ m, or $1.43 \times 10^8$ m Distance Earth to Moon, 384,000,000 m, $384.0 \times 10^6$ m, or $3.84 \times 10^8$ m Sun, 1,390,000,000 m diameter, $1.39 \times 10^9$ m, 1.39 giga-m Distance, Sun to Pluto, 5,900,000,000,000 m, $5.9 \times 10^{12}$ m, 5.9 tera-m Distance to nearest star (Alpha Centuri), 41,300,000,000,000,000,000 m, $41.3 \times 10^{18}$ m, or $4.13 \times 10^{19}$ m, 41.3 Exa-m Diameter of the Milky Way Galaxy, 950,000,000,000,000,000,000 m, $950 \times 10^{18}$ m, or $9.5 \times 10^{19}$ m, 9.5 Exa-m Distance to the Andromeda Galaxy, 24,000,000,000,000,000,000,000 m, $24.0 \times 10^{21}$ m, or $2.4 \times 10^{22}$ m, 24 zetta-m Size of the Pisces–Cetus Supercluster Complex, our supercluster, 9,000,000,000,000,000,000,000,000 m, $9.0 \times 10^{24}$ m, 9 zetta-m* Distance to UDFj-39546284, the furthest object observed, 120,000,000,000,000,000,000,000,000 m, $120 \times 10^{24}$ m or $1.2 \times 10^{26}$ m * This is out of hand. we have different units for astronomical objects! Circumference of a basketball, (regulation 30 inches) 0.762 m, 76.2 cm Diameter of a golf ball, 0.043 m, 4.3 cm, or 0.043 m, 43 milli-m, mm Diameter of a green pea, 0.001 m, 1 cm, or 0.01 m, 10 mm A small ladybug, 0.5 cm, 0.005 m or $5\times 10^{-3}$m, 5 mm A human hair diameter. $10^{-4}$ m, 100 $ \text{micro-m, } 100 \mu$m Wavelength of mid infrared wave, $100 \times 10^{-6}$ m, $100 \mu$m A human cell. $10^{-6}$ m, 10 $\mu$m A large molecule. Sugar, $6 \times 10^{-10}$m, 0.6 nm, 600 pico-m, pm A large atom. Cesium atom (largest) $ 2.225 \times 10^{-10}$ m, 0.225 nm, 225 pm Wavelength of soft X-ray (12.4 keV), $100 \times 10^{-12}$ m, 100 pm Wavelength of hard X-ray (124 keV, 300 EHz), $10 \times 10^{-12}$ m, 10 pm Compton wavelength of an electron, $2.4 \times 10^{-12}$ m, 2.4 pm Bohr radius, $53 \times 10^{-11}$ m, 53 pm 1 Angstrom, $10^{-10}$ m, 100 pm Radius of Helium nucleus (alpha particle). $1.6 \times 10^{-15}$m, 0.016 femto-m, fm Radius of a gold nucleus. $7 \times 10^{-15}$ m, 0.000007 nm, 7 fm A hydrogen nucleus (a proton). $ 1.3 \times 10^{-15}$ m, 0.0000013 nm , 1.3 fm Radius of a quark (upper limit), $10^{-18}$ m, 1 am The smallest distance that can exist: the "Planck length," $10^{-35}$ m I don't know Mr Huang, but his Scale of the Universe 2 (http://htwins.net) is worth playing with, if not owning his app. You know. For parties. We will deal with some functions that would be very hard to evaluate on your calculator. But Descartes' gift is that I can show you the graph and evaluation can be done by eye, which is in effect solving the equation. We'll use some simple geometrical relations which I'll summarize here. Formulas from your past that might be explicitly useful I know that you've seen most of this somewhere in your past! So return with us now to those thrilling days of yesteryear. Straight lines, circles, and the areas and circumference of circles and triangles are useful. Equation of a straight line A straight line with a slope of $m$ and a $y$ intercept of $b$ is generally described by the equation, A straight line with a slope of $1/3$ and a $y$ intercept of $2$, so $y=1/3x+2$. Equation of a circle A circle of radius $R$ in the $x-y$ plane centered at a $(a,b)$ is described by the equation: Of course if the circle is centered at the origin, then it looks more familiar as in this figure: A circle centered at the origin described by the equation, $x^2 + y^2 = 81$. It has radius of $9$, area $A=\pi 9^2$, and circumference $C=2 \pi 9.$ Equation of a parabola A parabola in the $x-y$ plane facing up with vertex at $(a,b)$ where $C$ is a constant has the equation, A parabola satisfying the equation, $y = 1x^2$. Area of a rectangle A rectangle with sides $a$ and $b$ has an area, $A$ of Area of a right triangle A right triangle (which means that one of the angles is $90$ degrees) with base of $a$ and height of $b$ has an area, $A$ of For a right triangle, the base and height are equal to the two legs. But the formula works for any triangle. Here are some examples, Three triangles, all with the same areas. Area and circumference of a circle For a circle of radius $R$, the area, $A$ is and the circumference, $C$ is You realize that two pizzas is a "circumference"? Be- cause...wait for it...it's "2 pie are." You're welcome. (papajohns) For a right triangle (like the left hand triangle above), the hypotenuse, $c$ is related to the lengths of the two sides $a$ and $b$ by the Theorem of Pythagoras: And, no. He didn't invent it and it's been proven many, many different ways. The quadratic formula We might run across a particular polynomial, which you've also probably seen before: It's an "order 2" polynomial, which means that there are two values of $x$ that qualify as "solutions": the values of $x$ that when substituted make the function be zero. You could plot the function and find what $x$ values the curve passes through the $x-$axis, or you could rely on the time-honored recipe: There are a handful of skills that will sometimes come up, but not every lesson. I have in mind here reading log-plots, unit conversions, approximating functions, simple graphical vector manipulations, and a few more geometrical relationships. Wait. LOGARITHMS!!? NO! Glad you asked. Calm down. There will be only a couple of times when I'll ask you to read a plot…meaning identify a point on a curve in which the axes are not linear (1, 2, 3, 4…) but logarithmic (10, 100, 1000…). No functional manipulations. Just interpretation. Sometimes it will be useful to plot functions or data that range over a wide scale, maybe even many powers of 10. I want to make use of log-log plots and semi-log plots because it's about the only way to display functions that range over many orders of magnitude. So we'll use them as a tool, but we'll never actually evaluate a logarithm. Here's an example. We'll learn that there is a relationship between the time it takes for a planet to orbit the sun (its "period") and the distance away that its orbit is from the center of the sun. Here it is for a slice of distances. The graph displace orbital period in units of days versus the distance from the sun in units of 100,000,000 kilometers. So the left hand "origin" is at 100,000 km, then the next big tick mark is at 110,000,000 km and the next (labeled) tick mark is at 120,000,000 km. That's a lot of zeros, so I labeled the horizontal axis as units of $10^8$ km. If I asked you to find the distance earth, which has a period of 365 days, is from the sun, you'd look at the vertical, period axis, dig into the tick marks, and figure out where to find 365 on the vertical axis, right? I've kind of done that—what your finger would probably do—and I find that the horizontal, light lines are at every 10 days and the tick marks are at every 20 days, so I'd find 365 at about where the horizontal arrow is. Using the model—solving the equation that relates period to distance—means simply finding the point on the curve and reading down to the distance. Going down from the curve, it would hit at about $1.5 \times 10^8$ km. That's about right. If I asked you to do the same thing for Venus, but the other way around you could do that: Interactive: 3.3; 2018.10.19; 13:41 d=1.07e8km -> 226 days How about Mars? Its period is about 687 days. Certainly the model (the blue line) should describe Mars. How about Mercury? Its period is 87 days and it's distance is $0.58 ]\times 10^8$ km. How about Neptune? It's period and distance are 60,200 days and $44.8\times 10^8$ km. None of these fit on the graph. So this plot is pretty useless as a guide to the solar system. Don't despair. There's a way. This is where a log-linear or log-log plot saves the day. In a log plot the axes are labeled by powers of 10, so 1, 2, 3… and so on, standing for $10^1, 10^2, 10^3$. This power relation distorts the curve from presentation on a linear scale pair, but it's not wrong, just different. Let's do that for our solar system for the horizontal axis. The same curve as above, but with the horizontal axis in powers of 10 from $1\times 10^8$ to $1\times 10^{10}$ km In this figure, the dark, black vertical lines tell you the $1\times 10^8, 1\times 10^9, \text{ and } 1\times 10^{10}$ km marks. The vertical gray lines indicate the $ 2,3,4,5,6,7,8,9\times 10^{8,9,\text{ or }10}$ km marks. The circular inset breaks down the $10^8$ region to that of the linear horizontal plot up above. Notice that the red, dashed lines mean what they did in the first plot: 365 days for Earth's period on the vertical axis and about $1.5 \times 10^8$ km for Earth's distance from the center of the sun on the horizontal axis. But we still can't represent much of the solar system in one plot, so we must also make the vertical axis logarithmic. A "log-log" plot, whereas the previous is a "linear-log" or "semi-log" plot. Here it is, the model of orbiting planets around our particular Sun. Earth is again represented as the red, dashed lines and now we can evaluate the periods and distances for many more planets. Kepler's 3rd law for anything orbiting in our solar system: the period versus the distance from our sun. This covers 6 orders of magnitude in days and 4 orders of magnitude in kilometers. Interactive: 3.4; 2018.10.23; 16:04 The planet jupiter is 778.5 million miles from the sun. What is its period (jupiter's "year") in days? Numbers are just numbers without some label that tells you what they refer to. Not all numbers have to refer to something, a pure number is a respectable mathematical object—prime numbers for example have been a topic of research for centuries. Irrational numbers—those that can't be expressed as a ratio of whole numbers, like $\pi$, —are likewise objects with no necessary relationship to…"stuff" in our world. But they keep coming up in nature, so we warm to them. We're mostly concerned with numbers that measure a parameter or count physical things and they come with some reference unit ("foot") that is a customary way to compare one thing with another. Of course not everyone agrees on the units that should be used. Wait. Let me restate that: there's THE WHOLE WORLD that agrees on one set and then there's the United States that marches to its own set of units. Thinking of you, "feet," "pounds," and "Fahrenheit." I'll not use Imperial units (feet, inches, pounds, etc.) very much, except to give you a feeling for something that you've got an instinct for…like the average height of a person or a single story house. We'll use the metric system, in particular the MKS, aka SI units1 in which the fundamental length unit is the meter (about a yard), the fundamental mass unit is the kilogram, and the fundamental temperature unit is the Celsius. I'll generically refer to these as "MKS" (for meter-kilogram-second) or "metric units" without being too fussy about the fancier names, like SI. It's small comfort that we're all in agreement on seconds, minutes, and hours and its base-60 origins. In 1793 the French tried to change that to "decimal time" with a 10-hour day, 100 minutes in an hour, and 100 seconds in a minute and so on, but it didn't catch on. Just like an exchange rate in currency, so many euros per dollar, we'll need to be able to convert, among many different units. All the time. Wait. That can be pretty involved. Glad you asked. You're right and it can be a way to make mistakes and get all wrapped up in the conversion that you lose track of the physics. You know what? I'll not care. I'll give you little conversion engines that will do any unit conversion that you need to do. Just let me show you what it means and then we'll be pretty low-key about this. Having said that, we should review how this works—what will be behind any tool that does unit conversion. Let's get our bearings. What's the height of an average male. Mr Google tells me that's about 5'10". How many inches tall is our average male? Here's the pre-QS&BB thought-process you'd use to calculate this. Three steps: A single foot is $12$ inches. So, $5$ feet is $5 \times 12$ = $60$ inches and the combination is $60 + 10 = 70$ inches. …which you could almost do in your head, which, by the way, averages in circumference at about 22 inches. You're welcome. But this simple, almost intuitive calculation uses a more general conversion from one unit to another through a tricky multiplication by the number 1. Can you multiply by 1? Then you can convert units like a champ. Let's work out an example. Something you can use at a party. I first worked this out for a class when I was in Geneva, Switzerland working at CERN. It was July 4, 2010, which was just another Sunday over there. The United States came into existence on July 4, 17762 which was $2010-1776 = 234$ years ago. How many seconds had the United States been around if we start from midnight on July 4, 1776? We need a handful of 1's here: Now it's a matter of just multiplying by the right combinations of "1" as many times as necessary to get where you want to be. There are a few of things to notice here. First, that's a lot of seconds! Second (get it?), if we'd known that there are $3.154 \times 10^7$ seconds in a year we could have started with that and had just one factor of 1. And finally…"3.154"? Does that sound familiar to anyone? Quite by accident, the number of seconds in a year is close to the first few digits of $\pi = 3.14159…$ times $10^7$ and so we often say that the number of seconds in a year is about "$\pi \times 10^7$." Just as a memory device. You're welcome. Some quantities in nature have a magnitude (like a temperature) and a magnitude and a direction, like a velocity. 60 mph north does not result in the same trip as 60 mph east, so direction and speed both matter. I think that the most intuitive vectors are those associated with a distance in space and a force, so in this survey I'll concentrate on those two. We'll meet many other vectors throughout QS&BB, but I'll highlight their vector natures when we come to them. As I'm writing this, the World Series in 2018 is about to start, so let's think about a baseball diamond. In baseball, the distance between the bases is 90 feet and according to the rules, a runner must follow the bases in order. So to go from home plate to second base, the path that's followed must be according to the two arrows in this figure. This situation is shown in the (a) below. Here, Frank has hit a double and being a good sport has indeed taken the appropriate path to second base. Figure (a) shows the appropriate path to second base. (b) Shows an illegal path to second base. Earlier Earl was at bat and hit a ground ball to the shortstop but he fell down halfway to first base. How does his displacement compare to Ossie's in vector notation? Before his embarrassment, Earl was running in the same direction that Ossie ran—and thousands of ball-players before them. However his vector would be: an arrow pointing from home plate halfway to first base. Interactive: 3.5; 2018.10.23; 16:51 Dewey was also unlucky and after Chester was called out he tried to steal third base from second base, but fell down 1/3 of the way. What's the vector that represents his short trip? In QS&BB we'll make a lot of use of the handy arrow symbol: $\rightarrow$. The length of the arrow represents the magnitude and of course the orientation and the head of the arrow represent the direction. Arrows can be $\longrightarrow$, or short $\rightarrow$, pointed in different directions, $\nwarrow$, $\leftarrow$, $\nearrow$, etc. Very handy. The magnitude can mean many things, depending on the physical quantity being represented. Some of the vectors that we'll meet are displacement, velocity, momentum, force, electric field, magnetic field, and angular momentum. A few useful things from this figure: Random vectors, all of the same length. Two vectors, $\vec{A}$ and $\vec{B}$ are said to be equal if they are both the same length and point in the same direction so Vector $\vec{C}$ is the same length as both $\vec{A}$ and $\vec{B}$ but because its direction is different. Finally, the negative of a vector is that same vector pointing in the opposite direction. So for example, Vector addition Generally, we'll treat a vector quantity as an arrow pointing in some direction and a length that represents its magnitude. Sometimes a vector can represent an actual path in space (like meters, feet, and so on) where it's easy to imagine what it means. We do this all the time on maps with a scale showing that some map-distance (an inch) can stand for a real-world distance ("1 inch = 1 mile"). But, sometimes a vector doesn't represent a length in space, but some other physical quantity, like a force or a velocity. This can be complicated since you're drawing an arrow that has a "regular" length, but you mean it to be something else, like a force. But, it still works geometrically (the arrow still points in space) and we just use a different scale. Let's do something simple. One dimension Two dimensions, the head to tail way Here's a different combination of vectors, which looks more like Chester's baseball career's embarrassing final act. Here, we have two vectors that have the same lengths on your screen, but now their lengths represent displacement (both a distance and a direction). They look like the force vectors and their lengths on your screen are the same, but the units are different: $\vec{A}=3$ blocks and $\vec{B}=2$ blocks. In the figure, how much shorter is cutting across the park as compared to traveling on the sidewalk? Obviously the sidewalk journey is $3 + 2 = 5$ blocks. What's the length of $\vec{D}$? We can do this two ways. One way is to look at the triangle in (b) and remember Pythagoras' Theorem. Or we can use the scale, as in (c)…construct $\vec{D}$ with the head-tail rule and then just transplant it to the scale and see that its length is indeed a little more than 3 and a half. Or, somehow move the scale like a ruler and measure the length of $\vec{D}$. Either way, it's shorter to cut across the park by almost a block and a half. But you sort of knew that. Two dimensions, the parallelogram way Here's another situation. Decompose a vector An inverse of the process of adding two vectors is called resolving or decomposing a vector into its components. This figure shows the steps and is literally the parallelogram addition-method done backwards! The successive steps involved in "resolving" a vector into its perpendicular components. Let's decompose vector $\vec{D}$ into components along the horizontal and vertical directions (it could be any two directions and they need not be perpendicular). The way to construct this is to add a placeholder parallelogram — usually a rectangle — with the original vector across a diagonal. Then the sides become the two decompositions: the vector components. In both of these cases we're doing: Here going from left to right (decomposition of $\vec{D}$) and just before, going from right to left (addition of $\vec{A} \text{ and } \vec{B}$). Vector subtraction That is everything we'll need for any vectors that come along in QS&BB! But there's a more nuanced way of looking at approximations which is due to Isaac Newton. He found a way to represent a function in pieces, for cases in which the power of such a function could be anything: a positive integer, a negative integer, or even a fraction. The pieces add together to perfectly recreate the original function. The bad news is that to do so perfectly requires an infinite number of them! The good news is that one can get very close to the original function with only a few of the pieces. In contrast to how that sounds, it's actually very useful for many physics applications as we'll see. Here's his expansion for our function: That last bit of $…$ means that the expansion continues in that pattern for an infinite number of terms. But notice that each term is a separate function in and of itself. That is, $f(x)$ can be written as the sum and difference of many functions, $1, x, x^2,…$. Add them all up and you'll get the original function in all of its glory. Add only the first few terms and you'll get close to the original function. Let's do that for the first four terms and compare it to the original, full-fledged function. The original function is in solid red and each successive curve adds the next term in the series. So, the blue dotted line is $f(x)=1$, the first term in the series; small dashed orange is adding $-x$, so $f(x)=1-x$; medium dashed green is $f(x)=1-x+x^2$; and finally, long dashed purple is $f(x)=1-x+x^2-x^3$. Let's zoom into the region in the box. The curves all mean the same thing, except that the $x$ range is now from $0–0.6$ and the $f(x)$ range is from $0–1.5$. Wait. I've had more fun than this… Glad you asked. Here's the punchline. You'll thank me when we get to relativity. Or not. Suppose that all you cared about was $x$'s that are less than about 0.1 and you need to evaluate the curve quickly, or gain some physics insight for that small of an $x$ region. Then you could get away with approximating Look at how close the solid red curve is to the short dashed orange curve. Good enough. Suppose you cared about $x$'s that are less than 0.3…then $1-x$ would not be accurate enough, but the long dashed purple curve would be since it's indistinguishable from the solid red curve up to that point. Look how each curve successively makes the approximation better and better as $x$ increases. So if you can be confident that your $x$'s are going to be less than say 0.1, then you can approximate the original function with maybe the first two terms: since the blue and orange curves when added together are neatly underneath the red curve. The more terms we might add the further out in $x$ that agreement would continue. Add an infinite number of terms—not advisable—and you'd perfectly reproduce the original function. Remember this! It will become important later when we'll encounter some physics functions and approximate them with a few terms of the expansions that we'll encounter. Here are the functions that we'll see in the lessons ahead: Thanks, Isaac. Ellipses and hyperbolas will come up, but descriptively. I'll just want you to have a feel for their shapes and some of the terms that are defined by them. Just file away this location and we'll come back only a few times. Equation of an ellipse An ellipse is a squashed circle (?) that has two axes, the major axis ($a$) and the minor axis ($b$). The points at which the curve goes through the axes at the major axis points are called the vertices of the ellipse. The equation of an ellipse centered on the origin is The figure of an ellipse centered on the origin is here: An ellipse with equation $\frac{x^2}{36}+\frac{y^2}{16}=1.$ The focus ($c$ in the diagram) of an ellipse is shown and has the relationship to the curve in that any line connecting one focus to the curve and then to the other focus is always constant. The relationship to the major and minor axes is $c^2 = a^2-b^2.$ So, if $a=b$ then the ellipse is actually a circle and the position of the focus is at the center of the circle, here the origin. The degree to which an ellipse is almost-circle-like (more symmetric) and almost-flattened-like is determined by its "eccentricity," $e$. It's defined as $e = \frac{c}{a}$. So an eccentricity of zero is a circle and the larger the eccentricity, the more the focus point is close to the vertex…and the flatter it is. Equation of a hyperbola I'll want to refer to a hyperbola once in QS&BB and it will have a particular shape. This particular form of hyperbola is open to the right and left and crosses the $x$ axis at $\pm a$—the "semi-major axis"—and has a semi-minor axis of $b$ (see the figure). The equation is The points $(x_0, y_0)$ are where the center of the hyperbola is and in the figure, that's the origin. There are a variety of definitions which you can see on the diagram. The equation of this hyperbola is $\frac{x^2}{36} - \frac{y^2}{16} = 1.$ This has been a whirlwind pass through lots of mathematics from you past. I'd like you to remember that functions are nothing more than little machines for taking a variable and turning it into a value. The world seems to be astonishingly well described by models made up of functions! Some of them are easy and some of them are complex. Only some very simple manipulations will be required. See the Fairness Doctrine of Algebra! I'll ask you to "read" functions sometimes, but I promise: only when they are simple and only when there's physics insight to be gained from that. Otherwise, we'll be content to read graphs to "evaluate" functions because, well, it's the same thing. That will be an important tool since we're talking about the largest things in the universe and also the smallest things in the universe. There are some simple equations, areas, and circumferences of geometrical objects that I'll need you to remember: line, circle, triangle. The rest of the items in Lesson 3 are there for you to refer to when we touch on some science stories that need them. This stands for meter-kilogram-second, as the basic units of length, mass, and time. It's a dated designation as the real internationally regulated system is now the International System of Units (SI) which stands for Le Système International d'Unités. The French have always been good at this. ↩︎ Actually, the Declaration of Independence wasn't fully signed until August 2, 1776—my birthday! The day, not the year. ↩︎
CommonCrawl
Leaky Buckets: Volume in a system of 2+ buckets that can be empty What is the volume of water in each bucket in a system of leaky buckets with the following conditions: There are $N$ buckets with $W_i(t)$ volume of water. A bucket has infinite capacity, but a finite amount of water bounded at zero: $\infty \geq W_i(t) \geq 0$. There is a "source" bucket which flows into other buckets, but other buckets do not flow into it. Every non-source bucket flows into every other non-source bucket. With enough volume, each bucket has a constant outflow rate, $v_i$ (units of $W/t$), of water flowing out and being equally distributed into the other buckets. You can imagine that each bucket has a pump attached to it trying to pump at a fixed rate through $N-1$ or $N-2$ pipes for the source and non-source buckets, respectively. The attached figure illustrates a system with $N=3$ buckets. What are the formulae for $W_i(t)$? This is a relatively simple problem apart from the constraint that $W_i(t) \geq 0$, since this puts a continuity-breaking constraint on the flow rate, and I'm very rusty. Austin Milt M F H You say "a finite amount of water", but then " ∞ ≥ W_i " where "equal" is possible - can we assume that that was a typo and you meant " > " rather than " ≥ " here? Also, can you confirm that the source bucket (which isn't refilled) is empty after time W_0(0) / v_0, or does it remain (magically) always full? Actually, is v_i = 0 allowed or can we assume v_i > 0 for all i (including 0)? It's fine to assume the capacity is finite (i.e. W_i can never be infinity), but I dont want the total filled volume of water (W_i) to have an upper limit in my project. You can assume the source bucket remains empty after emptying and has no inflow rate, though in practice I will "fill" it back up iteratively. v_i = 0 is allowed and should be the case when W_i = 0 wait - of course there is no outflow when W_i = 0 but otherwise the v_i are constants, yes? They don't change over time. Here's my answer (already posted yesterday evening...) I hope you're not deceived, I don't think there's a much more elegant way to compute the solution other than step by step determine which bucket will become empty next until no more will become empty. Please tell me if you want something to be better clarified. (I think the google sheet also gives a good idea of the process, you should have a look at that and make a copy to modify at wish.) Let us first state some more or less obvious facts. 1°) A bucket loses volume if and only if the inflow (from all other buckets) is strictly less than its (fixed, maximum) outflow. In particular this is the case if a bucket that was nonempty for $0 < t < t^*$ becomes empty at some time $t^* > 0$. 2°) An initially empty bucket can become nonempty right at the start, if the inflow from the other buckets is larger than its maximum outflow. 3°) No bucket that previously became or remained empty at some time t* > 0, can ever become nonempty at some time $t > t^*$. Indeed, assume that this happens for the first time at some $t' > t^*$, for some bucket(s) number $ i_1, ?, i_k $. It can only happen if the inflow of bucket(s) $i_1, ?, i_k$ had increased beyond their (fixed maximum) outflow rate, but that isn't possible unless some other bucket(s) became nonempty just before, in contradiction with the definition of $t'$ and $i_1, ?, i_k$. (The increased inflow cannot come from the buckets $i_1, ?, i_k$ themselves, since if this were possible, it should have happened already earlier, as by assumption at any moment the maximum possible outflow is pumped out of each bucket.) 4°) We conclude that for each bucket number $i$ there is a time $t_i \in [0,\infty]$ such that $W_i(t) > 0 \iff 0 < t < t_i$, and $W_i(t) = 0 \iff t \ge t_i$. (Then $t_i = 0$ means that the bucket is always empty, and $t_i = \infty$ means the bucket is nonempty for all $t>0$.) 5°) In general the system reaches an equilibrium state with $m$ empty buckets and $N-m \ge 1$ buckets that remain nonempty forever. These are in general the buckets with the smallest (maximum) outflow rate; those with a larger outflow rate (and also the source bucket) will eventually become empty. 6°) For later it will be more convenient to reorder the indices according to when the buckets become empty: $ \{ t_0, t_1, ?, t_{N-1} \} = \{ t_{i_1}, t_{i_2}, ?t_{i_{N}} \}$ with $0 =: t_{i_0} \le t_{i_1} \le ?\le t_{i_m} < t_{i_k} = \infty$ for all $k > m$, where $m$ is the number of buckets that become empty at some moment. (Unless all buckets are empty at the beginning and forever, there will always remain at least one bucket nonempty forever, since no water is ever lost but only pumped from one bucket into another one: see also 14° at the end.) To simplify notations, let's also define the corresponding time intervals $ I_k = [t_{i_{k-1}} , t_{i_{k}}]$. (We'll see below how these $t_i$ are actually computed!) 7°) It will be handy to define the functions which give the flow from a given bucket $i$ to each other (non-source) bucket: $ u_0(t) := [~v_0 / (N-2) ~$ if $~ W_0(t)>0 ~$ else $~ 0 ~] $, and $\forall i > 0 : u_i(t) := [~ v_i~$ if $~ W_i(t) > 0 ~$ else $~ \min\{ v_i, \sum_{j \ne i} u_j(t) \} ~] / (N-1) $ These definitions are crucial and the basis for all subsequent computations. They should be clear: when a bucket is nonempty, its outflow is given by $~v_i~$, to be distributed over all other receiving buckets. When it's empty, its outflow is not zero but equal to the minimum of $~v_i~$ and the sum of the inflow coming from all other buckets. Actually, that minimum should always be that sum of the inflows, meaning that all that comes in will instantly be pumped out, so that the bucket remains empty. If the bucket was initially empty, it is possible that the inflow is larger than the maximum outflow (see 2°), but then the bucket will immediately become nonempty. 8°) Then the "volumes" satisfy the following differential equations: * $ W'_0(t) = - v_0 ~$ while $~ W_0 > 0 $, i.e., for $ 0 \le t < w_0 / v_0 $, and 0 otherwise. * $\forall i > 0:~ W'_i(t) = [ \sum_{j \ne i } u_j(t)~ ] - v_i ~$ if $~ W_i(t) > 0 ~$ (and otherwise, $\forall t>0:W_i(t)=0 ~\Rightarrow~W'_i(t)=0$). (But as we'll see below, these derivatives are not well defined in a few points $t_{k}$ in which one may have $~W'_i(t_k^-) \ne W'_i(t_k^+)$, where $f(t^\pm) := \lim_{h\to 0^+} f(t\pm h)$. So in general it will either be clear inside which interval $I_k$ is $t$, or we'll make explicit whether $t$ is to the right or to the left of a given $t_k$.) Obviously, the first equation is easy to integrate over all time: $ W_0(t) = \max( w_0 - v_0 \, t , 0 ) = [~w_0 - v_0 \, t ~$ if $~ 0 \le t < w_0/v_0 ~$ else $~0~]$. For the others, it is more complicated, because their inflow depends on the other sources and in particular, whether they are empty or not at a given moment. 9°) Within the interior of the time intervals $I_k = [t_{i_{k-1}} , t_{i_{k}}] $, where no bucket becomes empty, all flows $u_i(t)$ and $W'_i(t)$ are constant (so the argument $t$ is only needed to know in which interval $I_k$ the constant is taken), and a bucket that has $W_i(t^*) > 0$ at some time $ t^* \in I_{k}^\circ $ will satisfy the equation $W_i(t) = W_i(t^*) + W'_i(t^*) (t-t^*)$ (which is a straight line of constant (usually negative) slope $W'_i(t)$ given in 8°) for all $t\in I_k$, up to the moment $t_{i_{k}}$ where some bucket becomes empty. 10°) At a given moment $t$, if nothing changed, such a bucket would become empty after a time $\Delta_i(t) := -W_i(t) / W'_i(t)$, if this is a finite positive number. Otherwise (for buckets that are already empty or have inflow $\ge$ outflow), we let $\Delta_i(t) = \infty$. Then the next change will occur after a lapse of time $\Delta(t) := \min\{\Delta_i(t) ; i \ge 0 \}$. Usually, we would compute $\Delta_i^{(k)} = \Delta_i(t_{i_{k-1}}^+) $ at the beginning = to the left of the interval $ I_k $, and their minimum $\Delta^{(k)} = \min \{ \Delta_i^{(k)} ; 0 \le \Delta_i^{(k)}<\infty \} $: this is exactly the length of the interval $I_k$, and it determines $t_{i_{k}} = t_{i_{k-1}} + \Delta^{(k)}$. (An interval of length 0 means that bucket $i_k$ went empty at the same time than the preceding bucket.) 11°) Then, the next change will occur at $ t_{i_{k}} = t_{i_{k-1}} + \Delta^{(k)} $, unless $k = m \iff \Delta(t) = \infty \iff$ the system has reached its equilibrium state, and nothing will change any more: $~\forall i \ge 0,~ \forall t' > t_{i_m}:~ W_i(t') = W_i(t)$. 12°) If $~k < m~$, then at time $~ t + \Delta(t)~$, the buckets with $~\Delta_i(t) = \Delta(t)~$ will become empty, and their outflow will drop from $~v_i~$ to what they receive (for $i>0$) from the others, $~\sum_{j\ne i}u_i(t)~$ (which is necessarily less than $ v_i $, since otherwise $~\Delta_i = \infty~$). 13°) So we finally have the following solutions: $W_{i_1}(t) = W_{i_1}(0) + W'_{i_1}(0^+)\,t~$ for $~0 \le t \le t_{i_1} = \Delta^{(1)} = \Delta_{i_1}(0^+)~$ (this defines $~i_1$: the index of the smallest positive $\Delta_{i}(0^+)$ as defined in 10°), and $~\forall t>t_{i_1}:~W_{i_1}(t) = 0~$. Next we have: $W_{i_2}(t) = W_{i_2}(0) + W'_{i_2}(0^+)\,t~$ for $~0 \le t \le t_{i_1}~$, and $W_{i_2}(t) = W_{i_2}(t_{i_1}) + W'_{i_2}(t_{i_1}^+)\,(t-t_{i_1})~$ for $~ t_{i_1} \le t \le t_{i_2} := t_{i_1}+\Delta^{(2)}~$, and $~\forall t>t_{i_2}:~W_{i_2}(t) = 0$. And so on. The last one that gets empty at some point is $~W_{i_m}(t) = W_{i_m}(0) + W'_{i_m}(0^+)\,t~$ for $~0 \le t \le t_{i_1}~$, $~W_{i_m}(t) = W_{i_m}(t_{i_1}) + W'_{i_m}(t_{i_1}^+)\,(t-t_{i_1})~$ for $~ t_{i_1} \le t \le t_{i_2}$, $~W_{i_m}(t) = W_{i_m}(t_{i_{m-1}}) + W'_{i_m}(t_{i_{m-1}}^+)\,(t-t_{i_{m-1}})~$ for $~ t_{i_{m-1}} \le t \le t_{i_m} := t_{i_{m-1}}+\Delta^{(m)}~$, and $~\forall t \ge t_{i_m}:~W_{i_m}(t) = 0$. Unfortunately, there is no simpler formula than that for these piecewise defined functions. I have made a google sheet that allows make numerical simulations and see how the successive $\Delta^{(k)}$ and $t_{i_k}$ are computed: https://docs.google.com/spreadsheets/d/1giKGtubTUGkyISBwmf0vK8vpPyAyLI845gUavyaCVIY/edit#gid=0 14°) Concerning the final equilibrium state, for $t \ge T := t_{i_m}$, we have $\forall k>m:~\forall t>T: W_{i_k}(t) = W_{i_k}(T) ~$, and the sum of these constant nonzero volumes is the same as the sum of all initial volumes, $~\sum_{k>m} W_{i_k}(T) = \sum_{i\ge0}W_i(0)~$, and the inflow and outflow of all of the (non-source) buckets, empty or not, is equal to the smallest such flow rate, $ v^* = \min\{ v_i \mid i>0\} $ (assuming that all of them are positive). This is excellent! I mostly understand. Is this a correct summary?: the volume of water in a bucket at a given time can only be determined by iteratively ordering buckets by the time at which their rates of change would change, and calculating their rate of change at each time based on the one of 5 states they could be in - empty with "net" outflow, non-empty with net outflow < max outflow, non-empty with max outflow, filling, static. Right, but not all of these are disjoint. "Filling" exist only at the beginning (if a bucket isn't "filling" at t=0 it will never be) and only for finite time. Similarly, "non-empty with non-max outflow" is only temporary. In the end, all buckets are "static", and either in "empty with net outflow" (all those that have v_i > v* = min{v_i}), or in "non-empty with max outflow" (all those that have v_i = v*). Solve $Lx = b$ for $x$ when $b = (1, 1, 2)^T$. Lyapuniv-functions Find a formula for the vector hyperbolic problem Ordinary differential equation questions How does the traffic flow model arrive at the scaled equation? Linearization of nonlinear differential equations near an equilibrium position Differential equations, question 3
CommonCrawl
Roche Limit: The Radius of Disintegration "Nature does not hurry, yet everything is accomplished" - Lao Tzu I think most guys can recall memorable father-son activities they did growing up - for me it was primarily golf, but another one I distinctly remember is watching old-school Chinese kung fu movies with my dad. Most of them were exactly as you'd expect: awful, complete with ketchup blood, goofy one-liners, dreadful acting. Of course it was fun nonetheless, and some of them were halfway decent! (check out Snake in the Eagle's Shadow and Drunken Master: two classics by a young Jackie Chan) But to this day, I still don't really like watching Chinese-made action movies: the modern ones are chock-full of heavy-handed CGI, not-so-subtle propaganda, and more bad acting (Wolf Warriors is a perfect example of a film that was crazy popular in China, but that I found abysmal). So when a good friend of mine recommended I watch The Wandering Planet, I was pretty skeptical and only agreed after I learned the movie is based off a short story by Cixin Liu, whose Three Body Problem trilogy is one of the best sci-fi stories I've ever read. The movie was pretty mediocre, but it did introduce me to a neat concept in orbital mechanics: the Roche limit! 哎呀 - AIYAAAAAAAAAA!!!!!! Ladies and gentlemen, I present to you the greatest death scene in the history of cinema The basic premise of The Wandering Planet is that the sun is dying, so humanity builds enormous planet-moving thrusters to eject Earth from the solar system and find a new host star. But a malfunction en route causes Earth to be captured by Jupiter's gravity, leading to a desperate effort to restart the engines before Earth falls within Jupiter's Roche limit and gets ripped apart by tidal forces**. The Roche limit is a real phenomenon; calculated in 1848 by French mathematician Edouard Roche, it's the distance from a central body within which a satellite would disintegrate as tidal forces overcome the satellite's gravitational self-attraction. Said differently, matter inside the Roche limit cannot coalesce into moons, but will instead break apart and possibly form rings. Now, the Roche limit is different for every central body-satellite combination. The question is, what factors does it depend on? Tidal force**: the gravitational effect that stretches a satellite in the direction of the central body, because the parts of the satellite closer to the central body are more strongly attracted than the parts that are farther away. If this disparity is greater than the gravitational self-attraction that holds the satellite together, it can rip the satellite apart Diagram of tidal forces, showing the satellite being stretched in the direction of the central body Visualizing the Roche limit The answer is that it mostly comes down to two factors: the ratio between the densities of the central body and the satellite, and the rigidity of the satellite. For the densities, although determining these values is challenging in practice (we've got pretty good values for the densities of the Earth and the moon, but what about some random unexplored body like Neptune's moon Thalassa?), mathematically it's a simple input. It's the rigidity of the satellite that makes determining the Roche limit extremely difficult. Consider the simplest possible scenario: a perfectly rigid spherical satellite (meaning it will maintain its shape exactly, right up until the moment it breaks apart). In this case, the formula for calculating the Roche limit is given as: $d = R_M (2 \frac{\rho_M}{\rho_m})^\frac{1}{3}$ Whereby $d$: Roche limit $R_M$: radius of the central body $\rho_M$: density of the central body $\rho_m$: density of the satellite Now consider the opposite extreme: a perfectly fluid spherical satellite (imagine an enormous spherical water droplet orbiting a planet). In this case, the formula for calculating the Roche limit is given as: $d \approx 2.44 R_M (\frac{\rho_M}{\rho_m})^\frac{1}{3}$ You'll notice the approximate sign for the fluid formula: even in this already heavily simplified example, the Roche limit can't be determined by an exact algebraic formula (see the Wikipedia page for an in-depth derivation of the Roche limit formulas) Great animation from BBC's The Planets showing an icy moon disintegrating as it passes the Roche limit So if these two formulas can only describe our highly-contrived scenarios (perfectly rigid / perfectly fluid) and ignore other factors like the satellite's rotation or tensile strength, what good are they? Well, since actual satellites would fall somewhere in between being perfectly rigid and perfectly fluid, we can use them to estimate what the upper and lower bounds of the Roche limit would be! Taking the familiar example of the Earth and the moon, knowing that Earth's radius is 6,378km, Earth's mean density is 5,513kg/m^3, and the moon's mean density is 3,346kg/m^3, we can determine that the Earth-moon Roche limit is 9,492km based on the rigid formula and 18,381km based on the fluid formula, and that the actual Roche limit likely lies somewhere in between. Given that the moon orbits at about 384,399km (21x further than the more conservative fluid Roche limit formula), we can feel pretty confident that the moon isn't going to disintegrate anytime soon! Other selected examples of Roche limits for various central bodies-satellites To see more detailed and accurate Roche limit computations, definitely check out the Wikipedia! Labels: Orbital Mechanics
CommonCrawl
Performances of clinical characteristics and radiological findings in identifying COVID-19 from suspected cases Xuanxuan Li1 na1, Yajing Zhao1 na1, Yiping Lu1, Yingyan Zheng1, Nan Mei1, Qiuyue Han1, Zhuoying Ruan1, Anling Xiao2, Xiaohui Qiu3, Dongdong Wang1 & Bo Yin1 To identify effective factors and establish a model to distinguish COVID-19 patients from suspected cases. The clinical characteristics, laboratory results and initial chest CT findings of suspected COVID-19 patients in 3 institutions were retrospectively reviewed. Univariate and multivariate logistic regression were performed to identify significant features. A nomogram was constructed, with calibration validated internally and externally. 239 patients from 2 institutions were enrolled in the primary cohort including 157 COVID-19 and 82 non-COVID-19 patients. 11 features were selected by LASSO selection, and 8 features were found significant using multivariate logistic regression analysis. We found that the COVID-19 group are more likely to have fever (OR 4.22), contact history (OR 284.73), lower WBC count (OR 0.63), left lower lobe involvement (OR 9.42), multifocal lesions (OR 8.98), pleural thickening (OR 5.59), peripheral distribution (OR 0.09), and less mediastinal lymphadenopathy (OR 0.037). The nomogram developed accordingly for clinical practice showed satisfactory internal and external validation. In conclusion, fever, contact history, decreased WBC count, left lower lobe involvement, pleural thickening, multifocal lesions, peripheral distribution, and absence of mediastinal lymphadenopathy are able to distinguish COVID-19 patients from other suspected patients. The corresponding nomogram is a useful tool in clinical practice. In December 2019, a few pneumonia cases of unknown etiology were reported in Wuhan, Hubei Province, China [1]. The disease, now named coronavirus disease 2019 (COVID-19) then spread at a striking speed worldwide. The causative organism was identified as a novel coronavirus named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) due to the phylogenetic similarity to SARS-CoV [2]. As of October 15th, 2022, there were a total of 238,940,176 cumulative cases and 4,882,066 cumulative deaths worldwide. COVID-19 was declared as a public health emergency of international concern (PHEIC) by the World Health Organization (WHO) as early as January 30th, 2020 [3, 4]. The confirmation of COVID-19 relies on the positive result of the nucleic acid amplification test (NAAT) of the upper respiratory tract specimens using the real-time reverse transcriptase–polymerase chain reaction (RT-PCR) tests [5]. However, the limitations of RT-PCR tests include: 1) The severity and progression of the disease cannot be quantitatively judged. 2) They have long turnaround times, especially in less developed regions. 3) They require certified laboratories, expensive equipments and trained technicians [6, 7]. On the contrary, chest CT scan is relatively easy to perform with fast diagnosis and the sensitivity reached as high as 97% for COVID-19 according a study of 1014 patients in Wuhan [8]. Chest CT abnormalities have also been identified in patients even prior to the development of symptoms or the detection of viral RNA [9, 10]. Thus it has a great value in early identification of COVID-19 [8, 11, 12]. Chest CT imaging is also a useful tool in monitoring COVID-19 progression and therapeutic effect in clinical settings [13]. The Diagnosis and Treatment Program of COVID-19 (trail version 8) [14, 15] formulated by the National Health Commission of China has summarized the typical CT manifestations of COVID-19 as follows and incorporated them in the diagnosis criteria: multiple small patchy shadows and interstitial changes are seen, mainly in periphery lungs. This may progress into bilateral multiple ground glass opacities (GGOs) and infiltrations. In severe cases, consolidation may occur, but pleural effusion is rare. In multiple system inflammatory syndrome (MIS-C), patients with cardiac insufficiency can show enlarged heart silhouette and pulmonary edema. Patients with above-said CT manifestations are suspected as COVID-19 infectors therefore need further examinations. Before the RT-PCR result is available, the patient needs isolation, but the quarantine of the patients may lead to a waste of medical resources and a possible delay of essential treatment. Hence, effective and convenient methods to better distinguish COVID-19 patients are needed. The aim of our study is to identify the useful clinical, laboratory and radiographic features that are able to distinguish COVID-19 patients from other suspected cases and generate a nomogram as a useful tool for clinical practice. The schematic workflow is depicted in Figure 1. Workflow of the whole study Patient cohort Data were de-identified to guarantee the patients' confidentiality. From January 21th to March 5th, 2020, patients admitted to a hospital in Anhui province, China and our institution in Shanghai, China who met the following requirements were enrolled as the primary cohort in our study: (1) Patients with chest CT manifestations suggested by the Diagnosis and Treatment Program of COVID-19 (trail version 8) [15] that had a suspicion of COVID-19. (2) Patients that took laboratory examination at admission. (3) Patients diagnosed of COVID-19 with positive RT-PCR for SARS-CoV-2, or patients excluded from the diagnosis of COVID-19 with 2 consecutively negative RT-PCR test results separated by at least 1 day. Exclusion criteria included: (1) Patients who were hospitalized before (n = 4). (2) Significant motion artefacts in CT images (n = 12). (3) Patients lacking essential data (n = 21). The epidemiological history, the symptoms, the laboratory test results and the imaging features of their first CT scan after onset were recorded. From February 6th to March 13th, 2020, an independent cohort of CT-suspected patients from another institution in Anhui Province was retrospectively studied, using the same inclusion and exclusion criteria. These patients formed the validation cohort. The laboratory tests were carried out in the outpatient department or in the wards on admission, mostly on the same day when CT scan was done. Collected laboratory indices included the white blood cell (WBC) count, lymphocyte count, lactate dehydrogenase (LDH), C reactive protein (CRP), procalcitonin (PCT), alanine aminotransferase (ALT), and aspartate aminotransferase (AST). CT protocol 105 patients from Huashan Hospital Affiliated to Fudan University were imaged with 1.5 mm-thickness with a 256-slice spiral CT scanner (Philips). 134 patients from Fuyang No.2 People`s Hospital were imaged with 1 mm-thickness with a 64-section CT scanner (Aquilion 64, Toshiba Medical Systems). 59 patients from Bozhou People`s Hospital in the validation cohort were imaged with 5 mm-thickness with a 64-section CT scanner (Siemens Somantom Sensation). CT manifestation analysis All imaging data were analyzed with consensus by two experienced radiologists (Y.L. and D.W., general radiologists with 12 and 7 years of experience in CT interpretation). 23 features from 18 aspects were collected as listed below: (a) The involved pulmonary lobes including five features: right upper, right middle, right lower, left upper, left lower lobes; (b) Distribution of lesions including two features: anterior and posterior part of lungs; (c) The location of lesions that is set as dummy variables: peripheral (the outer one-third of the lung), central (the inner two-thirds of the lung) or both; (d) The extent of the lesions that is set as dummy variables: unifocal (only one lesion can be observed), multifocal (multiple lesions separated from each other by uninvolved lung tissue) and diffuse (dispersed over a large area). (e) An extent score was semi-quantitatively calculated. Both lungs were divided into upper (above tracheal carina), lower (below inferior pulmonary vein) and middle (in between) zones, and involved percentage in each zone was scored: 0, 0%; 1, < 25%;2, 25%—49%; 3, 50%—74%; 4, > 75%, and they added up to the extent score (range 0–24). (f) The existence of opacification set as dummy variables included GGO, mixed (mainly GGO), mixed (mainly consolidation) and consolidation; (g) The shape of the lesions, including nodular (characterized by a rounded or irregular opacity, well or poorly defined, measuring up to 3 cm in diameter), linear (fine linear opacity), patchy (isolated focal lesions with no nodular/linear shape in the segment) and large patchy (large fused lesions involving multiple segments); (h) The halo sign; (i) The reversed halo sign; (j) Reticulated changes; (k) The existence of vascular enlargement; (l) The existence of air bronchogram; (m) Bronchiectasis; (n) Pleural thickening (> 3 mm); (o) Pleural traction; (p) Pleural effusion; (q) Mediastinal lymphadenopathy (the short axial diameter > 1 cm); (r) Liver spleen ratio (LS ratio) was calculated as CTliver/CTspleen to indicate the relative density. Five 1cm2 regions of interests (ROI) were drawn in the liver and spleen parenchymal to obtain the mean CT values of liver and spleen. The description of the radiological features of the lungs followed the definition compiled by the Fleischner Society [16]. The clinical [8], laboratory [7] and CT features [23] were analyzed altogether, but with the limited sample size, a total of 38 features would lead to overfitting in multivariate analysis. Thus, the least absolute shrinkage and selection operator (LASSO) method was adopted to select the most relevant features. This method is able to shrink the coefficients and diminish some to zero, thus can be used for feature reduction and selection. The R software and the "glmnet" package (version 3.6.0; R foundation for Statistical computing) were used. All statistical analyses were executed with R software. The Shapiro–Wilk test was used to evaluate the distribution type and Bartlett`s test was used to evaluate the homogeneity of variance. Normally distributed data were displayed as mean ± standard deviation. Non-normally distributed data and ordinal data were displayed as median (inter-quartile range). Categorical variables were summarized as counts and percentages. Both univariate and multivariate logistic regression were analyzed to demonstrate the correlation of the features with COVID-19 diagnosis. The regression coefficient (β) was calculated using the odds ratio (OR). The model was estimated as follows: $$\upbeta =\mathrm{log}\left(\mathrm{OR}\right)$$ $$\mathrm{logit P}={\upbeta }_{1}{\upchi }_{1}+{\upbeta }_{2}{\upchi }_{2}+\dots +{\upbeta }_{i}{\upchi }_{i}$$ A nomogram was established. The calibration ability was internally assessed with the bootstraping method and the Hosmer–Lemeshow test (HL test) was performed to test the goodness of fit. For the external validation of the nomogram, the prediction value of each case was calculated according to the nomogram and compared with the observed diagnosis. The accuracy was validated by correctly predicted case proportion and the HL goodness-of-fit test. A P-value of < 0.05 was defined as statistical significance. IRB approval This multi-center retrospective study was approved by the institutional review board (IRB) and the requirement of written informed consent was waived. The clinical information, laboratory tests, and chest CT imaging findings were compared between the primary cohort and validation cohort (Tabled 1 and 2). In the primary cohort, 239 patients (134 males and 105 females) were included in this study with an average age of 46.31 ± 15.90 years old. 28.87% of the patients had a direct contact with confirmed COVID-19 patients before the onset or had travelled/lived in the Hubei Province. 17.57% of the patients had indirect contact. Most common symptoms the patients presented were fever (70.29%), cough (44.35%), and chest distress (11.30%). Some patients had digestive symptoms such as diarrhea (2.09%) and anorexia (2.09%) (Table 1). The median interval between the onset and the date of CT scan was 8 (range 1–22) days. 157 patients were confirmed as COVID-19 by RT-PCR and were allocated to the COVID-19 group. They were put in quarantine and treated with the antiviral therapy based on the evolving recommendations [17]. The other 82 patients had negative RT-PCR results. They were eventually diagnosed as other conditions such as viral pneumonia (influenza type A virus, respiratory syncytial virus), bacterial infection (Staphylococcus aureus, Streptococcus pneumoniae), fungal infection (pneumocystis jiroveci pneumonia), mycoplasma pneumoniae pneumonia, and other respiratory conditions (acute eosinophilic pneumonia, Goodpasture syndrome etc.). Clinical information of two groups were compared using univariate analysis (Table 3). COVID-19 patients were found to be younger (P = 0.037), more likely to have fever (P = 0.001) or cough (P < 0.001), and more likely to have contact history (P < 0.001). Table 1 Clinical characteristics and laboratory tests of the primary cohort and validation cohort Table 2 Imaging manifestations on chest CT of the primary and validation cohort Table 3 Univariate logistic regression analysis of features for differentiating COVID-19 patients and non-COVID patients in Primary cohort Compared with the non-COVID-19 group, COVID-19 group showed lower WBC (P < 0.001) and lymphocyte count(P = 0.002), as well as lower levels of PCT(P = 0.002) (Table 3). Chest CT imaging findings Imaging characteristics were assessed and compared between two groups (Tables 3). Regarding the location and the distribution of the lesions, COVID-19 patients were found to be more located in posterior part of the lungs (P < 0.001) compared with non-COVID-19 patients. They had more involvement in every lobe of the lung (P < 0.05) due to more multifocal distribution (P < 0.001). Besides, they were more likely to have specific signs including reticular changes (P = 0.04), vascular enlargement (P < 0.001), air bronchogram (P = 0.043), and pleural thickening (P < 0.001). They were less likely to show pleural effusion (OR 0.16, P = 0.007) or mediastinal lymphadenopathy (P < 0.001). Other parameters were not significantly different. In LASSO model, the λ value of 0.0376 with log (λ) of -3.280 chosen (1-SE criteria), and a total of 38 features were reduced to 11 potential features with nonzero coefficients on the basis of 239 patients (21.7:1 ratio; Fig. 2). These features were further incorporated in the multivariate logistic analysis (Table 4). Eight features were found to be statistically significant. COVID-19 group tended to have more fever (OR 4.22; 95% CI [confidence interval], 1.09–18.63; P = 0.043), less probability of no contact history (meaning higher probability of indirect or direct contact history [OR 284.73; 95% CI, 38.17–4214.18; P < 0.001]), lower WBC count (OR 0.63; 95% CI, 0.47–0.77; P < 0.001), more involving left lower lobe (OR 9.42; 95% CI, 1.95–62.80; P = 0.010), more exhibiting multifocal lesions (OR 8.98; 95%CI, 1.58–61.36; P = 0.017), more pleural thickening (OR 5.59; 95%CI, 1.32–28.85; P = 0.026), less located in central part (OR 0.09; 95%CI, 0.01–0.75; P = 0.043), and less mediastinal lymphadenopathy (OR 0.037; 95% CI, 0.00–0.29; P = 0.004). Feature selection using the least absolute shrinkage and selection operator (LASSO) binary logistic regression model. A The parameter (λ) in the LASSO model used tenfold cross-validation based on minimum criteria. The mean squared error was plotted versus log(λ). Dotted vertical lines were drawn at the optimal values by using the minimum criteria and the 1 standard error of the minimum criteria (the 1-SE criteria). B The plot of LASSO coefficient profiles was produced against the log (λ) sequence. The dotted vertical line was drawn at the optimal values by using the minimum criteria and the 1 standard error of the minimum criteria (the 1-SE criteria), and the latter was chosen with the λ value of 0.0376 and log (λ) of − 3.280 according to the tenfold cross-validation that resulted in 11 nonzero coefficients Table 4 Multivariate logistic regression analysis of features for differentiating COVID-19 patients and non-COVID patients Nomogram A nomogram was constructed based on the multivariate Logistic analysis model. The adjusted C-index of the nomogram was 0.97 (Fig. 3A). The calibration curve was determined with bootstrap analysis to get bias-corrected estimation. It indicated great agreement between the prediction and the actual diagnosis in the probability (Fig. 3B). The HL goodness-of-fit test showed good calibration as well (P = 0.4797). The CT images of two cases illustrated the application of the nomogram (Fig. 4). The nomogram and calibration curves based on significant features in multivariate analysis. A A nomogram was built on the basis of eight significant features in multivariate Logistic model. If a patient is suspected to be COVID-19 by radiological diagnosis, the data needed includes whether he has fever, contact history, decreased WBC count, left lower lobe involvement, pleural thickening, multifocal lesions, peripheral distribution or absence of mediastinal lymphadenopathy. The point of each feature adds up to a total score with a corresponding probability of COVID-19. B The calibration curve was determined with bootstrap analysis to get bias-corrected estimation. It indicated great agreement between the prediction and the actual grouping in the probability Two representative cases to illustrate the application of the nomogram. A A 40-year-old male patient complained of fever for 4 days (score ≈ 80). He had travelled to Huangshi, a city in Wuhan Province, China a week before the onset (score ≈ 100). His laboratory tests indicated leukocytopenia (1.99*10^9/L, score ≈ 92). His chest CT showed patchy ground glass opacities with vascular enlargement and reticular changes on bilateral lower lobes (left lower lobe involvement: score ≈ 83; multifocal: score ≈85). Lesions were located both central and peripheral (score ≈ 80). No mediastinal lymphadenopathy was observed (score ≈ 80). Slight pleural thickening was observed (score ≈ 85). Total estimated score reached around 687, indicating > 99.8% probability to be a COVID-19 case. He was later confirmed by RT-PCR. B A 60-year-old female patient complained of fever for 3 days (score ≈ 80). She claimed no contact or exposure history (score ≈ 60). Her WBC count is slightly elevated (10.52*10^9/L, score ≈ 60). Her chest CT showed unifocal (score ≈ 68) large patchy ground glass opacities with consolidation only involving the right upper lobe (score ≈ 63), but with both central and peripheral distribution (score ≈ 80). Mediastinal lymphadenopathy was observed in mediastinal window (score ≈ 60). No pleural thickening (score ≈ 70). Total estimated score reached around 541, indicating < 0.2% probability to be a COVID-19 case. She was radiologically suspected as COVID-19, but the diagnosis of COVID-19 was ruled out by 2 consecutively negative RT-PCR test results. She was finally diagnosed with respiratory syncytial virus infection External validation The validation cohort included 59 cases with 43 COVID-19 and 16 non-COVID. The baseline data were collected in Tables 1 and 2. 56 out of 59 cases were correctly predicted using the nomogram, reaching an accuracy of 94.91%. Calibration was good (P = 0.9956 for the HL goodness-of-fit test). An ongoing outbreak of COVID-19 originated from Hubei Province in China has been spreading worldwide. Experts in infectious and respiratory diseases, critical care, and radiology from all over the world have been making a joint effort to contain the epidemic situation [18]. Presently, RT-PCR is the standard confirmative method in spite of a few flaws including long turnaround time for the results in underdeveloped regions and low sensitivity especially in the early phase of the disease [10]1920. On the contrary, chest CT scan is able to recognize the lesions at earlier stages with high sensitivity, thus is considered an important tool to guarantee an early diagnosis and isolation of infected patients [8]. Before the RT-PCR results are attainable, the quarantine is needed, but the isolation site is insufficient, and it possibly delays essential treatment. In this study, the CT manifestations summarized by the National Health Commission of China were used as the inclusion criteria. We investigated the differential values of clinical characteristics, laboratory results and CT features to better distinguish COVID-19 patients from those with suspicious CT findings, and developed a model with a nomogram as a practical tool. The most common symptom in the patients we enrolled is fever, followed by cough and chest distress. As a differential feature, fever is significant in both univariate and multivariate analysis. This echoes previous studies, and fever is the leading symptom listed in the case definition for surveillance of COVID-19 by the Chinese Health Commission [17, 21, 22]. Therefore, it is necessary to monitor body temperature and at-home temperature measurement is a useful and easy way for the public to early notice. Additionally, we noticed a small portion of the patients with digestive disorders like diarrhea and anorexia, and it occurred more in the COVID-19 group. Increasing evidence shows the manifestation of COVID-19 is not always confined to respiratory symptoms, but may also involve other systems, e.g., the central nervous system [23, 24]. Liver function abnormalities have been reported in COVID-19 patients with a pooled prevalence of 19% (95% confidence interval, 9–32%) with an association with disease severity. Hepatocyte degeneration, focal necrosis, and fatty infiltration were reported in COVID-19 patients [25, 26]. LS ratio was observed in this study since the CT values were attainable in chest CT image, but was insignificant here. The contact history is another valuable factor for COVID-19, including direct contact with COVID-19 patients, direct exposure in districts with confirmed cases, and indirect contact with those who were exposed [27]. According to the National Health Commission of China, a patient with one exposure or contact history and two clinical conditions can be regarded as a suspected case [17]. However, with the swift spread of the disease, some contact history is unrevealed, making it harder to contain the epidemic [28]. More active precaution and isolation is needed. Among the laboratory parameters, WBC count is significantly lower in COVID-19 group in both univariate and multivariate analysis, and lymphocyte count is lower in univariate analysis. This is consistent with previous findings and the criteria by the Chinese Health Commission [1, 12, 17]. We also found lower levels of CRP and PCT in the COVID-19 group. They are useful indicators of infection or inflammation, and CRP was previously reported to increase in COVID-19 patients by some researchers [8, 29]. Our finding may result from higher extent of increased levels of these indices in non-COVID-19 patients since they had other inflammatory conditions including bacterial infection, while other studies used healthy controls. Typical radiographic features on chest CT in COVID-19 patients were reported to predominantly include bilateral and peripheral GGOs and consolidative pulmonary opacities. The location of the lesions varied among studies, yet the peripheral site is most frequently reported [8, 30,31,32]. These widely-accepted imaging characteristics constituted the most important inclusion criterium in this study, thus were seen in both groups. Less typical signs in previous studies included linear opacities, "crazy-paving" pattern and the reverse halo sign, etc. [8, 33,34,35,36,37]. We found that COVID-19 lesions are more commonly seen in both lower lobes, which echoes existing literature. We also found that the right lower lobe was more often involved in both COVID-19 and non- COVID-19 groups, which may be related to the shorter and thicker structure of the right lower lobe bronchus that may make it easier for the pathogens to enter this lobe [38]. There are also studies that found left lower lobe to be mostly involved [39, 40]. Distribution in all lobes showed significant difference between two groups, but left lower lobe involvement remained after two-step feature selection, making it a significant feature in differentiating COVID-19 patients from other conditions. Although it is unclear at this time why it is useful, further investigations of the common distribution and the corresponding mechanisms of the diseases in the non-COVID-19 group respectively will be helpful. Besides, compared with non-COVID-19 cases, COVID-19 is more likely to exhibit multifocal distribution rather than unifocal changes, and more likely to have reticulated changes, vascular enlargement, and pleural thickening. The pooled prevalences of pleural thickening in COVID-19 patients were 30.0–52.46% [39, 41, 42]. COVID-19 patients are also less likely to have pleural effusion and mediastinal lymphadenopathy, which is consistent with prior researches [30]. Fever, contact history, decreased WBC count, left lower lobe location, pleural thickening, multifocal lesions, peripheral distribution, and absence of mediastinal lymphadenopathy were found to be features independently associated to COVID-19 patients. On the basis of these parameters, a nomogram was built to better interpret our findings, which is popular in cancer research these years [35]. According to our nomogram, the point of each feature adds up to a total score with a corresponding probability of COVID-19. A nomogram can be validated by both internal and external validation [36]. In this study, internal validation used the data of the same cohort for the generation of the nomogram, and external validation used the data from another institution. Both internal and external validation indicated good agreement between the prediction and the actual diagnosis in the probability. Since the COVID-19 outbreak, the scientific researchers have focused more on clinical and radiological findings of COVID-19 infection, whereas a few studies have investigated the differential diagnoses. Three studies from Europe presented a vast spectrum of differential diagnoses with abundant figures and elaborate illustrations to help the radiologist with differentiation [43,44,45]. Another study evaluated the performances of radiologists from US and China in differentiating COVID-19 from other viral pneumonia [46]. Researchers from Japan compared COVID-19 and other diseases with similar symptoms, and proposed useful laboratory indicators [47]. The studies above investigated the differential diagnosis of COVID-19, but did not construct a practical model. One study built a diagnostic model, but with a small sample size, and only included non-COVID-19 pneumonia patients in the control group [48]. Our study has a different design from those of existing papers. In this study, the typical CT manifestations of COVID-19 were used as the inclusion criteria, thus a wider spectrum of diseases that needed to be differentiated from COVID-19 was included, which is a realistic problem that may be encountered in clinical practice. In summary, this study is the first to investigate the features to distinguish confirmed COVID-19 patients from other conditions with similar CT findings, which is an important clinical issue. The nomogram can be used as an instant tool able to provide practical reference for individualized management for every suspected patient and is likely to offer effective and scientific basis for empirical treatment. Our study had several limitations. Firstly, in this multi-center study, the normal range and results of the laboratory data might be different due to the differences in the kits, equipment, and environmental conditions. However, three institutions are all China's Grade-A Tertiary Hospitals, with laboratories of the highest qualifications, and similar protocols are adhered, thus the results are relatively stable. Secondly, the sample size is relatively small since no data was obtained from the epicenter of the outbreak, and the spread of COVID-19 was successfully suppressed in a few months in China as appropriate precautions were taken. Besides, despite being the standard confirmative test, RT-PCR has false-negative probabilities, therefore our results might be biased since non-COVID-19 group might include infected patients. Future prospective investigation of larger scale with international data and evolved diagnostic techniques is expected. In conclusion, fever, contact history, decreased WBC count, left lower lobe involvement, pleural thickening, multifocal lesions, peripheral distribution, and absence of mediastinal lymphadenopathy are able to distinguish COVID-19 patients from other suspected patients. The nomogram based on these features is a useful tool in the clinical practice. The datasets generated and/or analysed during the current study are not publicly available due to ethical restrictions but are available from the corresponding author on reasonable request. SARS-CoV-2: Severe Acute Respiratory Syndrome Coronavirus 2 PHEIC: Public Health Emergency of International Concern NAAT: Nucleic acid amplification test MIS-C: Multiple system inflammatory syndrome RT-PCR: Reverse transcriptase-polymerase chain reaction IRB: WBT: White blood count CPR: LDH: Lactate dehydrogenase PCT: Procalcitonin GGO: Ground-glass opacity RHS: Reversed halo sign LASSO: Least absolute shrinkage and selection operator LR ratio: Liver/spleen ratio 95% CI: Chan JFW, Yuan S, Kok KH, To KKW, Chu H, Yang J, et al. A familial cluster of pneumonia associated with the 2019 novel coronavirus indicating person-to-person transmission: a study of a family cluster. Lancet. 2020;395(10223):514–23. Munster VJ, Koopmans M, van Doremalen N, van Riel D, de Wit E. A Novel coronavirus emerging in China—Key questions for impact assessment. N Engl J Med. 2020;2001017. Guan W-J, Ni Z-Y, Hu Y, Liang W-H, Ou C-Q, He J-X, et al. Clinical characteristics of Coronavirus Disease 2019 in China. N Engl J Med. 2020;1–13 Sohrabi C, Alsafi Z, O'Neill N, Khan M, Kerwan A, Al-Jabir A, et al. World Health Organization declares global emergency: a review of the 2019 novel coronavirus (COVID-19). Int J Surg. 2020;76:71–6. Corman VM, Landt O, Kaiser M, Molenkamp R, Meijer A, Chu DKW, et al. Detection of 2019 novel coronavirus (2019-nCoV) by real-time RT-PCR. Eurosurveillance. 2020;25(3):1–8. Li Z, Yi Y, Luo X, Xiong N, Liu Y, Li S, et al. Development and clinical application of a rapid IgM-IgG combined antibody test for SARS-CoV-2 infection diagnosis. J Med Virol. 2020;92(9):1518–24. Dai WC, Zhang HW, Yu J, Xu HJ, Chen H, Luo SP, et al. CT imaging and differential diagnosis of COVID-19. Can Assoc Radiol J. 2020;71(2):195–200. https://doi.org/10.1177/0846537120913033. Ai T, Yang Z, Hou H, Zhan C, Chen C, Lv W, et al. Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020;296(2):E32-40. Shi H, Han X, Jiang N, Cao Y, Alwalid O, Gu J, et al. Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: a descriptive study. Lancet Infect Dis. 2020;20(4):425–34. https://doi.org/10.1016/S1473-3099(20)30086-4. Xie X, Zhong Z, Zhao W, Zheng C, Wang F, Liu J. Chest CT for Typical 2019-nCoV Pneumonia: relationship to Negative RT-PCR testing. Radiology. 2020;200343. Fang Y, Zhang H, Xie J, Lin M, Ying L, Pang P, et al. Sensitivity of chest CT for COVID-19: comparison to RT-PCR. Radiology. 2020;296(2):E115–7. Xiong Y, Sun D, Liu Y, Fan Y, Zhao L, Li X, et al. Clinical and high-resolution CT features of the COVID-19 infection: comparison of the initial and follow-up changes. Invest Radiol. 2020;55(6):332–9. Rai P, Kumar BK, Deekshit VK, Karunasagar I, Karunasagar I. Detection technologies and recent developments in the diagnosis of COVID-19 infection. Appl Microbiol Biotechnol. 2021;105(2):441–55. https://doi.org/10.1007/s00253-020-11061-5. Wang G-Q, Zhao L, Wang X, Jiao Y-M, Wang F-S. Diagnosis and treatment protocol for COVID-19 patients (tentative 8th edition): interpretation of updated key points. Infect Dis Immun. 2021;1(1):17–9. General Office of National Health Committee. Office of State Administration of Traditional Chinese Medicine. Diagnosis and Treatment Protocol for COVID-19 Patients (Trial Version 8) (2020–08–19) [Internet]. Available from: https://covid19.alliancebrh.com/covid19en/c100036/202008/12b9b42813a94755bbf442008fe86f63/files/b0ae9b6c1d9a47bf81d7dc1f5e7ddda5.pdf Hansell DM, Bankier AA, MacMahon H, McLoud TC, Müller NL, Remy J. Fleischner Society: glossary of terms for thoracic imaging. Radiology. 2008;246(3):697–722. General Office of National Health Committee. Office of State Administration of Traditional Chinese Medicine. Notice on the issuance of a program for the diagnosis and treatment of novel coronavirus (2019-nCoV) infected pneumonia (trial sixth edition)(2020 [Internet]. 2020. Available from: http://bgs.satcm.gov.cn/zhengcewenjian/2020-03-04/13594.html To KK, Sridhar S, Chiu KH, Hung DL, Li X, Hung IF, Tam AR, Chung TW, Chan JF, Zhang AJ, Cheng VC, Yuen KY. Lessons learned 1 year after SARS-CoV-2 emergence leading to COVID-19 pandemic. Emerg Microbes Infect. 2021;10(1):507–35. https://doi.org/10.1080/22221751.2021.1898291. Pontone G, Scafuri S, Mancini ME, et al. Role of computed tomography in COVID-19. J Cardiovasc Comput Tomogr. 2021;15(1):27–36. Sharma A, Ahmad Farouk I, Lal SK. COVID-19: A review on the novel coronavirus disease evolution, transmission, detection, control and prevention. Viruses. 2021;13(2):202. https://doi.org/10.3390/v13020202.PMID:33572857;PMCID:PMC7911532. Wang W, Tang J, Wei F. Updated understanding of the outbreak of 2019 novel coronavirus (2019-nCoV) in Wuhan, China. J Med Virol. 2020;92:441–7. Wu J, Liu J, Zhao X, Liu C, Wang W, Wang D, et al. Clinical characteristics of imported cases of COVID-19 in Jiangsu Province: a multicenter descriptive study. Clin Infect Dis. 2020;71:706. Li YC, Bai WZ, Hashikawa T. The neuroinvasive potential of SARS-CoV2 may play a role in the respiratory failure of COVID-19 patients. J Med Virol. 2020;92(6):552–5. Lu Y, Li X, Geng D, Mei N, Wu PY, Huang CC, et al. Cerebral micro-structural changes in COVID-19 patients – An MRI-based 3-month follow-up study: a brief title: cerebral changes in COVID-19. EClinicalMedicine. 2020;25(2):100484. John C. Smulian Sonja A. Rasmussen MD MS. Liver injury in COVID-19: management and challenges. Ann Oncol. 2020;19–21. Xu Z, Shi L, Wang Y, Zhang J, Huang L, Zhang C, et al. Pathological findings of COVID-19 associated with acute respiratory distress syndrome [published correction appears in Lancet Respir Med. 2020 Feb 25]. Lancet Respir Med. 2020;8(4):420–422. https://doi.org/10.1016/S2213-2600(20)30076-X Huang C, Wang Y, Li X, Ren L, Zhao J, Hu Y, et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan. China Lancet. 2020;395(10223):497–506. Zu ZY, Di Jiang M, Xu PP, Chen W, Ni QQ, Lu GM, et al. Coronavirus disease 2019 (COVID-19): a perspective from China. Radiology. 2020;296(2):E15-25. Li L, Huang T, Wang Y, Wang Z, Liang Y, Huang T, et al. COVID-19 patients' clinical characteristics, discharge rate, and fatality rate of meta-analysis. J Med Virol. 2020;92(6):577–83. Yang W, Cao Q, Qin L, Wang X, Cheng Z, Pan A, et al. Clinical characteristics and imaging manifestations of the 2019 novel coronavirus disease (COVID-19): a multi-center study in Wenzhou city, Zhejiang, China. J Infect. 2020;80:388. Chung M, Bernheim A, Mei X, Zhang N, Huang M, Zeng X, et al. CT Imaging features of 2019 novel coronavirus (2019-nCoV). Radiology. 2020;200230. Pan Y, Guan H, Zhou S, Wang Y, Li Q, Zhu T, et al. Initial CT findings and temporal changes in patients with the novel coronavirus pneumonia (2019-nCoV): a study of 63 patients in Wuhan, China. Eur Radiol. 2020;30(6):3306–9. Bernheim A. Chest CT findings in coronavirus disease-19: relationship to duration of infection. Radiology. 2020;19:200463. Shi H, Han X, Jiang N, Cao Y, Alwalid O, Gu J, et al. Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: a descriptive study. Lancet Infect Dis. 2020;3099(20):1–10. Pan F, Ye T, Sun P, Gui S, Liang B, Li L, et al. Time course of lung changes on chest CT During Recovery From 2019 Novel Coronavirus (COVID-19) Pneumonia. Radiology. 2020;200370. Xu X, Yu C, Qu J, Zhang L, Jiang S, Huang D, et al. Imaging and clinical features of patients with 2019 novel coronavirus SARS-CoV-2. Eur J Nucl Med Mol Imaging. 2020;613:2–7. Xu X, Yu C, Zhang L, Luo L, Liu J. Imaging features of 2019 novel coronavirus pneumonia. Eur J Nucl Med Mol Imaging. 2020;613:1–2. Zhang B, Wang X, Tian X, Zhao X, Liu B, Wu X, et al. Differences and prediction of imaging characteristics of COVID-19 and non-COVID-19 viral pneumonia: a multicenter study. Medicine (Baltimore). 2020;99(42):e22747. Adams HJA, Kwee TC, Yakar D, Hope MD, Kwee RM. Chest CT Imaging signature of coronavirus disease 2019 infection: in pursuit of the scientific evidence. Chest. 2020;158(5):1885–95. https://doi.org/10.1016/j.chest.2020.06.025 Guan CS, Wei LG, Xie RM, Lv ZB, Yan S, Zhang ZX, et al. CT findings of COVID-19 in follow-up: Comparison between progression and recovery. Diagnostic Interv Radiol. 2020;26(4):301–7. Karimian M, Azami M. Chest computed tomography scan findings of coronavirus disease 2019 (Covid-19) patients: a comprehensive systematic review and meta-analysis. Polish J Radiol. 2021;86(1):e31-49. Bao C, Liu X, Zhang H, Li Y, Liu J. Coronavirus disease 2019 (COVID-19) CT findings: a systematic review and meta-analysis. J Am Coll Radiol. 2020;17(6):701–9. https://doi.org/10.1016/j.jacr.2020.03.006. Guarnera A, Podda P, Santini E, Paolantonio P, Laghi A. Differential diagnoses of COVID-19 pneumonia: the current challenge for the radiologist—a pictorial essay. Insights Imaging. 2021. https://doi.org/10.1186/s13244-021-00967-x. Hani C, Trieu NH, Saab I, Dangeard S, Bennani S, Chassagnon G, et al. COVID-19 pneumonia: a review of typical CT findings and differential diagnosis. Diagn Interv Imaging. 2020;101(5):263–8. https://doi.org/10.1016/j.diii.2020.03.014. Bordi L, Nicastri E, Scorzolini L, Di Caro A, Capobianchi MR, Castilletti C, et al. Differential diagnosis of illness in patients under investigation for the novel coronavirus (SARS-CoV-2), Italy, February 2020. Eurosurveillance. 2020;25(8):2–5. https://doi.org/10.2807/1560-7917.ES.2020.25.8.2000170. Bai HX, Hsieh B, Xiong Z, Halsey K, Choi JW, Tran TML, et al. Performance of radiologists in differentiating COVID-19 from non-COVID-19 viral pneumonia at chest CT. Radiology. 2020;296(2):E46-54. Nakanishi H, Suzuki M, Maeda H, Nakamura Y, Ikegami Y, Takenaka Y, et al. Differential diagnosis of COVID-19: importance of measuring blood lymphocytes, serum electrolytes, and olfactory and taste functions. Tohoku J Exp Med. 2020;252(2):109–19. Chen X, Tang Y, Mo Y, Li S, Lin D, Yang Z, et al. A diagnostic model for coronavirus disease 2019 (COVID-19) based on radiological semantic and clinical features: a multi-center study. Eur Radiol. 2020;30(9):4893–902. This project was supported by Clinical Research Plan of SHDC (Grant No. SHDC2020CR4069), Medical Engineering Fund of Fudan University (Grant No. yg2021-029), Shanghai Sailing Program (Grant No. 21YF1404800), Youth Program of Special Project for Clinical Research of Shanghai Municipal Health Commission Health industry (Grant No. 20204Y0421), Youth Medical Talents –Medical Imaging Practitioner Program (No. 3030256001), Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01), ZJ Lab, and Shanghai Center for Brain-Inspired Technology. Li Xuanxuan and Zhao Yajing contributed equally to the paper. They should be regarded as the co-first authors. Department of Radiology, Huashan Hospital, Fudan University, 12, Middle Wulumuqi Rd., Jing'an District, Shanghai, 200040, China Xuanxuan Li, Yajing Zhao, Yiping Lu, Yingyan Zheng, Nan Mei, Qiuyue Han, Zhuoying Ruan, Dongdong Wang & Bo Yin Department of Radiology, Fu Yang No. 2 People's Hospital, 450 Linquan Road, Fuyang, Anhui Province, China Anling Xiao Department of Radiology, Bozhou People's Hospital, 616, Duzhong Road, Bozhou, Anhui Province, China Xiaohui Qiu Xuanxuan Li Yajing Zhao Yiping Lu Yingyan Zheng Nan Mei Qiuyue Han Zhuoying Ruan Dongdong Wang Bo Yin XL: drafted the work. YZ (Yajing Zhao): drafted the work. YL: substantively revised the work. YZ (Yingyan Zheng): analysis. NM: interpretation of data. QH: interpretation of data. ZR: analysis. AX: acquisition. XQ: acquisition. DW: design of the work. BY: conception. All authors have read and approved the manuscript. Correspondence to Dongdong Wang or Bo Yin. This multi-center retrospective study was approved by the institutional review board (IRB) of Huashan Hospital, Fudan University, Bozhou People`s Hospital and Fu Yang No. 2 People`s Hospital, and the requirement of written informed consent was waived by the IRBs. Ethics accordance This study was approved by the institutional review board (IRB), and the methods were carried out in accordance with the Declaration of Helsinki. Li, X., Zhao, Y., Lu, Y. et al. Performances of clinical characteristics and radiological findings in identifying COVID-19 from suspected cases. BMC Med Imaging 22, 55 (2022). https://doi.org/10.1186/s12880-022-00780-y X-ray computed tomography
CommonCrawl
A multi-objective test data generation approach for mutation testing of feature models Rui A. Matnei Filho1 & Silvia R. Vergilio1 Mutation approaches have been recently applied for feature testing of Software Product Lines (SPLs). The idea is to select products, associated to mutation operators that describe possible faults in the Feature Model (FM). In this way, the operators and mutation score can be used to evaluate and generate a test set, that is a set of SPL products to be tested. However, the generation of test sets to kill all the mutants with a reduced, possible minimum, number of products is a complex task. To help in this task, in a previous work, we introduced a multi-objective approach that includes a representation to the problem, search operators, and two objectives related to the number of test cases and dead mutants. The proposed approach was implemented and evaluated with three representative multi-objective and evolutionary algorithms: NSGA-II, SPEA2 and IBEA, and obtained promising results. Now in the present paper we extend such an approach to include a third objective: the pairwise coverage. The goal 4 is to reveal other kind of faults not revealed by mutation testing and to improve the efficacy of the generated test sets. Results of new studies are reported, showing that both criteria can be satisfied with a reduced number of products. The approach produces diverse good solutions and different sets of impacting factors can be considered. At the end, the tester can either prioritize one objective, by choosing solutions in the extreme points of the fronts or choose solutions with smaller ED values, according to the testing goals and resources. A Software Product Line (SPL) can be defined as a set of products that share common features (Pohl et al. 2005). A feature is related to the software functionalities or system attributes that are visible to the user. Features allow distinguishing products and are important to represent variability. In this sense, different products can be generated by selecting different features. The features are generally expressed in a Feature Model (FM), which allows a hierarchical arrangement of features represented by a tree. The adoption of the SPL approach in industries is crescent (SEI 2016), due to the associated advantages. With this increasing usage, the demand for specific SPL testing techniques has also been growing (da Mota Silveira Neto et al. 2011). An important activity in this context is the feature testing, which tests if the products derived from the FM match their requirements. To ensure correctness, ideally, all the products derived from the FM should be tested. However, this is impractical in terms of resources and execution time (Cohen et al. 2006). Then, to select only the most representative products, testing criteria are needed. A criterion provides a way to select and evaluate test data sets that, in the FM context, are sets of products to be tested. The main criteria used for feature testing of SPL are based on combinatorial testing (Cohen et al. 2006; Henard et al. 2014; Lamancha and Usaola 2010; Perrouin et al. 2010). Such kind of test derives SPL products to test combination of features. For example, the pairwise testing (Cohen et al. 2006) requires that each possible pair of features from the FM is included in at least one product derived for the test. Recently, fault-based criteria, such as that ones based on mutation testing, have been investigated for variability test using the FM (Ferreira et al. 2013; Henard et al. 2013a). As it happens in the traditional test of programs (Wong et al. 1995), studies show that this kind of criterion is more efficacious, in terms of revealed faults, but also more expensive, in terms of required test cases (Ferreira et al. 2013). In addition to this, a hard task, associated to the application of a test criterion, is the generation of test data to reach the desired coverage. This task has been successfully solved in the Search Based Software Engineering (SBSE) field (Harman et al. 2012). Search based algorithms, such as the genetic ones, are capable to search, in a huge space, the solution that solves the problem and satisfies some constraints. Recent surveys (Harman et al. 2014; Matnei Filho and Vergilio 2014) show that the use of search-based algorithms for SPL engineering has raised interest in the last two years, mainly for configuration of products, evolution and adaptation of the FM, and also selection of products for testing. Works on this last subject address minimization (Wang et al. 2013), prioritization (Wang et al. 2014) and generation (Ensan et al. 2012; Henard et al. 2013b) of products (test cases), taking into account different factors: number of test cases, pairwise coverage, number of revealed faults, and other ones related to costs. The mutation score is used only in the work of Henard et al. (2014), which implements the (1+1) Evolutionary Algorithm (EA) algorithm, and considers the operators defined in (Henard et al. 2013a). Most existing works have some limitations. The main one is that they deal with the problem by using an aggregation function and a single-objective algorithm, generally evolutionary one. However the problem is in fact multi-objective, impacted by many factors. Due to this, multi-objective algorithms are more suitable. Such algorithms are based on the Pareto dominance concept (Pareto 1927) and produce a set of good solutions, that represent the best trade-offs between the objectives. We can find in the literature works that propose multi-objective approaches (Harman et al. 2014). Among them we can mention the work of Lopez-Herrejon et al. (2013) that uses a multi-objective algorithm and two objectives: number of products and pairwise-coverage but, in general, multi-objective approaches do not address mutation testing. To overcome such limitation, in a previous work (Matnei Filho and Vergilio 2015), we introduced a multi-objective and mutation based approach to generate sets of products for the variability test of FMs. The approach reached good results considering two objectives: the size and mutation score of the derived sets. However, there are other factors that impact on the testing cost and efficacy, and need to be considered. For instance, the works found in the literature does not generate test sets to satisfy both mutation and pairwise testing. Such goal is very important because studies reported by Ferreira et al. (2013) show that both criteria should be used in a complementary way, since they can reveal different kind of faults. Motivated by this fact and to improve the efficacy of the generated test sets, this paper now extends the previous work by instantiating and evaluating the approach with a new objective: the pairwise coverage. The idea is to obtain a set of products to the feature testing with a minimal number of test cases, factor related to the cost, and high mutation score and pairwise coverage, factors related to the quality and efficacious to reveal faults. The work uses the mutation operators and tool introduced in (Ferreira et al. 2013). The approach encompasses two main characteristics: i) introduces a new representation for the population, where an individual (solution) is a set of products, differently of most existing works, where an individual represents a product; and ii) allows the use of different objectives. The approach is implemented with three different algorithms, traditionally used by SBSE works: NSGA-II (Deb et al. 2002), SPEA2 (Zitzler et al. 2001), and IBEA (Zitzler and Künzli 2004). The performance of such algorithms are compared, according to quality indicators of the optimization field. The obtained solutions are also evaluated with respect to the three objectives. In all cases, solutions that kill all the non-equivalent mutants are obtained. We observe a reduced number of products required to mutation testing, mainly for FMs that derive the greatest number of products. This work is organized as follows. First we present concepts from the multi-objective optimization field and we review mutation testing of SPLs, the adopted mutation approach and tool. After this, we introduce the test data generation approach: population, search operators, fitness, and implementation aspects, and we also present a use example of the introduced approach. Following, we describe how the evaluation was conducted, including the research questions, FMs used, algorithms configuration and, in the sequence, we also present and analyze the results. Finally at the end, we present related work and conclusions of the work, showing future research directions. Multi-objective optimization Optimization problems that are impacted by many factors are called muti-objective. For them, it is not always possible to find only a solution that optimizes all objectives simultaneously. This is because the objective functions, associated to diverse metrics, are usually in conflict, thus, a set of good solutions is generated, usually following Pareto dominance concepts (Pareto 1927). This set forms the approximation to the Pareto Front (P F a p p r o x), which is composed by different non-dominated solutions. Given a set of possible solutions, the solution A dominates B if, the value of at least one objective in A is better than the corresponding objective value in B, and the values of the remaining objectives in A are at least equal to the corresponding values in B. We observe that the generation of test data sets is a multi-objective problem, impacted by many factors. For example, in our case, to reach a high score Sc implies in a higher cost, in terms of number of test cases t. A solution A with (t=15,S c=38 %) dominates the solution B with (t=16,S c=38 %), since the same score was reached with a lower number t. However, A does not dominate C with (t=18,S c=56 %), since C is better considering the score. A solution is said non-dominated if it is not dominated by any other solution. Multi-objective algorithms In order to solve a multi-objective problem, multi-objective algorithms have been successfully applied. Variants of Genetic Algorithms (GAs) are widely used in SBSE (Harman et al. 2012). A GA is a heuristic inspired by the theory of natural selection and genetic evolution. The search is started with an initial population composed by some solutions of the search space. From this population, search operators are applied consisting of selection, crossover and mutation. Such operators are specific for the representation adopted for the problem. They iteratively generate new solutions from existing ones, until some stopping condition is reached. Through the selection operator, copies of those individuals with the best values of the objective function are selected to be parent. So, the best individuals (candidate solutions) will survive in the next population. The crossover operator combines parts of two parent solutions to create a new one. The mutation operator randomly modifies a solution. The descendant population, created from the selection, crossover and mutation, replaces the parent population. At the end, the best solution found is returned. In our work we use three most representative algorithms (Coello et al. 2006): NSGA-II (Non-dominated Sorting Genetic Algorithm) (Deb et al. 2002), SPEA2 (Strength Pareto Evolutionary Algorithm) (Zitzler et al. 2001), and IBEA (Indicator-Based Evolutionary Algorithm) (Zitzler and Künzli 2004). Each algorithm adopts different evolution and diversification strategies and were chosen in our study because they are well known Multiobjective Evolutionary Algorithms (MOEAs) (Coello et al. 2006) and largely used in the SBSE field (Harman et al. 2012). Next, they are described briefly. Non-dominated sorting genetic algorithm (NSGA-II) The algorithm NSGA-II (Deb et al. 2002) (see Fig. 1) is a MOEA based in GA with a strong elitism strategy. For each generation, NSGA-II sorts the individuals from parent and offspring populations, considering the non-dominance, creating several fronts (Lines 10 and 11 of Fig. 1). The first front is composed by all non-dominated solutions. The second one has the solutions dominated by only one solution. The third front has solutions dominated by two other solutions, and the fronts are created until all solutions are classified. Pseudocode of NSGA-II (adapted from (Coello et al. 2006)) For the solutions of the same front, another sort is performed using the crowding distance to maintain the diversity of solutions (Line 12 of Fig. 1). The crowding distance calculates how far away the neighbors of a given solution are and, after calculation, the solutions are decreasingly sorted. The solutions in the boundary of the search space are benefited with high values of crowding distance, since the solutions are more diversified but with fewer neighbors. Both sorting procedures, front and crowding distance, are used by the selection operator (Line 17 of Fig. 1). The binary tournament selects individuals of lower front; in case of same fronts, the solution with greater crowding distance is chosen. New populations are generated with recombination and mutation (Line 18 of Fig. 1). The computational complexity of NSGA-II is O(M N ′ 2), where M is the number of objectives and N ′ the population size (Deb et al. 2002). Strength Pareto evolutionary algorithm (SPEA2) SPEA2 (Zitzler et al. 2001) is also a multi-objective algorithm based on GA (Fig. 2). In addition to its regular population, SPEA2 uses an external archive that stores non-dominated solutions found at each generation. Pseudocode of SPEA2 (adapted from (Coello et al. 2006)) In each generation a strength value for each solution is calculated and used by the selection operator. The strength value of a solution i corresponds to the number of j individuals, belonging to the archive and the population, dominated by i. The fitness of a solution is the sum of the strength values of all its dominators, from archive and population (Line 4 of Fig. 1); 0 indicates a non-dominated individual, whereas a high value points out that the individual is dominated by many other ones. After the selection of individuals (Line 8), new populations are generated by recombination and mutation (Line 9). During the evolutionary process, the external archive, which is used in the next generation, is filled with non-dominated solutions of the current archive and population (Line 5). When the non-dominated front does not correspond exactly to the size of the archive, two cases are possible: a too large new archive or a too small one. In the former case, a truncation procedure is performed (Line 6); first the distances from the solutions to their neighbors are calculated, then, the nearest neighbors are removed. In the second case, the dominated individuals from the current archive and population are copied to the new archive (Line 7). The worst run-time of the truncation procedure is O(M 3) (Zitzler et al. 2001), where M is given by the sum of the population and external archive sizes, but SPEA can have different implementations and, in most cases, behavior similar to NSGA-II (Deb et al. 2002). Indicator-based evolutionary algorithm (IBEA) IBEA (Zitzler and Künzli 2004) is a multi-objective algorithm based on indicators (Fig. 3). Basically, a weight is assigned to each solution found, according to the quality indicators, favoring the user optimization objectives. IBEA performs binary tournaments for mating selection and implements environmental selection by iteratively removing the worst individual from the population and updating the fitness values of the remaining individuals. Pseudocode of IBEA (adapted from (Zitzler and Künzli 2004)) The algorithm has as input values α that represents the population size, N maximum number of generations and k fitness scaling factor. To each population individual, a fitness value is calculated based on the k factor (Line 5 of Fig. 3). That individuals with worst values of fitness are removed from population (Lines 7 and 8). The other individual fitness are updated. If the generation counter is bigger than the maximum number of generations the decision vector is returned with the non-dominated individuals of population P, and the execution is finished (Line 11). Else, a binary tournament selection is performed with replacement on P, in order to fill the temporary mating pool P ′ (Line 14). The individuals P ′ suffer mutations and crossover, and resulting offspring is added to population P (Line 15). After that, the generation counter is incremented and the algorithm returns to Line 2. The complexity of the algorithm is O(α 2), with regard to the populatio size α (Zitzler and Künzli 2004). Quality indicators are used in the optimization field to evaluate the obtained solutions and to compare the algorithms performance. To calculate these indicators, generally three sets of solutions are used: 1) P F approx : formed by non-dominated solutions returned by one execution of the algorithm; 2) P F known : combination of all P F approx obtained through different executions of an algorithm, removing dominated and repeated solutions; 3) P F true : represents the Optimal Pareto Front to the problem. In our case this set is unknown. Due to this, and following the literature (Zitzler et al. 2003), this set was formed by all sets P F known obtained with different algorithms by removing dominated and repeated solutions. The set P F true is in fact an approximation to the real front. In our work, we assessed the number of solutions and performed an analysis of the Pareto fronts, in order to evaluate the capability of the algorithms in finding a great number and diversified solutions. In this sense, we calculated the Error Ratio (ER) (Van Veldhuizen 1999) quality indicator. It corresponds to the ratio between the P F known elements that are present and those that are not present in P F true . We also used the hypervolume (Zitzler et al. 2003) as the main quality indicator and the Kruskal Wallis (significance level of 95 %) (Derrac et al. 2011) as non-parametric statistic test. Hypervolume measures the coverage area of a known Pareto front (P F known ) on the objective space regarding a nadir (reference) point. The higher the hypervolume, the better the Pareto front. Hypervolume was used because it evaluates a set of solutions generated by multi-objective algorithms regarding convergence and diversity, besides being one of the most used indicators in the literature (Bringmann et al. 2014). In addition, hypervolume is \(\vartriangleright \)-completeness compliant (Zitzler et al. 2003), which means that, hypervolume can measure if a Pareto front is better than another in terms of dominance relation (Zitzler et al. 2003). It is a useful indicator for what we are trying to assess, since we want to find which resulting Pareto Front is the best one by comparing fronts generated by different algorithms. Furthermore, for calculating the hypervolume, only the objective values of the solutions and the nadir point is needed as entry. Taking into account that the tester will choose only one solution from the set of non-dominated ones, we analyzed the solutions with the lowest Euclidean Distance (ED) from the ideal solution. Equation 1 shows how ED is calculated. ED represents the distance between two points P= (p 1,p 2,…,p n ) and Q= (q 1,q 2,…,q n ) in an Euclidean Space n-dimensional. $$ ED = \sum\limits_{i = 1}^{n}{(p_{i} - q_{i})^{2}} $$ ED is used here as preference criterion to help the tester in the selection of a solution. The lower the ED value, the better the trade-off between the objectives. Feature testing of SPL As mentioned before the demand for specific testing techniques and tools for SPL is crescent. In the feature testing, the goal is to derive the most representative set of products from the FM. In this section we describe two criteria that can be used for this task. Pairwise testing We can find many works in the literature addressing combinatorial testing in the SPL context (Cohen et al. 2006; 2008; Lamancha and Usaola 2010; McGregor 2001; Oster et al. 2011; Perrouin et al. 2010; Uzuncaova et al. 2010). The pairwise testing is a well known and used kind of combinatorial test, and due to this was adopted in our work. To derive the pairs, we use the the Combinatorial tool1 that implements, among other algorithms, the AETG algorithm, introduced by Cohen et al. (1997). The size of an AETG test set grows logarithmically in the number of test parameters. To illustrate how the pairs are derived, consider the FM of Fig. 4, for the SPL CAS (Car Audio System (Weißleder et al. 2008)), and Table 1, which presents the set of products generated by pairwise testing using such tool. In such table the valid products are represented only in terms of their variabilities. In this sense, Traffic Message Channel that is a obligatory characteristic does not appear, as well as Control. Product 1 includes the 14 pairs (Wheel Control,Map Data via CD), (Wheel Control,CD), (Wheel Control,AAC), (Wheel Control,USB), (Wheel Control,MP3), (Map Data via CD,CD), (Map Data via CD,AAC) and so on. Feature diagram of the Car Audio System (adapted from (Weißleder et al. 2008)) Table 1 Products required by pairwise testing to the FM of Fig. 4 Mutation testing in the SPL context The mutation testing has been recently explored in the SPL context. Mutation operators are used to describe faults that can be present mainly in the FM (Ferreira et al. 2013; Henard et al. 2013a). Henard et al. (2013a) introduced two mutation operators. They are defined to generate dissimilar test cases and reveal faults in the FM. A limitation of this work is that the operators are not oriented to common faults that can be present in the FM. Other one is that the work does not explore the use of such operators for generation of a test set. They are only used to assessment of test sets. The work of Ferreira et al. (2013) introduced a set of operators that describe different faults associated to the FM and related to the feature management. This set is defined based on classes of typical faults that can be present in the FM. They are related to incorrect cardinality of solitary features and set relations, incorrect definition of grouped relations, incorrect definition of constraints, such as depends and excludes relation. The work also introduced a mutation process to use the operators as a test criterion, for evaluation and selection of test cases. This process includes steps which are similar to the steps followed in the traditional mutation testing of programs. First of all, mutants for FM are generated by applying the mutation operators and introducing only a simple modification each time. It is important to note that the FM mutants are valid diagrams. Consider again the FM of Fig. 4. Figure 5 contains an example of mutant generated by the operator AFS (Add Feature to a Set relation (solitary feature to grouped)). In the mutant, the feature USB was added to the set relation, previously composed only by features CD, Cassette and DVD. The operator and corresponding mutant describe a possible fault in the FM. An example of mutant (extracted from (Ferreira et al. 2013)) A test set T is given by a set of products to be used in the test. If T is available, it can be evaluated by the produced score, by executing the mutants. If not, the mutants can be used to construct T. A test data (product) is "executed" with an FM analyzer. A mutant is considered dead if the validation of a product by using the mutant produces a different result from the validation of the same product against the original FM. To illustrate the mutation process, consider again Fig. 5 and the test case of Fig. 6. The product is valid for the original FM, however it is not valid for the mutant. In this case, this product (test data) kills the mutant. A product that is invalid according to the original FM and valid according to the mutant is also capable to kill the mutant. At the end, the mutation score is calculated, given by the number of generated and dead mutants. If both models, original and mutant ones, validate the same set of products, they are considered equivalent. Equivalent mutants are not counted to calculate the score. Example of product (test data) to the FM of Fig. 4 A tool, named FMTS (Feature Model Testing System) was implemented to support the proposed process. It works with the framework Feature Model Analyzer (FaMa) (FaMa 2014), which is responsible to validate the models and execute the test data. A valid FM satisfies some properties like satisfability, and does not have inconsistencies such as dead features. The tool supports the FODA notation (Kang et al. 1990), extended and cardinality-based FMs (Czarnecki et al. 2005). The input is an XML file that represents the FM. The tester can provide a percentage for each operator to be used in the mutant generation. If a percentage of 0.5 is provided to operator AFS, only 50 % of the possible mutants to this operator are generated. The tester can also provide a test data (or a test data set) in XML format to be executed by the tool, to check the score, as well as, to set a mutant as equivalent. By using FMTS, Ferreira et al. (2013); Ferreira (2013) conducted an experiment to evaluate their operators considering factors such as efficacy, strength and cost. In such works we can find the definition of the operators, what operators are more expensive, what generate more equivalent mutants and so. In addition to this, a comparison with pairwise testing was performed showing that both criteria can reveal different kind of faults and considered incomparable in terms of inclusion relation. Mutation testing is, in general more expensive and difficult to satisfy. In our work, we used the set of operators proposed by Ferreira et al. (2013) and the tool FMTS. This set is more complete and describes more faults. It can be applied as a testing criterion in a testing process that is automated. Our approach is introduced in the next section. The search based approach This section describes our search-based approach. The main goal is to generate sets of products, which are sets of test data to satisfy the mutation testing of FMs and also to consider different factors that can impact the test data generation. This is a complex problem that can be efficiently solved by multi-objective optimization algorithms. According to Harman et al. (2012), to implement a search-based solution to a problem, we need the following main ingredients: i) an adequate representation to the solution, that needs to be represented in a way that can be manipulated by the algorithm; ii) search-based operators to improve the solutions and explore the search space; and iii) an adequate way to evaluate the quality of the solution, that is, the fitness function (objective function). Such ingredients of our approach are described next. Population representation The individual (solution) in the population represents a set of n products, as illustrated in Fig. 7. In such figure, the individual X contains three products (test data), identified by integer numbers: (1, 5, 11). Each product is represented by a binary vector, where each element of this vector corresponds to a feature of the FM. The value (1) represents that the corresponding feature is selected for the product, and (0) represents that the feature is not selected. In the example the product 1 does not contain the features CD and DVD. Individual representation Search operators The crossover operator proposed is called ProductCrossover. It works recombining two individuals X and Y, called parents, randomly selected from the population. Being n X and n Y the size of, respectively, individuals X and Y, if n X >1 and n Y >1 the individuals are divided into two parts. The size of the first part is equal to n d i v 2 and the second one is equal to n−(n d i v 2). If n m o d 2==0, both parts will have the same size. For example, if n X = 8 (X contains 8 products) the first part after division contains the first four products and the second part contains the last four. If n X =5, the first part contains the first two products and the second one the last three. After division the parts of the parents are combined to form the offspring, as illustrated in Fig. 8. If n X ==1, the division results in two equal parts, containing the same information and used in the combination, as shown in Fig. 9. Crossover operator example Crossover operator with n X =1 The mutation operator proposed is called ProductMutation and works with three different types of mutation: addition, removal and swap. In the addition, a product randomly selected is inserted in the individual. In the removal a product of the individual is randomly selected and removed. In the swap, a product is randomly removed from the individual, and another one, also randomly selected, which does not belong to the individual is added. Figure 10 shows application examples of this operator. Mutation operator examples After the application of a search operator, repeated products are removed from the generated individuals. The goal is to obtain high mutation scores but also to consider other factors (objectives). In this section we describe the objectives used in this work, they are related to the cost, given by the number of test cases and quality given by the pairwise coverage. The problem is transformed in a minimization problem and we use three functions that produce values between 0 and 1, associated to: size of the individual, number of corresponding alive mutants, number of uncovered pairs. They are calculated as described next. The score is calculated as for the mutation testing of programs, and corresponds to the relation between the number of dead mutants and the total number of mutants, given in Eq. 2. $$ A_{X} = 1 - \frac{{DM}_{X}}{AM} $$ where A X is the fitness value A of the individual X; D M X is the number of dead mutants, killed by executing the set X; AM is the number of active mutants being considered, that is, this set can be obtained from the set of generated mutants by discarding those mutants that are not valid (anomalous) or that generate equivalent sets of products, when compared with the original FM. The fitness function associated to the pairwise coverage is given by Eq. 3. $$ P_{X} = 1 - \frac{{PC}_{X}}{P} $$ where P X is the fitness value P of the individual X; P C X is the number of covered pairs, by executing the set X; and P is the number of valid pairs generated. The pairs need to satisfy the restrictions present in the FM being tested. The number of test cases is calculated according to the size of the individual, which represents the set of products. $$ S_{X} = \frac{n_{X}}{n} $$ where S X is the fitness value for the individual (set) X; n X is the number of products in X; and n is the number of products derived from the original FM. Implementation aspects To implement and evaluate our approach, we used three multi-objective algorithms that performed well in the SPL context: NSGA-II (Deb et al. 2002), SPEA2 (Zitzler et al. 2001) and IBEA (Zitzler and Künzli 2004). They were implemented by configuring jMetal (Durillo and Nebro 2011). jMetal is a Java based framework that implements, among others, the three mentioned algorithms. It allows an easy integration with other tools. It includes default operators and solutions for known optimization problems, and can be instanced for new ones. In addition to this, it implements some quality indicators, such as hypervolume. As mentioned before, in this work to calculate the fitness of the solutions, we used the tool FMTS (Ferreira et al. 2013) and its operators, and the AETG algorithm implemented in the Combinatorial tool. FMTS works with FaMa analyzer. However, its is important to highlight that the approach is independent of the set of mutation operators and testing tool. To do this, it is only necessary to implement other fitness procedures. The diagram under test is provided, following FaMa, in XML format. An array of features present in the diagram is generated and used by the AETG algorithm to generate the pairs. After this, a procedure is executed to discard invalid pairs, considering the restrictions of the FM. A module to check if a product covers a given pair was also implemented, to evaluate the fitness of an individual. By using FMTS a set of mutants is generated. FMTS only generated valid mutants according to FaMa. From this set, and by using FaMa, equivalent mutants can be discarded. That is, they are discarded if the set of products derived by the mutant and the set derived by the original FM are the same. The mutants that can not be killed by valid products according to FM can also be discarded to ensure that the final set is composed by only valid products. The set AM is composed by the remaining set of mutants. To reduce execution time during the evolution process and to avoid many fitness calculations, the implementation uses auxiliary matrices, MA and MP that maintain, respectively, the mutants killed and the covered pairs by a product. These matrices are generated in the initial phase and have n entries, where n can be either the number of products generated by FaMa for the FM, or a smaller number provided by the tester. This mechanism allows scalability, avoiding the manipulation of a huge number of products. The initial population is generated by randomly selecting products in the matrices, but other methods to obtain such population can be evaluated in future works. The size of each individual is also randomly set. In the evolution process the search operators (mutation and crossover) are applied according to the rates provided by the tester. The method used to select the individuals is binary tournament. In the case of n smaller than the total number of products, new products can be generated in the evolution process, which are not in the matrices MA and MP. If such products are not valid according to FaMa they are discarded, otherwise, their fitness are calculated and such information is added to the matrices. Using the approach This section presents examples using the test data generation approach, considering the FM of the SPL CAS (Fig. 4) and the NSGA-II algorithm. First of all, the FM is provided in the XML format. The user can do this informing the file location through the "load" command of FMTS. After that, the user configures the "problem", command with a set of parameters. The definition of the problem corresponds to the definition of the algorithms parameters: population size, maximum number of evaluations, size of the external file, crossover and mutation rates. To adjust such parameters and to ease the use of the approach in practice, we recommend the use of default parameters found in the literature. This recommendation is supported by studies in the software testing domain, reported in (Arcuri and Fraser 2011), which analyze the effects of a tuning and shows that the use of default parameters is reasonable and a justified choice. The user also needs to chose the objectives to be used in the evolution process. In the next subsections we present examples that consider two and three objectives. Two-objective use To illustrate the use with two objectives, suppose that the user wants to satisfy the mutation testing with a reduced number of test cases. Hence, he/she selected the corresponding fitness functions, the algorithm NSGA-II, and provided the following set of parameters values: 50 to population size, 10,000 to maximum number of evaluations, 0 to extension file size (NSGA-II does not use an external file), 0.1 to crossover rate (corresponding to 10 %) and 0.5 to mutation rate (corresponding to 50 %). In this case, no value for maximum number of products is provided. This means that all the products generated by FaMa were used to compose matrix MA. To generate the mutants a percentage of 100 % was also used in FMTS for all operators. This means that all the possible mutants were generated. The number of active mutants AM to be killed by the test sets is 227, discarding the equivalent mutants. After the execution of NSGA-II algorithm with the above mentioned parameters, a set of 13 non-dominated solutions was generated. These solutions form the approximation to the Pareto Front, the P F known set. The frontier is depicted in Fig. 11. Axis X corresponds to the objective S(x) and axis Y corresponds to A(x). P F known obtained for SPL CAS with NSGA-II algorithm and two objectives Table 2 describes in detail each non-dominated solution found. The second column shows the fitness values reached for the solution, considering the normalized values, produced by the functions presented in Section 2.1.3. The third column shows the fitness values transformed to better visualization, corresponding to the pair: number of products and mutation score, presented in a coverage percentage of dead mutants. Table 2 NSGA-II 2-Objective Solutions for CAS Through the table and figure, it is possible to see that the proposed approach generates a set of optimal solutions. If the user wants the smallest product set, the first solution is the best option, however it has a low mutation score. If the user wants the highest score, the last solution is the best choice, however it includes 44 products. All the solutions are good considering a point of view. The user needs to select the solution that better covers his/her requirements and needs. Solutions with the best trade-off between the objectives are that ones located in the knee regions of the Pareto front (Solutions 6–10). These solutions have the best values of ED (Euclidean Distance), considering a minimization problem and the ideal solution as the point (0,0). This indicator can be used as a good way to select a solution. We can observe that solutions with the best ED values are not in the extreme points of the Pareto front and are not associated to a mutation score of 100 %. Maybe the tester wants to prioritize this objective and choose solutions with score greater than 98 %. In this, case Solution 11 could be chosen. It has 19 products and 98.678 % of coverage. Some products of this solution are in Table 3. The ids of the such products are {15, 18, 34, 87, 107, 113, 134, 143, 154, 255, 257, 264, 279, 287, 312, 364, 388, 414, 416}. Table 3 Products included in the 2-Objective Solution Number 11 Three-objective use If the user additionally wants a high pairwise coverage he needs to select the three fitness functions, besides all the other parameters. Suppose that he/she provided the following parameters to NSGA-II: 50 to population size, 10,000 to maximum number of evaluations, 0 to extension file size, 0.1 to crossover rate and 0.9 to mutation rate. No value for maximum number of products was provided. This means that all the products generated by FaMa were used to compose matrices MA and MP. To generate the mutants, a percentage of 100 % was also used in FMTS for all operators. The equivalent mutants and invalid pairs were discarded, totaling a number of 227 active mutants and 420 pairs. After the execution of NSGA-II algorithm with the above mentioned parameters, a set of 23 non-dominated solutions was generated. The fitness of these solutions are presented in Table 4. The fitness values are normalized and also in terms of percentage. We can observe that the smallest set contains only 1 product and the largest 45. This last one is associated to a coverage of 100 % for both criteria, but if desired, the user can select the option with best ED value, in this case, Solution 17, with 21 products, score of 98.678 % and a pairwise coverage of 98.907 %. Some products included in this solution are in Table 5. The ids of the such products are {12, 16, 29, 61, 101, 134, 149, 180, 201, 204, 252, 262, 268, 277, 278, 298, 329, 361, 368, 420, 445}. Table 5 Products included in the 3-objective solution - number 17 It is interesting to observe, by checking the ids of the products that compose the 2-objective and 3-objective solutions with best ED, that the sets are disjunct. This shows the difficult to solve the problem, since many solutions are possible. The approach offers the best ones considering the desired objectives. Evaluation description This section describes how our approach was evaluated: research questions, target FMs, how the experiment was organized, and parameters used.2 The evaluation was guided by the following two main research questions. RQ1: How are the solutions produced by the approach with respect to the objectives? To evaluate this question the solutions produced by each algorithm are considered and evaluated according to the corresponding fitness values. Since the goal is to satisfy the mutation testing criterion, solutions associated to the best scores are analyzed. RQ2: Which algorithm is the best to solve the problem? To evaluate this question the algorithm that produced the best solutions for each FM is identified. To do this, we used hypervolume and error ratio (ER), quality indicators from the optimization field described in Section 1.1.3. Feature models used We used four FMs extracted from the SPLOT repository (Mendonça et al. 2009). Such FMs were used in related work (Ferreira et al. 2013) because they contain different kind of constructs that can be present in the FMs to allow the evaluation of the proposed mutation operators. In addition to this, such repository and FMs were used in different works from literature. The first FM is associated to the SPL CAS (Car Audio System) (Weißleder et al. 2008) to manage automotive sound systems. JAMES (Benavides et al. 2005): SPL for collaborative web systems; Weather Station (WS) (Beuche and Dalgarno 2012): SPL for weather systems; and E-Shop: an E-commerce SPL (Segura et al. 2010). Some information about the FMs of these SPLs are presented in Table 6: number of features, number of binary and grouped relations, number of includes and excludes constraints, and number of products. E-Shop has the greatest number of products. Such number is impacted by the number of optional and grouped features. On the other hand, includes and excludes constraints have a negative impact. Table 6 Feature models properties The matrices MA and MP were initially generated containing all the valid products for each FM, not being necessary to set the number maximum of products n. The last columns of Table 6 present the number of active mutants AM and the number of valid pairs P. The sets AMs differ from the sets used in (Ferreira et al. 2013). This is because, the set does not include equivalent mutants and other ones that are killed by invalid products. This ensures generation of a set with only valid products according to the FM being tested. Experiment organization To answer the research questions and evaluate the use of the approach with different sets of objectives, we created two experiments. The first experiment, named here E2O (Experiment with 2 Objectives), used two objectives related to mutation score and number of test cases, given respectively by Eqs. 2 and 4. The goal is to obtain high scores with reduced cost. The second experiment, named here E3O (Experiment with 3 Objectives), was conducted to evaluate the performance by using more than two objectives. The goal is to obtain high scores, with reduced cost, but also covering all pairs of features. The fitness values are given by Eqs. 2, 3, and 4. In this case the complexity of the search is greater, as well, as the difficult to obtain the solutions. Hence, the goal is to analyze if the approach is capable to reach solutions with score and pairwise coverage of 100 %. Parameters setting Before the experiment execution, a parameter tuning was performed. We used the tuning instead of fixed parameters because, in this way, it is possible to compare the best performance of the algorithms. The following parameters were adjusted: population size; maximum number of evaluations; external file size; crossover and mutation rates. Based on the literature (Arcuri and Briand 2011), the values variation were defined as in Table 7. The stop criterion adopted was the number of evaluations. Table 7 Parameters variation These parameters were adjusted for each algorithm and each FM. In this stage, 81 different settings to the NSGAII algorithm were created and 162 settings for the algorithms SPEA2 and IBEA (both use the external file). For each algorithm 10 executions were made. Therefore, 29,160 executions were performed. After tuning, the best settings were selected based on the best mean values of hypervolume (Zitzler et al. 2003). To those averages that did not reach statistic significance through the Kruskal-Wallis test (Kruskal and Wallis 1952), with significance level of 95 %, the fastest setting was selected. Table 8 shows the best settings founded to each FM/algorithm. Then, those parameters were used in the experiments, that executed each algorithm 30 times, since they are non-deterministic. Table 8 Best configuration settings founded Threats to validity We consider the SPLs size as the main threat of our study. Nevertheless, the used SPLs indicate that the approach can produce good solutions to satisfy the mutation testing of FMs. We have not evaluated the mechanism to allow scalability, that is, to provide a limited number n of products to be manipulated by FaMa and optimization algorithm. This should be better investigated in future experiments with greater FMs. In our experiment the FMS are small and all the equivalent mutants could be identified comparing the set of products derived by FaMa. But a limitation that we can expect in the future experiments is the difficulty (or even impossibility) of automatically determining the equivalent mutants. In this way, it will be not possible to reach a 100 % score. A mechanism implemented by FMTS to allow the user to set equivalent mutants can help in this task. The execution time of the algorithms is mainly impacted by the fitness evaluation procedures, the use of matrix M can help to reduce this time and to scale our implementation to greater FMs. The choice of parameters for the algorithms is always a hard task. To tunning the algorithms and reduce the associated threats, we followed recommendations found in (Arcuri and Briand 2011). Since search algorithms include random variations, we repeated the experiments 30 times, to reduce the possibility the results were obtained by chance. As mentioned before the approach was instantiated with the mutant operators generated by FMTS and pairwise implemented in the Combinatorial tool. Due to this the implementation works with FMs valid according to FaMa. We think this is not a problem because such framework considers most FM constructs, and includes cardinality-based FMs. Further investigations can use other testing tools. In this section the experimental results are presented and analyzed. Tables 9 and 10 show the number of solutions obtained, respectively, through experiments E2O ad E3O. The first column of each table displays the used FM, while the second one displays the amount of solutions in the real front (P F true ). The other columns display the amount of solutions in the P F known obtained for each algorithm. The amount of the P F known solutions that are present in the front P F true is also presented separated by "/", ER values are displayed in parenthesis. The bold results are the best ones. Table 9 Pareto fronts - experiment E2O Table 10 Pareto fronts - experiment E3O Through Table 9 we can observe that for E2O, NSGAII obtained in three (out four) FMs the greatest number of solutions in the P F know followed by SPEA2. However in all those cases NSGA-II obtained lower ER values. This implies that despite the great number of solutions found by NSGA-II, few of them belong to the approximation of the real front. Both of SPEA2 and IBEA algorithms obtained the best ER values. For CAS, SPEA2 obtained more diversity of non-dominated solutions resulting 16 solutions. From these solutions, 9 of them are present in the P F true , which implies in a 0.44 ER value. This is the best ER value, which represents that most of the found SPEA2 solutions belong to P F true . For EShop, the algorithm did not find the greatest number of solutions, 18 against 20 obtained by NSGAII, however, again it obtained the best ER value. For JAMES and WS, even with low diversity of found solutions, IBEA algorithm showed the best ER values. When we analyze E3O (Table 10), NSGA-II obtained the greatest number of solutions in the P F know for two systems CAS and EShop, and IBEA for the other two. Similarly to what happened for E2O, this does not imply in higher ER values. SPEA2 presented the highest ER values for all systems. This means that SPEA2 generated the greatest number of solutions in P F know that dominate the other solutions. To provide another quantitative analysis of the experimental results, Tables 11 and 12 show the hypervolume mean obtained, respectively, in both experiments. In those tables, the " =" sign represents hypervolume means that do not show statistic significance according to Kruskal-Wallis test, with 95 % of significance. The statistical test was performed with all thirty execution means. Bold values correspond to the best ones and in parenthesis the standard deviation is shown. Table 11 Hypervolume means - experiment E2O We can observe in Table 11 (E2O) that just for JAMES the best hypervolume mean obtains a statistical significance. For the other FMs, there is no statistical difference between the algorithms. However, this does not happen in E3O (Table 12). In the presence of three objectives there is difference between the algorithms for all almost cases, except between NSGA-II and SPEA for JAMES, where both algorithms are equivalent. For the other systems, SPEA2 is the best for CAS and WS, and NSGA-II is the best for EShop. IBEA is not the best for any system. Table 13 shows the runtimes, in seconds, to each algorithm in each FM. Bold values are the best ones. About the runtime, IBEA seems to be the best option in both experiments. In E2O, IBEA obtained the best values in three FMs. Just for JAMES other algorithm, NSGAII, obtained the best value. In E3O, IBEA runtime is the best for CAS, JAMES and WS, and NSGA-II is the best for E-Shop. We can notice that SPEA2 runtimes were the worst ones for all systems in both experiments. Table 13 Average Runtimes As mentioned before the ED indicator can be used by the tester to choose the solutions with best trade-off among the objectives. The tester can also use a solution with the smallest number of products, sacrificing the number of dead mutants. It is possible to choose other solutions according to the tester's preferences. For example, solutions with a 100 % coverage, in cases where the required reliability is very high, or solutions that satisfy other constraints related to organizational or contractual restrictions. Such preferences can be incorporated in procedures that allow automatic selection after the evolution process or during the optimization. This subject has been investigated in the area of preference-based algorithms (Bechikh et al. 2015) and should be explored in future works. For our analysis, we consider that it is more interesting to take some solutions with a mutation coverage greater than a value ε, such as ε=94 %, and compare them to solutions with a 100 % coverage. The fitness values of the chosen solutions are presented respectively, for E2O and E3O, in Tables 14 and 15. For all FMs, all the algorithms found solutions with 100 % of coverage in both experiments. The number of products in those solutions are in Table 16. Bold values are the best ones. In the case of E2O, the best values for the number of products were obtained by NSGA-II for three of the FMs. IBEA was the best for CAS. In a general case, IBEA seems to be the best option. In E3O, the best values were obtained for SPEA2 for all systems, but the other algorithms also found good values. Table 14 Fitness values of solutions with score greater than 94 % - E2O Table 15 Fitness values of solutions with coverages greater than 94 % - E3O Table 16 Number of products in the solutions with a 100 % coverage Comparing both experiments, we can notice that the addition of a new objective did not impact in the number of required products. However, if we observe Table 14 and analyze solutions with score greater than 98 %, we can see the impact on the cost to increase the score in 2 %. For example, if we take SPL CAS and NSGA-II, we can observe that to this increase, it is necessary 25 additional products (44-19). This impact can be observed for all FMs and algorithms. If the reliability required is high, maybe this additional cost is justified. We can observe in Table 15 that the same happens in E3O. If we take the algorithm IBEA and CAS, we can see that we needed 30 additional products to increase the score in 3.084 % and to increase the pairwise coverage only in 0.544 %. Hence, using solutions of Table 14 and 15 seems to be a good choice for many cases. This section discusses the main findings of our evaluation, comparing experiments E2O and E3O and providing answers to our research questions. During the evaluation we can observe the main advantages of our approach, compared with a single-objective one. A multi-objective algorithm produces diverse good solutions and different sets of impacting factors can be considered. At the end, the tester has a set of possible solutions and can either prioritize one objective, by choosing solutions in the extreme points of the fronts or choose solutions with smaller ED values, according to the testing goals and resources. Answering RQ1, we can observe that independently of the experiment, it is possible to kill more than 98 % of the mutants with a reduced number of products. This guideline can be used by the tester to select the solutions and is a good choice, since the goal is to satisfy mutation test. However as showed in last section, if the reliability required to the application is high, an extra effort in terms of test cases can be justified. In the worst case, SPL CAS and algorithm SPEA2, we observed that to increase 2 % in the score it was necessary to increase the test set in 50 %. But in other cases, such as for SPL WS and IBEA, it was necessary an increase in the test set of 20 %. This is an advantage of the approach, which is capable to generate different solutions with different trade-offs including solutions with 100 % of coverage for both criteria. The tester can choose according her needs. A simple-objective algorithm generates only a solution and this selection is not possible. If the tester wants to prioritize the score, we observe that in both experiments, for all SPLs, all the algorithms found solutions with a score of 100 %. A reduced number of products is required. In E2O, if we would take the solutions generated by IBEA, 23 products were required for CAS, 11 for JAMES, 34 for WS and 44 for EShop. These numbers represent, respectively, 5, 16, 6, and 3 % of the total number of products derived from the FMs being tested. These numbers are lower than the ones reported in the approaches from the literature (Ferreira et al. 2013). We observe similar behavior in E3O. Considering SPEA2 solutions, 41 products were required for CAS, 11 for JAMES, 23 for WS and 31 for EShop, representing, respectively, 9, 16, 4, and 2 % of the total number of products derived from the FMs being tested. We observe in both experiments, that the greatest reduction was for EShop, the FM with the greatest number of products, case where the use of our optimization approach seems to be more advantageous. This should be evaluated in future experiments. Comparing both experiments, we can notice that the use of three objectives does not impact in the cost and in the necessary number of products. In the cases analyzed this number was lower. In other cases (see Table 16) it is either greater or maintain constant, but the differences are not so significant. This fact is maybe due to the nonexistence of a conflict between mutation score and pairwise coverage. In fact, increasing the score results implies increasing pairwise coverage. The noticeable difference between the experiments is in the number of non-dominated solutions found by the algorithms, given in the set P f true (see Tables 9 and 10), with an increase of 17, 63, 55 and 38 % for, respectively, CAS, JAMES, WS, and EShop. This shows that the complexity of the problem with three objectives increases, since the search space of solutions is greater, and in such case the use of the approach seems to be indispensable. With respect to RQ2, the statistical test does not point out difference among the algorithms in the experiment with two objectives, but in the presence of three objectives there is difference between them for almost all cases, SPEA2 presented the best performance in three FMs (out four). Despite of this, all the algorithms found good solutions and reached 100 % of coverage with reduced number of test cases. Hence, we can conclude that all of them are a good option. However, we can observe some points related to them. IBEA presented the best runtimes in both experiments, and should be investigated in future works to evaluate scalability. In E2O, NSGA-II presented greater diversity of solutions, a great number of solutions in P F known , for most cases. NSGA-II also obtained the best values of hypervolume. This means that this algorithm offers to the tester a greater number of solutions with different and extreme values of fitness for the tester. IBEA and SPEA2 presented greater number of solutions in the P F true , in E2O. In E3O only SPEA2 had this behavior. Then SPEA2 was capable to find the greatest number of non-dominated solutions in both experiments, considering the approximated real fronts. In short, all the algorithms are capable to find good solutions and can be used in a general case, with many advantages compared to a single-objective algorithm. NSGA-II should be used in cases where the testing activities have many constraints, such as that ones associated to contractual and development issues. In this way, a great variety of solutions will be available, and the tester can choose the best one according his/her preferences or needs. If a fast execution is required, cases where the FMs has a large number of products, IBEA seems to be the best choice. SPEA2 reached in both experiments the greatest number of non-dominated solutions, this means that it should be used if the tester is only interested in solutions with the best-trade-offs. The interest in the application of search-based techniques in the SPL engineering is crescent. A recent survey (Harman et al. 2014) shows an increasing number of papers on this subject in the last three years. Results of other mapping study related to search based selection and configuration of features are presented in (Matnei Filho and Vergilio 2014). They show that this selection considers different objectives. For example, adaptation and evolution, customization according to user preferences, and so on (Henard et al. 2015; Olaechea et al. 2014; Sayyad et al. 2013). For example, the work of Sanchez et al. (2013) and Karimpour and Ruhe (2013) has as objectives the run time adaptation and evolution of SPLs. The configuration of products is addressed by many works, mainly considering cost, preferences of the user and decision-makers, and violations of the model rules (Cruz et al. 2013; Guo et al. 2011; Pereira et al. 2013; White et al. 2014). The works of Sayyad et al. (2013 a,b) evaluate different multi-objective evolutionary algorithms for SPL configuration, considering different factors to select the products: number of violated rules in the FM, cost, number of used features, and number of faults revealed during the testing activity. Multi-objective selection approaches and exact ones are compared in the work of (Olaechea et al. 2014). The works most related to ours are those ones on search-based testing of FMs. They are described next. The work of Wang et al. (2013) addresses minimization of test cases. They use a GA and an aggregation function of the following factors: number of tests cases, pairwise coverage and capability to reveal faults. In (Wang et al. 2014) the goal is prioritization of test cases. Wang et al. use another aggregation function including cost measures, and compare GA with (1+1) EA and random search. Different GA configurations were evaluated considering different weights in the aggregation function. Other factors, such as execution cost and resources, are also considered in another work by the authors (Wang et al. 2014). Similarly, Ensan et al. (2012) also use a simple GA with an aggregation function composed by cost and error rate factors. The work of Henard et al. (2013b) also uses a GA with an aggregation function to handle with conflicting objectives in the selection of test products. Cost, pairwise coverage, and number of products are considered. Testing solutions based on multi-objective algorithms are also found. The work of Lopez-Herrejon et al. (2013) proposes a Pareto solution using pairwise coverage and size of the test suites. However, we can observe that most of them are based on single GAs and aggregation functions. Pairwise coverage, is the only test criterion considered in such functions besides other factors. Mutation testing has been addressed for test data generation only in the work of Henard et al. (2014). In this work the operators proposed in (Henard et al. 2013a) and mentioned in Section 1.2.2 are considered to generate test cases (products) for testing. The approach implements the (1+1) EA algorithm in conjunction with a constraint solver to check if the products are valid, according to the FM. The work of Henard et al. (2014) has similar goals to our work since it also considers the mutation score for generating products, but there are some differences: i) our treatment to the problem is multi-objective, since we use multi-objective algorithms that produce a set of good solutions with the best trade-off between score and number of products. The related work uses a single-objective algorithm guided by the score only; ii) we consider a broader set of mutation operators; iii) we work with a population of individuals that are complete sets of products and propose specific operators adequate to this kind of population. In most existing works, the individual used in the population is given by a binary vector, similarly to our representation for product. The related work (Henard et al. 2013a) uses only one individual. In this way, such work produces only a solution. Instead, our approach produces a set of good solutions, which can be used by the tester according to his or her goals. We can observe that the works do not deal with the mutation-based test data generation as a multi-objective problem. In addition to this, all of them do not address mutation and combinatorial testing at the same time. To satisfy both criteria can improve the quality of the generated test data. Moreover, our approach allows reducing costs. This paper introduced a multi-objective approach to generate test sets to kill mutants used for the feature testing of SPLs. The approach includes: i) a representation for the individual in the population that allows manipulating a population of test sets; ii) search operators adequate to the introduced representation and to evolutionary algorithms; iii) three fitness functions related to the size of the generated sets, number of dead mutants and number of pairwise coverage. This paper extends our previous work by presenting new experimental results and considering faults that are described by pairwise testing to generate reduced test sets to satisfy mutation testing of FMs. The great advantage of our approach, with respect to works from the literature, is to offer different alternatives for the testing, differently of a single objective approach. The approach was implemented using three multi-objective algorithms: NSGA-II, SPEA2, and IBEA2 in a framework that works with FMTS and Combinatorial tool. FMTS implements different kinds of mutation operators that describe common FM faults from diverse categories. Combinatorial tool implements the algorithm AETG and pairwise testing. However the approach can be implemented with other evolutionary algorithms and testing tools. In such case other mutation operators could be used, as well as t-wise testing. This should be explored in future works. We conducted experiments to evaluate the approach with two and three objectives. In both cases, good solutions are produced, representing the best trade-offs between the objectives. The tester can select the solution that better fits his/her needs. It is possible to choose solutions with a score greater than 0.98 and with a reduced number of test cases. In most cases, it was observed that an increase in the score implies also an increase in the pairwise coverage. Other option is to choose solutions associated to the greatest scores. In such cases, we observe in both experiments that the number of test cases of these solutions is not greater than 16 % of the number of products derived from the FM under testing. The greater the number of products derived from the FM, the greater this reduction is. In general all the algorithms performed well and obtained similar results. NSGA-II presented more diversity, a greater number of solutions in the P F approx . IBEA and SPEA2 more solutions in the P F true . The results point out that IBEA seems to be a good choice for FMs with a large number of products, since it presented the best runtimes. This should be investigated in future experiments. Such experiments should consider other FMs to investigate the performance of the implementation and the impact of using the matrices associated to the fitness. The results point out that in these cases the approach is more useful. Other objectives, related for example to test cases similarity, can also be investigated. 1 http://161.67.140.42/CombTestWeb (Cohen et al. 1996). 2 The artefacts and results obtained with the testing tools used in our evaluation can be found in www.inf.ufpr.br/gres/apoio_en.html. AFS, add feature to a set relation; CAS, car automotive system; EA, evolutionary algorithm; ED, euclidean distance; E20, experiment with two objectives; E3O, experiment with three objectives; EShop, e-commerce-Shop; FaMA, feature model analyzer; FM, feature model; FMTS, feature model testing system; GA, genetic algorithm; IBEA, indicator-based evolutionary algorithm; NSGA-II, non-dominated sorting genetic algorithm II; RQ1, research question 1; RQ2, research question 2; SPEA2, strength Pareto evolutionary algorithm 2; SPL, software product line; WS, wheather station; XML, extensible markup language Arcuri, A, Briand L (2011) A practical guide for using statistical tests to assess randomized algorithms in software engineering In: Proceedings of the 33rd International Conference on Software Engineering (ICSE'11), 21–28.. ACM, New York. Arcuri, A, Fraser G (2011) Parameter tuning or default values? an empirical investigation in search-based software engineering In: Proceedings of the International Symposium on Search Based Software Engineering (SSBSE), 594–623.. Springer, Hungary. Bechikh, S, Kessentini M, Said LB, Ghédira K (2015) Chapter four - preference incorporation in evolutionary multiobjective optimization: A survey of the state-of-the-art. Adv Comput 98: 141–207. Elsevier. Benavides, D, Trujillo S, Trinidad P (2005) On the modularization of feature models In: First European Workshop on Model Transformation. www.academia.edu/5750212/On_the_Modularization_of_Feature_Models. Beuche, D, Dalgarno M (2012) Software product line engineering with feature models. http://www.pure-systems.com/fileadmin/downloads/pure-variants/tutorials. SPLWithFeatureModelling.pdf. Accessed July 2016. Bringmann, K, Friedrich T, Klitzke P (2014) Two-dimensional subset selection for hypervolume and epsilon-indicator In: Proceedings of the Genetic and Evolutionary Computation Conference. GECCO, 589–596.. Vancouver, ACM. Coello, CA, Lamont GB, Veldhuizen DAV (2006) Evolutionary Algorithms for Solving Multi-Objective Problems (Genetic and Evolutionary Computation). Springer, Secaucus. MATH Google Scholar Cohen, DM, Dalal SR, Fredman ML, Patton GC (1997) The AETG system: An approach to testing based on combinatorial design. IEEE Trans Softw Eng 33(7): 437–444. Cohen, DM, Dalal SR, Parelius J, Patton GC (1996) The combinatorial design approach to automatic test generation. IEEE Softw 13(5): 83–88. Cohen, MB, Dwyer MB, Shi J (2006) Coverage and adequacy in software product line testing In: Proceedings of the ISSTA 2006 Workshop on Role of Software Architecture for Testing and Analysis. ROSATEA'06, 53–63.. ACM, New York. Cohen, MB, Dwyer MB, Shi J (2008) Constructing interaction test suites for highly-configurable systems in the presence of constraints: A greedy approach. IEEE Trans Softw Eng 34(5): 633–650. doi:http://dx.doi.org/10.1109/TSE.2008.50. Cruz, J, Neto PS, Britto R, Rabelo R, Ayala W, Soares T, Mota M (2013) Toward a hybrid approach to generate software product line portfolios In: IEEE Congress on Evolutionary Computation (CEC), 2229–2236.. IEEE, Cancun, Mexico. Czarnecki, K, Helsen S, Eisenecker U (2005) Formalizing cardinality-based feature models and their specialization. Softw Process Improv Pract 10(1): 7–29. da Mota Silveira Neto, PA, Machado IC, Mcgregor JD, de Almeida ES, de Lemos Meira SR (2011) A systematic mapping study of software product lines testing. Inf Softw Technol 53(5): 407–423. Deb, K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2): 182–197. Derrac, J, Garcìa S, Molina D, Herrera F (2011) A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol Comput 1(1): 3–18. Durillo, JJ, Nebro AJ (2011) jMetal: A Java framework for multi-objective optimization. Adv Eng Softw 42: 760–771. Ensan, F, Bagheri E, Gašević D (2012) Evolutionary search-based test generation for software product line feature models In: Proceedings of the 24th International Conference on Advanced Information Systems Engineering. CAiSE'12, 613–628.. Springer, Berlin. FaMa, FW (2014). http://www.isa.us.es/fama. Accessed July 2016. Ferreira, JM (2013) Teste de Linha de Produto de Software Baseado em Mutação do Diagrama de Características. Federal University of Paraná, Curitiba, Paraná, Brazil. in Portuguese. Ferreira, JM, Vergilio SR, Quinaia M (2013) A mutation approach to feature testing of software product lines In: International Conference on Software Engineering and Knowlodge Engineering (SEKE), 231–237.. Knowledge Systems Institute, Boston. Guo, J, White J, Wang G, Li J, Wang Y (2011) A genetic algorithm for optimized feature selection with resource constraints in software product lines. J Syst Softw 84(12): 2208–2221. Harman, M, Jia Y, Krinke J, Langdon WB, Petke J, Zhang Y (2014) Search based software engineering for software product line engineering: A survey and directions for future work In: Proceedings of the 18th International Software Product Line Conference - Volume 1. SPLC'14, 5–18.. ACM, New York. Harman, M, Mansouri SA, Zhang Y (2012) Search-based software engineering: Trends, techniques and applications. ACM Comput Surv 45(1): 1099–1161. Henard, C, Papadakis M, Harman M, Traon YL (2015) Combining multi-objective search and constraint solving for configuring large software product lines In: Proceedings of 37th International Conference on Software Engineering (ICSE 2015), 517–528.. IEEE, Florence. Henard, C, Papadakis M, Perrouin G, Klein J, Heymans P, Traon YL (2014) Bypassing the combinatorial explosion: Using similarity to generate and prioritize t-wise test configurations for software product lines. IEEE Trans Softw Eng 40(7): 650–670. Henard, C, Papadakis M, Perrouin G, Klein J, Traon YL (2013a) Assessing software product line testing via model-based mutation: An application to similarity testing In: IEEE Sixth International Conference on Software Testing, Verification and Validation Workshops, 188–197.. IEEE, Luxembourg. Henard, C, Papadakis M, Perrouin G, Klein J, Traon YL (2013b) Multi-objective test generation for software product lines In: Proceedings of the 17th International Software Product Line Conference. SPLC'13, 62–71.. ACM, New York. Henard, C, Papadakis M, Traon YL (2014) Mutation-based generation of software product line test configurations. In: Le Goues C Yoo S (eds)Search-Based Software Engineering, Lecture Notes in Computer Science, Vol. 8636, 92–106.. Springer Verlag, Fortaleza. Kang, KC, Cohen SG, Hess JA, Novak WE, Peterson AS (1990) Feature-Oriented Domain Analysis (FODA) Feasibility Study. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania. www.sei.cmu.edu/reports/90tr021.pdf. Karimpour, R, Ruhe G (2013) Bi-criteria genetic search for adding new features into an existing product line In: International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE), 34–38.. IEEE, San Francisco. Kruskal, WH, Wallis WA (1952) Use of ranks in one-criterion variance analysis. J Am Stat Assoc 47(260): 583–621. Article MATH Google Scholar Lamancha, BP, Usaola MP (2010) Testing product generation in software product lines using pairwise for features coverage In: Proceedings of the 22Nd IFIP WG 6.1 International Conference on Testing Software and Systems. ICTSS'10, 111–125.. Springer, Berlin. Lopez-Herrejon, RE, Chicano JF, Ferrer J, Egyed A, Alba E (2013) Multi-objective optimal test suite computation for software product line pairwise testing In: International Conference on Software Maintenance (ICSM), 404–407.. IEEE, Netherlands. Matnei Filho, RA, Vergilio SR (2014) Configuração baseada em busca de linha de produto de software: Resultados de um mapeamento sistemático In: V Workshop de Engenharia de Software Baseada em Busca (WESB).. Congresso Brasileiro de Desenvolvimento de Software (CBSoft), Maceió, AL. In Portuguese. Matnei Filho, RA, Vergilio SR (2015) A mutation and multi-objective test data generation approach for feature testing of software product lines In: Brazilian Symposium on Software Engineering (SBES).. Congresso Brasileiro de Desenvolvimento de Software (CBSoft), Belo Horizonte - MG. McGregor, JD (2001) Testing a software product line. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania. www.sei.cmu.edu/reports/01tr022.pdf. Mendonça, M, Branco M, Cowan D (2009) SPLOT: software product lines online tools In: 24th ACM SIGPLAN Conference Companion on Object Oriented Programming Systems Languages and Applications, 761–762.. ACM, USA. Olaechea, R, Rayside D, Guo J, Czarnecki K (2014) Comparison of exact and approximate multi-objective optimization for software product lines In: Proceedings of the 18th International Software Product Line Conference - Volume 1. SPLC'14, 92–101.. ACM, New York. Oster, S, Zink M, Lochau M, Grechanik M (2011) Pairwise feature-interaction testing for SPLs: potentials and limitations In: 15th International Software Product Line Conference, 6–168.. ACM, Munich. Pareto, V (1927) Manuel D'Economie Politique. Ams Press, Paris. Pereira, JA, Figueiredo E, Noronha T (2013) Modelo computacional para apoiar a configuração de produtos em linha de produtos de software In: V Workshop de Engenharia de Software Baseada em Busca (WESB), 80–89.. Congresso Brasileiro de Desenvolvimento de Software (CBSoft), Brasilia, DF, Brazil. In Portuguese. Perrouin, G, Sen S, Klein J, Baudry B, le Traon Y (2010) Automated and scalable t-wise test case generation strategies for software product lines In: Software Testing, Verification and Validation (ICST), 2010 Third International Conference On, 459–468.. IEEE, Paris. Pohl, K, Böckle G, van der Linden FJ (2005) Software Product Line Engineering: Foundations, Principles and Techniques. Springer, Secaucus. Sanchez, LE, Moisan S, Rigault JP (2013) Metrics on feature models to optimize configuration adaptation at run time In: International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE) 2013 1st, 39–44.. IEEE, San Francisco. Sayyad, AS, Ingram J, Menzies T, Ammar H (2013a) Optimum feature selection in software product lines: Let your model and values guide your search In: International Workshop on Combining Modelling and Search-Based Software Engineering (CMSBSE), 22–27. doi:http://dx.doi.org/10.1109/CMSBSE.2013.6604432. Sayyad, AS, Ingram J, Menzies T, Ammar H (2013b) Scalable product line configuration: A straw to break the camel's back In: IEEE/ACM 28th International Conference on Automated Software Engineering (ASE), 465–474.. IEEE, Palo Alto, California. Sayyad, AS, Menzies T, Ammar H (2013) On the value of user preferences in search-based software engineering: A case study in software product lines In: Proceedings of the 2013 International Conference on Software Engineering. ICSE'13, 492–501.. IEEE, San Francisco. Segura, S, Hierons RM, Benavides D, Ruiz-Cortés A (2010) Automated test data generation on the analyses of feature models: A metamorphic testing approach In: 3rd International Conference on Software Testing, Verification, and Validation, 35–44.. IEEE, Paris. SEI (2016) Product line hall of fame. Technical report. http://www.splc.net/fame.html. Uzuncaova, E, Khurshid S, Batory DF (2010) Incremental Test Generation for Software Product Lines. IEEE Trans Softw Eng 36(3): 309–322. Van Veldhuizen, DAV (1999) Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations. Air Force Institute of Technology, Wright Patterson AFB, OH, USA. Wang, S, Ali S, Gotlieb A (2013) Minimizing test suites in software product lines using weight-based genetic algorithms In: Proceeding of the Fifteenth Annual Conference on Genetic and Evolutionary Computation Conference. GECCO'13, 1493–1500.. ACM, New York. Wang, S, Buchmann D, Ali S, Gotlieb A, Pradhan D, Liaaen M (2014) Multi-objective test prioritization in software product line testing: An industrial case study In: Proceedings of the 18th International Software Product Line Conference - Volume 1. SPLC'14, 32–41.. ACM, New York. Weißleder, S, Sokenou D, Schlingloff H (2008) Reusing state machines for automatic test generation in product lines. In: Bauer T, Eichler H, Rennoch A (eds)MoTiP '08: Model-Based Testing in Practice.. ACM, USA. White, J, Galindo JA, Saxena T, Dougherty B, Benavides D, Schmidt DC (2014) Evolving feature model configurations in software product. J Syst Softw 87(0): 119–136. Wong, WE, Mathur AP, Maldonado JC (1995) Mutation versus all-uses: An empirical evaluation of cost, strength and effectiveness In: Software Quality and Productivity: Theory, Practice and Training, 258–265.. Chapman & Hall, Ltd., London. Zitzler, E, Künzli S (2004) Indicator-based selection in multiobjective search In: Parallel Problem Solving from Nature - PPSN VIII. Lecture Notes in Computer Science, Vol. 3242, 832–842.. Springer Verlag. Zitzler, E, Laumanns M, Thiele L (2001) SPEA2: Improving the strength pareto evolutionary algorithm for multiobjective optimization In: Evolutionary Methods for Design Optimization and Control with Applications to Industrial Problems, 95–100.. International Center for Numerical Methods in Engineering, Athens. Zitzler, E, Thiele L, Laumanns M, Fonseca CM, da Fonseca VG (2003) Performance assessment of multiobjective optimizers: an analysis and review. IEEE Trans Evol Comput 7(2): 117–132. The authors would like to thank CAPES and CNPq for financial support. Federal University of Paraná (UFPR), Computer Science Department, Curitiba, 19081, CEP: 81531-970, PR, Brazil Rui A. Matnei Filho & Silvia R. Vergilio Rui A. Matnei Filho Silvia R. Vergilio Correspondence to Rui A. Matnei Filho. Both authors are equal contributors for the work and for the definition of the approach. RMF is the main responsible for the implementation and experiments execution. SRV helped in the results analysis and writing. All authors read and approved the final manuscript. Matnei Filho, R.A., Vergilio, S.R. A multi-objective test data generation approach for mutation testing of feature models. J Softw Eng Res Dev 4, 4 (2016). https://doi.org/10.1186/s40411-016-0030-9 Software product line SBES 2015 (29th Brazilian Symposium on Software Engineering)
CommonCrawl
Full paper | Open | Published: 27 May 2015 Estimation of geomagnetically induced currents based on the measurement data of a transformer in a Japanese power network and geoelectric field observations Shinichi Watari1 Earth, Planets and Spacevolume 67, Article number: 77 (2015) | Download Citation Geomagnetically induced currents (GICs) have the potential to cause electric power blackouts. Hence, it is important to study the effects of GICs produced by intense geomagnetic storms. The measurements of GICs were conducted at the Memanbetsu substation, Hokkaido, between December 2005 and March 2008. We obtain the complementary cumulative distribution function (CCDF) of the measured GICs and the empirical equation to estimate GICs using the GIC data and geoelectric field observation data. GICs associated with the past intense geomagnetic storms, e.g., the March 13–15 storm and the October 29–30, 2003 storm, are estimated. The effects of geomagnetically induced currents (GICs) on electric power grids have been observed since the 1940s (Boteler 2001). An electric power blackout occurred in Quebec, Canada, during the March 13–15, 1989 storm (Boteler et al. 1989; Kappenman 1989; Boteler 2001). In southern Sweden, the GIC caused an electric power blackout on October 30, 2003 (Kappenman 2005). The occurrence of strong GICs is often associated with strong auroral electrojet currents at geomagnetically high latitudes (Thomson et al. 2011; Pulkkinen et al. 2012). Japan is located at a geomagnetically lower latitude compared to its geographical latitude. It is believed that the possibility of power grid problems caused by GICs is lower because of the country's location at geomagnetically low latitude. However, it was reported that long distance telegraph lines between Tokyo and the regions outside Tokyo (the Tokyo-Yokkaichi line, the Tokyo-Matsumoto line, the Tokyo-Ogasawara line, the Tokyo-Guam line, and so on) were affected by GICs caused by a geomagnetic storm on September 25, 1909 in Japan (Uchida 1909). Kappenman (2004) noted that large GICs are produced by geomagnetic disturbances driven by the intensification of the ring and magnetopause currents at low latitudes. Gaunt and Coetzee (2007) reported damage to transformers caused by GICs in South Africa as a result of series of intense geomagnetic storms between the end of October and the beginning of November in 2003. The geomagnetic latitude of South Africa is similar to that of Japan. This suggests a possibility of GIC effects in countries with lower geomagnetic latitudes such as Japan if an extremely large geomagnetic storm, such as the Carrington storm on September 1–2, 1859, occurs (Watari et al. 2001; Tsurutani et al. 2003; Committee on the Societal and Economic Impacts of Severe Space Weather Events 2008; The working group on extreme solar weather of the Royal Academy of Engineering 2013). Pulkkinen et al. (2012) and Bernabeu (2013) made studies on extreme 100-year geoelectric field scenarios. In Spain located in low geomagnetic latitude, Torta et al. (2014) studied the effect of GICs on the Spanish high-voltage power network. In a Japanese power network, GIC measurements were conducted between December 2005 and March 2008 as part of the close collaboration among National Institute of Information and Communications Technology (NICT), Hokkaido Electric Power Co., and Solar-Terrestrial Environment Laboratory (STEL) at Nagoya University (Watari et al. 2009). We obtained the empirical equations of GICs using the GIC measurement data and geoelectric field data and estimated GICs associated with the past intense geomagnetic storms. Using a current clump meter, we measured the electrical current of a grounded neutral point of a transformer at the Memanbetsu substation of the Hokkaido Power Co. between December 2005 and March 2008. The transformer is an ordinal three-phase transformer and is connected to the 187-kV line from the Ashoro power station as shown in Fig. 1. Both ends of the line are grounded and the Memanbetsu substation is an end point of this line. There is no branch between Memanbetsu and Ashoro. The line is in a south-west direction (approximately 40° west-wards from the north-south direction) and the length of the line is approximately 100 km. To analyze the 1-s GIC data, we used 1-s geomagnetic field data and the 1-s and 1-min geoelectric field data at the Memanbetsu Observatory, Japan Meteorological Agency (JMA), near the substation. The 1-hour geomagnetic field data since 1958 are also used for a statistical analysis. The geoelectric fields are measured by potential difference between two electrodes separately buried in the ground. Mean values of the data sets are subtracted to remove the offsets of the geoelectric field data in this paper. Configuration of 187-kV power line of GIC measurements Observed GICs in Hokkaido Table 1 presents a list of the large GICs observed during the period of our GIC measurements with geomagnetic disturbances observed at the Memanbetsu Observatory. The GIC events in Table 1 are associated with geomagnetic storms with the exception of two events, a positive bay and a sudden impulse (SI) event. GICs associated with positive bays, auroral activities in high latitude, are often observed in our measurements (Watari et al. 2009). We show two examples of observed GICs at the Memanbetsu substation associated with the geomagnetic storms with variations of geomagnetic fields at the Memanbetsu Observatory. Table 1 Large GIC events measured between December 2005 and March 2008 Figure 2 shows the GIC event associated with a geomagnetic storm on December 14–15, 2006. This geomagnetic storm was caused by a full halo coronal mass ejection (CME) on December 13, 2006. Magnetic and geoelectric fields observed at the Memanbetsu Observatory are also shown with the GIC data. According to this figure, temporal variations of By, east-west component of geomagnetic fields, show a good correlation with the measured GIC data. The maximum 1-s GIC value of 3.85 A is measured during the main phase of the storm. GIC associated with the geomagnetic storm on December 14–15, 2006 (top panel) and geomagnetic field observation (Bx (second panel), By (third panel), and Bz (bottom panel)) at the Memanbetsu Observatory Figure 3 shows the GICs and geomagnetic fields of the November 9–11, 2006 storm. This storm was caused by high-speed solar wind from a coronal hole and started gradually. As shown in Fig. 3, the maximum 1-s GIC value of 2.23 A measured around 10:00 UT associated with a large variation of By component of geomagnetic fields during the storm. Several spikes of the GIC data in Figs. 2 and 3 are artificial noises. GIC associated with the geomagnetic storm on November 9–10, 2006 (top panel) and geomagnetic field observation (Bx (second panel), By (third panel), and Bz (bottom panel)) at the Memanbetsu Observatory Probability occurrence of GICs and electric field observation There are several methods to estimate the probability of occurrence of the extreme space weather events (Hapgood 2011; Love 2012; Riley 2012; Kataoka 2013). Here, we used a method shown by Riley (2012) and Kataoka (2013) to estimate the probability of occurrence of the events exceeding some critical value. Probability of an event of magnitude equal to or greater than some critical values x crit is expressed below. $$ P\left(x\ge {x}_{\mathrm{crit}}\right)={\displaystyle \underset{x_{\mathrm{crit}}}{\overset{\infty }{\int }}}p\left({x}^{\hbox{'}}\right)dx\hbox{'} $$ This is called the complementary cumulative distribution function (CCDF). If the probability of occurrence, p(x) follows a power law distribution: $$ p(x)=C{x}^{-\alpha } $$ Equation 1 is expressed by the equation below. $$ P\left(x\ge {x}_{\mathrm{crit}}\right)=\frac{C}{\alpha -1}{x}_{\mathrm{crit}}^{-\alpha +1} $$ where α and C are some fixed values. The Poisson distribution is applicable for the probability of occurrence of one or more events equal to or greater than x crit during some time Δt assuming the events occur independently of one another. The probability of in Δt is given by the equation below. $$ P\left(x\ge {x}_{\mathrm{crit}},\ t=\Delta t\right)=1-{e}^{-N\frac{\Delta t}{\tau }P\left(x\ge {x}_{\mathrm{crit}}\right)} $$ where N is the number of events in the data set and τ is the total time span of the data set. Figure 4 shows occurrence number of 1-min averaged values of the GIC data between December 2005 and March 2008 and the CCDF of the values equal to or greater than GIC values of the horizontal axes. Two dashed lines show 95 % confidence interval. There is a power law relation for the value equal to or greater than 0.5 A (see the vertical dotted line in Fig. 4). Hence, N becomes the number of events equal to or greater than 0.5 A. The exponent value of α of 5.11 and the value of C of 0.0030 are obtained by using the data in Fig. 4. We can calculate the probability using Eq. 3. For example, the probability with 95 % confidence interval of GIC value equal to or greater than 10 A is 5.8 × 10−8 [3.5 × 10−10, 9.5 × 10−6]. The probability of GIC ≥ 100 A is 4.5 × 10−12 [2.6 × 10−14, 7.8 × 10−10]. According to Eq. 4, the probabilities with 95 % confidence interval of GIC ≥ 10 A in 50 and 100 years are 0.67 [0.0067, 1.0] and 0.89 [0.013, 1.0], respectively. And the probabilities of GIC ≥ 100 A in 50 and 100 years are 8.7 × 10−5 [5.0 × 10−7, 1.4 × 10−2] and 1.7 × 10−4 [1.0 × 10−6, 2.9 × 10−2], respectively. Occurrence number of 1-min averaged values of GICs and the CCDF of the values equal to or greater than 0.1 A. The dashed lines in the right panel show 95 % confidence interval of the fitting (solid line) Intense GICs are produced by large geoelectric fields. Hence, we study the 1-min geoelectric field data at the Memanbetsu Observatory near the substation between 1987 and 2014. Table 2 shows dates and geomagnetic disturbances when the largest geoelectric fields more than or equal to 0.1 V/km are observed. They are observed or associated with intense geomagnetic storms as shown in Table 2. Figure 5 shows the occurrence number and the CCDF of the values equal to or greater than geoelectric field values of horizontal axes. There is a power law relation for the value equal to or greater than 0.03 V/km (see the vertical dotted line in Fig. 5) and N is the number of data equal to or greater than 0.003 V/km. The exponent value of α of 4.98 and the value of C of 2.11 × 10−9 are obtained by using the data in Fig. 5. The probability with 95 % confidence interval of geoelectric fields |E| equal to or greater than 1.0 V/km is 5.3 × 10−10 [4.9 × 10−12, 5.7 × 10−8] and that of |E| ≥ 5.0 V/km is 8.7 × 10−13 [8.1 × 10−15, 9.3 × 10−11]. Using Eq. 4, the probabilities of |E| ≥ 1.0 V/km in 50 and 100 years are 1.2 × 10−2 [1.1 × 10−4, 7.3 × 10−1] and 2.4 × 10−2 [2.3 × 10−4, 9.3 × 10−1], respectively. The probabilities of |E| ≥ 5.0 V/km in 50 and 100 years are 2.0 × 10−5 [1.9 × 10−7, 2.2 × 10−3] and 4.0 × 10−5 [3.7 × 10−7, 4.3 × 10−3], respectively. Table 2 Largest geoelectric fields observed between 1958 and 2014 at the Memanbetsu Observatory Occurrence number of 1-min values of geoelectric field data between 1987 and 2014 at the Memanbetsu Observatory and the CCDF of the values equal to or greater than 0.001 V/km. The dashed lines in the right panel show 95 % confidence interval of the fitting (solid line) One-hour geomagnetic field data of Memanbetsu Observatory are available since 1958. Figure 6 shows occurrence number and the CCDF of the values equal to or absolute values |ΔH| of difference of 1 hour values of horizontal component of Memanbetsu geomagnetic field greater than values of horizontal axes. There is a power law relation for the value equal to or greater than 20 nT/hour (see the vertical dotted line in Fig. 6). N is the number of data equal to or greater than 20 nT/hour. The exponent value of α of 4.45 and the value of C of 3.90 × 103 are obtained by using the data in Fig. 6. The probability with 95 % confidence interval of |ΔH| equal to or greater than 500 nT/hour is 5.41 × 10−7[5.4 × 10−9, 5.5 × 10−4]. Using Eq. 4, the probabilities of |ΔH| ≥ 500 nT/hour in 50 and 100 years are 1.9 × 10−2 [2.1 × 10−3, 1.0] and 3.5 × 10−1 [4.3 × 10−3, 1.0], respectively. Occurrence number of absolute values of difference of 1-hour values of horizontal component of Memanbetsu geomagnetic field data between 1958 and 2013 and the CCDF of the values equal to or greater than 10 nT/hour. The dashed lines in the right panel show 95 % confidence interval of the fitting (solid line) Estimation of GIC and discussion According to Pulkkinen et al. (2007) and Torta et al. (2012), GIC at a site by geoelectric field is modeled by the equation $$ \mathrm{G}\mathrm{I}\mathrm{C}(t)=a{E}_x\left(\mathrm{t}\right)+b{E}_y(t)+\varepsilon (t) $$ where E x and E y are the horizontal components of the local geoelectric field and a and b are the site-dependent system parameters. ε(t) is the noise term. The values of a and b are given by the equations $$ a=\frac{<\mathrm{G}\mathrm{I}\mathrm{C}{E}_y><{E}_x{E}_y>-<\mathrm{G}\mathrm{I}\mathrm{C}{E}_x><{E}_y^2>}{<{E}_x{E}_y{>}^2-<{E}_x^2><{E}_y^2>} $$ $$ b=\frac{<\mathrm{G}\mathrm{I}\mathrm{C}{E}_x><{E}_x{E}_y>-<\mathrm{G}\mathrm{I}\mathrm{C}{E}_y><{E}_x^2>}{<{E}_x{E}_y{>}^2-<{E}_x^2><{E}_y^2>} $$ where < . > denotes the expectation taken over different realizations of the process. Figure 7 shows geoelectric fields observed at the Memanbetsu Observatory with the GIC data of the December 14–15, 2006 storm. In Fig. 7, north is 0° and angle increases clockwise. The black lines and crosses show 1-min data and the red lines and dots show 1-s data. By applying the Eqs. 6 and 7 to the 1-min data in Fig. 7, values of a and b are obtained as 38.1 and −7.4 A km/V, respectively. The blue line of Fig. 7 shows the estimated GIC using Eq. 5. The estimated value of GIC is approximately half of the observed value around the maximum of GIC in Fig. 7. Ogawa (2002) noted that it is necessary to consider a gain factor between the electric field at a site and the regional electric field. The gain factor is assumed to be 1 in the analysis of this paper. Figure 8 shows the 1-s geoelectric field values associated with the GIC values shown in Table 1. According to this figure, the GIC of the December 14–15, 2006 event, 3.85 A, is larger comparing with other events. Geoelectric fields observed at the Memanbetsu Observatory with the GIC data of the December 14–15, 2006 storm. Angle is 0° for the north and increases clockwise. Block lines and crosses show the 1-min data, and red lines and dots in the figure show the 1-s data. The blue line is the estimated GIC using the 1-min geoelectric field data One-s geoelectric field values associated GIC values shown in Table 1 Vertical component of geomagnetic variation at a site is expressed by linear combination of the two horizontal components as the equation below (Rikitake and Yokoyama 1953; Gregori and Lanzerotti 1980). $$ \Delta {B}_z=\alpha \Delta {B}_x+\beta \Delta {B}_y $$ Parkinson arrow, (−Re(α), − Re(β)) obtained by Eq. 8 is used for analysis of an underground conductivity anomaly. The arrow points direction of conductive layer. Figure 9 shows the values of Parkinson arrow for several frequencies calculated by using geomagnetic field data of Memanbetsu shown in Fig. 2. According to this figure, Parkinson arrow points eastward. This suggests existence of conductive layer in east of Memanbetsu. Uyeshima et al. (2001) suggested a significant coast effect in eastern Hokkaido based on the observation by the network-magnetotelluric (network-MT) method. Consideration of underground conductivity is necessary to understand the measured GIC data as a future work. Values of Parkinson arrow (−Re(α), −Re(β)) for several frequencies calculated for Memanbetsu geomagnetic filed data shown in Fig. 2 We calculated GICs of the March 13–15, 1989 storm and the October 29–30, 2003 storm using Eq. 5. Figures 10 and 11 show the electric field data and the estimated GICs using Eq. 5 with the values of a and b obtained from the December 14–15, 2006 event. According to our result, the expected maximum absolute values of GICs are approximately 6.2 and 4.2 A, respectively. One-min geoelectric field data observed at the Memanbetsu Observatory of the March 13–15, 1989 storm and the estimated GIC using the 1-min geoelectric field data Geoelectric fields observed at the Memanbetsu Observatory of the October 29–30, 2003 storm and the estimated GIC using the 1-min geoelectric field data. The black lines and crosses show the 1-min data and the red lines and dots in the figure show the 1-s data As another approach, GIC is estimated by the equation below (Boteler et al. 1994) for an equivalent circuit of a three-phase electric power line earthed on both ends shown in Fig. 12 when the same earthing resistance, R s (Ω) and the winding resistance of the transformers R w (Ω) are assumed for both ends. Equivalent circuit of the power line $$ \mathrm{G}\mathrm{I}\mathrm{C}=\frac{E_{\parallel }L}{\frac{rL+2{R}_w}{3}+2{R}_s}=\frac{E_{\parallel }}{\frac{r}{3}+\frac{2{R}_w}{3L}+\frac{2{R}_s}{L}} $$ where E ∥ (V/km) is the uniform electric field parallel to a power line, r (Ω/km) is the power line resistance per unit length, and L (km) is length of the power line. GIC is proportional to electric fields as shown Eq. 9. For the sufficiently long power line, Eq. 9 becomes $$ \mathrm{G}\mathrm{I}\mathrm{C}\sim \frac{3}{r}{E}_{\parallel } $$ Equation 10 gives an upper limit of GIC by E ∥. If we assume that r is 0.05 Ω /km for 187-kV power lines, R S is 0.1 Ω, R w is 0.1 Ω, and L is 100 km, Eq. 9 becomes $$ \mathrm{G}\mathrm{I}\mathrm{C}=\frac{E_{\parallel }}{\frac{0.05}{3}+\frac{0.2}{300}+\frac{0.2}{100}}\cong 51.7{E}_{\parallel } $$ We can estimate GIC using E ∥ as shown in the Eq. 11. According to the result from Eqs. 5, 6, and 7, GIC is also calculated using Eq. 12. $$ \mathrm{G}\mathrm{I}\mathrm{C}=\sqrt{a^2+{b}^2}{E}_{\parallel}\cong 38.8{E}_{\parallel } $$ Using a DC power source as a proxy of GICs, Takasu et al. (1994) performed experiments on DC excitation for scale models with linear dimensions that were one third to one half of those of actual power transformers. Distortion of wave forms of AC currents was observed by the applied DC currents of several tens ampere to the scale models. A maximum temperature rise of approximately 110 °C was measured in the case of the core plate and the core support made by magnetic steel for a GIC level of approximately 200 A for three phases. From Eq. 12, the geoelectric field parallel to the power line, E ∥ of 5.2 V/km, is necessary for the GIC level of 200 A. We need further studies on GIC levels affecting the power grids to know an effective level of geoelectric fields. We studied the GIC data measured at the Memanbetsu substation, Hokkaido, using the geoelectric and geomagnetic field data observed at the Memanbetsu Observatory, JMA, and obtained the empirical equation to estimate GICs associated with the past intense geomagnetic storms. GICs associated with the March 13–15, 1989 storm and the October 29–30, 2003 storm were estimated by using the Eq. 8. Estimated maximum absolute values of the GICs are approximately 6.4 and 4.2 A, respectively. Our estimation seems to be approximately half of the observed values according to the December 14–15, 2006 event shown in Fig. 7. It is necessary to consider the effect of regional underground conductivity for the estimation as a future work. The CCDF of the GIC data is calculated. According to it, the probabilities of extremely large values of the GIC seem to be low. However, it is based on the measurement of approximately 2 years and there is a large uncertainty. We need more long-term data as noted by Hapgood (2011). Bernabeu EE (2013) Modeling geomagnetically induced currents in Dominoin Virginia power using extreme 100-year geoelectric field scenarios—part1. IEEE Trans Power Delivery 28(1):516–23. doi:10.1109/TPWRD.2012.2224141 Boteler DH (2001) Space weather effects on power systems. In: Song P, Singer H, Siscoe G (eds) Space weather. AGU, Washington D.C. ISBN 0-87590-984-1 Boteler DH, Shier RM, Watanabe T, Horita RE (1989) Effects of geomagnetically induced currents in the B. C. Hydro 500 kV system. IEEE Trans Power Systems 4(1):818–23 Boteler DH, Bui-Van Q, Lemay J (1994) Directional sensitivity to geomagnetically induced currents of the Hydoro-Quebec 735 kV power system. IEEE Trans Power Delivery 9(4):1963–71 Committee on the Societal and Economic Impacts of Severe Space Weather Events (2008) Severe space weather events—understanding societal and economic impacts: workshop reports. The National Academies Press, Washington D.C. ISBN 0-309-12770-X Gaunt CT, Coetzee G (2007) Transformer failure in regions incorrectly considered to have low GIC-risk. Paper presented at IEEE Power Tech 2007, Lausanne, Switzerland Gregori GP, Lanzerotti LJ (1980) Geomagnetic depth sounding by induction arrow representation: a review. Reviews Geophys Space Phys 18(1):203–9 Hapgood MA (2011) Towards a scientific understanding of the risk from extreme space weather. Adv Space Res 47:2059–72. doi:10.1016/j.asr.2010.02.007 Kappenman JG (1989) Effects of geomagnetic disturbances on power systems. IEEE Power Eng Rev 9(10):15–20 Kappenman JG (2004) Effects of space weather on technology infrastructure. In: Daglis IA (ed) Space weather and the vulnerability of electric power grids. Kluwer Academic Publishers, NATO Science Series, pp 257–99. ISBN 1-4020-2747-8 Kappenman JG (2005) An overview of the impulsive geomagnetic field disturbances and power grid impacts associated with the violent sun-earth connection events of 29–31 October 2003 and a comparative evaluation with other contemporary storms. Space Weather 3:S08C01. 10.1029/2004SW000128 Kataoka R (2013) Probability of occurrence of extreme magnetic storms. Space Weather 11:1–5. doi:10.1002/swe.20044 Love JJ (2012) Credible occurrence probabilities for extreme geophysical events: earthquakes, volcanic eruption, magnetic storms. Geophys Res Lett 39, L10301. doi:10.1029/2012GL051431 Ogawa Y (2002) On two-dimensional modeling of magnetotelluric field data. Surv Geophys 23(2–3):251–73. doi:10.1023/A:1015021006018 Pulkkinen A, Pirjola R, Viljanen A (2007) Determination of ground conductivity and system parameters for optimal modeling of geomagnetically induced current flow in technological systems. Earth Planets Space 59:999–1006 Pulkkinen A, Bernabeu E, Eichner J, Beggan C, Thomson AWP (2012) Generation of 100-year geomagnetically induced current scenarios. Space Weather 10, S04003. doi:10.1029/2011SW000750 Rikitake T, Yokoyama I (1953) Anomalous relations between H and Z components of transient geomagnetic variations. J Geomagn Geoelec 5(3):59–65 Riley P (2012) On the probability of occurrence of extreme space weather. Space Weather 10, S02012. doi:10.1029/2011SW000734 Takasu N, Oshi T, Miyashita F, Saito S, Fujiwara Y (1994) An experimental analysis of excitation of transformers by geomagnetically induced currents. IEEE Trans Power Delivery 9(2):1173–9 The working group on extreme solar weather of the Royal Academy of Engineering (2013) Extreme space weather: impacts on engineered systems and infrastructure. Royal Academy of Engineering, London. ISBN 1-903496-95-0 Thomson A, Reay S, Dawson E (2011) Quantifying extreme behavior in geomagnetic activity. Space Weather 9, S10001. doi:10.1029/2011SW000696 Torta JM, Serrano L, Regue JR, Sanchez AM (2012) Geomagnetically induced currents in a power grid of northeastern Spain. Space Weather 10, S06002. doi:10.1029/2012SW000793 Torta JM, Marsal S, Quintana M (2014) Assessing the hazard from geomagnetically induced currents to the entire high-voltage power network in Spain. Earth Planets Space 66:87. doi:10.1186/1880-5981-66-87 Tsurutani BT, Gonzalez WD, Lakhina GS, Alex S (2003) The extreme magnetic storm of 1–2 September 1859. J Geophys Res 108(A7):1268. 10.1029/2002JA009504 Uchida T (1909) Earth current on 25 September 1909. J Inst Electrical Eng Japan 29(255):701–21 (in Japanese) Uyeshima M, Utada H, Nishida Y (2001) Network-magnetotelluric method and its first results in central and eastern Hokkaido, NE Japan. Geophys J Int 146:1–19 Watari S, Kunitake M, Watanabe T (2001) The Bastille day (14 July 2000) event in historical large sun-earth connection events. Sol Phys 204:423–36 Watari S, Kunitake M, Kitamura K, Hori T, Kikuchi T, Shiokawa K, Nishitani N, Kataoka R, Kamide Y, Aso T, Watanabe Y, Tsuneta Y (2009) Measurements of geomagnetically induced current in a power grid in Hokkaido, Japan. Space Weather 7, S03002. doi:10.1029/2008SW000417 The GIC measurements in Hokkaido, Japan, were conducted as part of the close collaboration among NICT, Hokkaido Electric Power Co., and STEL at Nagoya University. We would like to thank the Hokkaido Electric Power Co. for the GIC measurements at the Memanbetsu substation. We thank the Kakiok3a Magnetic Observatory, JMA, for providing us with geomagnetic and geoelectric field data and the lists of magnetic storms at the Memanbetsu Observatory. Finally, we wish to acknowledge the anonymous reviewers and the editor, Dr. Ikuko Fujii, for their valuable comments and suggestions. National Institute of Information and Communications Technology, 4-2-1 Nukuikita, Koganei, Tokyo, 184-8795, Japan Shinichi Watari Search for Shinichi Watari in: Correspondence to Shinichi Watari. The author declares that he has no competing interests. Geomagnetically induced current (GIC) Geomagnetic storm Geoelectric fields Extremely Severe Space Weather and Geomagnetically Induced Currents in Regions with Locally Heterogeneous Ground Resistivity
CommonCrawl
Self-organized hydrodynamics with density-dependent velocity KRM Home A kinetic equation for economic value estimation with irrationality and herding March 2017, 10(1): 215-237. doi: 10.3934/krm.2017009 Deterministic particle approximation of the Hughes model in one space dimension Marco Di Francesco 1,, , Simone Fagioli 1, , Massimiliano Daniele Rosini 2, and Giovanni Russo 3, DISIM, Università degli Studi dell'Aquila, via Vetoio 1 (Coppito), 67100 L'Aquila (AQ), Italy Instytut Matematyki, Uniwersytet Marii Curie-Skłodowskiej, plac Marii Curie-Skłodowskiej 1, 20-031 Lublin, Poland Dipartimento di Matematica ed Informatica, Università di Catania, Viale Andrea Doria 6,95125 Catania, Italy * Corresponding author: Marco Di Francesco Received March 2016 Revised July 2016 Published November 2016 Figure(8) In this paper we present a new approach to the solution to a generalized version of Hughes' models for pedestrian movements based on a follow-the-leader many particle approximation. In particular, we provide a rigorous global existence result under a smallness assumption on the initial data ensuring that the trace of the solution along the turning curve is zero for all positive times. We also focus briefly on the approximation procedure for symmetric data and Riemann type data. Two different numerical approaches are adopted for the simulation of the model, namely the proposed particle method and a Godunov type scheme. Several numerical tests are presented, which are in agreement with the theoretical prediction. Keywords: Crowd dynamics, conservation laws, eikonal equation, Hughes' model for pedestrian flows, particle approximation. Mathematics Subject Classification: Primary: 35L65; Secondary: 90B20. Citation: Marco Di Francesco, Simone Fagioli, Massimiliano Daniele Rosini, Giovanni Russo. Deterministic particle approximation of the Hughes model in one space dimension. Kinetic & Related Models, 2017, 10 (1) : 215-237. doi: 10.3934/krm.2017009 D. Amadori and M. Di Francesco, The one-dimensional Hughes model for pedestrian flow: Riemann-type solutions, Acta Math. Sci. Ser. B Engl. Ed., 32 (2012), 259-280, http://dx.doi.org/10.1016/S0252-9602(12)60016-2. doi: 10.1016/S0252-9602(12)60016-2. Google Scholar D. Amadori, P. Goatin and M. D. Rosini, Existence results for Hughes' model for pedestrian flows, J. Math. Anal. Appl., 420 (2014), 387-406, http://dx.doi.org/10.1016/j.jmaa.2014.05.072. doi: 10.1016/j.jmaa.2014.05.072. Google Scholar B. Andreianov, C. Donadello, U. Razafison and M. D. Rosini, Riemann problems with non{local point constraints and capacity drop, Mathematical Biosciences and Engineering, 12 (2015), 259-278, http://aimsciences.org/journals/displayArticlesnew.jsp?paperID=10696. Google Scholar B. Andreianov, C. Donadello and M. D. Rosini, Crowd dynamics and conservation laws with nonlocal constraints and capacity drop, Math. Models Methods Appl. Sci., 24 (2014), 2685-2722, . doi: 10.1142/S0218202514500341. Google Scholar B. Andreianov, C. Donadello, U. Razafison and M. D. Rosini, Qualitative behaviour and numerical approximation of solutions to conservation laws with non-local point constraints on the flux and modeling of crowd dynamics at the bottlenecks, ESAIM: M2AN, http://dx.doi.org/10.1051/m2an/2015078.Google Scholar C. Bardos, A. Y. le Roux and J.-C. Nédélec, First order quasilinear equations with boundary conditions, Comm. Partial Differential Equations, 4 (1979), 1017-1034, http://dx.doi.org/10.1080/03605307908820117. doi: 10.1080/03605307908820117. Google Scholar N. Bellomo and A. Bellouquid, On the modeling of crowd dynamics: Looking at the beautiful shapes of swarms, Networks and Heterogeneous Media, 6 (2011), 383-399. doi: 10.3934/nhm.2011.6.383. Google Scholar D. Braess, A. Nagurney and T. Wakolbinger, On a paradox of traffic planning, Transportation Science, 39 (2005), 446-450. doi: 10.1287/trsc.1050.0127. Google Scholar A. Bressan, Hyperbolic Systems of Conservation Laws vol. 20 of Oxford Lecture Series in Mathematics and its Applications, Oxford University Press, Oxford, 2000. Google Scholar M. Burger, M. Di Francesco, P. A. Markowich and M.-T. Wolfram, Mean field games with nonlinear mobilities in pedestrian dynamics, Discrete Contin. Dyn. Syst. Ser. B, 19 (2014), 1311-1333, http://dx.doi.org/10.3934/dcdsb.2014.19.1311. doi: 10.3934/dcdsb.2014.19.1311. Google Scholar J. A. Carrillo, S. Martin and M. -T. Wolfram, A local version of the hughes model for pedestrian flow, to appear in Mathematical Models and Methods in the Applied Sciences, 2015, Url: http://arxiv.org/abs/1501.07054.Google Scholar R.M. Colombo and M.D. Rosini, Existence of nonclassical solutions in a pedestrian flow model, Nonlinear Anal. Real World Appl., 10 (2009), 2716-2728. doi: 10.1016/j.nonrwa.2008.08.002. Google Scholar R. M. Colombo, P. Goatin and M. D. Rosini, A macroscopic model for pedestrian flows in panic situations, in Current advances in nonlinear analysis and related topics, vol. 32 of GAKUTO Internat. Ser. Math. Sci. Appl., Gakkōtosho, Tokyo, 2010,255-272. Google Scholar R. M. Colombo and M. D. Rosini, Pedestrian flows and non-classical shocks, Math. Methods Appl. Sci., 28 (2005), 1553-1567, http://dx.doi.org/10.1002/mma.624. doi: 10.1002/mma.624. Google Scholar C.M. Dafermos, Polygonal approximations of solutions of the initial value problem for a conservation law, J. Math. Anal. Appl., 38 (1972), 33-41. doi: 10.1016/0022-247X(72)90114-X. Google Scholar M. DiFrancesco, S. Fagioli, M. D. Rosini and G. Russo, Follow-the-leader approximation of the Dirichlet boundary value problem for a scalar conservation law, In preparation.Google Scholar M. DiFrancesco and M.D. Rosini, Rigorous derivation of nonlinear scalar conservation laws from follow-the-leader type models via many particle limit, Arch. Ration. Mech. Anal., 217 (2015), 831-871. doi: 10.1007/s00205-015-0843-4. Google Scholar M. Di Francesco, P. A. Markowich, J.-F. Pietschmann and M.-T. Wolfram, On the Hughes' model for pedestrian flow: The one-dimensional case, J. Differential Equations, 250 (2011), 1334-1362, http://dx.doi.org/10.1016/j.jde.2010.10.015. doi: 10.1016/j.jde.2010.10.015. Google Scholar F. Dubois and P. LeFloch, Boundary conditions for nonlinear hyperbolic systems of conservation laws, J. Differential Equations, 71 (1988), 93-122. doi: 10.1016/0022-0396(88)90040-X. Google Scholar N. El-Khatib, P. Goatin and M. D. Rosini, On entropy weak solutions of Hughes' model for pedestrian motion, Z. Angew. Math. Phys., 64 (2013), 223-251, http://dx.doi.org/10.1007/s00033-012-0232-x. doi: 10.1007/s00033-012-0232-x. Google Scholar P. Goatin and M. Mimault, The wave-front tracking algorithm for Hughes' model of pedestrian motion, SIAM J. Sci. Comput., 35 (2013), B606-B622, http://dx.doi.org/10.1137/120898863. doi: 10.1137/120898863. Google Scholar D. Helbing, A. Johansson and H.Z. Al-Abideen, An empirical study, Physical Review E (Statistical, Nonlinear, and Soft Matter Physics), 75 (2007), 046109. doi: 10.1103/PhysRevE.75.046109. Google Scholar L.F. Henderson, The statistics of crowd fluids, Nature, 229 (1971), 381-383. doi: 10.1038/229381a0. Google Scholar R. L. Hughes, A continuum theory for the flow of pedestrians, Transportation Research Part B: Methodological, 36 (2002), 507-535, http://www.sciencedirect.com/science/article/pii/S0191261501000157. doi: 10.1016/S0191-2615(01)00015-7. Google Scholar R. L. Hughes, The flow of human crowds, in Annual review of fluid mechanics, vol. 35 of Annu. Rev. Fluid Mech., Annual Reviews, Palo Alto, CA, 2003,169-182, http://dx.doi.org/10.1146/annurev.fluid.35.101101.161136. doi: 10.1146/annurev.fluid.35.101101.161136. Google Scholar K. H. Karlsen, K. -A. Lie and N. H. Risebro, A front tracking method for conservation laws with boundary conditions, in Hyperbolic problems: theory, numerics, applications, Vol. Ⅰ (Zürich, 1998), vol. 129 of Internat. Ser. Numer. Math., Birkhäuser, Basel, 1999,493-502. Google Scholar S.N. Kruzhkov, First order quasilinear equations with several independent variables., Mat. Sb. (N.S.), 81 (1970), 228-255. Google Scholar M.J. Lighthill and G.B. Whitham, On kinematic waves. ii. a theory of traffic flow on long crowded roads, Royal Society of London Proceedings Series A, 229 (1955), 317-345. doi: 10.1098/rspa.1955.0089. Google Scholar B. Maury, A. Roudneff-Chupin and F. Santambrogio, A macroscopic crowd motion model of gradient flow type, Math. Models Methods Appl. Sci., 20 (2010), 1787-1821. doi: 10.1142/S0218202510004799. Google Scholar M. Muramatsu and T. Nagatani, Jamming transition in two-dimensional pedestrian traffic, Physica A, 275 (2000), 281-291. doi: 10.1016/S0378-4371(99)00447-1. Google Scholar E. Y. Panov, Existence of strong traces for quasi-solutions of multidimensional conservation laws, J. Hyperbolic Differ. Equ., 4 (2007), 729-770, http://dx.doi.org/10.1142/S0219891607001343. doi: 10.1142/S0219891607001343. Google Scholar B. Piccoli and A. Tosin, Pedestrian flows in bounded domains with obstacles, Contin. Mech. Thermodyn., 21 (2009), 85-107. doi: 10.1007/s00161-009-0100-x. Google Scholar P.I. Richards, Shock waves on the highway, Operations Res., 4 (1956), 42-51. doi: 10.1287/opre.4.1.42. Google Scholar M. D. Rosini, Macroscopic Models for Vehicular Flows and Crowd Dynamics: Theory and Applications, Understanding Complex Systems, Springer, Heidelberg, 2013, Classical and non-classical advanced mathematics for real life applications. doi: 10.1007/978-3-319-00155-5. Google Scholar A. Treuille, S. Cooper and Z. Popovic, Continuum crowds, ACM Transaction on Graphics, 25 (2006), 1160-1168, Proceedings of SCM SIGGRAPH 2006. doi: 10.1145/1179352.1142008. Google Scholar M. Twarogowska, P. Goatin and R. Duvigneau, Macroscopic modeling and simulations of room evacuation, Appl. Math. Model., 38 (2014), 5781-5795, http://dx.doi.org/10.1016/j.apm.2014.03.027. doi: 10.1016/j.apm.2014.03.027. Google Scholar A. Vasseur, Strong traces for solutions of multidimensional scalar conservation laws, Arch. Ration. Mech. Anal., 160 (2001), 181-193, http://dx.doi.org/10.1007/s002050100157. doi: 10.1007/s002050100157. Google Scholar Figure 1. Evolution of $R(t,x)$ with initial data $\bar{\rho}(x)=0.25$ at times $t=0$, $t=0.5$ and $t=1$. In the particle simulations the blu dots represent particles positions, whereas the red line is the discretized density. The magenta vertical line describes the turning point evolution Figure Options Download as PowerPoint slide Figure 2. Evolution of $R(t,x)$ with initial data $\bar{\rho}(x)=0.6$ at times $t=0$, $t=0.5$ and $t=1$ Figure 3. Evolution of $R(t,x)$ with initial data $\bar{\rho}(x)$ given in (24) Figure 6. Mass transfer across the turning point and non-classical shock with initial data $\bar{\rho}$ given in (26) Figure 7. Comparison between the Follow-the-Leader scheme (in red) and the Godunov scheme (in blu) Figure 8. Increasing the number of particles, and then the cells integration for the Godunov method, the agreement between the two methods greatly improves. Here we consider the initial datum $\bar{\rho}=0.3\times\,\mathbf{1}_{[-1,0]}+0.7\times \,\mathbf{1}_{(0,1]}$ and we set $N=1000$ with $1500$ time iterations Fabio Camilli, Adriano Festa, Silvia Tozza. A discrete Hughes model for pedestrian flow on graphs. Networks & Heterogeneous Media, 2017, 12 (1) : 93-112. doi: 10.3934/nhm.2017004 Dirk Hartmann, Isabella von Sivers. Structured first order conservation models for pedestrian dynamics. Networks & Heterogeneous Media, 2013, 8 (4) : 985-1007. doi: 10.3934/nhm.2013.8.985 Christophe Chalons, Paola Goatin, Nicolas Seguin. General constrained conservation laws. Application to pedestrian flow modeling. Networks & Heterogeneous Media, 2013, 8 (2) : 433-463. doi: 10.3934/nhm.2013.8.433 Boris Andreianov, Kenneth H. Karlsen, Nils H. Risebro. On vanishing viscosity approximation of conservation laws with discontinuous flux. Networks & Heterogeneous Media, 2010, 5 (3) : 617-633. doi: 10.3934/nhm.2010.5.617 Abdul M. Kamareddine, Roger L. Hughes. Towards a mathematical model for stability in pedestrian flows. Networks & Heterogeneous Media, 2011, 6 (3) : 465-483. doi: 10.3934/nhm.2011.6.465 Alexander Bobylev, Mirela Vinerean, Åsa Windfäll. Discrete velocity models of the Boltzmann equation and conservation laws. Kinetic & Related Models, 2010, 3 (1) : 35-58. doi: 10.3934/krm.2010.3.35 María Santos Bruzón, Tamara María Garrido. Symmetries and conservation laws of a KdV6 equation. Discrete & Continuous Dynamical Systems - S, 2018, 11 (4) : 631-641. doi: 10.3934/dcdss.2018038 Mark A. Peletier, Marco Veneroni. Stripe patterns and the Eikonal equation. Discrete & Continuous Dynamical Systems - S, 2012, 5 (1) : 183-189. doi: 10.3934/dcdss.2012.5.183 Veronika Schleper. A hybrid model for traffic flow and crowd dynamics with random individual properties. Mathematical Biosciences & Engineering, 2015, 12 (2) : 393-413. doi: 10.3934/mbe.2015.12.393 Chadi Nour. Construction of solutions to a global Eikonal equation. Conference Publications, 2007, 2007 (Special) : 779-783. doi: 10.3934/proc.2007.2007.779 María Rosa, María de los Santos Bruzón, María de la Luz Gandarias. Lie symmetries and conservation laws of a Fisher equation with nonlinear convection term. Discrete & Continuous Dynamical Systems - S, 2015, 8 (6) : 1331-1339. doi: 10.3934/dcdss.2015.8.1331 Gianluca Crippa, Laura V. Spinolo. An overview on some results concerning the transport equation and its applications to conservation laws. Communications on Pure & Applied Analysis, 2010, 9 (5) : 1283-1293. doi: 10.3934/cpaa.2010.9.1283 Yuri B. Gaididei, Christian Marschler, Mads Peter Sørensen, Peter L. Christiansen, Jens Juul Rasmussen. Pattern formation in flows of asymmetrically interacting particles: Peristaltic pedestrian dynamics as a case study. Evolution Equations & Control Theory, 2019, 8 (1) : 73-100. doi: 10.3934/eect.2019005 Anupam Sen, T. Raja Sekhar. Structural stability of the Riemann solution for a strictly hyperbolic system of conservation laws with flux approximation. Communications on Pure & Applied Analysis, 2019, 18 (2) : 931-942. doi: 10.3934/cpaa.2019045 Avner Friedman. Conservation laws in mathematical biology. Discrete & Continuous Dynamical Systems - A, 2012, 32 (9) : 3081-3097. doi: 10.3934/dcds.2012.32.3081 Mauro Garavello. A review of conservation laws on networks. Networks & Heterogeneous Media, 2010, 5 (3) : 565-581. doi: 10.3934/nhm.2010.5.565 Mauro Garavello, Roberto Natalini, Benedetto Piccoli, Andrea Terracina. Conservation laws with discontinuous flux. Networks & Heterogeneous Media, 2007, 2 (1) : 159-179. doi: 10.3934/nhm.2007.2.159 Len G. Margolin, Roy S. Baty. Conservation laws in discrete geometry. Journal of Geometric Mechanics, 2019, 11 (2) : 187-203. doi: 10.3934/jgm.2019010 Simone Cacace, Maurizio Falcone. A dynamic domain decomposition for the eikonal-diffusion equation. Discrete & Continuous Dynamical Systems - S, 2016, 9 (1) : 109-123. doi: 10.3934/dcdss.2016.9.109 Jaime Cruz-Sampedro. Schrödinger-like operators and the eikonal equation. Communications on Pure & Applied Analysis, 2014, 13 (2) : 495-510. doi: 10.3934/cpaa.2014.13.495 2018 Impact Factor: 1.38 PDF downloads (8) HTML views (2) Marco Di Francesco Simone Fagioli Massimiliano Daniele Rosini Giovanni Russo Figures and Tables
CommonCrawl
Association Between HLA DQBl * 03 and Cervical Intra-epithelial Neoplasia Kunle Odunsi1, George Terry2, Linda Ho2, John Bell3, Jack Cuzick4 & Trivadi S. Ganesan1 Molecular Medicine volume 1, pages161–171(1995)Cite this article Cervical intraepithelial neoplasia (CIN) and cervical cancer have been shown to be strongly associated with infection by human papillomavirus (HPV). However, other factors may be contributory in the progression from normal epithelium to CIN and cervical cancer, since not all women with HPV infection develop disease. Recently, it was demonstrated that there is a high risk for cervical cancer and CIN in women with HLA DQB1 * 03 (RR = 7.1, p < 0.0009) (1). Subsequent reports have been conflicting, due to sample size, genetic heterogeneity and differences in the techniques employed for the detection of HLA DQB1 * 03. DNA from cervical smears of 178 women with CIN and 420 controls with normal cervical cytology was analyzed by polymerase chain reaction (PCR) with type-specific primers for HPV 16, 18, 31, and 33. The DNA from test and control samples were also analyzed by a novel PCR technique, which mutates the first base of codon 40 (DQ alleles) from T to G to create an artificial restriction site for an enzyme Mlu I that distinguish DQB1 * 03 from other alleles and are confirmed by digestion of amplified DNA with Mlu I. Further analysis of individual DQB1 * 03 alleles was performed using PCR and allele-specific primers. One hundred forty-four (34%) out of 420 controls (all HPV 16, 18, 31, or 33 negative and normal cytology), 37/66 (56%) of CIN I and 72/112 (64%) of CIN III were positive for DQB1 * 03 (trend test, p < 0.001, χ2 = 37.3). A significant association was observed between DQB1 * 03 and CIN (odds ratio 3.03; 95% CI 2.11–3.45). Of women with CIN, 131/ 178 (73.5%) had HPV (types 16, 18, 31, or 33) infection. There was a significant association between DQB1 * 03 and presence of HPV (odds ratio 3.43; 95% CI 2.25–5.10). Homozygosity for DQB1 * 03 was more strongly associated with CIN than heterozygosity (odds ratios 4.0 and 2.63, respectively); and for the presence of HPV (odds ratio 4.47; 95% CI 2.58–7.77). HLA DQB1 * 0301 was the most strongly associated allele with CIN and HPV (odds ratios 2.53 and 2.63, respectively). HLA DQB1 * 03 is associated significantly with CIN and may be permissive for HPV infection. Further analysis of class II HLA typing in CIN is necessary to evaluate this association. The hypothesis that human papillomavirus (HPV) is causally associated with cervical intraepithelial neoplasia (CIN) and cervical cancer is well supported by several lines of evidence. First, HPV is detected in the cervical tissue of majority of patients with disease, compared with controls (2, 3). Second, the oncogenic types of HPV can transform and immortalize human keratinocytes in vitro (4, 5). Third, the HPV16 E6 and E7 proteins inactivate endogenous tumour suppressor proteins p53 and Rb, thereby abrogating normal cell control mechanisms (6). Despite this, the majority of women infected with HPV do not develop CIN or cancer. Several cofactors have been suggested as being important. A recent analysis of HLA type has shown a significant association between HLA class II, in particular HLA DQB1 * 03, and cervical cancer but not for class I (1). Subsequent reports have not been consistent (Table 1) presumably due to the size and type of population examined and differences in techniques used for the HLA and HPV typing. As cervical intraepithelial neoplasia (CIN) is an early stage prior to the development of cervical cancer, any association with HLA class II that could be demonstrated would be important. This paper reports the results of such a study conducted in a Caucasian population, where both controls and test samples were typed for HPV and HLA DQB1 * 03. Table 1 Summary of studies on CIN, cervical cancer, and HLA Cervical smears were taken using standard procedures from healthy women and those with CIN attending the outpatient clinics at City Hospital, Nottingham; Whittington Hospital, London, and the Margaret Pyke center, London. Further processing of samples for PCR was performed as described (7, 8). Histological Diagnosis Histological classification into the three groups of normal, CIN I, and CIN III were carried out according to established criteria (9, 10). HPV Typing HPV typing by PCR was performed using type specific oligonucleotide primers (HPV 16, 18, 31, 33) as previously described. (7, 8) HLA Typing A rapid system to type the HLA DQB1 * 03 was developed using primer directed mutagenesis by PCR and restriction enzyme digestion (artificial restriction fragment lenth polymorphism [ARFLP]-PCR) (11). All DQB1 * 03 alleles possess an A as the last base of codon 38 followed by CGC (codon 39) and TTC (codon 40) (Fig. 1). By mutating the first base of codon 40, from "T" to "G", a Mlu I site (ACGCGT) is created in the DQB1 * 03 allele. Non DQB1 * 03 alleles possess a "G" as the last base of codon 38, and a Mlu I site cannot be created this way. The forward primer A is used in conjunction with the reverse mutagenesis primer B: A schematic diagram showing the principle of ARFLP-PCR $$\begin{array}{*{20}{c}} {A:\;5'\;AGGATCCCCGCAGAGGATTTCGTGTACC\;\;3'\;(forward)\;\;\;} \\ {B:\;5'\;CCGGTACACCCCCACGTCGCTGTCGACGCG\;\;3'\;(reverse)} \end{array}$$ (The mutated base is underlined.) PCR was carried out as described in 50 vl volume using a Techne-PC 3 machine, according to manufacturer's instructions. The initial denaturation was at 94°C for 8 min followed by 30 cycles each at 94°C for 1 min, 60°C for 1 min, and 72°C for 1 min. There was a final extension step at 72°C for 15 min. Following amplification 10 µl of the PCR product was restricted with 50 units of Mlu I (Boehringer Mannheim) in a volume of 20 µl at 37°C overnight using manufacturer's buffer. The products were analyzed by electrophoresis on 4% agarose gels (Metaphor, Flowgen) (Fig. 2). A 4% agarose gel showing amplified DNA after PCR with primers A and B with and without digestion by Mlu I The size of amplified DNA is 145 bp and on digestion with Mlu I, a 115 and 30 bp product is produced in DQB1 * 03 homozygotes. DNA for all controls were from the British Society for Histocompatibilty and Immunogenetics. Arrows show the 145 and 115 bp products. (Lanes 2 and 3) Heterozygous DQB1 * 03 control with and without digestion by Mlu I. (Lanes 4 and 5) Homozygous DQB1 * 03 control with and without digestion with Mlu I. (Lanes 6 and 7) Non-DQB1 * 03 control with and without digestion with Mlu I. (Lanes 1 and 8) 123 bp markers. Further typing for HLA DQB1 * 03 for 0301, 0302, 0303, and 0304 in samples that were positive for the HLA DQB1 * 03 phenotype was performed by PCR using a combination of sequence specific primers (12) (Table 2). The annealing temperature for all the primer pairs was 60°C, and other PCR conditions were as published (12). All PCR reactions were performed with both positive and negative controls. The amplified products were separated by agarose gel electrophoresis and visualized by ultraviolet (UV) light. Table 2 Sequence-specific primer pairs for typing the HLA DQB1 * 03 locus Odds ratios and their approximate 95% confidence intervals were calculated for all variables by a χ2 test for 2 x 2 tables without a continuity correction (13). For 2 x k tables the χ2 test for trend was calculated (14). The unit of sampling was the individual in all analysis except when studying specific alleles. In that case, each allele was taken as an independent observation so that the sample size was twice as large for allele specific comparisons. Overall (Table 3) Table 3 Summary of distribution of HLA DQB1 * 03 Association Between HLA DQB1 * 03 and CIN (Tables 4 and 5). Table 4 Association between HLA DQB1 * 03 and CIN Table 5 Association between HLA DQB1 * 03 allele and CIN Association Between HPV and HLA DQB1 * 03 (Tables 6, 7, and 8). Table 6 Association between HLA DQB1 * 03 and HPV type Table 7 Association between HLA DQBI * 03 and HPV Table 8 Association between HLA DQB1 * 03 allele and HPV Cancer of the uterine cervix is the second most common cancer world-wide and the most common cancer in developing countries (15). The annual incidence in the UK is 4,000 cases per year, with 2,000 dying as a result of the disease. Cervical intraepithelial neoplasia which precedes cervical cancer is even more common. Both cervical cancer and CIN have been shown to be strongly associated with the oncogenic types of the human papillomaviruses (types 16, 18, 31, 33, 35, 39, 45, 51, 52, 56, and 58) (16, 17). An overview of studies show that the incidence of HPV infection detected by PCR based techniques in women with normal cervical cytology varies from 6–53% (18). There also exists a long latency period between HPV infection and cervical cancer (19). Therefore, additional factors are probably involved in the transformation of normal epithelium to CIN and cervical cancer after HPV infection. One such factor is possibly immunological, as in other virus induced cancers, such as nasopharyngeal carcinoma due to Epstein-Barr virus (20). In immunocompromised women following renal transplantation, the risk for cervical cancer is increased 10 fold (21). HIV infected patients also show an increased frequency of CIN (22). Analysis of HLA class I and II antigens in cervical cancer has shown that loss of class I expression occurs in a third of patients, while the majority demonstrated increased class II expression (23, 24). HLA type has been analyzed by several groups in patients with cervical cancer and CIN with different conclusions (Table 1). The discrepancy in the results between different investigators maybe due to several reasons. Serological typing of DQB1, as in the initial report, is prone to significant error (25). The sample size was adequate in only two studies (26, 27). In fact, analysis on a sufficiently large number of cases of CIN has not been performed, though two preliminary studies reported that the DQB1 * 03 locus was associated significantly with CIN (Table 1) (28, 29). The controls in several studies were from local donor panels and not necessarily comparable. Typing for HPV was performed in only three studies (27, 30, 31). To address these issues we have performed a larger study with sufficient number of cases and controls to evaluate the significance of DQBI * 03 association with cervical intraepithelial neoplasia. Further, it is quite important to evaluate the association between CIN, HPV, and DQBI * 03 using controls that are negative for HPV and have a normal cytology. The typing for HLA DQB1 * 03 was performed with a rapid technique which was concordant with data based on sequencing (11). This technique is also informative in assessing whether the individual sample is heterozygous or homozygous for the DQB1 * 03 locus. The results show a significant association between CIN and DQB1 * 03 which is only slightly stronger for CIN III than CIN I. The association between CIN and DQB1 * 03 that we found (odds ratio 3.03) was weaker than that reported by Wank and Thomssen (1). A slightly stronger association in HPV-positive CIN (odds ratio 3.43) was observed than that reported by Van den velde et al. (28). Homozygosity at the DQB1 * 03 locus, was significantly associated (odds ratio 4.0) with CIN and was more strongly related than heterozygosity, a result not reported so far in any of the previous studies. The 0301 allele was the most strongly associated with CIN (odds ratio 2.53, χ2 = 28.6, p < 0.001), but 0302 was also positively related. This agrees with Wank and Thomssen's DNA typing data for 0301 on their original sample of cervical cancer patients (32). A significant association with HPV-positive CIN and DQB1 * 03 was found for all HPV types tested (types 16, 18, 31, and 33). Again, homozygosity at the DQB1 * 03 locus was strongly associated with HPV-positive CIN (odds ratio 4.47) with intermediate risk found for heterozygotes. Typing for HPV has not always been performed in all the previous studies, but in general HPV-positive CIN was significantly associated with the DQB1 * 03 phenotype. In our study, only the most prevalent oncogenic types have been determined, and it is possible that some of the HPV-negative CIN cases are positive for other types. Additional typing for other HPV types would probably increase the strength of the association. These results suggest that the DQB1 * 03 locus is probably an important determinant in allowing HPV infection to be tolerated and increasing the risk for high grade CIN or cancer. Another disease due to HPV infection, recurrent respiratory papillomatosis, has been shown to be associated with the DQB1 * 03 phenotype (33). In a preliminary analysis of 16 patients, 75% were positive for DQB1 * 03. Analysis of HLA class I and II using restriction fragment length polymorphism's in New Zealand rabbits infected with Shopecotton-tail rabbit papillomavirus showed a strong linkage between wart regression and DR locus, and an increased risk of malignant transformation with the DQ locus (34). Thus, based on our study and others, the DQB1 * 03 locus seems to be important for HPV associated disease. The results of the analysis of HLA DR and DQ in squamous cell carcinoma reported by Apple et al. (27) in an Hispanic population are intriguing in that no significant association was found with the DQB1 * 03 locus, although the haplotype DRB1 * 0407-DQB1 * 0302 was associated with increased risk of cervical carcinoma. It may be partly related to ethnicity of the population under study. There are at least three possible ways by which the association between DQB1 * 03 and HPV positive disease can be explained. These women may present peptide antigen to CD4+ T cells ineffectively; clonal deletion of antigen specific T cell during thymic maturation may occur; or there maybe active suppression of immune response to HPV in DQB1 * 03 positive women. Although Mellins et al. (35) using T cell lines and specific HLA DP, DQ, and DR deletion mutants have shown that 20–30% of response to various recall antigens is restricted to DQ, in general, there is a bias against HLA-DQ restriction in human T cell clones reflecting a low level of expression of DQ on antigen presenting cells in the periphery (36). One proposal, in view of the above result, is that DQ is important in thymic selection of T cells. Indeed, there is a high level of expression of HLA DQ in the thymic cortex (37), and a role for negative selection for HPV-specific T cell clones would fit predisposition to HPV-positive CIN by the DQB1 * 03 alleles. The other possible mechanism is based on observations of HLA-associated immunological low responsiveness to antigens such as streptococcal cell wall (38), schistosoma (39), mycobaterium leprae (40), tetanus toxoid (41), and hepatitis surface antigen (42) either after natural exposure or after vaccination. Despite the controversy regarding the function of suppressor T cells, there is evidence to suggest that HLA DQ maybe the preferred restriction element for immunological suppression mediated by CD8-positive T suppressor cells (43, 44). It is possible that women who are positive for the DQB1 * 03 phenotype are unable to mount an effective cytotoxic T cell response against HPV infection. This is particularly important as it has been shown that HPV 16 E7 is a target for cytotoxic T cells and can mediate tumor rejection (45). The present study on CIN lays the framework for a more detailed study of DR and DQ in CIN to identify the significant haplotype. It is obvious that detailed investigation of mechanisms of HLA association with CIN would be important in improving the understanding of the biology of cervical cancer and generally be informative about virus associated cancer. This has obvious importance in the design of vaccines for prevention and treatment of HPV infection and associated disease. Wank R, Thomssen C. (1991) High risk of squamous cell carcinoma of the cervix for women with HLA-DQw3. Nature 352: 723–725. Meijer CJLM, van den Brule AJC, Snijders PJF, Heimerhorst T, Kenemans P, Walboomers JMM. (1992) Detection of human papillomavirus in cervical scrapes by the polymerase chain reaction in relation to cytology: Possible implications for cervical cancer screening. In: Munoz N, Bosch FX, Shah KV, Meheus A (eds). The Epidemiology of Human Papillomavirus and Cervical Cancer. Oxford University Press, Oxford. Lorincz AT, Reid R, Jenson B, Greenberg MD, Lancaster W, Kurman RJ. (1992) Human papillomavirus infection of the cervix: relative risk association of 15 common anogenital types. Obstet. Gynecol. 79: 328–337. Hawley-Nelson P, Vousden KH, Hubbert NL, Lowy DR, Schiller JT. (1989) HPV16 E6 and E7 proteins cooperate to immortalize human foreskin keratinocytes. E.M.B.O. J. 8: 3905–3910. Munger K, Phelps WC, Bubb V, Howley PM, Schlegel RM. (1989) The E6 and E7 genes of human papillomavirus type 16 together are necessary and sufficient for transformation of primary human keratinocytes. J. Virol. 63: 4417–4421. Vousden KH. (1993) Interactions of human papillomavirus transforming proteins with the products of tumour suppresor genes. F.A.S.E.B. J. 7: 872–879. Cuzick J, Terry G, Ho L, Hollingworth T, Anderson M. (1992) Human papillomavirus type 16 DNA in cervical smears as a predictor of high-grade cervical intraepithelial neoplasia. Lancet 339: 959–960. Cuzick J, Terry G, Ho L, Hollingworth T, Anderson M. (1994) Type-specific human papillomavirus DNA in abnormal smears as a predictor of high-grade cervical intraepithelial neoplasia. Br. J. Cancer 69: 167–171. Richart RM. (1987) Causes and management of cervical intraepithelial neoplasia. Cancer 60: 1951–1959. Anderson M, Brown C, Buckley C, et al. (1991) Current views on cervical intraepithelial neoplasia. J. Clin. Pathol. 44: 969–978. Mehal WZ, Lo YD, Herrington S, et al. (in press) Human papillomavirus infection plays an important role in determining the HLA associated risk of cervical carcinogenesis. J. Clin. Pathol. Olerup O, Aldener A, Fogdell A. (1993) HLA-DQB1 and -DQA1 typing by PCR amplification with sequence-specific primers (PCR-SSP) in 2 hours. Tissue Antigens 41: 119–134. Davis W, Breslow NE Day NE (eds). (1980) Statistical Methods in Cancer Research: vol.1: The analysis of case control studies. International Agency for Research on Cancer, Lyon. Armitage P (ed). (1971) Statistical Methods in Medical Research. Blackwell, Oxford. Parkin DM, Laara E, Muir CS. (1988) Estimates of the worldwide frequency of sixteen major cancers in 1980. Int. J. Cancer 41: 184–197. Schiffman MH, Bauer HM, Hoover RN, et al. (1993) Epidemiologic evidence showing that human papillomavirus infection causes most cervical intraepithelial neoplasia. J. Natl. Cancer Inst. 85: 958–964. Walboomers J, Melkert P, van den Brule A, Snijders P, Meijer C. (1992) The polymerase chain reaction for screening in diagnostic cytopathology of the cervix. In: Herrington CS, McGee JO (eds). Diagnostic Molecular Pathology. IRL press, Oxford. Schneider A, Koutsky L. (1992) Natural history and epidemiological features of genital HPV infection. In: Munoz M, Bosch FX, Shah KV, Meheus A (eds). The Epidemiology of Cervical Cancer and Human Papillomavirus. International Agency for Research on Cancer, Lyon. Zur Hausen H. (1986) Intracellular surveillance of persisting viral infections. Lancet 1: 489–491. Zur Hausen H. (1991) Viruses in human cancers. Science 254: 1167–1173. Sillman FH, Sedlis A. (1987) Anogenital papillomavirus infection and neoplasia in immunodeficient women. In: Reed R (ed). Human Papillomavirus. W. B. Saunders, Philadelphia. Schafer A, Friemann W, Mielke M, Schwatlander B, Koch MA. (1991) The increased frequency of cervical dysplasia in women infected with the human immunodeficiency virus is related to the degree of immunosuppression. Am. J. Obstet. Gynecol. 164: 593–599. Connor MF, Stern PL. (1990) Loss of MHC class I expression in cervical carcinomas. Int. J. Cancer 46: 1029–1034. Glew SS, Duggan-Keen M, Cabrera T, Stern PL. (1992) HLA class II antigen expression in human papillomavirus-associated cervical cancer. Cancer Res. 52: 4009–4016. Mytilineos J, Scherer S, Opelz G. (1990) Comparison of RFLP-DR-Beta and serological HLA-DR typing in 1500 individuals. Transplantation 50: 870–873. Heiland A, Borresen AL, Kaern J, Ronningen KS, Thorsby E. (1992) HLA antigens and cervical carcinoma. Nature 356: 23. Apple RJ, Erlich HA, Klitz W, Manos MM, Becker TM, Wheeler CM. (1992) HLA DR-DQ associations with cervical carcinoma show papilloma virus-type specificity. Nature Gen. 6: 157–162. Vandenvelde C, deFoor M, vanBeers D. (1993) HLA-DQB1 * 03 and cervical intraepithelial neoplasia grades I–III. Lancet 341: 442–444. David ALM, Taylor GM, Gokhale D, Aplin JD, Seif MW, Tindall VR. (1992) HLA-DQB1 * 03 and cervical intraepithelial neoplasia type III. Lancet 340: 52. Glew SS, Duggan-Keen M, Ghosh AK, et al. (1993) Lack of association of HLA polymorphisms with human papillomavirus-related cervical cancer. Hum. Immunol. 37: 157–164. Gregoire L, Lawrence WD, Kukuruga D, Eisenbrey AB, Lancester WD. (1994) Association between HLA-DQB1 alleles and risk for cervical cancer in African-American women. Int. J. Cancer 57: 504–507. Wank R, Meulen JT, Luande J, Eberhardt H-C, Pawlita M. (1993) Cervical intraepithelial neoplasia, cervical carcinoma, and risk for patients with HLA-DQB1 * 0602, *301, *0303 alleles. Lancet 341: 1215. Bonagura VR, O'Reilly ME, Abramson AL, Steinberg BM. (1993) Recurrent respiratory papillomatosis (RRP): Enriched HLA DQw3 phenotype and decreased class I MHC expression. Proceedings of the 12th International Papillomavirus Conference. p. 48. Han R, Breitburd F, Marche PN, Orth G. (1992) Linkage of regression and malignant conversion of rabbit viral papillomas to MHC class II genes. Nature 356: 66–68. Mellins E, Woefel M, Pious D. (1987) Importance of HLA-DQ and HLA-DP restriction elements in T-cell responses to soluble antigens—Mutational analysis. Hum. Immunol 18: 211–213. Altman DM, Sansom D, Marsh SGE. (1991) What is the basis for HLA-DQ associations with autoimmune disease? Immunol. Today 12: 267–270. Ishikura H, Ishikawa N, Aizawa M. (1987) Differential expression of HLA class H antigens in the human thymus-relative paucity of HLA-DQ antigens in the thymic medulla. Transplantation 44: 314–317. Nishimura Y, Sasazuki T. (1983) Suppressor T cells control the HLA-linked low responsiveness to streptococcal antigen in man. Nature 301: 67. Sasazuki T, Ohta N, Kaneoka R, Kojima S. (1980) Association between an HLA haplotype and low responsiveness to schistosomal worm antigen in man. J. Exp. Med. 152: 314. Ottenhoff THM, Walford C, Nishimura Y, Reddy NBB, Sasazuki T. (1990) HLA DQ molecules and the control of mycobaterium leprae specific T cell non responsiveness in lepromatous leprosy patients. Eur. J. Immunol. 20: 2347. Sasazuki T, Kohno Y, Iwamoto I, Tanimura M, Naito S. (1978) Association between an HLA haplotype and low responsiveness to tetanus toxoid in man. Nature 272: 359. Hatae K, Kimura A, Okubo R, et al. (1992) Genetic control of non responsiveness to hepatitis B virus vaccine by an extended HLA haplotype. Eur. J. Immunol. 22: 1899–1905. Salgame P, Convit J, Bloom BR. (1991) Immunological suppression by human CD8+ T cells is receptor dependent and HLA-DQ restricted. Proc. Natl. Acad. Sci. U.S.A. 88: 2598–2602. Sasazuki T, Kikuchi K, Hirayama S, Matsushita S, Ohta N, Nishimura Y. (1989) HLA-linked immune suppression in humans. Immunology S2: 21–25. Chen L, Thomas EK, Hu SL, Hellstrom J, Hellstrom KE. (1991) Human papillomavirus type 16 nucleoprotein E7 is a tumour rejection antigen. Proc. Natl Acad. Sci. U.S.A. 88: 110–114. Acknowledgements to Dr. A. Hollingworth, Dr. A. Szarewski, and Prof. A. Singer for samples, and Dr. M. Anderson and Dr. A. M. Hanby for histological diagnosis, and Dr. R. Edwards for statistical analysis. Supported by the Imperial Cancer Research Fund and Wellbeing. ICRF Molecular Oncology Laboratories, Institute of Molecular Medicine, John Radcliffe Hospital, Oxford, OX3 9DU, UK Kunle Odunsi & Dr. Trivadi S. Ganesan Departments of Molecular Pathology and Medical Microbiology, University College, London, UK George Terry & Linda Ho Nuffield Department of Medicine, John Radcliffe Hospital, Oxford, UK Department of Mathematics, Statistics, and Epidemiology, Imperial Cancer Research Fund, London, UK Jack Cuzick Search for Kunle Odunsi in: Search for George Terry in: Search for Linda Ho in: Search for John Bell in: Search for Jack Cuzick in: Search for Dr. Trivadi S. Ganesan in: Correspondence to Dr. Trivadi S. Ganesan. Odunsi, K., Terry, G., Ho, L. et al. Association Between HLA DQBl * 03 and Cervical Intra-epithelial Neoplasia. Mol Med 1, 161–171 (1995). https://0-doi-org.brum.beds.ac.uk/10.1007/BF03401564 Issue Date: January 1995 DOI: https://0-doi-org.brum.beds.ac.uk/10.1007/BF03401564
CommonCrawl
Nanomaterial-based environmental sensing platforms using state-of-the-art electroanalytical strategies Wei Jin1 & Govindhan Maduraiveeran2 Journal of Analytical Science and Technology volume 9, Article number: 18 (2018) Cite this article Electroanalytical techniques have been extensively employed in the advancement of sensor platforms based on nanomaterials owing to their rapid response, high sensitivity, and selectivity. It is of immense significance for the swift and sensitive detection of environmental pollutants or contaminants such as a major group of unregulated chemicals such as heavy metals, inorganic anions, phenolic compounds, pesticides, and chemical warfare reagents, which may cumulatively resource severe harm to human health and environmental. These environmental pollutants are regularly obtained from a large group of unmaintained compounds/complexes, containing industry, human, and animal fecal waste; natural toxins; drinking water disinfection by-products; personal care products; pharmaceuticals; food materials through food preparation and packaging processes, etc. The present minireview will display various concepts and advancements of electroanalytical techniques and their potential applications in environmental sensing. The introduction of novel electroanalytical tools and nanostructured electrode surfaces may demonstrate even higher sensitive and selective sensor platforms. Electroanalytical methods possess passionate importance in the analytical research community, and they serve as ideal tactics, which display several features such as rapid response, robustness, high selectivity and sensitivity, cost-effective miniaturization, and the perspective for online monitoring towards environmental, food, and biomedical applications. The advancement and prospects for the applications of electroanalytical techniques using nanomaterials in the design of environmental sensor platforms will also be discussed. This review presents the concepts and insights of various electroanalytical techniques and their potential applications in the design of electrochemical sensor and biosensor platforms. Electroanalytical techniques are usually the inter-conversion between electricity and chemistry, which measure the electrical response of current, potential, or charge with respect to the variation of chemical reactions/parameters (Wang 2002). One main advantage of electrochemical sensing is the direct analysis of analyte information in the compact, simple, and portable electrochemical setup. General electrochemical cell for environmental detection mainly consists of an ionic conductor (electrolyte) and an electronic conductor (electrode) (Bansod et al. 2017). The electrochemical sensing reactions occur at the interface between the reactant-bearing electrolyte and the working electrode (WE), in which the potential is obtained with respect to the reference electrode (RE) and the current is measured with respect to counter electrode (CE), respectively. In order to provide electrical excitation and receive the sensing response, these electrodes are connected to a laboratory or in-field portable electrochemical workstation embedded with a required power source as shown in Fig. 1. And the workstation is linked to a computer installed with corresponding software to interpret and analyze the sensing data. It should be noted that the two-electrode cell is also available for the environmental sensing, depending on the target electrical signal and corresponding electrochemical techniques. Analytical electrochemistry is the application of electrochemical processes on the systems to measure the quantity of a species of interest, and it is mainly concerned with the measurements of electrical quantities of current, potential, charge, and their chemical parameters. Electroanalytical chemistry is much fascinated, and it is lengthened to new phases of environment and medicine by the design of tailored electrochemical interfaces, molecular monolayer structures, the coupling of electrochemical transducers with nanoscale materials and chemical/biological components, and the development of analytical techniques. General setup for electrochemical cell (reprinted with permission from Reference Bansod et al. 2017). © 2017 Elsevier The type of quantitative electrical signal originates from many different electroanalytical methods, leading to a wide range of environmental monitoring applications based on nanomaterials (Shao et al. 2010). For the pollutant detection (Jin and Maduraiveeran 2017; Maduraiveeran and Jin 2017), electrochemical techniques are mainly classified into potentiometric, potentiostatic, electrochemical impedance spectroscopy (EIS), conductometric, and electrochemiluminescent methods. In most of these methods, either current or potential is controlled to determine the change of another parameter. However, there are also some techniques in which no control excitation is provided (at zero current), while equilibrium potential is obtained at an ion-selective membrane (Qin et al. 2002). As illustrated in Fig. 2, there are three kinds of environmental gas sensors, including potentiometric sensor for voltage measurement, amperometric sensor for current measurement, and conductometric sensor for conductivity measurement (Li et al. 2017). Therefore, the specific electrochemical techniques with corresponding recent advances are described in the following sections. Schematic diagrams of three typical membrane-based electrochemical gas sensors. a Potentiometric sensor. b Amperometric sensor. c Conductometric sensor. (Reprinted with permission from Reference Li et al. 2017). © 2017 Elsevier Potentiometry Potentiometry is a static (zero-current) method in which the target analyte value is measured by the potential generated across an ion-selective electrode (ISE) (Karami et al. 2003). It is an interesting area of electroanalytical research owing to high accuracy, quick response, non-destructive, and cost-effective analysis (Matzeu et al. 2015). The glass electrode employed to determine the solution pH is the most common example of ISE. The response of an ion-selective electrode is given by Eq. 1 as follows: $$ E= Eo+\frac{RT}{zF}\ln a $$ in which E is the measured potential (in volts), E0 is a characteristic constant for the ISE system, R is the gas constant, T is the absolute temperature, z is the signed ionic charge, F is the Faraday constant, and a is the activity of analyte. ISEs incorporate a selective polymeric membrane to decrease the matrix interferences; thus, ideally, it responds only to one target analyte (Dimeski et al. 2010). Strictly, the potential is obtained as a function of analyte activity rather than concentration. It has been widely employed for the pollution monitoring, such as CN, F, S, Cl, and NO3 in industrial effluents, natural waters, and agriculture processing (Crespo 2017; Zuliani and Diamond 2012). As illustrated in Fig. 3 (Karami et al. 2003), the Pb2+ concentration was successfully detected using the flow injection potentiometry at a new coated graphite ion-selective electrode. Potentiometric peaks for three repetitive measurements of different Pb2+ ion concentrations. (Reprinted with permission from Reference Karami et al. 2003). © 2003 Elsevier A recent development of ion-selective electrodes is ion-selective field-effect transistors (ISFETs) and is displayed in Fig. 4 (Sudibya et al. 2011; Chin et al. 2001). The sensing signal is depending on the interaction or absorption of a charged species, with the interface of FET. Subsequently, the surface potential is changed, and thus, the current flow in the conductive channel of FET is affected via the field effect. In order to selectively detect particular species and obtain an electrical "fingerprint," the FET surface is usually functionalized with corresponding binding groups. The main superiority of such devices is the ultra-small size (below 0.1 mm2), which allows low-cost mass-produced preparation with integrated circuit technology. a Schematic illustration for fabrication of patterned rGO thin films on APTES-coated quartz. b Ambipolar characteristics of rGO-FET measured in buffer solution. (Reprinted with permission from Reference Sudibya et al. 2011). © 2011 American Chemical Society Potentiostatic techniques Potentiostatic techniques (Ricci et al. 2007) involve the use of a potentiostat instrument to control the potential and deal with the charge transfer processes at the electrode/electrolyte interface. As a result, they can determine the concentration of many environmental species including the electroactive compounds from electrochemical reduction or oxidation and the non-electroactive compounds from indirect or derivatization electrochemical procedures (Jin et al. 2016; Jin et al. 2010). In comparison with potentiometry, the advantages of potentiostatic techniques are high sensitivity and selectivity, wide linear range, portable and low-cost setup, and particularly considerable electrode material availability (Fleet and Gunasinghuam 1992). Consequently, the reported electrochemical detections for environmental monitoring are mainly concentrated on potentiostatic methods, including cyclic voltammetry (CV), amperometry, and stripping voltammetry. Cyclic voltammetry Cyclic voltammetry is one of the most used techniques in electroanalytical studies (Rusling and Suib 1994). It provides fruitful insight into both half reactions occurring at the working electrode and the chemical or physical properties coupled to the target electrochemical reaction. In cyclic voltammetry, initiating from a potential Ei, a staircase or linear potential sweep is applied to the working electrode (Jin and Yan 2015). When reaching a switching potential of Ef, the potential sweep is reversed and returns to its initial point. The data output of CV is a current-potential curve, where the main features are the peak potentials and currents at anode and cathode. It should be noted that the half peak potential (median between the cathodic and the anodic peak potentials) reflects mainly thermodynamics characteristics, and the magnitudes of the peak current reveal the kinetics during the electrochemical reaction (Fig. 5). (Left) A three-electrode system to measure cyclic voltammetric waveform. (Right) Ideal shape for cyclic voltammograms of a reversible diffusion-controlled reaction. (Reprinted with permission from Reference Rusling and Suib 1994). © 1994 Wiley The information of the electrode reaction type, the number of electron transfer, and the additional coupled phenomena (adsorption and crystallization) can also be obtained in the CV curves. For example, the electron transfer numbers are calculated using the Randles-Sevcik equation for electrochemically reversible reaction (Jin et al. 2010): $$ {i}_p=\left(2.69\times {10}^5\right){n}^{3/2}A{D}^{1/2}C{v}^{1/2} $$ where A is the surface area of the working electrode, n is the number of transferred electrons, F is the Faraday constant, ip is the peak current density, C is the solubility of the oxygen, D is the diffusion coefficient of the oxygen, and v is the scan rate of the CV curves. As shown in Fig. 6 (Ma and Chen 2015), at grapheme-doped gold nanoparticles modified electrode, a good linear relationship between the oxidation peak current of CV curves, and the concentration of diethylstilboestrol (DES) was obtained from 1.20 × 10−8 to 1.20 × 10−5 mol L−1. CVs of DES at the GR/Nano-Au/GCE electrode with different concentrations. Inset is the plot of the oxidation peak current versus concentration of DES. (Reprinted with permission from Reference Ma and Chen 2015). © 2015 Elsevier Another advance of this technique is fast-scan cyclic voltammetry (FSCV) at microelectrode, offering great chemical selectivity and high sensitivity for the electroactive species (Fig. 7). Sanford (Sanford et al. 2010) reported the voltammetric measurement of H2O2 at single, uncoated carbon-fiber microelectrodes in the presence of a catalyst. They identified both in vitro and in brain tissue that H2O2 can be reliably detected in the presence of multiple common interfering species. Fast-scan cyclic voltammetry of H2O2. a The applied potential was scanned at 400 V s−1. b Background current at the carbon surface produced by the rapid scan (solid line). c Background-subtracted cyclic voltammogram of 100 μM H2O2 at pH 7.4. d Color plot containing 200 background-subtracted cyclic voltammograms recorded over 20 s. (Reprinted with permission from Reference Sanford et al. 2010). © 2010 American Chemical Society Amperometry Amperometry is operated by directly stepping the potential to a desired value and then determining the current or maintaining the potential at the constant value and measuring the samples at the electrode in flow injection system (Ronkainen et al. 2010). Due to the specific oxidation or reduction potential, the as-obtained current is selectively proportional to the concentration of the electroactive species in the test sample. The fixed potential also leads to a negligible charging current and minimizes the background signal that interferes the limit of detection (Rivoira et al. 2015). As shown in Fig. 8, the electrochemical cell has a weak "zero" current in the absence of sample A, and the magnitude of this blank is usually obtained during calibration at a constant polarization voltage. With the addition of sample A, species of interest start to transport across the electrolyte and are reduced at the electrode. The current of this process increases up to a plateau and reaches the equilibrium, and this plateau current monotonically related to the concentration of the target analyte. Principles of amperometric measurements Besides, hydrodynamic amperometric methods can significantly improve mass transport at the electrode surface by rotating and vibrating. This flow systems exhibit superior performance in environmental monitoring than steady-state batch systems since the flow conditions allow the better solution transport in multistep assay procedures and online detection. As presented in Fig. 9 (Jin et al. 2014), the Cr(VI) concentration was detected at Ti/TiO2NT/Au electrode, an amperometry technique, and the potential was set at the Cr(VI) reduction peak potential. There was an excellent linear range from 0.10 to 105 mM Cr(VI), and more importantly, this exhibited the highest sensitivity (6.91 μA μM−1) owing to the nanostructured electrode and hydrodynamic amperometry method. (Left) Amperometric current responses of the successive addition of Cr(VI) in 0.1 M HCl solution. The inset is the enlarged part of low Cr(VI) concentration curve; (right) the calibration plot of the response against the Cr(VI) concentration (reprinted with permission from Reference Jin et al. 2014). © 2014 Royal Society of Chemistry Moreover, amperometry is a widely used method for the biosensors in the environmental assessment of inorganic and organic pollutants. As illustrated in Fig. 10 (Jarque et al. 2016), the yeast biosensors are developed by coupling specific target yeast cells with electronic transducers (electrode). During the detection process, pollutant molecules enter the yeast cells and interact with biorecognition sites, such as nuclear receptors, DNA, and enzymes. As a result, an amperometric response can be detected, and the sensing information can be obtained. Fig. 10 Examples of immobilization strategies for yeast biosensors. (Reprinted with permission from Reference Jarque et al. 2016). © 2016 Elsevier Stripping voltammetry Stripping voltammetry is a two-step method, in which the initial step consists of the electrodeposition of analyte species onto the electrode surface at a constant potential (Izadyar et al. 2016). This preconcentration step involves either anodic or cathodic process, while the second step consists of a voltage scan to obtain an electrolytic dissolution of the deposit at the electrode (stripping). Take metal ion detection for instance, the accumulation is the electrochemical reduction and deposition to metal, and then the measurement is the anodic dissolution of deposit. This process is called anodic stripping voltammetry (ASV), while the case of halides is called cathodic stripping voltammetry (CSV). The stripping step can be either linear, staircase, square wave or pulse. As shown in Fig. 11 (Baron-Jaimez et al. 2013), the three heavy metals were simultaneously detected using stripping voltammetry. However, it should be noted that the concentration at the electrode is directly determined in the stripping step rather than the concentration of an analyte in the solution. Therefore, in order to improve the sensitivity, the electrode concentration can be increased by increasing the deposition time and/or the rate of stirring. a Excitation signal. b Response curve of anodic stripping voltammetry. (Reprinted with permission from Reference Baron-Jaimez et al. 2013). © 2013 IOP Publishing The preconcentration step of stripping voltammetry results in remarkable sensitivity particularly for the trace species at the sub-ppm level. As shown in Fig. 12 (Izadyar et al. 2016), the stripping voltammetry nearly exhibit 50-fold enhancement for the detection of citalopram at the disposable pencil lead electrode. In order to apply this sensitive method to those species that cannot be accumulated via electrolysis at the electrode surface, alternative principles have been developed with the utilization of the adsorption technique. In adsorptive stripping voltammetry (AdSV), the target analyte can be preconcentrated by physical adsorption instead of electrolytic deposition. Many organic pollutants and heavy metal cations have strong affinity to be adsorbed from aqueous solution onto a mercury electrode surface, resulting in the considerable application of AdSV technique using surface-active complex agents. (Left) CVs of 0 and 50 μM citalopram and (right) stripping voltammetry of citalopram ranging from 0 to 1000 nM. (Reprinted with permission from Reference Izadyar et al. 2016). © 2016 Elsevier Impedance techniques Electrochemical impedance spectroscopy (EIS) (Ronkainen et al. 2010) is proposed by Lorenz and Schulze in 1975. It measures the resistive and capacitive properties of electrode upon perturbation with a small amplitude AC excitation, and the frequency is varied over a wide range to generate the impedance spectrum. Then, the in-phase and out-of-phase current responses are measured to determine the resistive and capacitive components of impedance, respectively. In EIS, electrochemical reactions that occur in an electrolytic cell are illustrated in terms of the electrical equivalent circuit (EEC). An idealized electrical equivalent circuit included is presented in Fig. 13 (Bansod et al. 2017), while the high- and low-frequency components are shown on the left and right, respectively. By measuring this impedance data and other resistive-capacitive (RC) characteristics of the EEC, one could determine the analyte concentration in the test sample. An idealized Randles electrical equivalent circuit for an electrochemical reaction. Cd, double-layer capacitor; Rp, polarization resistance; Rs, solution resistance; ZW, Warburg impedance. (Reprinted with permission from Reference Bansod et al. 2017). © 2017 Elsevier EIS is a powerful detection method because they are capable of illustrating electron transfer at high frequency and mass transfer at low frequency. Impedance-based detection has also been proved to be an efficient tool for the recognition of appropriate interface properties in biosensing system. It has some advantages over the widely used amperometric method since the redox mediators improve the accessibility and overcome proximity limitations to the electrode surface. As shown in Fig. 14 (Fan et al. 2013), the aptasensor concentration was determined by measuring the changes of Rct after incubating the probe in different concentrations of acetamiprid solution for 40 min. In the Nyquist plots of the response, the impedance obviously increases with increasing concentration of acetamiprid, indicating the increase of the electron transfer resistance. The dependence of Rct and acetamiprid concentration is given. A linear relationship between Δ and the logarithm value of acetamiprid concentrations were obtained in the range of 5–600 nM, indicating the facilitating effect of the redox mediator. a Nyquist plots of the MCH/aptamer/AuNPs/gold electrode correspond to different concentrations of acetamiprid. b The dependence of Ret of on the concentration of acetamiprid; inset is the linear calibration curve of ΔRet with the logarithm of acetamiprid concentrations (reprinted with permission from Reference Fan et al. 2013). © 2013 Elsevier Conductometry Conductometry is to monitor the changes in the electrical conductivity of the test sample (Jaffrezic-Renault and Dzyadevych 2008). This method is of great importance in the detection of toxic, harmful gases, like SO2, NOx, H2S, CO, and NH3. Conductometric (resistive) gas sensors are widely used for the environmental monitoring; these "solid-state" sensors possess an excellent sensitivity, short response time, low cost, and device feasibility. The detection mechanism originates from a change in conductivity via trapping of electrons and band bending induced by the adsorbed analyte molecules. As shown in Fig. 15 (Shu et al. 2010), the course of the measured resistance R during cyclic exposure to NO2 at iron (II) phthalocyanine is presented. While dosing NO2 in N2 with a fixed concentration of 0.5, 1, and 2 ppm, the resistance decreases with a steady slope and then remains at the corresponding level during the intermediate N2 phases, leading to plateaus in the detection course. Response of sensors to 30-min 100 ppm NO2 exposure (reprinted with permission from Reference Shu et al. 2010). © 2010 Elsevier Conductometry has also been employed in biosensors for environmental monitoring and clinical analysis. As illustrated in Fig. 16 (Anh et al. 2004), the conductometric transducer is a miniature two-electrode device fabricated to determine the conductivity of the thin electrolyte layer between the electrode surfaces. Two pairs of Pt-integrated electrodes (150 nm thick) were developed on the Pyrex glass substrate (10 mm × 30 mm); tyrosinase was immobilized on the biosensor detection part. As a result, the concentration of diuron, atrazine, desisopropylatrazine (DIA) and desethylatrazine (DEA) was successfully detected at the "sensitive" electrode. (Upper) Schematic representation of conductometric biosensor; (below) calibration curves of conductometric tyrosinase biosensor for 4-chlorophenol, phenol, and catechol. (Reprinted with permission from Reference Anh et al. 2004). © 2004 Elsevier Electrochemiluminescence techniques Electrochemiluminescence (ECL, called electrogenerated chemiluminescence) involves the formation of species at electrode surfaces which then proceed to electron-transfer reactions to generate an electronically excited state that emits light (Richter 2004). Electrochemiluminescence from Ru(bpy)32+ was first developed in 1972 in acetonitrile (MeCN) using tetrabutylammonium tetrafluoroborate (TBABF4) as the solvent. ECL was formed by change pulsing of an electrode potential to form oxidized Ru-(bpy)33+ and reduced Ru(bpy)3+ (Fig. 17). Therefore, ECL is observed during the application of potential (several volts) to electrode materials in the aprotic organic solvent of luminescent species, which are usually polycyclic aromatic hydrocarbons, metal complexes, quantum dots, or nanoparticles. It should be noted that simultaneous redox reactions of luminescent species are difficult to achieve in an aqueous medium due to the electrochemical water splitting. Structure of Ru(byy)32+ and a proposed mechanism for Ru(bpy)33+/Ru(bpy)3+ ECL system (reprinted with permission from Reference Richter 2004). © 2004 American Chemical Society ECL is a highly sensitive and selective analytical method. It combines both advantages of chemiluminescent (elimination of background optical signal) and ease process control by applying electrode potential. Recently, a novel electrochemiluminescence (ECL) sensor based on carbon quantum dots (CQDs) immobilized on graphene (GR) has been reported for the detection of chlorinated phenols (CPs) in water as presented in Fig. 18 (Yang et al. 2013). The low ECL intensity of CQDs in the aqueous system has been overcome through multistage amplification of the CQD ECL signal by GR and S2O82−. The as-prepared ECL sensor enables the real-time monitoring of PCP with an unprecedented sensitivity of 1.0 × 10−12 M and a wide linear range from 1.0 × 10−12 to 1.0 × 10−8 M. Illustrative ECL detection mechanism for PCP with CQD/GR in S2O82− solution (reprinted with permission from Reference Yang et al. 2013). © 2013 American Chemical Society This minireview has described to emphasize the electroanalytical techniques employed for the sensing of environmental pollutants based on nanomaterials as the exceptional analytical tools among other analytical methods in use today. Electroanalytical chemistry offers considerable promise for the environmental sensing platforms, with features of remarkable sensitivity and selectivity, low detection limit, inherent miniaturization, and portability. The analytes can be electrochemically monitored with a measurable potential (potentiometry), current (cyclic voltammetry, amperometry, stripping voltammetry), impedance (EIS), and conductivity (conductometry) as well as the recently updated electrochemiluminescence (ECL). With respect to numerous pollutant systems, various electroanalytical techniques should be carefully selected to obtain better sensing performance. The sensitivity and selectivity of the environmental sensor platforms have always been coupled with their selection of electroanalytical tools, electrode materials of nanomaterials, and electrolytes. The applications of electroanalytical methods have been evidenced for numerous applications in diagnostics, biological, biomedical, environmental, and food industries. The advancement of novel analytical methods with an excellent electrochemical active electrode surfaces based on functional nanomaterials offers the prospect of environmental sensors that will display even rapid response and high sensitivity. With the knowledge of electrochemistry and a better indulgent of electrode fabrication with micro- and nano-scale electrodes and testing, the performance of sensor platforms will improve considerably in terms of both sensitivity and detection limits. Electroanalytical techniques will certainly participate a significant role in their development, and the advancement in these areas will be persuaded to spark innovations in health, industry, and society as an intact. Anh TM, Dzyadevych SV, Van MC, Renault NJ, Duc CN, Chovelon J-M. Conductometric tyrosinase biosensor for the detection of diuron, atrazine and its main metabolites. Talanta. 2004;63:365. Bansod BK, Kumar T, Thakur R, Rana S, Singh I. A review on various electrochemical techniques for heavy metal ions detection with different sensing platforms. Biosensors and Bioelectronics. 2017;94:443–55. Baron-Jaimez J, Joya MR, Barba-Ortega J. Anodic stripping voltammetry–ASV for determination of heavy metals. J Phys-Concend Mat. 2013;446:012023. Chin Y-L, Chou J-C, Sun T-P, Chung W-Y, Hsiung S-K. A novel pH sensitive ISFET with on chip temperature sensing using CMOS standard process. Sensor Actuat B-Chem. 2001;76:582. Crespo GA. Recent advances in ion-selective membrane electrodes for in situ environmental water analysis. Electrochim Acta. 2017;245:1023. Dimeski G, Badrick T, John AS. Ion Selective Electrodes (ISEs) and interferences--a review. Clin Chem Acta. 2010;411:309. Fan L, Zhao G, Shi H, Liu M, Li Z. A highly selective electrochemical impedance spectroscopy-based aptasensor for sensitive detection of acetamiprid. Biosens Bioelectron. 2013;43:12. Fleet B, Gunasinghuam H. Electrochemical sensors for monitoring environmental pollutants. Talanta. 1992;39:1449. Izadyar A, Arachchige DR, Cornwell H, Hershberger JC. Ion transfer stripping voltammetry for the detection of nanomolar levels of fluoxetine, citalopram, and sertraline in tap and river water samples. Sens Actuat B-Chem. 2016;223:226. Jaffrezic-Renault N, Dzyadevych SV. Conductometric Microbiosensors for Environmental Monitoring. Sensors. 2008;8:2569. Jarque S, Bittner M, Blah L, Hilscherov K. east Biosensors for Detection of Environmental Pollutants: Current State and Limitations. Trends Biotechnol. 2016;34:408. Jin W, Du H, Zheng S, Xu H, Zhang Y. Comparison of the oxygen reduction reaction between NaOH and KOH solutions on a Pt electrode: The electrolyte-dependent effect. J Phys Chem B. 2010;114:6542. Jin W, Du H, Zheng S, Zhang Y. Electrochemical processes for the environmental remediation of toxic Cr(VI): A review. Electrochim Acta. 2016;191:1044. Jin W, Maduraiveeran G. lectrochemical detection of chemical pollutants based on gold nanomaterials. Trends Environ Anal Chem. 2017;14:28. Jin W, Wu G, Chen A. Sensitive and selective electrochemical detection of chromium(VI) based on gold nanoparticle-decorated titania nanotube arrays. Analyst. 2014;139:235. Jin W, Yan K. Recent advances in electrochemical detection of toxic Cr(VI). RSC Adv. 2015;5:37440. Karami H, Mousavi MF, Shamsipur M. Flow injection potentiometry by a new coated graphite ion-selective electrode for the determination of Pb2+. Talanta. 2003;60:775. Li T, Wu Y, Huang J, Zhang S. Gas sensors based on membrane diffusion for environmental monitoring. Sens Actuat B-Chem. 2017;243:566. Ma X, Chen M. lectrochemical sensor based on graphene doped gold nanoparticles modified electrode for detection of diethylstilboestrol. Sensor Actuat B-Chem. 2015;215:445. Maduraiveeran G, Jin W. Nanomaterials based electrochemical sensor and biosensor platforms for environmental applications. Trends Environ Anal Chem. 2017;13:10. Matzeu G, Zuliani C, Diamond D. Solid-contact ion-selective electrodes (ISEs) based on ligand functionalised gold nanoparticles. Electrochim Acta. 2015;159:158. Qin Y, Peper S, Bakker E. Plasticizer‐free polymer membrane ion‐selective electrodes containing a methacrylic copolymer matrix. Electroanalysis. 2002;14:1375. Ricci F, Volpe G, Micheli L, Palleschi G. A review on novel developments and applications of immunosensors in food analysis. Anal Chim Acta. 2007;605:111. Richter MM. Electrochemiluminescence (ECL). Chem Rev. 2004;104:3003. Rivoira L, De Carlo RM, Cavalli S, Bruzzoniti MC. Simple SPE-HPLC determination of some common drugs and herbicides of environmental concern by pulsed amperometry. Talanta. 2015;131:205. Ronkainen NJ, Halsall HB, Heineman WR. Electrochemical biosensors. Chem Soc Rev. 2010;39:1747. Rusling JF, Suib SL. Electroluminescence from oligothiophene‐based light‐emitting devices. Adv Mater. 1994;12:922. Sanford AL, Morton SW, Whitehouse KL, Oara HM, Lugo-Morales LZ, Roberts JG, Sombers LA. Voltammetric detection of hydrogen peroxide at carbon fiber microelectrodes. Anal Chem. 2010;82:5205. Shao Y, Wang J, Wu H, Liu J, Aksay IA, Lin Y. Graphene based electrochemical sensors and biosensors: A review. Electroanalysis. 2010;22:1027. Shu JH, Wike HC, Chin BA. Passive chemiresistor sensor based on iron (II) phthalocyanine thin films for monitoring of nitrogen dioxide. Sens Actuat B-Chem. 2010;148:498. Sudibya HG, He Q, Zhang H, Chen P. Electrical detection of metal ions using field-effect transistors based on micropatterned reduced graphene oxide films. ACS Nano. 2011;5:1990. Wang J. Electrochemical detection for microscale analytical systems: a review. Talanta. 2002;56:223. Yang S, Liang J, Luo S, Liu C, Tang Y. Supersensitive detection of chlorinated phenols by multiple amplification electrochemiluminescence sensing based on carbon quantum dots/graphene. Anal Chem. 2013;85:7720. Zuliani C, Diamond D. Opportunities and challenges of using ion-selective electrodes in environmental monitoring and wearable sensors. Electrochim Acta. 2012;84:29. WJ acknowledges the funding support from the National Natural Science Foundation of China under Grant No. 51604253 and "CAS Pioneer Hundred Talents Program". Data are available on request. National Engineering Laboratory for Hydrometallurgical Cleaner Production Technology, Institute of Process Engineering, Chinese Academy of Sciences, Beijing, 100190, China Wei Jin Department of Chemistry, Research Institute, SRM University, Chennai, Tamil Nadu, 603203, India Govindhan Maduraiveeran This review was primarily designed and written by WJ and GM. Both authors read and approved the final manuscript. Correspondence to Wei Jin or Govindhan Maduraiveeran. Both authors declare that they have no competing interests. Jin, W., Maduraiveeran, G. Nanomaterial-based environmental sensing platforms using state-of-the-art electroanalytical strategies. J Anal Sci Technol 9, 18 (2018). https://doi.org/10.1186/s40543-018-0150-4 Electrochemical methods Sensor platforms
CommonCrawl
Waves in the lower solar atmosphere: the dawn of next-generation solar telescopes Part of a collection: Solar Surface and Atmosphere David B. Jess ORCID: orcid.org/0000-0002-9155-80391,2, Shahin Jafarzadeh ORCID: orcid.org/0000-0002-7711-53973,4, Peter H. Keys ORCID: orcid.org/0000-0001-8556-470X1, Marco Stangalini ORCID: orcid.org/0000-0002-5365-75465, Gary Verth ORCID: orcid.org/0000-0002-9546-23686 & Samuel D. T. Grant ORCID: orcid.org/0000-0001-5170-97471 Living Reviews in Solar Physics volume 20, Article number: 1 (2023) Cite this article Waves and oscillations have been observed in the Sun's atmosphere for over half a century. While such phenomena have readily been observed across the entire electromagnetic spectrum, spanning radio to gamma-ray sources, the underlying role of waves in the supply of energy to the outermost extremities of the Sun's corona has yet to be uncovered. Of particular interest is the lower solar atmosphere, including the photosphere and chromosphere, since these regions harbor the footpoints of powerful magnetic flux bundles that are able to guide oscillatory motion upwards from the solar surface. As a result, many of the current- and next-generation ground-based and space-borne observing facilities are focusing their attention on these tenuous layers of the lower solar atmosphere in an attempt to study, at the highest spatial and temporal scales possible, the mechanisms responsible for the generation, propagation, and ultimate dissipation of energetic wave phenomena. Here, we present a two-fold review that is designed to overview both the wave analyses techniques the solar physics community currently have at their disposal, as well as highlight scientific advancements made over the last decade. Importantly, while many ground-breaking studies will address and answer key problems in solar physics, the cutting-edge nature of their investigations will naturally pose yet more outstanding observational and/or theoretical questions that require subsequent follow-up work. This is not only to be expected, but should be embraced as a reminder of the era of rapid discovery we currently find ourselves in. We will highlight these open questions and suggest ways in which the solar physics community can address these in the years and decades to come. Understanding the energy flow through the Sun's dynamic and tenuous atmosphere has long been a scientific interest for the global astrophysical community. The challenge of identifying the source(s) responsible for the elevated multi-million Kelvin temperatures in the solar corona has produced two main theoretical mechanisms. The first is via magnetic reconnection—the so-called 'DC' heating mechanism. Here, the continual re-configuration of the omnipresent magnetic fields that populate the Sun's atmosphere allow the production of intense thermal heating as the magnetic energy is converted through the process of reconnection, producing dramatic flares that often release energies in excess of \(10^{31}\) ergs during a single event (Priest 1986; Priest and Schrijver 1999; Shibata and Magara 2011; Benz 2017). However, such large-scale solar flares are relatively rare, and hence cannot supply the global background heating required to continuously maintain the corona's elevated temperatures. Instead, there is evidence to suggest that the frequency of flaring events, as a function of their energy, is governed by a power-law relationship (Shimizu and Tsuneta 1997; Krucker and Benz 1998; Aschwanden et al. 2000; Parnell and Jupp 2000), whereby smaller-scale micro- and nano-flares (with energies \(\sim 10^{27}\) ergs and \(\sim 10^{24}\) ergs, respectively) may occur with such regularity that they can sustain the thermal inputs required to maintain the hot corona. Many modern numerical and observational studies have been undertaken to try and quantify the ubiquity of these faint reconnection events, which often lie at (or below) the noise level of current-generation facilities (Terzo et al. 2011). Due to the difficulties surrounding the extraction of nanoflare characteristics embedded within the noise limitations of the data, only tentative evidence exists to support their global heating abilities of the outer solar atmosphere (Viall and Klimchuk 2013, 2015, 2016, 2017; Jess et al. 2014, 2019; Bradshaw and Klimchuk 2015; Tajfirouze et al. 2016a, b; Ishikawa et al. 2017, to name but a few recent examples). The second energy-supplying mechanism for the Sun's outer atmosphere involves the creation, propagation, and ultimately dissipation of wave-related phenomena—often referred to as the 'AC' heating mechanism (Schwarzschild 1948). The specific oscillatory processes responsible for supplying non-thermal energy to the solar atmosphere have come under scrutiny since wave motions were first discovered more than 60 years ago (Leighton 1960; Leighton et al. 1962; Noyes and Leighton 1963a). Of course, such early observations were without the modern technological improvements that enhance image quality, such as adaptive optics (AO; Rimmele and Marino 2011) and image reconstruction techniques, including speckle (Wöger et al. 2008) and multi-object multi-frame blind deconvolution (MOMFBD; van Noort et al. 2005). As a result, many pioneering papers documenting the characteristics of wave phenomena in the lower solar atmosphere relied upon the study of large-scale features that would be less effected by seeing-induced fluctuations, including sunspots and super-granular cells, captured using premiere telescope facilities of the time such as the McMath-Pierce Solar Telescope (Pierce 1964) at the Kitt Peak Solar Observatory, USA, and the National Science Foundation's Dunn Solar Telescope (DST; Dunn 1969), situated in the Sacramento Peak mountains of New Mexico, USA (see Fig. 1). Images depicting the construction of National Science Foundation facilities some 50 years apart. Panels b, d, e display construction stages of the Dunn Solar Telescope, which was first commissioned in 1969 in the Sacramento Peak mountains of New Mexico, USA. Panels a, c, f depict similar stages of construction for the Daniel K. Inouye Solar Telescope, which acquired first-light observations in 2019 at the Haleakal\(\bar{\text{a}}\) Observatory on the Hawaiian island of Maui, USA. Images courtesy of Doug Gilliam (NSO) and Brett Simison (NSO) Even at large spatial scales, Doppler velocity and intensity time series from optical spectral lines, including Fe i (Deubner 1967), H\(\alpha \) (Deubner 1969), Ca ii (Musman and Rust 1970), C i (Deubner 1971), and Na i (Slaughter and Wilson 1972) demonstrated the ubiquitous nature of oscillations throughout the photosphere and chromosphere. Through segregation of slowly-varying flows and periodic velocity fluctuations, Sheeley and Bhatnagar (1971) were able to map the spatial structuring of wave power in the vicinity of a sunspot (see Fig. 2), and found clear evidence for ubiquitous photospheric oscillatory motion with periods \(\sim 300\) s and velocity amplitudes \(\sim 0.6\) km s\(^{-1}\). Such periodicities and amplitudes were deemed observational manifestations of the pressure-modulated global p-mode spectrum of the Sun (Ulrich 1970; Leibacher and Stein 1971; Deubner 1975; Rhodes et al. 1977), where internal acoustic waves are allowed to leak upwards from the solar surface, hence producing the intensity and velocity oscillations synonymous with the compressions and rarefactions of acoustic waves. Image reproduced with permission from Sheeley and Bhatnagar (1971), copyright by Springer Observations of the photospheric Fe i absorption line, showing the sum of blue- and red-wing intensities (displayed in a negative color scale; top), the total measured Doppler velocities across the field-of-view (middle-top), the slowly varying component of the plasma flows (middle-bottom), and the Doppler velocity map arising purely from oscillatory motion (bottom). The region of interest includes a large sunspot structure (left-hand side), and shows ubiquitous oscillatory signatures with periods \(\sim 300\) s and velocity amplitudes \(\sim 0.6\) km s\(^{-1}\). Difficulties arose in subsequent work, when the measured phase velocities of the waves between two atmospheric heights were too large to remain consistent with a purely acoustic wave interpretation (Osterbrock 1961; Mein and Mein 1976). It was not yet realized that the 5-min oscillations are not propagating acoustic waves, but instead are evanescent in character since their frequency was lower than the associated acoustic cut-off value (see Sect. 3.1 for further details). Researchers further hypothesized that the magnetic fields, which were often synonymous with the observed oscillations, needed to be considered in order to accurately understand and model the wave dynamics (Michalitsanos 1973; Nakagawa 1973; Nakagawa et al. 1973; Stein and Leibacher 1974; Mein 1977, 1978, to name but a few examples). The field of magnetohydrodynamics (MHD) was introduced to effectively link the observed wave signatures to the underlying magnetic configurations, where the strong field strengths experienced in certain locations (e.g., field strengths that can approach approximately 6000 G in sunspot umbrae; Livingston et al. 2006; Okamoto and Sakurai 2018) produce wave modes that are highly modified from their purely acoustic counterparts. The importance of the magnetic field in the studies of wave phenomena cannot be overestimated, since both the alignment of the embedded magnetic field, \(B_0\), with the wavevector, k, and the ratio of the kinetic pressure, \(p_0\), to the magnetic pressure, \(B_{0}^{2}/2\mu _{0}\), play influential roles in the characteristics of any waves present (see the reviews by, e.g., Stein and Leibacher 1974; Bogdan 2000; Mathioudakis et al. 2013; Jess et al. 2015; Jess and Verth 2016). Commonly, the ratio of kinetic to magnetic pressures is referred to as the plasma-\(\beta \), defined as, $$\begin{aligned} \beta = \frac{2\mu _{0}p_{0}}{B_{0}^{2}} , \end{aligned}$$ where \(\mu _{0}\) is the magnetic permeability of free space (Wentzel 1979; Edwin and Roberts 1983; Spruit and Roberts 1983). Crucially, by introducing the local hydrogen number density, \(n_{\text{H}}\), the plasma-\(\beta \) can be rewritten (in cgs units) in terms of the Boltzmann constant, \(k_{B}\), and the temperature of the plasma, T, giving the relation, $$\begin{aligned} \beta = \frac{8{\pi }n_{\text{H}}Tk_{B}}{B_{0}^{2}} . \end{aligned}$$ In the lower regions of the solar atmosphere, including the photosphere and chromosphere, temperatures are relatively low (\(T \lesssim 15,000\,\text{K}\)) when compared to the corona. This, combined with structures synonymous with the solar surface, including sunspots, pores, and magnetic bright points (MBPs; Berger et al. 1995; Sánchez Almeida et al. 2004; Ishikawa et al. 2007; Utz et al. 2009, 2010, 2013a, b; Keys et al. 2011, 2013, 2014), all of which possess strong magnetic field concentrations (\(B_{0} \gtrsim 1000\,\text{G}\)), presents wave conduits that are inherently 'low-\(\beta \)' (i.e., are dominated by magnetic pressure; \(\beta \ll 1\)). Gary (2001) has indicated how such structures (particularly for highly-magnetic sunspots) can maintain their low-\(\beta \) status throughout the entire solar atmosphere, even as the magnetic fields begin to expand into the more volume-filling chromosphere (Gudiksen 2006; Beck et al. 2013b). Using non-linear force-free field (NLFFF; Wiegelmann 2008; Aschwanden 2016; Wiegelmann and Sakurai 2021) extrapolations, Aschwanden et al. (2016) and Grant et al. (2018) provided further evidence that sunspots can be best categorized as low-\(\beta \) wave guides, spanning from the photosphere through to the outermost extremities of the corona. As can be seen from Eq. (2), the hydrogen number density (\(n_{\text{H}}\)) also plays a pivotal role in the precise local value of the plasma-\(\beta \). As one moves higher in the solar atmosphere, a significant drop in the hydrogen number density is experienced (see, e.g., the sunspot model proposed by Avrett 1981), often with an associated scale-height on the order of 150–200 km (Alissandrakis 2020). As a result, the interplay between the number density and the expanding magnetic fields plays an important role in whether the environment is dominated by magnetic or plasma pressures. Of course, not all regions of the Sun's lower atmosphere are quite so straightforward. Weaker magnetic elements, including small-scale MBPs (Keys et al. 2020), are not able to sustain dominant magnetic pressures as their fields expand with atmospheric height. This results in the transition to a 'high-\(\beta \)' environment, where the plasma pressure dominates over the magnetic pressure (i.e., \(\beta > 1\)), which has been observed and modeled under a variety of highly magnetic conditions (e.g., Borrero and Ichimoto 2011; Jess et al. 2013; Bourdin 2017; Grant et al. 2018). This transition has important implications for the embedded waves, since the allowable modes become effected as the wave guide passes through the \(\beta \sim 1\) equipartition layer. Here, waves are able to undergo mode conversion/transmission (Schunker and Cally 2006; Cally 2007; Hansen et al. 2016), which has the ability to change the properties and observable signatures of the oscillations. However, we note that under purely quiescent conditions (i.e., related to quiet Sun modeling and observations), the associated intergranular lanes (Lin and Rimmele 1999) and granules themselves (Lites et al. 2008) will already be within the high plasma-\(\beta \) regime at photospheric heights. Since the turn of the century, there has been a number of reviews published in the field of MHD waves manifesting in the outer solar atmosphere, including those linked to standing (van Doorsselaere et al. 2009; Wang 2011), quasi-periodic (Nakariakov et al. 2005), and propagating (de Moortel 2009; Zaqarashvili and Erdélyi 2009; Lin 2011) oscillations. Many of these review articles focus on the outermost regions of the solar atmosphere (i.e., the corona), or only address waves and oscillations isolated within a specific layer of the Sun's atmosphere, e.g., the photosphere (Jess and Verth 2016) or the chromosphere (Jess et al. 2015; Verth and Jess 2016). As such, previous reviews have not focused on the coupling of MHD wave activity between the photosphere and chromosphere, which has only recently become possible due to the advancements made in multi-wavelength observations and data-driven MHD simulations. Here, in this review, we examine the current state-of-the-art in wave propagation, coupling, and damping/dissipation within the lower solar atmosphere, which comprises of both the photosphere and chromosphere, which are the focal points of next-generation ground-based telescopes, such as DKIST. In addition, we would also like this review to be useful for early career researchers (PhD students and post-doctoral staff) who may not necessarily be familiar with all of the wave-based analysis techniques the solar physics community currently have at their disposal, let alone the wave-related literature currently in the published domain. As a result, we wish this review to deviate from traditional texts that focus on summarizing and potential follow-up interpretations of research findings. Instead, we will present traditional and state-of-the-art methods for detecting, isolating, and quantifying wave activity in the solar atmosphere. This is particularly important since modern data sequences acquired at cutting-edge observatories are providing us with incredible spatial, spectral, and temporal resolutions that require efficient and robust analyses tools in order to maximize the scientific return. Furthermore, we will highlight how the specific analysis methods employed often strongly influence the scientific results obtained, hence it is important to ensure that the techniques applied are fit for purpose. To demonstrate the observational improvements made over the last \(\sim 50\) years we draw the readers attention to Figs. 2 and 3. Both Figs. 2 and 3 show sunspot structures captured using the best techniques available at that time. However, with advancements made in imaging (adaptive) optics, camera architectures, and post-processing algorithms, the drastic improvements are clear to see, with the high-quality data sequences shown in Fig. 3 highlighting the incredible observations of the Sun's lower atmosphere we currently have at our disposal. Observations of a sunspot (top row) and a quiet-Sun region (middle row) in the lower solar atmosphere, sampled at three wavelength positions in the Ca ii 8542 Å spectral line from the 1 m Swedish Solar Telescope (SST). The wavelength positions, from left to right, correspond to \(-900\) mÅ, \(-300\) mÅ, and 0 mÅ from the line core, marked with vertical dashed lines in the bottom-right panel, where the average spectral line and all sampled positions are also depicted. The bottom-left panel illustrates a photospheric image sampled with a broadband filter (centered at 3950 Å; filter width \(\approx 13.2\) Å). For better visibility, a small portion of the observed images are presented. All images are squared. Images courtesy of the Rosseland Centre for Solar Physics, University of Oslo After the wave detection and analysis techniques have been identified, with their strengths/weaknesses defined, we will then take the opportunity to summarize recent theoretical and observational research focused on the generation, propagation, coupling, and dissipation of wave activity spanning the base of the photosphere, through to the upper echelons of the chromosphere that couples into the transition region and corona above. Naturally, addressing a key question in the research domain may subsequently pose two or three more, or pushing the boundaries of observational techniques and/or theoretical modeling tools may lead to ambiguities or caveats in the subsequent interpretations. This is not only to be expected, but should be embraced as a reminder of the era of rapid discovery we currently find ourselves in. The open questions we will pose not only highlight the challenges currently seeking solution with the dawn of next-generation ground-based and space-borne telescopes, but will also set the scene for research projects spanning decades to come. Wave analysis tools Identifying, extracting, quantifying, and understanding wave-related phenomena in astrophysical time series is a challenging endeavor. Signals that are captured by even the most modern charge-coupled devices (CCDs) and scientific complementary metal-oxide-semiconductor (sCMOS) detectors are accompanied by an assortment of instrumental and noise signals that act to mask the underlying periodic signatures. For example, the particle nature of the incident photons leads to Poisson-based shot noise, resulting in randomized intensity fluctuations about the time series mean (Terrell 1977; Delouille et al. 2008), which can reduce the clarity of wave-based signatures. Furthermore, instrumental and telescope effects, including temperature sensitivity and pointing stability, can lead to mixed signals either swamping the signatures of wave motion, or artificially creating false periodicities in the resulting data products. Hence, without large wave amplitudes it becomes a challenge to accurately constrain weak wave signals in even the most modern observational time series, especially once the wave fluctuations become comparable to the noise limitations of the data sequence. In the following sub-sections we will document an assortment of commonly available tools available to the solar physics community that can help quantify wave motion embedded in observational data. In order for meaningful comparisons to be made from the techniques presented in Sect. 2, we will benchmark their suitability using two observed time series. We would like to highlight that the algorithms described and demonstrated below can be applied to any form of observational data product, including intensities, Doppler velocities, and spectral line-widths. As such, it is important to ensure that the input time series are scientifically calibrated before these wave analysis techniques are applied. HARDcam: 2011 December 10 The Hydrogen-alpha Rapid Dynamics camera (HARDcam; Jess et al. 2012a) is an sCMOS instrument designed to acquire high-cadence H\(\alpha \) images at the DST facility. The data captured by HARDcam on 2011 December 10 consists of 75 min (16:10–17:25 UT) of H\(\alpha \) images, acquired through a narrowband 0.25 Å Zeiss filter, obtained at 20 frames per second. Active region NOAA 11366 was chosen as the target, which was located at heliocentric coordinates (\(356''\), \(305''\)), or N17.9W22.5 in the more conventional heliographic coordinate system. A non-diffraction-limited imaging platescale of \(0{\,}.{\!\!}{''}138\) per pixel was chosen to provide a field-of-view size equal to \(71''\times 71''\). During the observing sequence, high-order adaptive optics (Rimmele 2004; Rimmele and Marino 2011) and speckle reconstruction algorithms (Wöger et al. 2008) were employed, providing a final cadence for the reconstructed images of 1.78 s. The dataset has previously been utilized in a host of scientific studies (Jess et al. 2013, 2016, 2017; Krishna Prasad et al. 2015; Albidah et al. 2021) due to the excellent seeing conditions experienced and the fact that the sunspot observed was highly circularly symmetric in its shape. A sample image from this observing campaign is shown in the right panel of Fig. 4, alongside a simultaneous continuum image captured by the Helioseismic and Magnetic Imager (HMI; Schou et al. 2012), onboard the Solar Dynamics Observatory (SDO; Pesnell et al. 2012). An SDO/HMI full-disk continuum image (left), with a red box highlighting the HARDcam field-of-view captured by the DST facility on 2011 December 10. An H\(\alpha \) line core image of active region NOAA 11366, acquired by HARDcam at 16:10 UT, is displayed in the right panel. Axes represent heliocentric coordinates in arcseconds In addition to the HARDcam data of this active region, we also accessed data from the Atmospheric Imaging Assembly (AIA; Lemen et al. 2012) onboard the SDO. Here, we obtained 1700 Å continuum (photospheric) images with a cadence of 24 s and spanning a 2.5 h duration. The imaging platescale is \(0{\,}.{\!\!}{''}6\) per pixel, with a \(350\times 350\) pixel\(^{2}\) cut-out providing a \(210''\times 210''\) field-of-view centered on the NOAA 11366 sunspot. The SDO/AIA images are used purely for the purposes of comparison to HARDcam information in Sect. 2.3.1. SuFI: 2009 June 9 The Sunrise Filter Imager (SuFI; Gandorfer et al. 2011) onboard the Sunrise balloon-borne solar observatory (Solanki et al. 2010; Barthol et al. 2011; Berkefeld et al. 2011) sampled multiple photospheric and chromospheric heights, with a 1 m telescope, in distinct wavelength bands during its first and second flights in 2009 and 2013, respectively (Solanki et al. 2017). High quality, seeing-free time-series of images at 300 nm and 397 nm (Ca ii H) bands (approximately corresponding to the low photosphere and low chromosphere, respectively) were acquired by SuFI/Sunrise on 2009 June 9, between 01:32 UTC and 02:00 UTC, at a cadence of 12 sec after phase-diversity reconstructions (Hirzberger et al. 2010, 2011). The observations sampled a quiet region located at solar disk center with a field of view of \(14''\times 40''\) and a spatial sampling of \(0{\,}.{\!\!}{''}02\) per pixel. Figure 5 illustrates sub-field-of-view sample images in both bands (with an average height difference of \(\approx 450\) km; Jafarzadeh et al. 2017d), along with magnetic-field strength map obtained from Stokes inversions of the Fe i 525.02 nm spectral line from the Sunrise Imaging Magnetograph eXperiment (IMaX; Martínez Pillet et al. 2011). A small magnetic bright point is also marked on all panels of Fig. 5 with a circle. Wave propagation between these two atmospheric layers in the small magnetic element is discussed in Sect. 2.4.1. A small region of an image acquired in 300 nm (left) and in Ca ii H spectral lines (middle) from SuFI/Sunrise, along with their corresponding line-of-sight magnetic fields from IMaX/Sunrise (right). The latter ranges between \(-1654\) and 2194 G. The circle includes a small-scale magnetic feature whose oscillatory behavior is shown in Fig. 25 One-dimensional Fourier analysis Traditionally, Fourier analysis (Fourier 1824) is used to decompose time series into a set of cosines and sines of varying amplitudes and phases in order to recreate the input lightcurve. Importantly, for Fourier analysis to accurately benchmark embedded wave motion, the input time series must be comprised of both linear and stationary signals. Here, a purely linear signal can be characterized by Gaussian behavior (i.e., fluctuations that obey a Gaussian distribution in the limit of large number statistics), while a stationary signal has a constant mean value and a variance that is independent of time (Tunnicliffe-Wilson 1989; Cheng et al. 2015). If non-linear signals are present, then the time series displays non-Gaussian behavior (Jess et al. 2019), i.e., it contains features that cannot be modeled by linear processes, including time-changing variances, asymmetric cycles, higher-moment structures, etc. In terms of wave studies, these features often manifest in solar observations in the form of sawtooth-shaped structures in time series synonymous with developing shock waves (Fleck and Schmitz 1993; Rouppe van der Voort et al. 2003; Vecchio et al. 2009; de la Cruz Rodríguez et al. 2013; Houston et al. 2018). Of course, it is possible to completely decompose non-linear signals using Fourier analysis, but the subsequent interpretation of the resulting amplitudes and phases is far from straightforward and needs to be treated with extreme caution (Lawton 1989). On the other hand, non-stationary time series are notoriously difficult to predict and model (Tunnicliffe-Wilson 1989). A major challenge when applying Fourier techniques to non-stationary data is that the corresponding Fourier spectrum incorporates numerous additional harmonic components to replicate the inherent non-stationary behavior, which artificially spreads the true time series energy over an uncharacteristically wide frequency range (Terradas et al. 2004). Ideally, non-stationary data needs to be transformed into stationary data with a constant mean and variance that is independent of time. However, understanding the underlying systematic (acting according to a fixed plan or system; methodical) and stochastic (randomly determined; having a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely) processes is often very difficult (Adhikari and Agrawal 2013). In particular, differencing can mitigate stochastic (i.e., non-systematic) processes to produce a difference-stationary time series, while detrending can help remove deterministic trends (e.g., time-dependent changes), but may struggle to alleviate stochastic processes (Pwasong and Sathasivam 2015). Hence, it is often very difficult to ensure observational time series are truly linear and stationary. The upper-left panel of Fig. 6 displays an intensity time series (lightcurve) that has been extracted from a penumbral pixel in the chromospheric HARDcam H\(\alpha \) data. Here, the intensities have been normalized by the time-averaged quiescent H\(\alpha \) intensity. It can be seen in the upper-left panel of Fig. 6 that in addition to sinusoidal wave-like signatures, there also appears to be a background trend (i.e., moving average) associated with the intensities. Through visual inspection, this background trend does not appear linear, thus requiring higher order polynomials to accurately model and remove. It must be remembered that very high order polynomials will likely begin to show fluctuations on timescales characteristic of the wave signatures wishing to be studied. Hence, it is important that the lowest order polynomial that best fits the data trends is chosen to avoid contaminating the embedded wave-like signatures with additional fluctuations arising from high-order polynomials. Importantly, the precise method applied to detrend the data can vary depending upon the signal being analyzed (e.g., Edmonds and Webb 1972; Edmonds 1972; Krijger et al. 2001; Rutten and Krijger 2003; de Wijn et al. 2005a, b). For example, some researchers choose to subtract the mean trend, while others prefer to divide by the fitted trend then subtract '1' from the subsequent time series. Both approaches result in a more stationary time series with a mean value of '0'. However, subtracting the mean preserves the original unit of measurement and hence the original shape of the time series (albeit with modified numerical axes labels), while dividing by the mean provides a final unit that is independent of the original measurement and thus provides a method to more readily visualize fractional changes to the original time series. It must be noted that detrending processes, regardless of which approach is selected, can help remove deterministic trends (e.g., time-dependent changes), but often struggle to alleviate stochastic processes from the resulting time series. An H\(\alpha \) line core intensity time series (upper left; solid black line) extracted from a penumbral location of the HARDcam data described in Sect. 2.1.1. The intensities shown have been normalized by the time-averaged H\(\alpha \) intensity established in a quiet Sun region within the field-of-view. A dashed red line shows a third-order polynomial fitted to the lightcurve, which is designed to detrend the data to provide a stationary time series. The upper-right panel displays the resulting time series once the third-order polynomial trend line has been subtracted from the raw intensities (black line). The solid red line depicts an apodization filter designed to preserve 90% of the original lightcurve, but gradually reduce intensities to zero towards the edges of the time series to help alleviate any spurious signals in the resulting FFT. The lower panel reveals the final lightcurve that is ready for FFT analyses, which has been both detrended and apodized to help ensure the resulting Fourier power is accurately constrained. The horizontal dashed red lines signify the new mean value of the data, which is equal to zero due to the detrending employed The dashed red line in the upper-left panel of Fig. 6 displays a third-order polynomial trend line fitted to the raw H\(\alpha \) time series. The line of best fit is relatively low order, yet still manages to trace the global time-dependent trend. Subtracting the trend line from the raw intensity lightcurve provides fluctuations about a constant mean equal to zero (upper-right panel of Fig. 6), helping to ensure the resulting time series is stationary. It can be seen that wave-like signatures are present in the lightcurve, particularly towards the start of the observing sequence, where fluctuations on the order of \(\approx 8{\%}\) of the continuum intensity are visible. However, it can also be seen from the right panel of Fig. 6 that between times of approximately 300–1300 s there still appears to be a local increase in the mean (albeit no change to the global mean, which remains zero). To suppress this local change in the mean, higher order polynomial trend lines could be fitted to the data, but it must be remembered that such fitting runs the risk of manipulating the true wave signal. Hence, for the purposes of this example, we will continue to employ third-order polynomial detrending, and make use of the time series shown in the upper-right panel of Fig. 6. For data sequences that are already close to being stationary, one may question why the removal of such background trends is even necessary since the Fourier decomposition with naturally put the trend components into low-frequency bins. Of course, the quality and/or dynamics of the input time series will have major implications regarding what degree of polynomial is required to accurately transform the data into a stationary time series. However, from the perspective of wave investigations, non-zero means and/or slowly evolving backgrounds will inappropriately apply Fourier power across low frequencies, even though these are not directly wave related, which may inadvertently skew any subsequent frequency-integrated wave energy calculations performed. The sources of such non-stationary processes can be far-reaching, and include aspects related to structural evolution of the feature being examined, local observing conditions (e.g., changes in light levels for intensity measurements), and/or instrumental effects (e.g., thermal impacts that can lead to time-dependent variances in the measured quantities). As such, some of these sources (e.g., structural evolution) are dependent on the precise location being studied, while other sources (e.g., local changes in the light level incident on the telescope) are global effects that can be mapped and removed from the entire data sequence simultaneously. Hence, detrending the input time series helps to ensure that the resulting Fourier power is predominantly related to the embedded wave activity. Another step commonly taken to ensure the reliability of subsequent Fourier analyses is to apply an apodization filter to the processed time series (Norton and Beer 1976). An Fourier transform assumes an infinite, periodically repeating sequence, hence leading to a looping behavior at the ends of the time series. Hence, an apodization filter is a function employed to smoothly bring a measured signal down to zero towards the extreme edges (i.e., beginning and end) of the time series, thus mitigating against sharp discontinuities that may arise in the form of false power (edge effect) signatures in the resulting power spectrum. Typically, the apodization filter is governed by the percentage over which the user wishes to preserve the original time series. For example, a 90% apodization filter will preserve the middle 90% of the overall time series, with the initial and final 5% of the lightcurve being gradually tapered to zero (Dame and Martic 1987). There are many different forms of the apodization filter shape that can be utilized, including tapered cosines, boxcar, triangular, Gaussian, Lorentzian, and trapezoidal profiles, many of which are benchmarked using solar time series in Louis et al. (2015). A tapered cosine is the most common form of apodization filter found in solar physics literature (e.g., Hoekzema et al. 1998), and this is what we will employ here for the purposes of our example dataset. The upper-right panel of Fig. 6 reveals a 90% tapered cosine apodization filter overplotted on top of the detrended H\(\alpha \) lightcurve. Multiplying this apodization filter by the lightcurve results in the final detrended and apodized time series shown in the bottom panel of Fig. 6, where the stationary nature of this processed signal is now more suitable for Fourier analyses. It is worth noting that following successful detrending of the input time series, the apodization percentage chosen can often be reduced, since the detrending process will suppress any discontinuities arising at the edges of the data sequence (i.e., helps to alleviate spectral leakage; Nuttall 1981). As such, the apodization percentage employed may be refined based on the ratio between the amplitude of the (primary) oscillatory signal and the magnitude of the noise present within that signal (i.e., linked to the inherent signal-to-noise ratio; Stoica and Moses 2005; Carlson and Crilly 2010). Performing a fast Fourier transform (FFT; Cooley and Tukey 1965) of the detrended time series provides a Fourier amplitude spectrum, which can be displayed as a function of frequency. An FFT is a computationally more efficient version of the discrete Fourier transform (DFT; Grünbaum 1982), which only requires \(N\log {N}\) operations to complete compared with the \(N^{2}\) operations needed for the DFT, where N is the number of data points in the time series, which can be calculated by dividing the time series duration by the acquisition cadence. Following a Fourier transform of the input data, the number of (non-negative) frequency bins, \(N_{f}\), can be computed by adding one to the number of samples (to account for the zeroth frequency representing the time series mean; Oppenheim and Schafer 2009), \(N+1\), dividing the result by a factor of two, before rounding to the nearest integer. The Nyquist frequency is the highest constituent frequency of an input time series that can be evaluated at a given sampling rate (Grenander 1959), and is defined as \(f_{\text{Ny}} = {\mathrm {sampling~rate}}/2 = 1/(2 \times {\text{cadence}})\). To evaluate the frequency resolution, \(\varDelta {f}\), of an input time series, one must divide the Nyquist frequency by the number of non-zero frequency bins (i.e., the number of steps between the zeroth and Nyquist frequencies, N/2), providing, $$\begin{aligned} \varDelta {f} = \frac{f_{\text{Ny}}}{N/2} = \frac{\frac{1}{2 \times {\text{cadence}}}}{\frac{{\mathrm {time~series~duration}}}{2 \times {\text{cadence}}}} = \frac{1}{{\mathrm {time~series~duration}}} . \end{aligned}$$ As a result, it is clear to see that the observing duration plays a pivotal role in the corresponding frequency resolution (see, e.g., Harvey 1985; Duvall et al. 1997; Gizon et al. 2017, for considerations in the helioseismology community). It is also important to note that the frequency bins remain equally spaced across the lowest (zeroth frequency or mean) to highest (Nyquist) frequency that is resolved in the corresponding Fourier spectrum. See Sect. 2.2.1 for a more detailed comparison between the terms involved in Fourier decomposition. The HARDcam dataset utilized has a cadence of 1.78 s, which results in a Nyquist frequency of \(f_{{\text {Ny}}}\approx 280\,\text{mHz}\) \(\left( \frac{1}{2\times 1.78}\right) \). It is worth noting that only the positive frequencies are displayed in this review for ease of visualization. Following the application of Fourier techniques, both negative and positive frequencies, which are identical except for their sign, will be generated for the corresponding Fourier amplitudes. This is a consequence of the Euler relationship that allows sinusoidal wave signatures to be reconstructed from a set of positive and negative complex exponentials (Smith 2007). Since input time series are real valued (e.g., velocities, intensities, spectral line widths, magnetic field strengths, etc.) with no associated imaginary terms, then the Fourier amplitudes associated with the negative and positive frequencies will be identical. This results in the output Fourier transform being Hermitian symmetric (Napolitano 2020). As a result, the output Fourier amplitudes are often converted into a power spectrum (a measure of the square of the Fourier wave amplitude), or following normalization by the frequency resolution, into a power spectral density. This approach is summarized by Stull (1988), where the power spectral density, PSD, can be calculated as, $$\begin{aligned} \text{PSD}(n) = \frac{2 \cdot |{\mathcal {F}}_{A}(n)|^{2}}{\varDelta f} = \frac{2 \cdot \left( \left[ {\mathcal {F}}_{\mathrm {real~part}}(n) \right] ^{2} + \left[ {\mathcal {F}}_{\mathrm {imaginary~part}}(n) \right] ^{2} \right) }{\varDelta f} . \end{aligned}$$ In Eq. (4), \({\mathcal {F}}_{A}(n)\) is the Fourier amplitude for any given positive frequency, n, while \(\varDelta f\) is the corresponding frequency resolution of the Fourier transform (see definition above and further discussion points in Sect. 2.2.1). Note that the factor of '2' is required due to the wrapping of identical Fourier power at negative frequencies into the positive domain. The normalization of the power spectrum by the frequency resolution is a best practice to ensure that the subsequent plots can be readily compared against other data sequences that may be acquired across shorter or longer observing intervals, hence affecting the intrinsic frequency resolution (see Sect. 2.2.1). As an example, the power spectral density of an input velocity time series, with units of km/s, will have the associated units of km\(^{2}\)/s\(^{2}\)/mHz (e.g., Stangalini et al. 2021b). The power spectral density for the detrended HARDcam H\(\alpha \) time series is depicted in the lower-middle panel of Fig. 7. Here, the intensity time series is calibrated into normalized data number (DN) units, which are often equally labeled as 'counts' in the literature. Hence, the resulting power spectral density has units of DN\(^{2}\)/mHz. Taking the raw HARDcam H\(\alpha \) lightcurve shown in the upper-left panel of Fig. 6, the upper row displays the resultant detrended time series utilizing linear (left), third-order polynomial (middle), and nineth-order polynomial (right) fits to the data. In each panel the dashed red line highlights the line of best fit, while the dashed blue line indicates the resultant data mean that is equal to zero following detrending. The lower row displays the corresponding Fourier power spectral densities for each of the linear (left), third-order polynomial (middle), and nineth-order polynomial detrended time series. Changes to the power spectral densities are particularly evident at low frequencies An additional step often employed following the calculation of the PSD of an input time series is to remove the Fourier components associated with noise. It can be seen in the lower panels of Fig. 7 that there is a flattening of power towards higher frequencies, which is often due to the white noise that dominates the signal at those frequencies (Hoyng 1976; Krishna Prasad et al. 2017). Here, white noise is defined as fluctuations in a time series that give rise to equal Fourier power across all frequencies, hence giving rise to a flat PSD (Bendat and Piersol 2011). Often, if white noise is believed to be the dominant source of noise in the data (i.e., the signal is well above the detector background noise, hence providing sufficient photon statistics so that photon noise is the dominant source of fluctuations), then its PSD can be estimated by applying Eq. (4) to a random light curve generated following a Poisson distribution, with an amplitude equivalent to the square root of the mean intensity of the time series (Fossum and Carlsson 2005a; Lawrence et al. 2011). Subtraction of the background noise is necessary when deriving, for example, the total power of an oscillation isolated in a specific frequency window (Vaseghi 2008). Other types of noise exist that have discernible power-law slopes associated with their PSDs as a function of frequency. For example, while white noise has a flat power-law slope, pink and red noise display 1/f and \(1/f^{2}\) power-law slopes, respectively, resulting in larger amplitudes at lower frequencies (Kolotkov et al. 2016; Strekalova et al. 2018). The specific dominant noise profile must be understood before it is subtracted from the relevant data PSDs. As a result of the detrending employed in Fig. 6, the absolute Fourier wave amplitude related to a frequency of 0 Hz (i.e., representing the time series mean; upper panel of Fig. 8) is very low; some 4 orders-of-magnitude lower than the power associated with white noise signatures at high frequencies. Of course, if the processed time series mean is exactly zero, then the Fourier wave amplitude at 0 Hz should also be zero. In the case of Fig. 8, the detrended time series does have a zero mean. However, because the time series is not antisymmetric about the central time value, it means that the application of the tapered cosine apodization function results in a very small shift in the time series mean away from the zero value. As a result, the subsequent Fourier amplitudes are fractionally (e.g., at the \(10^{-8}\) level for the upper panel of Fig. 8) above the zero point. Once the processes of detrending and apodization are complete, it is possible to re-calculate the time series mean and subtract this value to ensure the processed mean remains zero before the application of Fourier analyses. However, for the purposes of Figs. 7 and 8, this additional mean subtraction has not been performed to better highlight this potential artifact at the lowest temporal frequencies. Fourier power spectrum of the HARDcam H\(\alpha \) detrended lightcurve shown in the lower panel of Fig. 6 (top). For the purposes of wave filtering, a step function is shown on the Fourier spectrum using a dashed red line (middle left), where the step function equals unity between frequencies spanning 3.7–5.7 mHz (i.e., \(4.7\pm 1.0\) mHz). Multiplying the Fourier power spectrum by this step function results in isolated power features, which are displayed in the middle-right panel. Alternatively, a Gaussian function centered on 4.7 mHz, with a FWHM of 2.0 mHz, is overplotted on top of the Fourier power spectrum using a red line in the lower-left panel. Multiplying the power spectrum by the Gaussian function results in similar isolated power features, shown in the lower-right panel, but with greater apodization of edge frequencies to help reduce aliasing upon reconstruction of the filtered time series Note that Fig. 8 does not have the frequency axis displayed on a log-scale in order to reveal the 0 Hz component. As such, the upper frequency range is truncated to \(\approx 28\) Hz to better reveal the signatures present at the lower frequencies synonymous with wave activity in the solar atmosphere. The suppression of Fourier wave amplitudes at the lowest frequencies suggests that the third-order polynomial trend line fitted to the raw H\(\alpha \) intensities is useful at removing global trends in the visible time series. However, as discussed above, care must be taken when selecting the polynomial order to ensure that the line of best fit does not interfere with the real wave signatures present in the original lightcurve. To show the subtle, yet important impacts of choosing a suitable trend line, Fig. 7 displays the resultant detrended time series of the original HARDcam H\(\alpha \) lightcurve for three different detrending methods, e.g., the subtraction of a linear, a third-order polynomial, and a nineth-order polynomial line of best fit. It can be seen from the upper panels of Fig. 7 that the resultant (detrended) lightcurves have different perturbations away from the new data mean of zero. This translates into different Fourier signatures in the corresponding power spectral densities (lower panels of Fig. 7), which are most apparent at the lowest frequencies (e.g., \(<3\) mHz). Therefore, it is clear that care must be taken when selecting the chosen order of the line of best fit so that it doesn't artificially suppress true wave signatures that reside in the time series. It can be seen in the lower-middle panel of Fig. 7 that the largest Fourier power signal is at a frequency of \(\approx 4.7\) mHz, corresponding to a periodicity of \(\approx 210\) s, which is consistent with previous studies of chromospheric wave activity in the vicinity of sunspots (e.g., Felipe et al. 2010; Jess et al. 2013; López Ariste et al. 2016, to name but a few examples). Common misconceptions involving Fourier space Translating a time series into the frequency-dependent domain through the application of a Fourier transform is a powerful diagnostic tool for analyzing the frequency content of (stationary) time series. However, when translating between the temporal and frequency domains it becomes easy to overlook the importance of the sampling cadence and the time series duration in the corresponding frequency axis. For example, one common misunderstanding is the belief that increasing the sampling rate of the data (e.g., increasing the frame rate of the observations from 10 frames per second to 100 frames per second) will improve the subsequent frequency resolution of the corresponding Fourier transform. Unfortunately, this is not the case, since increasing the frame rate raises the Nyquist frequency (highest frequency component that can be evaluated), but does not affect the frequency resolution of the Fourier transform. Instead, to improve the frequency resolution one must obtain a longer-duration time series or employ 'padding' of the utilized lightcurve to increase the number of data points spanning the frequency domain (Lyons 1996). To put these aspects into better context, we will outline a worked example that conveys the importance of both time series cadence and duration. Let us consider two complementary data sequences, one from the Atmospheric Imaging Assembly (AIA; Lemen et al. 2012) onboard the SDO spacecraft, and one from the 4 m ground-based Daniel K. Inouye Solar Telescope (DKIST; Tritschler et al. 2016; Rimmele et al. 2020; Rast et al. 2021). Researchers undertaking a multi-wavelength investigation of wave activity in the solar atmosphere may choose to employ these types of complementary observations in order to address their science objectives. Here, the AIA/SDO observations consist of 3 h (10,800 s) of 304 Å images taken at a cadence of 12.0 s, while the DKIST observations comprise of 1 h (3600 s) of H\(\alpha \) observations taken by the Visual Broadband Imager (VBI; Wöger 2014) at a cadence of 3.2 s. The number of samples, N, for each of the time series can be calculated as \(N_{\text{AIA}} = 10{,}800 / 12.0 = 900\) and \(N_{\text{VBI}} = 3600 / 3.2 = 1125\). Therefore, it is clear that even though the AIA/SDO observations are obtained over a longer time duration, the higher cadence of the VBI/DKIST observations results in more samples associated with that data sequence. The number of frequency bins, \(N_{f}\), can also be computed as \(N_{f({\text{AIA}})} = (900+1) / 2 = 451\), while \({N_{f({\text{VBI}})} = (1125+1) / 2 = 563}\). Hence, the frequency axes of the corresponding Fourier transforms will be comprised of 451 and 563 positive real frequencies (i.e., \(\ge 0\) Hz) for the AIA/SDO and VBI/DKIST data, respectively. The increased number of frequency bins for the higher cadence VBI/DKIST observations sometimes leads to the belief that this provides a higher frequency resolution. However, we have not yet considered the effect of the image cadence on the corresponding frequency axes. In the case of the AIA/SDO and VBI/DKIST observations introduced above, the corresponding Nyquist frequencies can be computed as \(f_{\mathrm {Ny(AIA)}} = 1/(2 \times 12.0) \approx 42\) mHz and \({f_{\mathrm {Ny(VBI)}} = 1/(2 \times 3.2) \approx 156}\) mHz, respectively. As a result, it should become clear that while the VBI/DKIST observations result in a larger number of corresponding frequency bins (i.e., \(N_{f({\text{VBI}})} > N_{f({\text{AIA}})}\)), these frequency bins are required to cover a larger frequency interval up to the calculated Nyquist value. Subsequently, for the case of the AIA/SDO and VBI/DKIST observations, the corresponding frequency resolutions can be calculated as \({\varDelta {f}_{\text{AIA}} = 1/10{,}800 = 0.0926}\) mHz and \({\varDelta {f}_{\text{VBI}} = 1/3600 = 0.2778}\) mHz, respectively. Note that while the frequency resolution is constant, the same cannot be said for the period resolution due to the reciprocal nature between these variables. For example, at a frequency of 3.3 mHz (\(\approx 5\) min oscillation), the period resolution for VBI/DKIST is \(\approx 25\) s (i.e., \(\approx 303\pm 25\) s), while for AIA/SDO the period resolution is \(\approx 8\) s (i.e., \(\approx 303\pm 8\) s). Similarly, at a frequency of 5.6 mHz (\(\approx 3\) min oscillation), the period resolutions for VBI/DKIST and AIA/SDO are \(\approx 9\) s (i.e., \(\approx 180\pm 9\) s) and \(\approx 3\) s (i.e., \(\approx 180\pm 3\) s), respectively. Figure 9 depicts the Fourier frequencies (left panel), and their corresponding periodicities (right panel), as a function of the derived frequency bin. It can be seen from the left panel of Fig. 9 that the AIA/SDO observations produce a lower number of frequency bins (i.e., a result of less samples, \(N_{\text{AIA}} < N_{\text{VBI}}\)), alongside a smaller peak frequency value (i.e., a lower Nyquist frequency, \({f_{\mathrm {Ny(AIA)}}} < {f_{\mathrm {Ny(VBI)}}}\), caused by the lower temporal cadence). However, as a result of the longer duration observing sequence for the AIA/SDO time series (i.e., 3 h for AIA/SDO versus 1 h for VBI/DKIST), the resulting frequency resolution is better (i.e., \({\varDelta {f}_{\text{AIA}}} < {\varDelta {f}_{\text{VBI}}}\)), allowing more precise frequency-dependent phenomena to be uncovered in the AIA/SDO observations. Of course, due to the AIA/SDO cadence being longer than that of VBI/DKIST (i.e., 12.0 s for AIA/SDO versus 3.2 s for VBI/DKIST), this results in the inability to examine the fastest wave fluctuations, which can be seen more clearly in the right panel of Fig. 9, whereby the VBI/DKIST observations are able to reach lower periodicities when compared to the complementary AIA/SDO data sequence. The above scenario is designed to highlight the important interplay between observing cadences and durations with regards to the quantitative parameters achievable through the application of Fourier transforms. For example, if obtaining the highest possible frequency resolution is of paramount importance to segregate closely matched wave frequencies, then it is the overall duration of the time series (not the observing cadence) that facilitates the necessary frequency resolution. The frequencies (left panel) and corresponding periodicities (right panel) that can be measured through the application of Fourier analysis to an input time series. Here, the solid blue lines depict AIA/SDO observations spanning a 3 h duration and acquired with a temporal cadence of 12.0 s, while the solid red lines highlight VBI/DKIST observations spanning a 1 h window and acquired with a temporal cadence of 3.2 s. It can be seen that both the cadence and observing duration play pivotal roles in the resulting frequencies/periodicities achievable, with the longer duration AIA/SDO observations providing a better frequency resolution, \(\varDelta {f}\), while the higher cadence VBI/DKIST data results in a better Nyquist frequency that allows more rapid wave fluctuations to be studied. In the left and right panels, the dashed blue and red lines depict the Nyquist frequencies and corresponding periodicities for the AIA/SDO and VBI/DKIST data sequences, respectively (see text for more information) Another important aspect to keep in mind is that the Fourier spectrum is only an estimate of the real power spectrum of the studied process. The finite-duration time series, noise, and distortions due to the intrinsic covariance within each frequency bin may lead to spurious peaks in the spectrum, which could be wrongly interpreted as real oscillations. As a result, one may believe that by considering longer time series the covariance of each frequency bin will reduce, but this is not true since the bin width itself becomes narrower. One way forward is to divide the time series into different segments and average the resulting Fourier spectra calculated from each sub-division—the so-called Welch method (Welch 1967), at the cost of reducing the resolution of frequencies explored. However, data from ground-based observatories are generally limited to 1–2 h each day, and it is not always possible to obtain such long time series. Therefore, special attention must be paid when interpreting the results. It is also possible to artificially increase the duration of the input time series through the process known as 'padding' (Ransom et al. 2002), which has been employed across a wide range of solar studies incorporating the photosphere, chromosphere, and corona (e.g., Ballester et al. 2002; Auchère et al. 2016; Hartlep and Zhao 2021; Jafarzadeh et al. 2021). Here, the beginning and/or end of the input data sequence is appended with a large number of data points with values equal to the mean of the overall time series. The padding adds no additional power to the data, but it acts to increase the fine-scale structure present in the corresponding Fourier transform since the overall duration of the data has been artificially increased. Note that padding with the data mean is preferable to padding with zeros since this alleviates the introduction of low-frequency power into the subsequent Fourier transform. Of course, if the input time series had previously been detrended (see Sect. 2.2) so that the resulting mean of the data is zero, then zero-padding and padding with the time series mean are equivalent. Note that the process of padding is often perceived to increase the usable Fourier frequency resolution of the dataset, which is unfortunately incorrect. The use of padded time series acts to reveal small-scale structure in the output Fourier transform, but as it does not add any real signal to the input data sequence, the frequency resolution remains governed by the original time series characteristics (Eriksson 1998). As such, padding cannot recover and/or recreate any missing information in the original data sequence. This effect can be visualized in Fig. 10. Here, a resultant wave consisting of two sinusoids with normalized frequencies 0.075 and 0.125 of the sampling frequency is cropped to 32 and 64 data points in length. Figure 10a shows the corresponding power spectral density (PSD) following Fourier transformation on both the raw 32 data samples array (solid black line with circular data points) and the original 32 data point array that has been padded to a total of 64 data points (dashed black line with crosses). In addition, Fig. 10b shows another PSD for the data array containing 64 input samples (solid black line with circular data points), alongside the same PSD for the original 32 data point array that has been padded to a total of 64 data points (dashed black line with crosses; same as Fig. 10a). From Fig. 10a it can be seen that while the padding increases the number of data points along the frequency axis (and therefore creates some additional small-scale fluctuations in the resulting PSD), it does not increase the frequency resolution to a value needed to accurately identify the two sinusoidal components. This is even more apparent in Fig. 10b, where the Fourier transform of the time series containing 64 data points now contains sufficient information and frequency resolution to begin to segregate the two sinusoidal components. The padded array (32 data points plus 32 padded samples) contains the same number of elements along the frequency axis, but does not increase the frequency resolution to allow the quantification of the two embedded wave frequencies. The use of padding is often employed to decrease the computational time. Indeed, FFT algorithms work more efficiently if the number of samples is an integer power of 2. Image reproduced with permission from Eriksson (1998) Panels revealing the effect of padding an input time series on the resulting Fourier transform. For this example, two sinusoids are superimposed with normalized frequencies equal to 0.075 and 0.125 of the sampling frequency. Panels a, b show the resulting power spectral densities (PSDs) following the Fourier transforms of 32 input data points (solid black line with circular data points; left) and 64 input data points (solid black line with circular data points; right), respectively. In both panels, the dashed black lines with crosses represent the Fourier transforms of 32 input data points that have been padded to a total of 64 data points. It can be seen that the increased number of data points associated with the padded array results in more samples along the frequency axis, but this does not improve the frequency resolution to the level consistent with supplying 64 genuine input samples (solid black line in the right panel). Of course, while data padding strictly does not add usable information into the original time series, it can be utilized to provide better visual segregation of closely spaced frequencies. To show an example of this application, Fig. 11 displays the effects of padding and time series duration in a similar format to Fig. 10. In Fig. 11, the upper-left panel shows an intensity time series that is created from the superposition of two closely spaced frequencies, here 5.0 mHz and 5.4 mHz. The resultant time series is \(\approx 3275\) s (\(\sim 55\) min) long, and constructed with a cadence of 3.2 s to remain consistent with the VBI/DKIST examples shown earlier in this section. The absolute extent of this 3275 s time series is bounded in the upper-left panel of Fig. 11 by the shaded orange background. In order to pad this lightcurve, a new time series is constructed that has twice as many data points in length, making the time series duration now \(\approx 6550\) s (\(\sim 110\) min). The original \(\approx 3275\) s lightcurve is placed in the middle of the new (expanded) array, thus providing zero-padding at the start and end of the time series. The corresponding power spectral densities (PSDs) for both the original and padded time series are shown in the lower-left panel of Fig. 11 using black and red lines, respectively. Note that the frequency axis is cropped to the range of 1–10 mHz for better visual clarity. It is clear that the original input time series creates a broad spectral peak at \(\approx 5\) mHz, but the individual 5.0 mHz and 5.4 mHz components are not visible in the corresponding PSD (solid black line in the lower-left panel of Fig. 11). On the other hand, the PSD from the padded array (solid red line in the lower-left panel of Fig. 11) does show a double peak corresponding to the 5.0 mHz and 5.4 mHz wave components, highlighting how such padding techniques can help segregate multi-frequency wave signatures. Upper left: Inside the shaded orange region is a synthetic lightcurve created from the superposition of 5.0 mHz and 5.4 mHz waves, which are generated with a 3.2 s cadence (i.e., from VBI/DKIST) over a duration of \(\approx 3275\) s. This time series is zero-padded into a \(\approx 6550\) s array, which is displayed in its entirety in the upper-left panel using a solid black line. Upper right: The same resultant waveform created from the superposition of 5.0 mHz and 5.4 mHz waves, only now generated for the full \(\approx 6550\) s time series duration (i.e., no zero-padding required). Lower left: The power spectral density (PSD) of the original (un-padded) lightcurve is shown using a solid black line, while the solid red line reveals the PSD of the full zero-padded time series. It is clear that the padded array offers better visual segregation of the two embedded wave frequencies. Lower right: The PSDs for both the full \(\approx 6550\) s time series (solid black line) and the zero-padded original lightcurve (solid red line; same as that depicted in the lower-left panel). It can be seen that while the padded array provides some segregation of the 5.0 mHz and 5.4 mHz wave components, there is no better substitute at achieving high frequency resolution than obtaining long-duration observing sequences. Note that both PSD panels have the frequency axis truncated between 1 and 10 mHz for better visual clarity Of course, padding cannot be considered a universal substitute for a longer duration data sequence. The upper-right panel of Fig. 11 shows the same input wave frequencies (5.0 mHz and 5.4 mHz), only with the resultant wave now present throughout the full \(\sim 110\) min time sequence. Here, the beat pattern created by the superposition of two closely spaced frequencies can be readily seen, which is a physical manifestion of wave interactions also studied in high-resolution observations of the lower solar atmosphere (e.g., Krishna Prasad et al. 2015). The resulting PSD of the full-duration time series is depicted in the lower-right panel of Fig. 11 using a solid black line. For comparison, the PSD constructed from the padded original lightcurve is also overplotted using a solid red line (same as shown using a solid red line in the lower-left panel of Fig. 11). It is clearly seen that the presence of the wave signal across the full time series provides the most prominent segregation of the 5.0 mHz and 5.4 mHz spectral peaks. While these peaks are also visible in the padded PSD (solid red line), they are less well defined, hence reiterating that while time series padding can help provide better isolation of closely spaced frequencies, there is no better candidate for high frequency resolution than long duration observing sequences. On the other hand, if rapidly fluctuating waveforms are wanting to be studied, then achieving a high Nyquist frequency is necessary to achieve these objectives, which the duration of the observing sequence is unable to assist with. Hence, it is important to tailor the observing strategy to ensure the frequency requirements are met. This, of course, can present challenges for particular facilities. For example, if a frequency resolution of \(\varDelta {f} \approx 35~\mu \text{Hz}\) is required (e.g., to probe the sub-second timescales of physical processes affecting frequency distributions in the aftermath of solar flares; Wiśniewska et al. 2019), this would require an observing duration of approximately 8 continuous hours, which may not be feasible from ground-based observatories that are impacted by variable weather and seeing conditions. Similarly, while space-borne satellites may be unaffected by weather and atmospheric seeing, these facilities may not possess a sufficiently large telescope aperture to probe the wave characteristics of small-scale magnetic elements (e.g., Chitta et al. 2012b; Van Kooten and Cranmer 2017; Keys et al. 2018) and naturally have reduced onboard storage and/or telemetry restrictions, thus creating difficulties obtaining 8 continuous hours of observations at maximum acquisition cadences. Hence, complementary data products, including ground-based observations at high cadence and overlapping space-borne data acquired over long time durations, are often a good compromise to help provide the frequency characteristics necessary to achieve the science goals. Of course, next-generation satellite facilities, including the recently commissioned Solar Orbiter (Müller et al. 2013, 2020) and the upcoming Solar-C (Shimizu et al. 2020) missions, will provide breakthrough technological advancements to enable longer duration and higher cadence observations of the lower solar atmosphere than previously obtained from space. Another alternative to achieve both long time-series and high-cadence observations is the use of balloon-borne observatories, including the Sunrise (Bello González et al. 2010b) and Flare Genesis (Murphy et al. 1996; Bernasconi et al. 2000) experiments, where the data are stored in onboard discs. Such missions, however, have their own challenges and are limited to only a couple of days of observations during each flight. Calculating confidence levels After displaying Fourier spectra, it is often difficult to pinpoint exactly what features are significant, and what power spikes may be the result of noise and/or spurious signals contained within the input time series. A robust method of determining the confidence level of individual power peaks is to compare the Fourier transform of the input time series with the Fourier transform of a large number (often exceeding 1000) of randomized lightcurves based on the original values (i.e., ensuring an identical distribution of intensities throughout the new randomized time series; O'Shea et al. 2001). Following the randomization and computation of FFTs of the new time series, the probability, p, of randomized fluctuations being able to reproduce a given Fourier power peak in the original spectrum can be calculated. To do this, the Fourier power at each frequency element is compared to the power value calculated for the original time series, with the proportion of permutations giving a Fourier power value greater than, or equal to, the power associated with the original time series providing an estimate of the probability, p. Here, a small value of p suggests that the original lightcurve contains real oscillatory phenomena, while a large value of p indicates that there is little (or no) real periodicities contained within the data (Banerjee et al. 2001; O'Shea et al. 2001). Indeed, it is worth bearing in mind that probability values of \(p=0.5\) are consistent with noise fluctuations (i.e., the variance of a binomial distribution is greatest at \(p=0.5\); Lyden et al. 2019), hence why the identification of real oscillations requires small values of p. Following the calculation of the probability, p, the value can be reversed to provide a percentage probability that the detected oscillatory phenomenon is real, through the relationship, $$\begin{aligned} p_{\text {real}} = (1 - p) \times 100 . \end{aligned}$$ Here, \(p_{\text {real}}=100\%\) would suggest that the wave motion present in the original time series is real, since no (i.e., \(p=0\)) randomized time series provided similar (or greater) Fourier power. Contrarily, \(p_{\text {real}}=0\%\) would indicate a real (i.e., statistically significant) power deficit at that frequency, since all (i.e., \(p=1\)) randomized time series provided higher Fourier power at that specific frequency. Finally, a value of \(p_{\text {real}} = 50\%\) would indicate that the power peak is not due to actual oscillatory motions. A similar approach is to calculate the means and standard deviations of the Fourier power values for each independent frequency corresponding to the randomized time series. This provides a direct estimate of whether the original measured Fourier power is within some number of standard deviations of the mean randomized-data power density. As a result, probability estimations of the detected Fourier peaks can be estimated providing the variances and means of the randomized Fourier power values are independent (i.e., follow a normal distribution; Bell et al. 2018). If a large number (\(\ge 1000\)) of randomized permutations are employed, then the fluctuation probabilities will tend to Gaussian statistics (Linnell Nemec and Nemec 1985; Delouille et al. 2008; Jess et al. 2019). In this case, the confidence level can be obtained using a standardized Gaussian distribution. For many solar applications (e.g., McAteer et al. 2002b, 2003; Andic 2008; Bello González et al. 2009; Stangalini et al. 2012; Dorotovič et al. 2014; Freij et al. 2016; Jafarzadeh et al. 2017d, to name but a few examples), a confidence level of 95% is typically employed as a threshold for reliable wave detection. In this case, \(99\% \le p_{\text {real}} \le 100\%\) (or \(0.00 \le p \le 0.01\)) is required to satisfy the desired 95% confidence level. To demonstrate a worked example, we utilize the HARDcam H\(\alpha \) time series shown in the left panel of Fig. 6, which consists of 2528 individual time steps. This, combined with 1000 randomized permutations of the lightcurve, provides 1000 FFTs with 1000 different measures in each frequency bin; more than sufficient to allow the accurate use of Gaussian number statistics (Montgomery and Runger 2003). For each randomization, the resulting Fourier spectrum is compared to that depicted in the upper panel of Fig. 8, with the resulting percentage probabilities, \(p_{\text {real}}\), calculated according to Eq. (5) for each of the temporal frequencies. The original Fourier power spectrum, along with the percentage probabilities for each corresponding frequency, are shown in the left panel of Fig. 12. It can be seen that the largest power signal at \(\approx 4.7\) mHz (\(\approx 210\) s) has a high probability, suggesting that this is a detection of a real oscillation. Furthermore, the neighboring frequencies also have probabilities above 99%, further strengthening the interpretation that wave motion is present in the input time series. It should be noted that with potentially thousands of frequency bins in the high-frequency regime of an FFT, having some fraction of points that exceed a 95% (or even 99%) confidence interval is to be expected. Therefore, many investigations also demand some degree of coherency in the frequency and/or spatial distributions to better verify the presence of a real wave signal (similar to the methods described by Durrant and Nesis 1982; Di Matteo and Villante 2018). To better highlight which frequencies demonstrate confidence levels exceeding 95%, the right panel of Fig. 12 overplots (using bold red crosses) those frequencies containing percentage probabilities in excess of 99%. The full frequency extent of the Fourier power spectral densities shown in the lower-middle panel of Fig. 7, displayed using a log–log scale for better visual clarity (left panel). Overplotted using a solid red line are the percentage probabilities, \(p_{\text {real}}\), computed over 1000 randomized permutations of the input lightcurve. Here, any frequencies with \(p_{\text {real}} \ge 99\%\) correspond to a statistical confidence level in excess of 95%. The same Fourier power spectral density is shown in the right panel, only now with red crossed symbols highlighting the locations where the Fourier power provides confidence levels greater than 95% Lomb-scargle techniques A requirement for the implementation of traditional Fourier-based analyses is that the input time series is regularly and evenly sampled. This means that each data point of the lightcurve used should be obtained using the same exposure time, and subsequent time steps should be acquired with a strict, uniform cadence. Many ground-based and space-borne instruments employ digital synchronization triggers for their camera systems that can bring timing uncertainties down to the order of \(10^{-6}\) s (Jess et al. 2010b), which is often necessary in high-precision polarimetric studies (Kootz 2018). This helps to ensure the output measurements are sufficiently sampled for the application of Fourier techniques. However, often it is not possible to obtain time series with strict and even temporal sampling. For example, raster scans using slit-based spectrographs can lead to irregularly sampled observations due to the physical times required to move the spectral slit.Footnote 1 Also, some observing strategies interrupt regularly sampled data series for the measurement of Stokes I/Q/U/V signals every few minutes, hence introducing data gaps during these times (e.g., Samanta et al. 2016). Furthermore, hardware requiring multiple clocks to control components of the same instrument (e.g., the mission data processor and the polarization modulator unit on board the Hinode spacecraft; Kosugi et al. 2007) may have a tendency to drift away from one another, hence effecting the regularity of long-duration data sequences (Sekii et al. 2007). In addition, some facilities including the Atacama Large Millimeter/submillimeter Array (ALMA; Wootten and Thompson 2009; Wedemeyer et al. 2016) require routine calibrations that must be performed approximately every 10 min (with each calibration taking \(\sim 2.5\) min; Wedemeyer et al. 2020), hence introducing gaps in the final time series (Jafarzadeh et al. 2021). Finally, in the case of ground-based observations, a period of reduced seeing quality or the passing of a localized cloud will result in a number of compromised science frames, which require removal and subsequent interpolation (Krishna Prasad et al. 2016). If the effect of data sampling irregularities is not believed to be significant (i.e., is a fraction of the wave periodicities expected), then it is common to interpolate the observations back on to a constant cadence grid (e.g., Jess et al. 2012c; Kontogiannis et al. 2016). Of course, how the data points are interpolated (e.g., linear or cubic fitting) may effect the final product, and as a result, care should be taken when interpolating time series so that artificial periodicities are not introduced to the data through inappropriate interpolation. This is particularly important when the data sequence requires subsequent processing, e.g., taking the derivative of a velocity time series to determine the acceleration characteristics of the plasma. Under these circumstances, inappropriate interpolation of the velocity information may have drastic implications for the derived acceleration data. For this form of analysis, the use of 3-point Lagrangian interpolation is often recommended to ensure the endpoints of the time series remain unaffected due to the use of error propagation formulae (Veronig et al. 2008). However, in the case for very low cadence data, 3-point Lagrangian interpolation may become untrustworthy due to the large temporal separation between successive time steps (Byrne et al. 2013). For these cases, a Savtizky–Golay (Savitzky and Golay 1964) smoothing filter can help alleviate sharp (and often misleading) kinematic values (Byrne 2015). If interpolation of missing data points and subsequent Fourier analyses is not believed to be suitable, then Lomb–Scargle techniques (Lomb 1976; Scargle 1982) can be implemented. As overviewed by Zechmeister and Kürster (2009), the Lomb–Scargle algorithms are useful for characterizing periodicities present in unevenly sampled data products. Often, least-squares minimization processes assume that the data to be fitted are normally distributed (Barret and Vaughan 2012), which may be untrue since the spectrum of a linear, stationary stochastic process naturally follows a \(\chi _{2}^{2}\) distribution (Groth 1975; Papadakis and Lawrence 1993). However, a benefit of implementing the Lomb–Scargle algorithms is that the noise at each individual frequency can be represented by a \(\chi ^{2}\) distribution, which is equivalent to a spectrum being reliably derived from more simplistic least-squares analysis techniques (VanderPlas 2018). Crucially, Lomb–Scargle techniques differ from conventional Fourier analyses by the way in which the corresponding spectra are computed. While Fourier-based algorithms compute the power spectrum by taking dot products of the input time series with pairs of sine- and cosine-based waveforms, Lomb–Scargle techniques attempt to first calculate a delay timescale so that the sinusoidal pairs are mutually orthogonal at discrete sample steps, hence providing better power estimates at each frequency without the strict requirement of evenly sampled data (Press et al. 2007). In the field of solar physics, Lomb–Scargle techniques tend to be more commonplace in investigations of long-duration periodicities spanning days to months (i.e., often coupled to the solar cycle; Ni et al. 2012; Deng et al. 2017), although they can be used effectively in shorter duration observations where interpolation is deemed inappropriate (e.g., Maurya et al. 2013). One-dimensional Fourier filtering Often, it is helpful to filter the time series in order to isolate specific wave signatures across a particular range of frequencies. This is useful for a variety of studies, including the identification of beat frequencies (Krishna Prasad et al. 2015), the more reliable measurement of phase variations between different wavelengths/filters (Krishna Prasad et al. 2017), and in the identification of various wave modes co-existing within single magnetic structures (Keys et al. 2018). From examination of the upper panel of Figs. 8 and 12, it is clear that the frequency associated with peak Fourier power is \(\approx 4.7\) mHz, and is accompanied by high confidence levels exceeding 95%. If we wish to reconstruct a filtered time series centered on this dominant frequency, then we have a number of options available. The dashed red line in the middle-left panel of Fig. 8 depicts a step function frequency range of \(4.7\pm 1.0\) mHz, whereby the filter is assigned values of '1' and '0' for frequencies inside and outside, respectively, this chosen frequency range. Multiplying the Fourier power spectrum by this step function frequency filter results in the preserved power elements that are shown in the middle-right panel of Fig. 8, which can be passed through an inverse FFT to create a Fourier filtered time series in the range of \(4.7\pm 1.0\) mHz. However, by employing a step function frequency filter, there is a sharp and distinct transition between elevated power signals and frequencies with zero Fourier power. This abrupt transition can create aliasing artifacts in the reconstructed time series (Gobbi et al. 2006). Alternatively, to help mitigate against aliasing (i.e., sharp Fourier power transitions at the boundaries of the chosen frequency range), the Fourier power spectrum can be multiplied by a filter that peaks at the desired frequency, before gradually reducing in transmission towards the edges of the frequency range. An example of such a smoothly varying filter is documented in the lower panels of Fig. 8, where a Gaussian centered at 4.7 mHz, with a full-width at half-maximum (FWHM) of 2 mHz, is overplotted on top of the Fourier spectrum using a solid red line, which can be multiplied by the original Fourier spectrum to gradually decrease the power down to zero at the edges of the desired frequency range (lower-right panel of Fig. 8). Performing an inverse FFT on this filtered Fourier power spectrum results in the reconstruction of an H\(\alpha \) lightcurve containing dominant periodicities of \(\approx 210\) s, which can be seen in Fig. 13. This process is identical to convolving the detrended intensity time series with the given Gaussian frequency filter, but we perform this process step-by-step here for the purposes of clarity. The original HARDcam time series (upper solid black line), normalized by the quiescent H\(\alpha \) continuum intensity, and displayed as a function of time. The lower solid black line is a Fourier filtered lightcurve, which has been detrended using a third-order polynomial (right panel of Fig. 6), convolved with a Gaussian frequency filter centered on 4.7 mHz with a FWHM of 2.0 mHz (lower-right panel of Fig. 8), before applying an inverse FFT to reconstruct the filtered time series. For visual clarity, the filtered lightcurve has been offset to bring it closer to the original time series intensities It must be noted that here we employ a Gaussian frequency filter to smoothly transition the Fourier power to values of zero outside of the desired frequency range. However, other filter shapes can also be chosen, including Lorentzian, Voigt, or even custom profile shapes depending upon the level of smoothing required by the investigators. At present, there is no firm consensus regarding which filter profile shape is best to use, so it may be necessary to choose the frequency filter based upon the specifics of the data being investigated, e.g., the frequency resolution, the amplitude of the spectral components wishing to be studied, the width of the documented Fourier peaks, etc. Of course, we must remind the reader that isolating a relatively limited range of frequencies in Fourier space and transforming these back into real (temporal) space will always result in the appearance of a periodic signal at the frequency of interest, even if the derived Fourier transform was originally noise dominated. Therefore, it is necessary to combine confidence interval analysis (see Sect. 2.2.2) with such Fourier filtering techniques to ensure that only statistically significant power is being considered in subsequent analyses. Fourier phase lag analysis Many observational datasets will be comprised of a combination of multi-wavelength and/or multi-component spectral measurements. For example, the Rapid Oscillations in the Solar Atmosphere (ROSA; Jess et al. 2010b) instrument at the DST is able to observe simultaneously in six separate bandpasses. It is common practice to acquire contemporaneous imaging observations through a combination of G-band, 3500 Å and 4170 Å broadband continuum filters, in addition to Ca ii K, Na i D\(_{1}\), and H\(\alpha \) narrowband filters, which allows wave signatures to be studied from the depths of the photosphere through to the base of the transition region (e.g., Morton et al. 2011, 2012; Jess et al. 2012a, b, c; Kuridze et al. 2012; Grant et al. 2015; Krishna Prasad et al. 2015, 2016, 2017; Keys et al. 2018). On the other hand, Fabry–Pérot spectral imaging systems such as the Crisp Imaging Spectropolarimeter (CRISP; Scharmer et al. 2008) and the Interferometric Bi-dimensional Spectrometer (IBIS; Cavallini 2006), are able to capture two-dimensional spatial information (often including spectropolarimetric Stokes I/Q/U/V measurements) across a single or multiple spectral lines. This allows a temporal comparison to be made between various spectral parameters of the same absorption line, such as the full-width at half-maximum (FWHM), intensity, Doppler velocity, and magnitudes of circular/linear polarization (providing spectropolarimetric measurements are made). As a result, harnessing multi-wavelength and/or multi-component observations provides the ability to further probe the coupling of wave activity in the lower solar atmosphere. The upper panel of Fig. 14 displays two synthetic intensity time series generated with a cadence of 1.78 s (consistent with the HARDcam H\(\alpha \) data products overviewed in Sect. 2.1.1), each with a frequency of 5.6 mHz (\(\approx 180\) s periodicity) and a mean intensity equal to 2. However, the red lightcurve (LC2) is delayed by \(45^{\circ }\), and hence lags behind the black lightcurve (LC1) by 0.785 radians. As part of the standard procedures prior to the implementation of Fourier analysis (see, e.g., Sect. 2.2), each of the time series are detrended (in this case by subtracting a linear line of best fit) and apodized using a 90% tapered cosine apodization filter. The final intensity time series are shown in the lower panel of Fig. 14, and are now suitable for subsequent Fourier analyses. Synthetic time series (upper panel), each with a cadence of 1.78 s, displaying a frequency of 5.6 mHz (\(\approx 180\) s periodicity) and a mean intensity equal to 2. The red lightcurve is delayed by 45 \(^{\circ }\) (0.785 radians) with respect to the black lightcurve. The lower panel displays the detrended and apodized time series, which are now suitable for subsequent FFT analyses Following the approaches documented in Sect. 2.2.2, FFTs of the detrended and apodized time series are taken, with 95% confidence levels calculated. The resulting FFT power spectral densities are shown in Fig. 15, where the red crosses indicate frequencies where the associated power is in excess of the calculated 95% confidence levels for each respective time series. It can be seen in both the upper and lower panels of Fig. 15 that the input 5.6 mHz signal is above the 95% confidence threshold for both LC1 and LC2. Next, the cross-power spectrum, \(\varGamma _{12}(\nu )\), between the FFTs of LC1 and LC2 is calculated following the methods described by Bendat and Piersol (2000) as; $$\begin{aligned} \varGamma _{12}(\nu ) = F(LC1) * \overline{F(LC2)} , \end{aligned}$$ with F denoting an FFT and \({\overline{F}}\) the complex conjugate of the FFT. The cross-power spectrum is a complex array (just like the FFTs from which it is computed), and therefore has components representative of its co-spectrum (\(d(\nu )\); real part of the cross-power spectrum) and quadrature spectrum (\(c(\nu )\); imaginary part of the cross-power spectrum). The co-spectrum from the input time series LC1 and LC2 is shown in the upper panel of Fig. 16. The red cross signifies the frequency where the Fourier power exceeded the 95% confidence level in both FFTs, namely 5.6 mHz, which is consistent with the synthetic lightcurves shown in Fig. 14. FFT power spectral densities for LC1 (upper panel) and LC2 (lower panel), corresponding to the solid black and red lines in the lower panel of Fig. 14, respectively. The red crosses highlight frequencies where the calculated Fourier power is above the 95% confidence level. It can be seen that the synthetic 5.6 mHz input signal is accurately identified in both corresponding power spectra, with its associated Fourier power being in excess of the 95% confidence threshold. The oscillatory behavior at high frequencies is due to the selected apodization filter Co-spectrum (upper panel; real part of the cross-power spectrum) of the input time series LC1 and LC2 shown in the lower panel of Fig. 14. The lower panel displays the phase angle between the input time series LC1 and LC2, which corresponds to the phase of the complex cross-spectrum. Here, a positive phase angle indicates that LC1 leads LC2 (i.e., LC2 lags behind LC1), which can be seen visually through examination of the individual lightcurves depicted in Fig. 14. The red crosses indicate the frequency where the calculated Fourier power for LC1 and LC2 both exceed the 95% confidence levels (see Fig. 15). The horizontal dashed blue line in the lower panel highlights a phase angle of 0 \(^{\circ }\) Finally, the co-spectrum and quadrature spectrum can be utilized to calculate the phase lag between the input lightcurves LC1 and LC2 as a function of frequency, defined by Penn et al. (2011) as, $$\begin{aligned} \phi (\nu ) = atan\left( \frac{\langle c(\nu ) \rangle }{\langle d(\nu ) \rangle }\right) . \end{aligned}$$ Here, the phase angle, commonly chosen to span the interval \(-180^{\circ } \rightarrow +180^{\circ }\), is simply the phase of the complex cross-spectrum (see the nomenclature of Vaughan and Nowak 1997). The lower panel of Fig. 16 displays the calculated phase angles, again with the red cross highlighting the phase value at the frequency where the Fourier power exceeds the 95% confidence level in both FFTs corresponding to LC1 and LC2. In this example, the phase angle corresponding to a frequency of \(\approx 5.6\) mHz is equal to \(45^{\circ }\), which is consistent with the input lightcurves depicted in Fig. 14. Here, a positive phase angle indicates that LC1 leads LC2 (i.e., LC2 lags behind LC1), which can be visually confirmed in Fig. 14 with LC1 (solid black line) leading LC2 (solid red line). It must be noted that phase angles can be computed for all possible frequencies (see, e.g., the lower panel of Fig. 16). However, it is important to determine which of these phase values are reliable before they are used in subsequent scientific interpretations. For the purposes of the example shown here, we selected that frequency at which both times series LC1 and LC2 demonstrated Fourier power exceeding the 95% confidence levels in both of their corresponding FFTs. However, a common alternative is to calculate the coherence level for each constituent frequency, which can then be employed (independently of the confidence levels) to pinpoint reliable frequencies in the corresponding cross-power spectrum. The coherence level is estimated from the normalized square of the amplitude of the complex cross-spectrum (see, e.g., Storch and Zwiers 1999), providing a measure, ranging between '0' and '1', of the linear correlation between the two input time series. Under this regime, values of '0' and '1' indicate no and perfect correlation, respectively. For the purposes of solar physics research, it is common to adopt a coherence value \(>0.80\) to signify robust and reliable phase measurements (McAteer et al. 2003; Bloomfield et al. 2004a, b; Stangalini et al. 2013b, 2018; Kontogiannis et al. 2016). Therefore, the cross-power spectrum and coherence are both used to examine the relationship between two time series as a function of frequency. The cross spectrum identifies common large power (i.e., significant peaks) at the same frequencies in the power spectra of the two time series, and whether such frequencies are related to each other (the relationship is quantified by phase differences). Such correlations cannot, however, be revealed if one or both time series do not have significant power enhancements at particular frequencies, e.g., if the power spectra at those frequencies are indistinguishable from red noise. Nonetheless, there still may be coherent modes at such frequencies, that can be identified in the coherence spectrum, i.e., two time series can have a large coherence at a frequency even though both or one of the power spectra do not show large power at that frequency. Thus, the coherence is a measure of the degree of linear correlation between the two time series at each frequency. In solar physics, the coherence is particularly useful when the two signals are associated to, e.g., different solar atmospheric heights (with, e.g., different amplitudes) and/or two different physical parameters. An example, from real observations, where oscillatory power (at specific time-frequency locations) appears only in one of the signals is demonstrated in Fig. 25. Hence, no significant power is detected in the cross-power spectrum, whereas a large coherence level, exceeding 0.8, is identified. The significance of phase measurements for reliable coherence values can be evaluated by either introducing a coherence floor level (e.g., the 0.8 threshold mentioned above) or estimating confidence levels. To approximate a floor level, Bloomfield et al. (2004a) randomized both time series for a very large number of realizations and calculated the coherence for each, from which, the threshold was estimated as an average over all realizations plus some multiples of the standard deviation of the coherence values. For the confidence levels, the coherence values should be tested against the null hypothesis of zero population coherence, i.e., whether the coherence exceeds expected values from arbitrary colored (e.g., white or red) noise backgrounds. While various methods have been employed for this statistical test, one common approach is to estimate the confidence levels by means of Monte Carlo simulations (Torrence and Compo 1998; Grinsted et al. 2004; Björg Ólafsdóttir et al. 2016). With reliable phase angles calculated, it then becomes possible to estimate a variety of key wave characteristics. If T is the period of the wave, then the phase lag, \(\phi \) (in degrees), can be converted into a physical time delay through the relationship, $$\begin{aligned} \text {time delay (s)}\ = \frac{\phi }{360} \times T . \end{aligned}$$ The time delay value (arising from the measured phase lag) corresponds to a wave propagating between the different atmospheric layers. Of course, phase angles deduced from the co-spectrum and quadrature spectrum (see Eq. (7)) inherently have phase wrapping at \(\pm 180^{\circ }\), hence introducing a \(360^{\circ }\) ambiguity associated with the precise phase angle (as discussed in Centeno et al. 2006b; Beck et al. 2008; Cally 2009). Hence, the true time delay may need to include multiples of the period to account for the \(360^{\circ }\) ambiguity, hence transforming Eq. (8) into, $$\begin{aligned} \text {time delay (s)}\ = \frac{\phi }{360} \times nT , \end{aligned}$$ where n is a non-zero integer. Many studies to date have examined the propagation of relatively long-period oscillations (e.g., 100–300 s), which permit the assumption of \(n=1\) without violating theoretical considerations (e.g., sound speed restrictions; Jess et al. 2012c), hence allowing direct use of Eq. (8). However, as future studies examine higher-frequency (lower-period) wave propagation, then more careful consideration of Fourier phase wrapping will need to be taken into consideration to ensure the derived time delay is consistent with the observations. As part of a phase 'unwrapping' process, the identification of quasi-periodic waves and/or those with modulated amplitudes will allow phase ambiguities to be practically alleviated. For example, by tracking the commencement of a wave, and hence the time delay as it propagates between closely-spaced atmospheric layers, the phase angle can be computed without the \(\pm 360^{\circ }\) phase wrapping uncertainty. Alternatively, a modulated waveform will provide secondary peaks associated with the propagating group, which supplies additional information to better establish the precise value of n in Eq. (9), hence assisting with the phase unwrapping of the data, which will enable much more precise tracking of wave energy flux through the solar atmosphere. Finally, if the geometric height separation, d (in km), between the two layers is known or can be estimated (González Manrique et al. 2020), then the average phase velocity, \(v_{\text {ph}}\), of the wave propagating between these two distinct layers can be deduced via, $$\begin{aligned} v_{\text {ph}}~\text {(km/s)}\ = \frac{360d}{T\phi } . \end{aligned}$$ Similar estimations of the phase velocities of embedded waves have been made by Mein (1977), Athay and White (1979), White and Athay (1979), Centeno et al. (2006b), Bello González et al. (2010a), Jess et al. (2012c), Grant et al. (2015) and Jafarzadeh et al. (2017c), to name but a few examples. Importantly, Eq. (10) can also be rearranged to estimate the atmospheric height separation between two sets of observations. For example, the acoustic sound speed is approximately constant in the lower photosphere, hence this value, alongside the derived time lag, can be utilized to provide an estimate of the height separation, d (e.g., Deubner and Fleck 1989). Three-dimensional Fourier analysis Telescope facilities deployed in a space-borne environment, which benefit from a lack of day/night cycles and atmospheric aberrations, have long been able to harness three-dimensional Fourier analyses to examine the temporal (\(t \leftrightarrow \omega \)) and spatial (\([x,y] \leftrightarrow [k_{x},k_{y}]\)) domains. Here, t and \(\omega \) represent the coupled time and frequency domains, respectively, while the [x, y] and \([k_{x},k_{y}]\) terms represent the coupled spatial distances and spatial wavenumbers in orthogonal spatial directions, respectively. Such three-dimensional Fourier analyses has been closely coupled with the field of helioseismology, which is employed to study the composition and structure of the solar interior by examining large-scale wave patterns on the solar surface (Kosovichev et al. 1997; Turck-Chièze 2001; Braun and Lindsey 2000; Gizon et al. 2010; Kosovichev 2011; Buldgen et al. 2019), which often give rise to patterns consistent with 'rings' and 'trumpets' when viewed in Fourier space (Hill 1988). Up until recently, it has been challenging to apply the same three-dimensional Fourier techniques to high-resolution datasets from ground- and space-based observatories (Leighton 1963; Spruit et al. 1990). These techniques have been applied with ground-based observations to study convective phenomena (Chou et al. 1991; Straus et al. 1992) and plage (Title et al. 1992). With the advent of high image pointing stability, brought to fruition through a combination of high-order AO, photometrically accurate image reconstruction algorithms, precise telescope control hardware, and sub-pixel cross-correlation image co-alignment software, it is now possible to achieve long-duration image and/or spectral sequences that are stable enough to allow Fourier analyses in both temporal and spatial domains. The benefit of using high-resolution facilities is that they offer unprecedented Nyquist temporal frequencies (\(\omega \)) and spatial wavenumbers (\([k_{x},k_{y}]\)) due to their high temporal and spatial sampling, respectively. For example, the HARDcam H\(\alpha \) dataset described in Sect. 2.1.1 has a temporal cadence of 1.78 s and a spatial sampling of \(0{\,}.{\!\!}{''}138\) per pixel, providing a Nyquist frequency of \(\omega _{{\text {Ny}}}\approx 280\) mHz \(\left( \frac{1}{2\times 1.78}\right) \) and a Nyquist wavenumber of \(k_{{\text {Ny}}}\approx 22.8\) arcsec\(^{-1}\) \(\left( \frac{2\pi }{2\times 0.138}\right) \). This allows for the examination of the smallest and most rapidly varying phenomena currently visible in such high-resolution datasets. Applying an FFT to a three-dimensional dataset converts the spatial/temporal signals, [x, y, t], into its frequency counterparts, [\(k_{x}, k_{y}, \omega \)]. An example of this process can be seen in Fig. 17, whereby an FFT has been applied to the HARDcam H\(\alpha \) dataset documented by Grant et al. (2018). It can be seen in the right panel of Fig. 17 that the Fourier power signatures are approximately symmetric in the \(k_{x}/k_{y}\) plane. As a result, it is common for [\(k_{x}, k_{y}\)] cross-cuts at each frequency, \(\omega \), to be azimuthally averaged providing a more straightforward two-dimensional representation of the Fourier power in the form of a \(k{-}\omega \) diagram (Duvall et al. 1988; Krijger et al. 2001; Rutten and Krijger 2003; Kneer and Bello González 2011; Jess et al. 2012c, 2017). An example application of an FFT to a three-dimensional datacube, converting [x, y, t] (left) into its frequency counterparts [\(k_{x}, k_{y}, \omega \)] (right). The HARDcam H\(\alpha \) dataset presented here is taken from the work of Grant et al. (2018) An azimuthally averaged \(k{-}\omega \) diagram for the HARDcam H\(\alpha \) sunspot observations described in Sect. 2.1.1 is shown in the right panel of Fig. 18. A number of important features are present in this diagram, including consistency with many quiet-Sun and internetwork Fourier power peaks documented by Krijger et al. (2001), Kneer and Bello González (2011) and Jess et al. (2012c), whereby high power observed at larger spatial wavenumbers tends to be correlated with higher temporal frequencies. This can be visualized in the right panel of Fig. 18, whereby the dominant Fourier power is associated with the smallest spatial wavenumbers and temporal frequencies. However, as the wavenumber is increased to \(>1\) arcsec\(^{-1}\), the temporal frequencies corresponding to maximal Fourier power are concentrated within the \(3{-}6\) mHz interval. This is consistent with the general trends observed in classical photospheric \(k{-}\omega \) diagrams, such as that shown in Fig. 19. Here, two \(k{-}\omega \) diagrams from the photospheric SDO/AIA 1700 Å time series that is co-spatial (and overlaps temporally) with the HARDcam H\(\alpha \) observations (used to produce Fig. 18) are displayed. The information displayed in both panels of Fig. 19 is identical, however, the left panel is displayed on a linear wavenumber (k) and frequency (\(\omega \)) scales, while the right panel is displayed on log–log axes. In both panels, similar trends (e.g., heightened Fourier power with increasing temporal frequency in the interval of 3–6 mHz is linked to larger spatial wavenumbers) can be identified, which is consistent with the overall trends depicted in the right panel of Fig. 18. However, as discussed in Jess et al. (2017), within the region highlighted by the solid black box in the right panel of Fig. 18, there is evidence of elevated Fourier power that spans a large range of spatial scales, yet remains confined to a temporal frequency on the order of 5.9 mHz (\(\approx 170\) s). This suggests that the embedded wave motion has strong coherency across a broad spectrum of spatial scales, yet can be represented by a very narrow range of temporal frequencies. Looking closely at the right panel of Fig. 18, it can be seen that elevated levels of Fourier power extend down to the smallest spatial wavenumbers allowable from the HARDcam dataset. This implies that the 5.9 mHz frequency is still significant on spatial scales much larger than the field of view captured by HARDcam. Image reproduced with permission from Jess et al. (2017), copyright by the authors A two-dimensional \([k_{x},k_{y}]\) cross-cut for a single temporal frequency, \(\omega \), corresponding to the HARDcam H\(\alpha \) data acquired on 2011 December 10 and described in Sect. 2.1.1 (left panel). Due to the symmetries often found between \(k_{x}\) and \(k_{y}\), it is common to perform azimuthal averaging (e.g., along the solid green contour) to collapse this two-dimensional information into a single dimension, i.e. \([k_{x}, k_{y}] \rightarrow [k]\). This allows the three-dimensional FFT cube (see, e.g., the right panel of Fig. 17) to be simplified into a standardized two-dimensional image, forming a \(k{-}\omega \) diagram (right panel). Here, the \(k{-}\omega \) diagram is cropped between approximately \(1<\omega <10\) mHz and \(0.3<k<10.0\) arcsec\(^{-1}\), and displayed on a log–log scale to assist visual clarity. The colors represent oscillatory power that is displayed on a log-scale, while the vertical dashed and dotted lines correspond to the spatial size of the umbral diameter (\(\approx 13{\,}.{\!\!}{''}50\)) and the radius of the umbra (\(\approx 6{\,}.{\!\!}{''}75\)), respectively. The solid black box indicates a region of excess wave power at \(\approx 5.9\) mHz (\(\approx 170\) s) over the entire spatial extent of the sunspot umbra. A set of \(k{-}\omega \) diagrams, derived from the photospheric SDO/AIA 1700 Å time series of active region NOAA 11366, which is co-spatial (and overlaps temporally) with the chromospheric HARDcam measurements presented in Fig. 18. Both \(k{-}\omega \) diagrams are identical, however, the left panel is displayed on linear wavenumber (k) and frequency (\(\omega \)) scales, while the right panel is displayed on log–log axes. It is clear from inspection of the two panels that each have their merit when presenting results, with the linear axes giving less visual emphasis to the lower wavenumbers/frequencies, while the log–log axes allowing power-law trends in the power spectral densities to be modeled more easily through straight-line fitting However, there are a number of key points related to Figs. 18 and 19 that are worth discussing. First, Fig. 19 highlights the merits of utilizing either linear or log–log axes depending on the features being examined. For example, the use of a linear scale (left panel of Fig. 19) results in less visual emphasis being placed on the lowest spatial waveneumbers and temporal frequencies. This can help prevent (visual) over-estimations of the trends present in the \(k{-}\omega \) diagram since all of the frequency bins occupy identical sizes within the corresponding figure. However, as spatial and temporal resolutions dramatically improve with next generation instrumentation, the corresponding spatial/temporal Nyquist frequencies continue to become elevated, often spanning multiple orders-of-magnitude. If these heightened Nyquist frequencies are plotted on a purely linear scale, then many of the features of interest may become visually lost within the vast interval occupied by the \(k{-}\omega \) diagram. An option available to counter this would be to crop the \(k{-}\omega \) diagram to simply display the spatial wavenumbers and temporal frequencies of interest, although this comes at the price of discarding information that may be important within the remainder of the frequency space. Alternatively, it is possible to use log–log axes for the \(k{-}\omega \) diagram, which can be visualized in the right panels of Figs. 18 and 19. This type of log–log display also benefits the fitting of any power-law trends that may be present within the \(k{-}\omega \) diagram, since they will manifest as more straightforward (to fit) linear slopes in the plot. Finally, the right panel of Fig. 18 reveals some horizontal banding of power that appears slightly different than the diagonal 'arms' of Fourier power visible in Fig. 19. This may be a consequence of the reduced spatial wavenumber and temporal frequency resolutions achievable with large-aperture ground-based observatories, which naturally have a reduced field-of-view size (causing a relatively low spatial wavenumber resolution when compared to large field-of-view observations from, e.g., SDO) and limited time series durations (creating relatively low temporal frequency resolutions when compared to space-borne satellite missions that are unaffected by day/night cycles and/or atmospheric effects). Therefore, it is imperative that the investigative team examines the merits of each type of \(k{-}\omega \) display and selects the use of either linear or log–log axes to best represent the physical processes at work in their dataset. Three-dimensional Fourier filtering Taking the one-dimensional Fourier filtering methodology described in Sect. 2.2.4 a step further, it is often useful to filter an input three-dimensional dataset ([x, y, t]) in terms of both its temporal frequencies, \(\omega \), and its spatial wavenumbers, k. While it is common for the frequency to be defined as the reciprocal of the period, i.e., \(\omega = 1/T\), where T is the period of oscillation, the wavenumber is often defined as \(k = 2\pi /\lambda \) (Krijger et al. 2001), where \(\lambda \) is the wavelength of the oscillation in the spatial domain (i.e., [x, y]). Hence, it is often important to bear in mind this additional factor of \(2\pi \) when translating between wavenumber, k, and spatial wavelength, \(\lambda \). Figures 18 and 20 employ this form of frequency/wavenumber notation, meaning that the spatial wavelengths can be computed as \(\lambda = 2\pi /k\), while the period is simply \(T = 1/\omega \) (similar to that shown in Straus et al. 1992; Jess et al. 2012c). However, some research programs, particularly those adopting helioseismology nomenclature, utilize the factor of \(2\pi \) in both the wavenumber and frequency domains (e.g., \(T = 2\pi /\omega \); Mihalas and Toomre 1981). As a result, it is important to select an appropriate scaling to ensure consistency across a piece of work. An example code capable of doing three-dimensional Fourier filtering is the QUEEn's university Fourier Filtering (QUEEFF; Jess et al. 2017) algorithm, which is based around the original techniques put forward by Tarbell et al. (1988), Title et al. (1989), Rutten and Krijger (2003), Roth et al. (2010) and Krijger et al. (2001), but now adapted into a publicly available Interactive Data Language (idl; Stern 2000) package.Footnote 2\(^{,}\)Footnote 3 Outputs provided by a commonly available three-dimensional Fourier filtering code (QUEEFF; Jess et al. 2017), showing a frequency-averaged wavenumber spectrum (upper-left), a Gaussian (with \(2<k<10\) arcsec\(^{-1}\)) wavenumber filter that resembles a torus shape when viewed in the \([k_{x}, k_{y}]\) plane (upper-middle), and the resulting transmitted wavenumber spectra once multiplied by the chosen filter (upper-right). The lower panel displays the wavenumber-averaged frequency spectrum (solid black line), where the Fourier power is displayed (using a log-scale) as a function of the temporal frequency, \(\omega \). The dashed blue line highlights a chosen frequency filter, \(20\pm 10\) mHz, with a Gaussian shape to more smoothly reduce Fourier power at the edges of the chosen spectral range to reduce aliasing. The solid red line displays the resulting transmitted frequency spectrum once multiplied by the chosen Gaussian filter. In each panel, dashed black or white lines highlight the \(k_{x}/k_{y}=0\) arcsec\(^{-1}\) or \(\omega =0\) mHz locations Importantly, the QUEEFF code provides the user with the ability to apply Gaussian smoothing windows to both frequency and wavenumber regions of interest in order to help mitigate against elements of aliasing during subsequent dataset reconstruction. Figure 20 shows an example figure provided by the QUEEFF code, which displays the frequency-averaged wavenumber power (upper-left panel), the chosen wavenumber filter (upper-middle panel) utilizing a Gaussian structure providing a torus-shaped filter spanning 2–10 arcsec\(^{-1}\), alongside the resulting filtered wavenumber spectra (upper-right panel). The lower panel of Fig. 20 displays the spatially-averaged frequency spectrum of the HARDcam H\(\alpha \) dataset, where the Fourier power is displayed as a function of the frequency, \(\omega \), using a solid black line. A Gaussian frequency filter, spanning \(20\pm 10\) mHz, is overplotted using a dashed blue line. The preserved temporal frequencies (i.e., once the original frequency spectrum has been multiplied by the chosen frequency filter) is shown using a solid red line. This filtered three-dimensional Fourier cube can then be passed through an inverse FFT to reconstruct an intensity image cube that contains the wavenumbers and frequencies of interest to the user. Again, as discussed in Sect. 2.2.4, the QUEEFF three-dimensional Fourier filtering code constructs a Gaussian-shaped filter, which is applied in the Fourier domain. This ensures that the filter is symmetric about the chosen peak frequency (see, e.g., the black line in the left panel of Fig. 21). Of course, due to the oscillation period having a reciprocal relationship with the temporal frequency (i.e., \(1/\omega \)), this results in asymmetric sampling about the desired peak period (see, e.g., the solid black line in the right panel of Fig. 21). Depending upon the science requirements of the user, it may be more advantageous to apply a Gaussian-shaped filter in the period domain (e.g., the solid blue line in the right panel of Fig. 21), which ensures less inclusion of lower frequency (higher period) terms that may be undesirable in the final reconstructed time series. This is highlighted by the more rapid truncation of the filter (solid blue line in the left panel of Fig. 21) towards lower frequencies. Additionally, the user may select alternative frequency filters, such as a Voigt profile (Zaghloul 2007), which is shown in Fig. 21 using a solid red line. Furthermore, Fig. 21 shows possible filtering combinations that can be applied to the temporal domain, yet similar options are available when filtering the spatial wavenumbers (\([k_{x}, k_{y}]\)) too. Ultimately, it is the science objectives that drive forward the wave filtering protocols, so possible options need to be carefully considered before applying to the input data. Different types of frequency (\(\omega \)) filter that can be applied to time-resolved data products. The left panel displays the filter transmission (as a percentage) in terms of the frequency, while the right panel displays the same filters as a function of the oscillatory period. Presented using a solid black line is a Gaussian-shaped filter in the frequency domain with a FWHM equal to 10 mHz, while the solid red line indicates a Voigt-shaped filter in the frequency domain, both centered on 20 mHz. Contrarily, a Gaussian-shaped filter in the period domain, with a FWHM equal to 10 s, is shown using a solid blue line, again centered on 50 s to remain consistent with the 20 mHz profiles shown using red and black lines. It is clearly evident that the filter profile shape changes dramatically between the time and frequency domains, and hence it is important to select the correct filter based upon the science requirements Combination Fourier filters (i.e., that are functions of \(k_{x}\), \(k_{y}\) and \(\omega \)) have been utilized in previous studies to extract unique types of wave modes manifesting in the lower solar atmosphere. For example, specific Fourier filters may be employed to extract signatures of f- and p-mode oscillations manifesting in photospheric observations (e.g., Hill 1988; Schou et al. 1998; Gizon and Birch 2004; Bahauddin and Rast 2021). Another example of a well-used Fourier filter is the 'sub-sonic filter', which can be visualized as a cone in \(k{-}\omega \) space (Title et al. 1989), $$\begin{aligned} v_{\text{ph}} = \frac{\omega }{k} , \end{aligned}$$ where \(v_{\text{ph}}\) is the phase velocity of the wave. Here, all Fourier components inside the cone, where propagation velocities are less than the typical sound speed (i.e., \(v_{\text{ph}} < c_{s}\)), are retained while velocities outside the cone are set to zero. An inverse Fourier transform of this filtered spectrum provides a dataset that is embodied by the convective part of the solar signal since the non-convective phenomena (e.g., solar p-modes) have been removed (Straus and Bonaccini 1997; Rutten and Krijger 2003). Alternatively, modification of the sub-sonic filter to include only those frequencies above the Lamb mode, \(\omega = c_{s}k\) (Fleck et al. 2021), provides a reconstructed dataset containing oscillatory parts of the input signal. As highlighted above, it is the science objectives that define the filtering sequences required to extract the underlying time series of interest. However, well-proven examples of these exist for common phenomena (e.g., solar f- and p-modes), hence providing an excellent starting point for the community. Wavelet analyses While FFT analyses is very useful for identifying and characterizing persistent wave motion present in observational datasets, it begins to encounter difficulties when the time series contains weak signals and/or quasi-periodic signatures. Figure 22 shows example time series containing a persistent wave signal with a 180 s periodicity (5.56 mHz) and no embedded noise (top-left panel), a quasi-periodic 5.56 mHz wave signal with no noise (middle-left panel), and a qausi-periodic 5.56 mHz wave signal embedded in high-amplitude noise (lower-left panel). It can be seen for each of the corresponding right-hand panels, which reveal the respective Fourier power spectral densities, that the detected 5.56 mHz Fourier peak becomes progressively less apparent and swamped by noise, even becoming significantly broadened in the lower-right panel of Fig. 22. As a result, the application of Fourier analyses to solar time series often displaying quasi-periodic wave motion (e.g., spicules, fibrils, rapid blueshift excursions (RBEs), etc.; Beckers 1968; De Pontieu et al. 2004, 2007a, b; Zaqarashvili and Erdélyi 2009; Sekse et al. 2013a, b; Kuridze et al. 2015) may not be the most appropriate as a result of the limited lifetimes associated with these features. An example time series consisting of a pure 180 s periodicity (5.56 mHz) signal, which is sampled at a cadence of 1.44 s to remain consistent with modern instrument capabilities (upper left). The middle-left panel shows the same example time series, only now with the first three and last two complete wave cycles suppressed, hence making a quasi-periodic wave signal. The lower-left panel shows the same quasi-period wave signal shown in the middle-left panel (solid green line), only now with superimposed Poisson (shot) noise added on top of the signal. Each of the right panels display the corresponding FFT-generated Fourier spectra, with the frequency and Fourier power values plotted on log-scales for better visual clarity. The vertical dashed red lines highlight the input 5.56 mHz signal Wavelet techniques, pioneered by Torrence and Compo (1998), employ a time-localized oscillatory function that is continuous in both time and frequency (Bloomfield et al. 2004b), which allows them to be applied in the search for dynamic transient oscillations. The time resolution of the input dataset is preserved through the modulation of a simple sinusoid (synonymous with standard FFT approaches) with a Gaussian envelope, providing the Morlet wavelet commonly used in studies of waves in the solar atmosphere (Bloomfield et al. 2004a; Jess et al. 2007; Stangalini et al. 2012; Kobanov et al. 2013, 2015; Jafarzadeh et al. 2017d). As a result, a wavelet transform is able to provide high frequency resolution at low frequencies and high time resolution at high frequencies, which is summarized by Kehtarnavaz (2008). Figure 23 displays the wavelet power spectrum (lower panel) resulting from the application of a Morlet wavelet transform on the detrended and apodized HARDcam H\(\alpha \) lightcurve (upper panel). Here, it is possible to see the effects of quasi-periodic wave phenomena, where there is clear evidence of a large-amplitude periodicity between times of \(0{-}2200\) s at a period of \(\approx 210\) s (\(\approx 4.7\) mHz). This wave activity is highlighted in the wavelet transform by being bounded by the 95% confidence level isocontours across these times and periods, which is equivalent to the oscillatory behavior being significant at the 5% level (Torrence and Compo 1998). To calculate the wavelet power thresholds corresponding to the 95% confidence isocontours, the wavelet background spectrum (i.e., the output theoretical background spectrum that has been smoothed by the wavelet function) is multiplied by the 95\(^{\text {th}}\) percentile value for a \(\chi _{2}^{2}\) distribution (Gilman et al. 1963). Please note that for some considerations, including expensive computation times, the Monte Carlo randomization method is not preferred for wavelet transform (Lau and Weng 1995; Torrence and Compo 1998). The \(\approx 210\) s wavelet power signatures shown in the lower panel of Fig. 23 are consistent with the standardized FFT approach documented in Sect. 2.2, although the quasi-periodic nature of the wave motion is likely a reason why the corresponding power in the traditional FFT spectrum (upper panel of Fig. 8) is not as apparent. Importantly, with the wavelet transform it is possible to identify more clearly the times when this periodicity appears and disappears from the time series, which is seen to correlate visibly with the clear sinusoidal fluctuations present at the start of the H\(\alpha \) time series (upper panel of Fig. 23). Also, the lack of significant wavelet power at very long periods (low frequencies) suggests that the lightcurve detrending applied is working adequately. The detrended and apodized HARDcam H\(\alpha \) lightcurve shown in the lower panel of Fig. 6 (top). The bottom panel shows the corresponding wavelet transform, where the wave power is displayed as a function of the oscillatory period (y-axis) and observing time (x-axis). The color bar displays the normalized wavelet power, while the cross-hatched region (bounded by a dashed white line) highlights locations of the wavelet transform that may be untrustworthy due to edge effects. Solid white lines contour regions where the wavelet power exceeds the 95% confidence level (i.e., significant at the 5% level) Due to wavelet analyses preserving the time domain of the original input signal, care must be taken to ensure that any power visible in wavelet transforms is the result of wave motion and not an instantaneous spike in intensity. To achieve this, it is typical to exclude oscillations from subsequent analysis that last, in duration, for less than \(\sqrt{2}\) wave cycles. This requirement is often referred to as the decorrelation time (Torrence and Compo 1998), which involves comparing the width of a peak in the wavelet power spectrum (defined as the time interval over which the wavelet power exceeds the 95% confidence level—see Sect. 2.2.2) with the period itself to determine the number of complete wave cycles (Ireland et al. 1999; McAteer et al. 2004). Oscillations that last for less time than \(\sqrt{2}\) wave cycles are subsequently discarded as they may be the result of spikes and/or instrumental abnormalities in the data. In addition, periodicities manifesting towards the extreme edges of the lightcurve need to be considered carefully due to the possible presence of edge effects arising due to the finite duration of the time series (Meyers et al. 1993). This region where caution is required is highlighted in the lower panel of Fig. 23 using the cross-hatched solid white lines. Here, the "cone of influence" (COI) is defined as as the e-folding time for the autocorrelation of wavelet power at each scale, and for the traditional Morlet wavelet this is equal to \(\sqrt{2}\) wave cycles (Torrence and Compo 1998), hence why longer periods are more heavily effected (in the time domain) than their shorter (high-frequency) counterparts. Finally, many research studies employ the global wavelet spectrum to characterize the frequencies present in the input time series. Here, the global wavelet spectrum is defined as the average spectrum across all local wavelet spectra along the entire input time axis (Torrence and Compo 1998). Essentially, the global wavelet spectrum can be considered as an estimation of the true Fourier spectrum. For example, a time series comprised of mixed wave frequencies that are superimposed on top of a white noise background should produce Fourier spectral peaks equal to \(2\sigma _{\epsilon }^{2} + NA_{i}^{2}/2\), where \(A_{i}\) are the amplitudes of the oscillatory components, \(\sigma _{\epsilon }^{2}\) is the variance of the noise, and N is the number of steps in the time series (Priestley 1981). However, the corresponding peaks in the global wavelet spectrum will usually be higher at larger scales when compared to smaller scales, which is a consequence of the wavelet transform having better frequency resolution at long periods, albeit with worse time localization (Marković and Koch 2005). As such, the global wavelet spectrum is often considered a biased estimation of the true Fourier spectrum (Wu and Liu 2005). This effect can be clearly seen in Fig. 24, which displays both the Fourier and global wavelet power spectra for the same HARDcam H\(\alpha \) time series shown in the lower panel of Fig. 6. In Fig. 24, the higher power at larger scales (lower frequencies) is visible in the global wavelet spectrum (red line), when compared to that derived through traditional Fourier techniques (black line). However, at smaller scales (higher frequencies), both the global wavelet and Fourier spectra are in close agreement with one another, with the global wavelet spectrum appearing as a smoothed Fourier spectrum. The reason for these effects is due to the width of the wavelet filter in Fourier space. At large scales (low frequencies), the wavelet is narrower in frequency, resulting in sharper peaks that have inherently larger amplitudes. Contrarily, at small scales (high frequencies), the wavelet is more broad in frequency, hence causing any peaks in the spectrum to become smoothed (Torrence and Compo 1998). As such, it is important to take such biases into consideration when interpreting any embedded wave motion. Indeed, Banerjee et al. (2001), Christopoulou et al. (2003), Samanta et al. (2016), Kayshap et al. (2018) and Chae et al. (2019) have discussed the implementation of global wavelet and Fourier power spectra in the context of solar oscillations. Fourier (black line) and global wavelet (red line) power spectra of the HARDcam H\(\alpha \) detrended lightcurve shown in the lower panel of Fig. 6. It can be seen that at larger scales (lower frequencies) the global wavelet spectrum has increased power over that calculated from traditional Fourier techniques, due to the increased wavelet frequency resolution in this regime. Contrarily, at smaller scales (higher frequencies) the global wavelet spectrum appears as a smoothed Fourier spectrum due to the reduced frequency resolution at these smaller scales. While the global wavelet spectrum is a good estimation of the Fourier power spectrum, these biases need to be carefully considered when interpreting the embedded wave motion Wavelet phase measurements Similar to the Fourier phase lag analysis described in Sect. 2.2.5, it is also useful to obtain phase angles, cross-power spectrum, and coherence between wavelet power spectra at different wavelengths, spatial locations, and/or multi-component spectral measurements. Hence, the phase angles are determined not only as a function of frequency, but also as a function of time. These phase angles are usually demonstrated as small arrows on a wavelet co-spectrum (or wavelet coherence) map, where their directions indicate the phase angles at different time-frequency locations. The convention with which an arrow direction represents, e.g., zero and \(90^{\circ }\) phase angles (and which lightcurve leads or lags behind) should be specified. Reproduced from Jafarzadeh et al. (2017d), the lower- and upper-left panels of Fig. 25 display two wavelet power spectra (from a Morlet wavelet transform) of transverse oscillations in a small magnetic element (marked with circles in Fig. 5) at two atmospheric heights sampled by the SuFI/Sunrise 300 nm and Ca ii H bands (with an average height difference of \(\approx 450\) km), respectively. Islands of high power, particularly those marked by the 95% confidence level contours, are evident in both wavelet power spectra. The wavelet co-spectrum and coherence maps of these two power spectra are shown in the upper- and lower-right panels of Fig. 25, respectively. The phase-lag arrows are over plotted on the entire cross-power spectrum, while the same arrows are depicted on the latter map only where the coherence exceeds 0.7. Here, the arrows pointing right represent in-phase oscillations and those pointing straight up identify 90 degrees phase lags where the oscillations in 300 nm lag behind those observed in the Ca ii H time series. Note here the changes of phase lags from one time-frequency region to another, particularly, in regions with confidence levels larger than 95%, and/or areas with coherence exceeding 0.7 (or 0.8). However, most of the arrows point upwards (with different angles) in this example, implying an upward wave propagation in the lower solar atmosphere (i.e., from the low photosphere, sampled by the 300 nm band, to the heights corresponding to the temperature minimum/low chromosphere, sampled by the Ca ii H images). A slight downward propagation is also observed in a small area. These may associate to various wave modes and/or oppositely propagating waves at different frequencies and times. We note that such phase changes with time could not be identified using a Fourier phase lag analysis (see Sect. 2.2.5), where phase angles are computed as a function of frequency only. Images reproduced with permission from Jafarzadeh et al. (2017d), copyright by AAS Wavelet power spectra of transverse oscillations in a small magnetic element (marked with circles in Fig. 5), from time-series of images acquired in 300 nm (lower left) and in Ca ii H (upper left) bands from SuFI/Sunrise. The right panels display the wavelet co-spectrum power (on the top) and coherence map (on the bottom). The 95% confidence levels are identified with dashed/solid contours in all panels and the COIs are marked with the cross-hatched/shaded regions. The arrows on the right panels show the phase angles between oscillations at the two atmospheric heights, with in-phase oscillations depicted by arrows pointing right and fluctuations in Ca ii H leading those in 300 nm by \(90^{\circ }\) marked by arrows pointing straight up. Whether the cross-power spectrum or coherence should be used for the wave identification greatly depends on the science and the types of data employed. While the co-spectrum (which is obtained through multiplying the wavelet power spectrum of a time series by the complex conjugate of the other) identifies regions with large power in common (between the two time series), the coherence (i.e., square of the cross-spectrum normalized by the individual power spectra; Grinsted et al. 2004) highlights areas where the two time series co-move, but not necessarily sharing a high common power. An example is the area around the time and period of 70 and 47 s, respectively, that is associated to a coherence level exceeding 0.8 (and within the 95% confidence levels), but with no significant power in the co-spectrum (only one of the power spectra, i.e., that from the Ca ii H data, show large power at that time and period location). As a working example, from the right panels of Fig. 25, the phase lag at the time and period of 75 and 41 s, respectively, reads about 140 \(^{\circ }\), which is translated to a time lag of \(\approx 16\) s. Given the average height difference of 450 km between the two atmospheric layers, it results in a wave propagation speed of \(\approx 28\) km/s (due to the transverse oscillations in the small-scale magnetic element). A similar analysis for intensity oscillations in the same small-scale magnetic element has also been presented in Jafarzadeh et al. (2017d). Of course, as highlighted in Sect. 2.2.5, phase measurements are always subject to an associated uncertainty of \(\pm 360 ^{\circ }\) (\(\pm 2\pi \)), which arises via phase wrapping. As a consequence, to alleviate ambiguities in phase angles, in addition to subsequently derived phase velocities, care must be taken to select observational time series where the atmospheric height separation is not too substantial (see Sect. 2.2.5 for more discussion), which helps to minimize the ambiguities associated with phase wrapping. Depending on science objectives, it may be helpful to inspect the variation of phase lags with frequency (or period). To this end, a statistical phase diagram can be created, where all reliable phase angles (e.g., those associated to power significant at 5%, and/or with a coherence exceeding 0.8) are plotted as a function of frequency (Jess et al. 2012c). Such a phase diagram can provide information about the overall wave propagation in, e.g., similar magnetic structures. Figure 26 illustrates a phase diagram (i.e., a 2D histogram of phase angle versus period; from Jafarzadeh et al. 2017d) constructed from all the reliable phase angles obtained from the transverse oscillations in 7 small magnetic elements, similar to that discussed above. The background colors represent the occurrence frequency and the contours mark regions which are statistically significant (i.e., compared to the extreme outliers). From this phase diagram, it is evident that the upward propagating waves (i.e., the positive phase angles in the convention introduced here) appear preferential. Image reproduced with permission from Jafarzadeh et al. (2017d), copyright by AAS Phase diagram of transverse oscillations in 7 small magnetic elements observed in two layers of the lower solar atmosphere (with \(\approx 450\) km height difference) from SuFI/Sunrise. Empirical mode decomposition Empirical Mode Decomposition (EMD; Huang et al. 1998, 1999) is a statistical tool developed to decompose an input time series into a set of intrinsic timescales. Importantly, EMD is a contrasting approach to traditional FFT/wavelet analyses since it relies on an empirical approach rather than strict theoretical tools to decompose the input data. Due to the decomposition being based on the local characteristic timescales of the data, it may be applied to non-linear and non-stationary processes without the detrending often applied before the application of Fourier-based techniques (i.e., under the assumption that such detrending is able to accurately characterize any non-stationary and/or non-periodic fluctuations in the time series with a low-order polynomial). As such, it is possible for EMD to overcome some of the limitations of FFT/wavelet analyses, including aspects of wave energy leakage across multiple harmonic frequencies (Terradas et al. 2004). non-stationary/non-period fluctuations that can be characterized by a low-order polynomial Following the methodology described by Terradas et al. (2004), we apply EMD techniques to the HARDcam H\(\alpha \) time series depicted in the upper-left panel of Fig. 6. To begin, extrema in the lightcurve are identified, and are then connected by a cubic spline fit to provide an upper envelope of the positive intensity fluctuations (i.e., fluctuations above the mean). Next, the same process is applied to find the lower envelope corresponding to negative intensity fluctuations (i.e., fluctuations below the mean). The mean value between the upper and lower envelopes, at each time step, is denoted \(m_{1}(t)\). The difference between the original input data and the mean function is called the first component, \(h_{1}(t)\). Providing the input time series contains no undershoots, overshoots, and/or riding waves (Huang et al. 1998), then the first intrinsic mode function (IMF) is equal to \(h_{1}(t)\). Unfortunately, many input time series contain signal blemishes, and removal of the first component, \(h_{1}(t)\), from the original lightcurve will generate additional (false) extrema. Hence, to mitigate against these potential issues, the above procedure is repeated numerous times until the first true IMF is constructed (see Huang et al. 1998, for more information). The first IMF constructed, \(c_{1}(t)\), is comprised of the most rapid fluctuations of the signal. This can then be subtracted from the original time series, producing a residual lightcurve made up of longer duration fluctuations. The process can subsequently be repeated numerous times to extract additional IMFs until the amplitude of the residual lightcurve falls below a predetermined value, or becomes a function from which no more IMFs can be extracted (Terradas et al. 2004). Figure 27 shows a collection of IMFs extracted from the HARDcam H\(\alpha \) time series depicted in the upper-left panel of Fig. 6. It is clear that the most rapid fluctuations are present in IMF \(c_{1}\), with IMF \(c_{8}\) documenting the slowest evolving intensity variations. Plotted on top of IMF \(c_{8}\) is the original H\(\alpha \) time series, along with the polynomial best-fit line (dashed red line) used to detrend the lightcurve in Sect. 2.2 before the application of FFT/wavelet techniques. The global trends highlighted by IMF \(c_{8}\) and the polynomial best-fit line are similar, again highlighting the appropriate use of detrending in Sect. 2.2, but now compared with generalized empirical methods. Figure 28 displays the 8 extracted IMFs in the form of a two-dimensional map, which can often be used to more readily display the corresponding interplay between the various amplitudes and variability timescales. IMFs \(c_{1} \rightarrow c_{8}\), extracted from the original (non-detrended and non-apodized) HARDcam H\(\alpha \) time series overplotted in the lower-right panel. In addition, the lower-right panel also shows the polynomial best-fit line (dashed red line) used to detrend the data prior to FFT/wavelet analyses. It can be seen that the longest period fluctuations making up IMF \(c_{8}\) are similar to the global trend line calculated in Sect. 2.2.Note that a summation of IMFs \(c_{1} \rightarrow c_{8}\) will return the original signal IMFs \(c_{1} \rightarrow c_{8}\) displayed as a two-dimensional map (left), where yellow and blue colors represent the peaks and troughs, respectively, of the IMF intensity fluctuations. The horizontal dashed black lines represent cuts through each IMF, with the corresponding intensity time series displayed in the right panel. The horizontal dashed red lines represent the zero value corresponding to each IMF Once the IMFs have been extracted from the input time series, it is possible to employ Hilbert spectral analysis (Huang et al. 1998; Oppenheim and Schafer 2009) to examine the instantaneous frequencies with time for each IMF. The combined application of EMD and Hilbert spectral analysis is often referred to as the Hilbert–Huang transformation (Huang and Wu 2008). From the outputs of the Hilbert–Huang transformation, it is possible to display the instantaneous frequencies for each of the extracted IMFs as a function of time. The left panel of Fig. 29 displays the instantaneous frequencies corresponding to IMFs \(c_{2} \rightarrow c_{7}\) using the purple, blue, dark green, light green, orange, and red lines, respectively. IMFs \(c_{1}\) and \(c_{8}\) have been removed from the plot as these correspond to very high and low frequency fluctuations, respectively, which clutter the figure if included. The solid colored lines represent the running mean values of the instantaneous frequencies (calculated over a 30 s window), while the vertical colored error bars indicate the standard deviations of the frequency fluctuations found within the running mean sampling timescale. As already shown in Figs. 27 and 28, the frequencies associated with higher IMFs are naturally lower as a result of the residual time series containing less rapid fluctuations. It can be seen the in left panel of Fig. 29 that IMF \(c_{2}\) contains frequencies in the range of 50–300 mHz (3–20 s), while IMF \(c_{7}\) displays lower frequencies spanning 1–30 mHz (33–1000 s). We must note that the left panel of Fig. 29 is simply a representation of the instantaneous frequencies present in the time series as a function of time and does not contain information related to their relative importance (e.g., their respective amplitudes), although this information is indeed present in the overall Hilbert–Huang transform. Instantaneous frequencies computed from applying a Hilbert–Huang transform to the HARDcam H\(\alpha \) lightcurve shown in the lower-right panel of Fig. 27 and displayed as a function of time (left panel). The solid purple, blue, dark green, light green, orange, and red lines correspond to moving average frequencies (computed over a 30 s window) for the IMFs \({c_{2} \rightarrow c_{7}}\), respectively. The vertical error bars correspond to the standard deviations of frequencies found within the 30 s moving average windows. The right panel displays the corresponding Hilbert–Huang power spectrum, calculated by integrating the instantaneous frequency spectra over time and normalized to the largest power value computed, hence providing a plot of relative changes in spectral energy as a function of frequency. Features within the power spectrum are consistent with the FFT and wavelet outputs shown in Figs. 12 and 23 Finally, it is possible to integrate the instantaneous frequency spectra (including their relative amplitudes) across time, producing the Hilbert–Huang power spectrum shown in the right panel of Fig. 29. The features of the Hilbert–Huang power spectrum are very similar to those depicted in the FFT spectrum shown in the right panel of Fig. 12. Notably, there is a pronounced power enhancement at \(\approx 4.7\) mHz, which is consistent with both the FFT power peaks (right panel of Fig. 12) and the heightened wave amplitudes found at \(\approx 210\) s in the wavelet transform shown in the bottom panel of Fig. 23. This shows the consistency between FFT, wavelet, and EMD approaches, especially when visible wave activity is evident. Proper orthogonal decomposition and dynamic mode decomposition Recently, for the first time, Albidah et al. (2021, 2022) applied the methods of Proper Orthogonal Decomposition (POD; see e.g., Pearson 1901; Lumley 1967) and Dynamic Mode Decomposition (DMD; see e.g., Schmid 2010) to identify MHD wave modes in a sunspot umbrae. The POD method defines the eigenfunctions to be orthogonal in space but places no constraints on their temporal behaviour. On the other hand, DMD puts no constraints on the spatial structure of the eigenfunctions but defines them to be orthogonal in time, meaning that each DMD mode has a distinct frequency. Hence, POD modes are permitted to have broadband frequency spectra but DMD modes are not. This is shown in the right panel of Fig. 30, which shows a broadband power spectral density (PSD) of 8 POD modes detected in a sunspot umbra by Albidah et al. (2021) using HARDcam H\(\alpha \) intensity observations from Jess et al. (2017). The left panel shows a sunspot from Jess et al. (2017) in H\(\alpha \) intensity using data from HARDcam (one pixel has a width of 0.138"). The middle panel shows the mean intensity of the time series, the colourbar displays the magnitude of the mean time series, the solid black line shows umbra/penumbra boundary (intensity threshold level 0.85) and the green box (101 \(\times \) 101 pixels) shows the region where Albidah et al. (2021) applied POD and DMD. The right panel displays the PSD of the time coefficients of the first 20 POD modes (in log scale). The PSD shows peaks between frequencies 4.3–6.5 mHz (corresponding to periods of 153–232 s). Image adapted from Albidah et al. (2021) Both the POD and DMD produce 2D eigenfunctions as shown in the left and middle columns of Fig. 31, however, they achieve this using different approaches. Essentially, DMD identifies the spatial modes which best fit a constant sinusoidal behavior in time, as with a Fourier transform. POD ranks the spatial modes in order of contribution to the total variance, which DMD cannot do. Since POD can produce as many modes as there are time snapshots, the challenge is to identify which modes are physical and which are not. Similarly, not all DMD modes may be physical. For practical purposes a physical model, such as the magnetic cylinder model (see Sect. 2.9.2 for discussion of MHD wave modes of a magnetic cylinder) can be used to select POD and DMD modes which most closely correspond to predicted MHD wave modes. For the approximately circular sunspot shown in Fig. 31, the predicted MHD cylinder modes which are in the strongest agreement with the selected POD and DMD modes are shown in the right column. These are the fundamental slow body sausage (top row) and kink modes (bottom row). The top and bottom rows show snapshots of the slow body sausage and kink modes, respectively. From left to right, the columns show the POD and DMD modes from HARDcam H\(\alpha \) intensity observations of a sunspot (Jess et al. 2017), then the corresponding magnetic cylinder model modes. As shown in the color bar, the intensity oscillations are normalized between \(-1\) and 1, hence the blue and red regions are in anti-phase. The methods of POD and DMD provide a most promising approach to decompose MHD wave modes in pores and sunspots, even if their cross-sectional shapes are much more irregular than this example. Image adapted from Albidah et al. (2021) In the case of the magnetic cylinder model, assuming a static background, the eigenmodes, e.g., kink, sausage and fluting, are orthogonal to each in other in space by definition. Furthermore, each mode can have a broadband signal in \(\omega \) and k as shown for a real sunspot in the right panel of Fig. 30. Hence, POD can identify such modes in pores and sunspots, providing there is no significant background flow that will break the condition of orthogonality. Furthermore, if a mode has a dominant power frequency, this can be identified with DMD as well as POD. Indeed, this was done by Albidah et al. (2021) for the 8 POD modes shown in the PSD plot in the right panel of Fig. 31 which have distinct power peaks between 4.3 and 6.5 mHz. In such cases a combined POD/DMD approach is a most promising avenue for identifying physical modes. However, it must be highlighted, as initially introduced in Sect. 2.2, that the characterization of waves using POD and DMD techniques must be treated with the same caution as traditional FFT approaches. For example, it is essential that the relative amplitudes of each eigenfunction are compared to noise and/or background sources to establish its true significance. As will be shown in Sect. 3.3, POD and DMD methods are especially useful for decomposing MHD wave modes in pores and sunspots of more irregular cross-sectional shapes than the example shown in Fig. 31. This is because POD and DMD do not have the limitation of having their eigenfunctions pre-defined as they are with Fourier decomposition, where the basis functions are simply fixed as sinusoids. Even in the standard cylinder model, the eigenfunctions in the radial direction are Bessel functions not sinusoids. Hence, when it comes to identifying the spatial structure of individual MHD wave modes in pores and sunspots, the methods of POD and DMD are more suited to the job than Fourier decomposition. However, Fourier transforming the time coefficient of a POD mode is still necessary to calculate its PSD as shown in the right panel of Fig. 30. \(B{-}\omega \) diagrams Imaging spectropolarimetry offers the additional possibility to study the variations in the wave power spectrum as a function of magnetic flux. To this aim, Stangalini et al. (2021b) have proposed a new visualization technique, called a \(B{-}\omega \) diagram (see Fig. 32), which combines the power spectrum of a particular quantity (e.g., Doppler velocities) with its corresponding magnetic information. In this diagram, each column represents the average power spectrum of pixels within a particular magnetic field strength interval as inferred from polarimetry (e.g., via spectropolarimetric inversions or center-of-gravity methods; Rees and Semel 1979). The \(B{-}\omega \) diagram therefore has the capability to help visualize changes in the oscillatory field as one transitions from quiet Sun pixels outside the magnetic tube to the inner (more concentrated) magnetic region. In Fig. 32 we show an example of \(B{-}\omega \) diagram taken from Stangalini et al. (2021b), which reveals unique wave information for a magnetic pore observed by IBIS in the photospheric Fe i 6173 Å spectral line. Here, we clearly see that the the amplitude of 5-min (\(\approx 3\) mHz) oscillations in the quiet Sun is progressively reduced as one approaches the boundary of the magnetic pore (increasing B values). On the other hand, immediately inside the boundary of the pore (highlighted using a dashed vertical line), a set of spectral features is observed in both Doppler velocity and CP (circular polarization) oscillations (i.e., magnetic field oscillations), which are interpreted as specific eigenmodes of observed magnetic cylinder. Image reproduced with permission from Stangalini et al. (2021b) Doppler velocity (left) and circular polarization (CP; center) \(B{-}\omega \) diagrams of a magnetic pore observed in the photospheric Fe i 6173 Å spectral line. The vertical blue dashed lines represents the boundary of the umbral region as inferred from intensity images. The right panel shows the average spectra outside and inside the magnetic umbra of the pore. The 5-min (p-mode) oscillations dominate the quiet Sun, but their amplitude is progressively reduced absorbed as one approaches the concentrated magnetic fields of the pore, until a series of eigenmodes are excited within the magnetic tube itself. Effects of spatial resolution The solar atmosphere is highly structured, presenting features across a wide range of spatial scales down to the resolution limit of current instrumentation. Oscillations can be localized at particular spatial scales/features (see, e.g., the discussion in Sect. 3.3). This means that, for instance, the Doppler velocity, or indeed any other diagnostic, is the average within the resolution angle of the observations. For this reason, the signal itself and its inherent temporal oscillations associated with features below (or close to) the resolution limit can be underestimated (MacBride et al. 2021). To illustrate this effect, we consider a case study based on CRISP observations acquired at the SST of a quiet Sun region, which were previously deconvolved using the MOMFBD code (van Noort et al. 2005) to reduce the effects of residual image aberrations. Here, for the seek of simplicity we consider the starting data as "perfect data" for the only purpose of illustrating the effects of spatial resolution of the final power spectra of the oscillations. In the left panel of Fig. 33 we show the original instantaneous Doppler velocity field obtained from the Fe i 6301.5 Å photospheric spectral line. In order to mimic the effect of a lower spatial resolution, we convolve this data using a point spread function (PSF), assumed here to be Gaussian, with a larger full-width at half-maximum (FWHM). In order to simplify the process, we do not consider the effects of residual seeing aberrations present in the original and convolved images. Therefore, our PSF model only considers the effect of the instrumental PSF, which can be represented by the Houston diffraction limited criterion (Houston 1927), $$\begin{aligned} {\text{FWHM}} = \frac{1.03\lambda }{D} , \end{aligned}$$ where \(\lambda \) is the observed wavelength and D is the diameter of the telescope. Local seeing effects in ground-based observations can further reduce the effective resolution, in addition to the seeing conditions themselves varying significantly throughout the observations, thus providing further (time varying) degradation to the data. In the left panel of Fig. 33, the photospheric velocity field is the result of two components: downflows in the intergranular lanes (red colors) and upflows in the granules (blue colors). Since the integranular lanes are much smaller and narrower with respect to the granules, the velocity signals associated with the integranular regions become more affected (i.e., reduced) by the lower spatial resolution induced by worsening seeing conditions. This effect is apparent in the middle and right panels of Fig. 33, where the progressively worsening seeing conditions (\({\text{FWHM}}=0.2''\) middle panel; \({\text{FWHM}}=0.5''\) right panel) result in lost fine-scale velocity information. Estimated effects of the spatial resolution (i.e., different FWHMs of the instrumental PSF; see Eq. (12)) on the observed Doppler velocity field. The original Doppler velocity field observed by the CRISP instrument at the SST in the Fe i 6301.5 Å photospheric spectral line (left panel) is convolved with a Gaussian PSF with larger and larger FWHMs to mimic the effects of a lower spatial resolution (middle and right panels). The sign convention employed shows downflows (positive velocities) as red colors and upflows (negative velocities) as blue colors. It can be seen in the middle (\({\text{FWHM}}=0.2''\)) and right (\({\text{FWHM}}=0.5''\)) panels that progressively worsening seeing conditions results in lost velocity signals from primarily small-scale features (e.g., intergranular lanes) If the resolution angle is smaller than the angular size of the feature being studied, then the measured signal will approach the true value. This is due to the 'filling factor' being equal to '1', whereby the feature of interest occupies the entirety of the resolution element on the detector. On the contrary, if the resolution element is larger than a particular spatial feature, then the signal measured will be a combination of both the feature of interest and plasma in its immediate vicinity. Here, the filling factor of the desired structure is \(<1\), resulting in a blended signal that produces the measured parameters. In the specific case of integranular lanes (see, e.g., Fig. 33), this means that if the resolution element is larger than their characteristic width, signal from the neighboring granules will be collected too. This effect is shown in Fig. 34, where the probability density functions (PDFs) of the instantaneous velocities for different spatial resolutions is shown. By lowering the spatial resolution, the original skewed distribution of the velocity, which is a consequence of the different spatial scales associated with the upflows (blueshifts) and downflows (redshifts), is transitioned into a more symmetric distribution that is characterized by smaller velocity amplitudes. Probability density functions (PDFs) of the instantaneous velocity fields shown in Fig. 33 as a function of spatial resolution. Here, the blue, orange, and green lines represent the PDFs for three different seeing conditions represented by a \({\text{FWHM}}=0.16''\), \({\text{FWHM}}=0.2''\), and \({\text{FWHM}}=0.5''\), respectively. It can be seen that worse seeing conditions (e.g., the green line) produce more symmetric distributions and smaller velocity amplitudes due to the spatial averaging of the measured signals These effects, in turn, also translate into a reduction of the measured amplitudes of any oscillations present in the data. This effect can be seen in Fig. 35, where the suppression factor of the Doppler velocity amplitudes (upper panel) and the resulting power spectral densities in two distinct frequency bands, namely 3 mHz and 5 mHz (1 mHz bandwidth; lower panel), are shown as a function of the spatial resolution. The suppression factor gives an idea of the underestimation of the amplitudes of the embedded oscillations, and in the top panel of Fig. 35 it is normalized to the value associated with the original SST/CRISP data used here (i.e., \({\text{FWHM}}=0.16''\) provides a suppression factor equal to 1.0). From the upper panel of Fig. 35 we can also predict the amplitudes of the velocity oscillations captured in forthcoming observations from the new 4 m DKIST facility, which could be as large as 1.3–1.4 times that of the velocity amplitudes measured with a 1 m class telescope at the same wavelength (under similar local seeing conditions). Wave amplitude suppression factor (upper panel) and the resulting power spectral densities (lower panel) for observations acquired with different spatial resolutions. In the upper panel, the wave amplitude suppression factors (blue dots) are computed with respect to the velocity information displayed in Fig. 33, with the vertical dotted lines highlighting telescope aperture sizes of 4 m (DKIST), 1 m (SST), and 0.1 m. The dashed red line displays an exponential fit (using Eq. (13)), with the fit parameters shown in the figure legend. The lower panel displays the resulting power spectral densities, as a function of spatial resolution, for two key frequencies commonly found in observations of the solar atmosphere, notably 2.5–3.5 mHz (orange dots) and 4.5–5.5 mHz (blue dots). Again, the power spectral densities are fitted using Eq. (13), with the corresponding fit parameters shown in the figure legend. These panels document the importance of spatial resolution when attempting to measure weak oscillatory processes, since poor spatial resolution (either through small telescope aperture sizes or poor local seeing conditions) may result in complete suppression of the observable signal Both the suppression factor and the resulting power reduction, as a function of spatial resolution, are well modeled by an exponential decay of the form, $$\begin{aligned} A = A_{0} e^{-\frac{{\text{FWHM}}}{{s_{0}}}} + C , \end{aligned}$$ where \(A_{0}\) is either the amplitude of the velocity signals or the wave power, \(s_{0}\) is a characteristic spatial length, and C is a constant. Equation (13) characterizes very nicely the impact spatial resolution has on the visible wave characteristics, whereby when the resolution element is larger than the characteristic physical scale of the observed process in the solar atmosphere (i.e., \({\text{FWHM}}>s_{0}\)), then the oscillatory signal is strongly suppressed. This may result in weak oscillatory amplitudes being lost from the final data products, a process that was recently discussed by Jess et al. (2021b) in the context of sunspot oscillations. Such amplitude suppression effects imply that when estimating the energy flux of waves, one needs to consider the specific spatial resolution achieved and correct the resulting estimates by a factor depending on the FWHM of the instrumental PSF and the local seeing effects. We note that this effect strongly depends on the characteristic spatial length of the processes observed in the solar atmosphere. In order to illustrate the problem we have made use of photospheric observations (i.e., Figs. 33, 34, 35). However, due to the presence of narrow filamentary structures observed in the chromosphere, the power of the oscillations can be even more underestimated at those atmospheric heights. Identification of MHD wave modes In this section, we will not review MHD wave theory in any great detail since this has been covered previously in many books and reviews (see e.g., Aschwanden 2005; Nakariakov and Verwichte 2005; Priest 2014; Jess et al. 2015; Roberts 2019). Instead, we would like to highlight the particular challenges of identifying MHD wave modes from observational data given what is known from MHD wave theory. Homogeneous and unbounded plasma In most textbooks, for simplicity MHD waves are rightly introduced by assuming a homogeneous unbounded plasma with a straight and constant magnetic field. This highly idealized plasma configuration only permits propagating Alfvén, slow, and fast magnetoacoustic wave modes. In stark contrast, the Sun's atmosphere is actually very inhomogeneous and the newest high resolution instrumentation reveal the solar plasma to be ever more finely structured. But let us assume the wavelengths are large enough so that these MHD wave modes do not "feel" the effect of any plasma fine structure, hence allowing us to apply the unbounded homogeneous plasma model, as a zeroth order approximation, to observational data. How can we actually identify the Alfvén, slow, and fast magnetoacoustic MHD wave modes? As we shall discuss, in practical terms, even in this simplest of plasma configurations, each MHD wave mode would actually be non-trivial to identify without ambiguity, even from excellent quality spectropolarimetric data. First, let us consider the Alfvén wave (Alfvén 1942). The only restoring force of this wave is magnetic tension, but since this wave is incompressible the magnetic field lines remain equidistant from each other as they are oscillating. Hence, although the direction of the magnetic field vectors will change with time as the field lines oscillate the magnitude of the vectors will remain constant. Therefore, this wave will not reveal itself through variations in the magnetic field strength using the Zeeman or Hanle effects. Also, due to its incompressibility the Alfvén wave would not reveal itself in intensity oscillations since the density is not perturbed. This only leaves the velocity perturbations associated with this wave, which could in principle be detected in Doppler measurements. However, to truly identify an Alfvén wave it would have to be established that the velocity perturbations were perpendicular to the magnetic field lines and that the wave vector was not perpendicular to the direction of the magnetic field. To add even more difficulty to the challenge of identifying an Alfvén wave, it is only approximately anisotropic, in the sense that the fastest propagation is along the direction of the magnetic field and only completely perpendicular propagation is forbidden, i.e., the more perpendicular the wave vector becomes relative to the magnetic field the slower the propagation will be. What about identifying the slow and fast magnetoacoustic modes? The allowed directions for the slow magnetoacoustic wave vector are very similar to that of the Alfvén wave, meaning that it is only approximately anisotropic and propagation perpendicular to the magnetic field direction is forbidden. However, unlike the Alfvén wave, the slow magnetoacoustic wave is compressible and should reveal itself in intensity oscillations if the amplitude of the perturbations are large enough relative to the background. However, to establish even more convincing evidence, a slow magnetoacoustic wave requires validation that the plasma and magnetic pressure perturbations are in anti-phase. Of course, this is not an easy task in observational data and would require both a fortuitous line-of-sight and an excellent signal-to-noise ratio to determine perturbations in both intensity and Zeeman/Hanle effect measurements. In contrast to the Alfvén and slow magnetoacoustic waves, the fast magnetoacoustic wave is more isotropic in nature since it can also propagate perpendicular to the magnetic field. A further key difference to the slow magnetoacoustic wave is that the plasma and magnetic pressure perturbations associated with a fast magnetoacoustic wave are in phase. To show this from observational data would provide compelling evidence that a fast magnetoacoustic wave mode has indeed been identified, but, as with showing the anti-phase behavior between plasma and magnetic pressures for a slow magnetoacoustic wave, this is not a trivial task, even with excellent quality spectropolarimetric data. There are also more subtle points in distinguishing between the Alfvén, slow, and fast magnetoacoustic wave modes depending on the value of plasma-\(\beta \), which itself is difficult to determine from observational data. Importantly, for MHD wave modes the value of plasma-\(\beta \) also indicates the relative values of the sound and Alfvén speeds. Especially problematic is the case when the sound speed is close to the Alfvén speed, since here the propagation speeds of the Alfvén, slow and fast magnetoacoustic waves along the direction of the magnetic field are practically indistinguishable. This effect is clearly demonstrated in Fig. 36, which is based on the 'NC5' flux tube model presented by Bruls and Solanki (1993), and clearly shows how the localized velocities associated with different wave modes can become difficult to disentangle in the lower solar atmosphere, hence providing some ambiguity when attempting to diagnose the true wave mode from the propagation velocity alone. But remember, the nuanced discussion we have had here on wave mode identification assumed that the solar plasma was both homogeneous and unbounded. In practical terms, it is more likely that the analysis of waves in the lower solar atmosphere will be directly related to their excitation in, and propagation through, large scale magnetic structures such as sunspot and pores (see Sect. 3.2) or smaller scale structures such as spicules and fibrils (see Sect. 3.4). In such cases the most applied model is that of the magnetic cylinder (e.g., Wentzel 1979; Wilson 1979, 1980; Spruit 1982; Edwin and Roberts 1983), which we shall discuss next. Various wave speeds in a flux tube in the lower solar atmosphere, from the hot 'NC5' flux tube model put forward by Bruls and Solanki (1993), in combination with the surrounding cool VAL-A atmosphere (Vernazza et al. 1981) Magnetic cylinder model The advantage of the magnetic cylinder model is that it allows for the key plasma parameters, e.g., magnetic field strength and plasma density, to differ inside and outside of the flux tube, allowing us to introduce inhomogeneity in the direction perpendicular to the cylinder axis. In this model, relative to the cylindrical coordinates \((r, \theta , z)\), where r, \(\theta \), and z are the radial, azimuthal, and axial directions, respectively, waves can either be standing or propagating in all three orthogonal directions (see the left panel of Fig. 37). If the wave is propagating in the radial direction this is a so-called "leaky" wave, which is not trapped by the cylindrical waveguide and damps due to MHD radiation. The so-called "trapped" modes are standing in the radial direction with the greatest wave energy density in the internal region of the cylinder. Outside of the cylinder the trapped mode is evanescent and decays with increasing distance from the tube. Images reproduced with permission from Arregui et al. (2005, left), copyright by ESO; and Morton et al. (2012, middle and right panels), copyright by Macmillan A typical cylindrical flux tube model (left panel) represented by a straightened magnetic tube of length, L, and radius, R. The magnetic field, B, is uniform and parallel to the z-axis and the whole configuration is invariant in the azimuthal direction, \(\theta \) (labeled as \(\varphi \) in the diagram). In the schematic, the density varies in a non-uniform transitional layer of width, l, from a constant internal value, \(\rho _{i}\), to a constant external value in the local plasma environment, \(\rho _{e}\). The middle and right panels show the effects of \(m=0\) (sausage) and \(m=1\) (kink) wave perturbations, respectively, to the equilibrium flux tube. The sausage wave (middle) is characterized by an axi-symmetric contraction and expansion of the tube's cross-section. This produces a periodic compression/rarefaction of both the plasma and magnetic field. The kink wave (right) causes a transverse displacement of the flux tube. In contrast to the sausage wave, the kink wave displacement/velocity field is not axi-symmetric about the flux tube axis. The red lines show the perturbed flux tube boundary and thick arrows show the corresponding displacement vectors. The thin arrows labelled B show the direction of the background magnetic field. Beyond the basic descriptions of whether the mode is "leaky" or "trapped", the azimuthal integer wave number, m, defines whether the waves are the so-called "sausage", "kink", or "fluting" modes. The sausage mode has \(m=0\) and is azimuthally symmetric, the kink mode has \(m=1\) and is azimuthally asymmetric (see the middle and right panels of Fig. 37). The fluting modes are higher order in the azimuthal direction with \(m \ge 2\). A further classification of wave types in a magnetic cylinder is "body" or "surface" modes. A body wave is oscillatory in the radial direction inside the tube and evanescently decaying outside. Because the body wave is oscillatory inside the tube, it has a fundamental mode in the radial direction and also higher radial harmonics. In contrast, a surface wave is evanescent inside and outside of the tube with its maximum amplitude at the boundary between the internal and external plasma. Since it is strictly evanescent inside the tube, the surface mode cannot generate higher radial harmonics. At this point it will be worth explaining why confusion has arisen over the years since the seminal publication by Edwin and Roberts (1983), who also introduced the terms "fast" and "slow" to classify the propagation speeds of MHD wave modes along the axis of the magnetic cylinder. In the dispersion diagrams of a magnetic cylinder, distinct bands appear for a particular wave mode where the axial phase speed is bounded by characteristic background speeds. As an example, we can model a photospheric waveguide as being less dense than the surrounding plasma and having a stronger magnetic field internally than externally. This would be a reasonable basic model for, e.g., a pore or sunspot, where the internal density depletion is a result of the increased magnetic pressure (Maltby et al. 1986; Low 1992; Cho et al. 2017; Gilchrist-Millar et al. 2021; Riedl et al. 2021). In this case, we can form the inequality of the characteristic background speeds as \(v_A>c_e>c_0>v_{Ae}\), where \(v_A\) is the internal Alfvén speed, \(c_e\) is the external sound speed, \(c_0\) is the internal sound speed, and \(v_{Ae}\) is the external Alfvén speed. This results in a slower band with phase speeds between \([c_T, c_0]\), where the internal tube speed, \(c_T\), is defined as, $$\begin{aligned} c_T = \frac{c_0 v_A}{\sqrt{c_0^2+v_A^2}} . \end{aligned}$$ In addition, a faster band also exists with phase speeds between \([c_0, c_e]\). Wave modes with phase speeds below the "slow" band and above the "fast" band are not trapped modes (having real \(\omega \) and \(k_z\) values). The "slow" and "fast" bands for these chosen photospheric conditions are shown in the dispersion diagram in the left panel of Fig. 38. Left panel: A dispersion diagram is shown for a representative photospheric magnetic cylinder. It can be seen that there are two distinct horizontal bands with slower and faster phase speeds. The fast band is bounded between \([c_0,c_e]\) and the slow band between \([c_T,c_0]\). The adjectives "slow" and "fast" here have a quite distinct meaning from the terms slow and fast when referring to the magnetoacoustic wave modes of a homogeneous and unbounded plasma. Right panel: A cartoon of theoretically predicted MHD wave modes in a sunspot, and their possible sources, based on the magnetic cylinder model of Edwin and Roberts (1983). Images adapted from Edwin and Roberts (1983, left panel) and Evans and Roberts (1990, right panel) Although Edwin and Roberts (1983) used the perfectly apt adjectives, "slow" and "fast", to describe the phase speed bounds of these distinct bands of trapped MHD wave modes, they have quite a different physical meaning to the terms of the slow magnetoacoustic and fast magnetoacoustic waves from the homogeneous and unbounded plasma model. This is most clearly illustrated when comparing the same label "fast" in both scenarios. For a cylindrical waveguide any trapped fast MHD mode is strictly anisotropic since the propagating wave vector is restricted to being absolutely parallel to magnetic field direction, which is also aligned with the cylinder axis. However, a fast magnetoacoustic wave in a homogeneous plasma can propagate with any angle relative to the magnetic field orientation. There is a special class of incompressible Alfvén modes that can exist in a magnetic cylinder with any azimuthal wave number, m, the so-called torsional Alfvén waves (see e.g., Spruit 1982). Like the Alfvén wave in a homogeneous plasma, the only restoring force is magnetic tension. However, torsional Alfvén waves are strictly anisotropic since they can only propagate along the direction of the tube axis, whereas their counterpart in a homogeneous plasma can propagate at any angle (with the exception of perpendicular) relative to the magnetic field. The torsional Alfvén wave can only be excited if the driver itself is incompressible, meaning that the tube boundary is not perturbed at all in the radial direction. However, in reality it likely that the boundary of solar magnetic flux tubes are perturbed to some degree in the radial direction. If the boundary is only slightly perturbed in the radial direction, and the dominant perturbations are in the axial direction, then this will excite a slow mode. If the radial perturbation dominates over the axial perturbation, resulting in a greater perturbation of the boundary, then this will excite a fast mode. The greater radial perturbation for a fast mode means that magnetic tension plays a larger role in the restoring force than for a slow mode, where the longitudinal compressive forces of plasma and magnetic pressure dominate. Understanding the phase relations between the restoring forces for MHD wave modes in a magnetic cylinder is not as straightforward as it is for the three possible MHD modes in a homogeneous plasma. This is because the phase relations between plasma pressure, magnetic pressure, and magnetic tension restoring forces depend on whether the wave is propagating or standing in each of the three orthogonal directions, i.e., radial (r), azimuthal \((\theta )\), and axial (z). Also, the radial spatial structuring of the plasma in a magnetic cylinder means that perturbed MHD variables, such as the magnetic field \((B_r,B_{\theta },B_z)\) and velocity \((v_r,v_{\theta },v_z)\) components, are related, not only by time derivatives, but spatial derivatives dependent on the variation of the background plasma properties. A simplified thin tube or "wave on a string" approximation was made by Fujimura and Tsuneta (2009) to derive the phase relations between \(v_r\) and \(B_r\) for a kink mode, and \(v_z\) and \(B_z\) for a sausage mode. This was done for both propagating and standing waves in the axial direction, but caution should be taken in applying these results to structures of finite width. A more detailed investigation into the phase relations of these MHD variables was done for the sausage mode by Moreels and Van Doorsselaere (2013), utilizing a magnetic cylinder of finite width under photospheric conditions. Like Fujimura and Tsuneta (2009), this model predicted the phase relations for both standing and propagating waves in the axial direction. A note of caution should be introduced here to state that both the models of Fujimura and Tsuneta (2009) and Moreels and Van Doorsselaere (2013) assume the kink and sausage modes are "free" oscillations of the structure and are not being driven. To correctly derive the phase relations between the MHD wave variables in a driven system demands that system is solved as an initial value problem. However, currently the exact spatial and temporal structures of the underlying drivers of the waves observed in pores and sunspots are not universally understood. Although the phase relations between the perturbed variables for any MHD wave mode may be not simple to predict theoretically, at the least the spatial structure of these variables (independent of time), providing the cross-section of the wave guide is resolved (e.g., particularly in the case of larger magnetic structures such as pores and sunspots), should correlate in straightforward way. First, let us consider a fixed axial position, z, which for a vertical tube would correspond to a fixed height in the solar atmosphere. If the magnetic cylinder is oscillating with an eigenmode, then the variables related to compressible axial motion, i.e., \(v_z\), \(B_z\) and plasma pressure (also related to perturbations in temperature and plasma density), should have the same spatial structure in the radial (r) and azimuthal \((\theta )\) directions. Likewise, the spatial structure of variables related to radial perturbations of the magnetic field, i.e., \(v_r\) and \(B_r\), should be consistent. The same is also true for the variables that relate to the torsional motions of the magnetic field, i.e., \(v_{\theta }\) and \(B_{\theta }\). Again, all these theoretical predictions assume free oscillations of the entire magnetic structure, e.g., a pore or sunspot. If the oscillations are being driven, then this is a more complicated and computationally expensive modeling problem to solve. Also, the spatial scale of the driver relative to the size of the magnetic structure is crucial. To excite the global eigenmodes of magnetic structures the driver has to be at least as large as the structure itself. If the driver is much smaller than the magnetic structure, it will still excite localized MHD waves, but these will not be global eigenmodes of the entire magnetic structure. This too requires a different modeling approach, see e.g., Khomenko and Collados (2006), who modeled p-mode propagation and refraction through sunspots. High resolution images of sunspots, pores, magnetic bright points, and fibrillar structures are continually telling us that modeling these features using cylindrical flux tube geometries, while more mathematically simplistic, is far from realistic. Even from basic membrane models, in which separation of variables is possible, the cross-sectional shape has a fundamental effect on the structure of the eigenfunctions. For elliptical magnetic flux tubes, Aldhafeeri et al. (2021) investigated the effect of eccentricity, \(\epsilon =\sqrt{1-b^2/a^2}\), where a and b are the semi-major and semi-minor axes, respectively, on the spatial structure of eigenfunctions. See, for example Fig. 39, which shows two sunspot umbrae fitted with ellipses with eccentricities \(\epsilon =0.58\) and \(\epsilon =0.76\). These are not negligible values since a circle has \(\epsilon =0\). Figure 40 shows \(m=1\) (kink) and \(m=2,3\) (fluting) fast body modes where the phase is odd with respect to the major axis as eccentricity increases, while Fig. 41 shows the same modes where the phase is even with respect to the major axis. Although all MHD wave modes in flux tubes of elliptical cross-section have their spatial structure distorted when compared to their equivalent versions in flux tubes of circular cross-section, it can be seen that the fluting modes that have even phase with respect to the major axis (shown in Fig. 41) become notably different in character as eccentricity increases, since previously distinct regions of phase or anti-phase end up coalescing. This advancement from the cylindrical flux tube model demonstrates that more sophisticated modeling of magnetic flux tubes with more realistic, and hence more irregular, cross-sectional shapes is required to more accurately interpret what type of wave modes are present in pores and sunspots. Recently this was done by Albidah et al. (2022) and Stangalini et al. (2022) to identify MHD wave modes in sunspot umbrae and this will be discussed in Sect. 3.2. Two active regions, NOAA AR12565 (left) and NOAA AR12149 (right), captured in the G-band by ROSA at the Dunn Solar Telescope. To show the departure from circular cross-sectional shape, ellipses are fitted to the sunspot umbrae. The eccentricity of the left umbra is \(\epsilon =0.76\), while the right umbra is \(\epsilon =0.58\). Image adapted from Aldhafeeri et al. (2021) The normalized density perturbations of fast body wave modes under representative coronal conditions for the different values of eccentricity \(\epsilon \). Note that the eigenfunctions for slow body wave modes under photospheric conditions would have a very similar appearance. From top to bottom, the \(m=1\) (kink) and \(m=2,3\) (fluting) modes are shown which have an odd phase structure with respect to the major axis of the ellipse. Image adapted from Aldhafeeri et al. (2021) The same wave modes are shown as in Fig. 40 but their phase structure is even with respect to the major axis of the ellipse. Image adapted from Aldhafeeri et al. (2021) In Sect. 2.8 the crucial issue of spatial resolution was discussed. In smaller scale magnetic structures, such as off-limb spicules or on disc fibrils, it is not possible to observe the true cross-section of the wave guide (as is possible for larger on-disc features such as pores and sunspots) in order to identify eigenmodes. However, fast sausage and kink modes can still be identified in these smaller structures if the amplitude of the radial motion (i.e., transverse to the magnetic field direction) is large enough. The kink mode is the only cylinder mode which causes a transverse displacement of the axis. For smaller magnetic structures, such as fibrils, the kink mode will appear as a "swaying" motion. If the radial motion of the fast sausage mode is large enough, then this causes periodic changes in the width of the structure, which can be resolved. Wave mode identification in smaller magnetic structures is addressed in detail in Sect. 3.4. As for larger scale magnetic waveguides, where the cross-section can be resolved fully, such as in pores or sunspots, the right panel of Fig. 38 shows the wide variety of theoretically predicted MHD wave modes, including slow/fast and body/surface, that can exist in such structures based on the magnetic cylinder model of Edwin and Roberts (1983). Recent progress in the identification of such wave modes from observations is discussed in Sect. 3.3. Across Sect. 2, we have discussed the fundamental theoretical considerations of waves manifesting in the solar atmosphere (Sect. 2.9), we have provided an overview of the techniques used to characterize them (Sects. 2.2–2.7), and summarized the challenges faced in light of variable spatial resolution (Sect. 2.8). Regardless of these challenges, over the last number of decades the solar community has overcame many obstacles, which has allowed for the successful acquisition, extraction, and identification of many different types of wave modes across a wide variety of solar features. In the following section, we will overview recent discoveries in the field of waves in the lower solar atmosphere, as well as comment on the difficulties still facing the global community in the years ahead. Recent studies of waves In the past, review articles have traditionally segregated wave activity in the solar atmosphere into a number of sub-topics based on the specific wave types and structures demonstrating the observed behavior. For example, Jess et al. (2015) divided up the review content on a feature-by-feature basis, including sections related to compressible and incompressible waveforms, which were subsequently further sub-divided into quiet Sun, magnetic network, and active region locations. However, as modern observations and modeling approaches continue to produce data sequences with ever improving spatial resolutions, placing the physical boundary between two locations becomes even more challenging. Indeed, emerging (and temporally evolving) magnetic fields often blur the boundaries between magnetic network elements, pores, proto-sunspots, and fully developed active regions. Hence, it is clear that solar complexity continues to increase with each improvement in spatial resolution made. As a result, dividing the content between previously well-defined structures becomes inappropriate, which is even more apparent now that mixed MHD waves (e.g., compressible and incompressible modes; Morton et al. 2012) are being identified in a broad spectrum of magnetic features. Hence, for this topical review we employ just three (deliberately imprecise) sub-section headings, notably related to 'global wave modes', as well as 'large-scale' and 'small-scale' structures. This is to avoid repetition and confusion, and to allow the overlap between many of the observables in the Sun's atmosphere to be discussed in a more transparent manner. Importantly, while discussing the recent developments surrounding wave activity in the lower solar atmosphere, we will attempt to pinpoint open questions that naturally arise from the cited work. We must stress that closing one research door more often than not opens two (or more) further avenues of investigation. Therefore, discussion of the challenges posed is not to discredit the cited work, but instead highlight the difficult research stepping stones facing the solar physics community over the years and decades to come. Global wave modes The field of helioseismology has employed long-duration data sequences (some spanning multiple continuous solar cycles; Liang et al. 2018) to uncover the internal structure and dynamics of the Sun through its global oscillation properties. Pioneering observations by Frazier (1968) suggested, for the first time, the presence of dual oscillating modes in the solar atmosphere, something that contradicted previous interpretations where the observed oscillations were simply considered to be an atmospheric response to granular impacts. It was subsequently shown that a variety of global wavenumbers could be seen in the photospheric velocity field of the C i 538 nm line (Deubner 1975). Importantly, the pioneering work of Deubner (1975) revealed clear ridges in photospheric \(k{-}\omega \) power spectra, which helped to highlight, for the first time, that the ubiquitous 5-min p-mode oscillations are in-fact resonant eigenmodes of the Sun. Novel observations acquired during austral summer at the South Pole discovered global 5-min global oscillations at a wide range of horizontal wavelengths, revealing the true extent of oscillation modes associated with global solar resonances (Grec et al. 1980; Duvall and Harvey 1983). Traditionally, in the field of helioseismolgy, the Sun is considered as an approximate spherically symmetric body of self-gravitating fluid that is suspended in hydrostatic equilibrium (Christensen-Dalsgaard et al. 2000). This results in the modes of solar oscillations being interpreted as resonant vibrations, which can be represented as the product of a function of radius and a spherical harmonic, \(Y_{l}^{m}(\theta , \phi )\). Here, l relates to the horizontal scale on each spherical shell (commonly referred to as the 'angular degree'), while m determines the number of nodes in solar longitude (commonly referred to as the 'azimuthal order'). The specific modes of oscillation can be divided up into three main categories: (1) Pressure modes (p-modes), which are essentially acoustic waves where the dominant restoring force is pressure, providing frequencies in the range of \(\sim 1{-}5\) mHz and angular degrees spanning \(0 \le l \le 10^{3}\) (Rhodes et al. 1997; Kosovichev 2011; Korzennik et al. 2013), (2) Internal gravity modes (g-modes), where the restoring force is predominantly buoyancy (hence linked to the magnitude of local gravitational forces), which typically manifest in convectively stable regions, such as the radiative interior and/or the solar atmosphere itself (Siegel and Roth 2014), and (3) Surface gravity modes (f-modes), which have high angular degrees and are analogous to surface waves in deep water since they obey a dispersion relation that is independent of the stratification in the solar atmosphere (Mole et al. 2007). In the limit that the wavelength is much smaller than the solar radius these wave are highly incompressible. The main restoring force for f-modes is gravity, which acts to resist wrinkling of the Sun's surface. The intricacies of helioseismology become even more complex once isolated magnetic features, such as sunspots, develop within the solar atmosphere, which impact the velocities and travel times of the embedded global wave modes (Braun and Lindsey 2000; Rajaguru et al. 2001, 2004, 2019; Kosovichev 2012; Schunker et al. 2013, 2016). A complete overview of the progress in helioseismology is beyond the scope of the present review. Instead, we refer the reader to the vast assortment of review articles that focus on the widespread development of helioseismology over the last few decades (e.g., Deubner 1983; Bonnet 1983; Christensen-Dalsgaard 2002; Gizon and Birch 2005; Gizon et al. 2010, 2017; Gough 2013; Basu 2016; Buldgen et al. 2019) Importantly, the magnetic field in the solar photosphere is inhomogeneous and found in discrete concentrations across all spatial scales (Zwaan 1987). Outside of the magnetic concentrations, where plasma pressure and gravity are the dominant restoring forces, longitudinal acoustic waves (i.e., p-modes) are generated at the top of the convection zone from the turbulent motions constituting the convective motion (Stein 1967; Goldreich and Kumar 1990; Bogdan et al. 1993). The p-modes can propagate upwards and contribute to heating of the higher layers if their frequency is larger than the acoustic cut-off frequency (Ulmschneider 1971b; Wang et al. 1995). Thus, the acoustic waves can dissipate their energy in the solar chromosphere by forming shocks (as a result of gas-density decreases with height), which are manifested in intensity images as, e.g., intense brightenings (Rutten and Uitenbroek 1991; Carlsson and Stein 1997; Beck et al. 2008; Eklund et al. 2020, 2021), or drive, e.g., Type i spicules, in the so-called 'magnetic portals' (Jefferies et al. 2006). Moreover, Skogsrud et al. (2016) showed that these shocks are associated with dynamic fibrils in an active region they exploited from observations with SST and the Interface Region Imaging Spectrograph (IRIS; De Pontieu et al. 2014b) space telescope. Properties of propagating acoustic/magnetoacoustic waves through the lower solar atmosphere have also been reported in a number of recent studies from both ground-based (e.g., Sobotka et al. 2016; Abbasvand et al. 2020a, b) and space-borne observations (e.g., Martínez-Sykora et al. 2015; Zhao et al. 2016; Abbasvand et al. 2021). From IRIS observations of a quiet-Sun area in Mn i 2801.25 Å, Mg ii k 2796.35 Å, and C ii 1334.53 Å spectral lines (sampling the photosphere, chromosphere, and transition region, respectively), Kayshap et al. (2018) found upwardly propagating p-modes with periods on the order of 1.6–4.0 min, and downward propagation in the higher period regime (i.e., periods larger than \(\approx 4.5\) min). Furthermore, Kayshap et al. (2020) identified the propagation of slow magnetoacoustic waves (with 2–9 min periodicities), within a plage region, from the high-photosphere/low-chromosphere to the transition region, using SDO/AIA and IRIS observations. In addition, g-modes (i.e., internal gravity waves) can be produced within turbulent convective flows (Mihalas and Toomre 1981, 1982) and propagate through the lower solar atmosphere, with frequencies shorter than \(\approx 2\) mHz (Newington and Cally 2010; Kneer and Bello González 2011; Vigeesh et al. 2017; Jefferies et al. 2019; Vigeesh and Roth 2020). Their identifications in the solar atmosphere have, however, been in a challenging task since they become evanescent in the convection zone and their amplitudes at the surface are exceedingly small (Schunker et al. 2018; Calchetti et al. 2021). The internal gravity waves can potentially carry a large amount of energy flux (of \(\approx 5\) kW/m\(^2\); Straus et al. 2008) to the chromosphere, thus contributing to its radiative losses (Vigeesh et al. 2021). Fortunately, unlike g-modes, f-modes (i.e., surface gravity modes) have been detected in abundance and have provided valuable diagnostic information about flows and magnetic field in the near surface region (Ghosh et al. 1995; Rosenthal and Christensen-Dalsgaard 1995; Christensen-Dalsgaard et al. 1996). Furthermore, f-modes have been exploited to quantify the Sun's effective seismic (or acoustic) radius (Schou et al. 1997; Antia et al. 2000; Dziembowski et al. 2001; Dziembowski and Goode 2005), a relatively new concept driven by results from helioseismology, as opposed to the measuring the Sun's physical (or true) radius. Such studies have shown that f-mode frequencies, as well as being sensitive to the seismic radius, are also modified by changes in the magnetic field during the solar cycle. Global p-modes in the lower solar atmosphere Acoustic waves (i.e., p-modes) can propagate both outside and inside magnetic concentrations. Through a number of studies prior to the turn of the century, properties of their 'global' oscillations (i.e., properties averaged over a relatively large field of view) became a "basic fact", describing the characteristic periodicity of p-modes as 5 min in the solar photosphere (Leighton et al. 1962; Ulrich 1970; Ruiz Cobo et al. 1997; Schunker et al. 2009), and 3 min in the chromosphere (Evans et al. 1963; Orrall 1966; Cram 1978; Fleck and Schmitz 1991; Rutten and Uitenbroek 1991; Carlsson and Stein 1992). Standing acoustic waves have also been reported from multi-line observations in the solar chromosphere (Fleck et al. 1994a), though wave patterns (and power spectra) were found to be somewhat different in He i 1080 Å observations (Fleck et al. 1994b, 1995), compared to those in other chromospheric diagnostics (e.g., Ca ii H & K, Ca ii 8542 Å, and H\(\alpha \)). While the global p-modes are more purely acoustic in nature in the photosphere, they are more likely to manifest as magnetoacoustic waves in the upper atmosphere, where the magnetic forces dominate (Khomenko and Calvo Santamaria 2013). Many of the observations demonstrating the characteristic periodicities of the global p-modes have been based on wide-band imaging at low-spatial resolutions, with very large fields of view. A recent study by Fleck et al. (2021) highlighted the presence of ubiquitous 3-min characteristic periodicities by exploring several advanced state-of-the-art numerical models. Even so, considerable differences between the various simulations were also reported, including the height dependence of wave power, in particular for high-frequency waves, varying by up to two orders of magnitude between the models (Fleck et al. 2021). Thus, although the numerical simulations provide us with important information regarding the physical processes embedded within observational data, they should be interpreted with caution since the numerical domains are too small to resolve the true physics driving large-scale global eigenmodes. Development of modern instruments in recent years, resulting in relatively narrow-band (often spectrally-resolved) observations at high resolution, have further explored the highly dynamic nature of the lower solar atmosphere. These novel observations reveal that the physical properties and structure of the lower solar atmosphere may significantly vary over different solar regions (with different levels of magnetic flux and/or topology), as well as through different atmospheric layers. Therefore, chromospheric wide-band filtergrams, that often integrate over a significant portion of strong chromospheric lines (hence, sampling across a large range of heights), may result in mixing (or averaging) of observable information (e.g., the oscillatory power), which can largely vary within a short distance in the lower solar atmosphere. A large variation in the height of formation can also cause a strong temporal modulation that may consequently destroy the oscillatory signal. Furthermore, the effect of spatial resolution can be crucial, as information may be lost in lower resolution observations due to, e.g., smearing (see Sect. 2.8 for more discussion related to resolution effects). Moreover, an average power spectrum over a very large field of view can predominantly be dominated by characteristics of quiet-Sun regions (which cover the majority of the solar surface at any given time). An example of the influence of spatial resolution is the larger (total) energy flux of acoustic waves (larger by a factor of \(\approx 2\)) found in a quiet-Sun region by the 1 m Sunrise telescope (Bello González et al. 2010b), compared to that from the 0.7m VTT telescope (Bello González et al. 2009). However, the effect of seeing-free observations with Sunrise could also play a role in that difference, highlighting again the importance of spatial resolution (as discussed in Sect. 2.8). Such variations in atmospheric seeing (that directly affect the spatial resolution achievable) can influence the measured periodicities, in particular the global p-modes that are ubiquitously visible across the photosphere and chromosphere. In the presence of strong magnetic fields (e.g., in network or plage regions, where a group of concentrated small-scale magnetic features reside), the global acoustic power is enhanced at photospheric and low chromospheric heights (known as a 'power halo'; Brown et al. 1992; Kontogiannis et al. 2010; Rajaguru et al. 2013), while it is suppressed in the high chromosphere (so-called 'magnetic shadows'; Leighton et al. 1962; Title et al. 1992; McIntosh and Judge 2001). While the exact mechanisms behind such power variation are not yet fully understood, a number of suggestions have been provided in recent years, from both observations and simulations. In particular, models have shown that the power enhancement at lower heights can be due to the reflection of fast waves at the magnetic canopy, as a result of a large Alfvén speed gradient (Khomenko and Cally 2012; Rijs et al. 2016). From observations, both magnetic-field strength and inclination have been found to play an important role, with greater power in the stronger and more horizontal fields (Schunker and Braun 2011; Rajaguru et al. 2019). The power suppression of the acoustic waves in the high chromosphere has been suggested to be due to the mode conversion at the plasma-\(\beta \approx 1\) level (i.e., as a result of interactions between p-mode oscillations and the embedded magnetic fields; Moretti et al. 2007; Nutto et al. 2012a), less efficient wave propagation under the canopy, or the wave-energy dissipation before it reaches the canopy (Ulmschneider 1971a; Ulmschneider et al. 2005; Song 2017; Martínez-Sykora et al. 2020; Srivastava et al. 2021). The power suppression and its spatial scale has found to be directly correlated with the magnetic-field strength and/or geometric height (Chitta et al. 2012a; Jain et al. 2014; Krishna Prasad et al. 2016). From MHD simulations with the Bifrost code (Gudiksen et al. 2011), Heggland et al. (2011) showed that field inclination plays an important role in propagation of long-period waves (longer than 3 min) in the solar chromosphere. As such, they primarily found 3-min periodicities in regions with weak or vertical magnetic fields (including the center of strong flux tubes), whereas 5-min dominant waves in strong or inclined magnetic fields (such as the edges of flux tubes). Power suppression of 3 min oscillations in the upper solar chromosphere has been reported by Samanta et al. (2016), where almost no oscillatory power at this period was observed in time series of H\(\alpha \) line-core intensity images from SST/CRISP. The authors, however, found a slightly larger number of pixels demonstrating 3 min oscillations in Doppler velocity signatures of the same spectral line. In addition, they found power halos at lower atmospheric heights. In this study, the presence of ubiquitous chromospheric transient events (i.e., short-lived fibrillar structures) was speculated to be responsible for the power enhancements at lower heights. In addition, it was speculated that mode conversion was causing the magnetic shadows found around the 3 min periodicity in the upper chromosphere. Figure 42 illustrates the multi-height observations studied by Samanta et al. (2016) (on the left) along with their corresponding distribution of dominant periods of the oscillations (on the right), representing periods corresponding to the maximum power at each pixel. The lack of 3-min oscillations (i.e., the green color) on the top layer is evident. Using high-resolution H\(\alpha \) line-core observations with SST, De Pontieu et al. (2007a) had found longer periods in regions where the field is supposedly more inclined. From spatial distribution of dominant periods (from a wavelet analysis) they showed that while sunspots and plage regions were dominated by 3-min global p-modes, 5-min and longer periodicities were found in adjacent to the dense plage regions and in more inclined-field areas, respectively. We should, however, note that such dominant-period maps demonstrated by Samanta et al. (2016) and De Pontieu et al. (2007a) should be interpreted with great caution, since multiple peaks with comparable (or even equal) power may co-exist in a power spectrum. As such, the period associated to one absolute maximum of the power may not solely be representative of the oscillations in that pixel. In addition, we should note that the global wavelet spectrum is often considered a biased estimation of the true Fourier spectrum, with variable frequency resolution through the entire spectrum, that can also depend on the choice of wavelet function (see Sect. 2.4 for more details). Images reproduced with permission from Samanta et al. (2016), copyright by AAS Multi-layer observations of a quiet-Sun region from the low photosphere to the high chromosphere (left) whose dominant oscillatory periods are shown on the right. The green, red, and yellow colors in the dominant-period maps roughly represent periods around 3, 5, and 7 min, respectively. The bottom panels illustrate the corresponding line-of-sight magnetogram, from Stokes inversions of Fe i 630.2 nm spectral line. A recent investigation of such global oscillations (in brightness temperature) from millimeter observations with ALMA also revealed the lack of 3-min oscillations in the solar chromosphere in datasets with relatively large amounts of magnetic flux (Jafarzadeh et al. 2021). Conversely, the same study showed the presence of dominating 3-min oscillations in the most magnetically quiescent datasets employed. However, due to the uncertain nature of those millimeter observations, particularly, their exact heights of formation, further investigations are required. Furthermore, Norton et al. (2021) reported on global oscillations in the photosphere, from SDO/HMI data, in various regions, namely, the quiet-Sun, plage, umbra and the polarity inversion line of an active region. While the 5-min periodicity, with a considerably large power, was found in all four areas in Doppler velocity perturbations, much smaller power enhancements could be observed in intensity and line-width observations of the quiet and plage regions. Of particular importance is also the effect of magnetic topology in the chromosphere, with the multi-layer magnetic canopy whose strength and thickness depends on, e.g., the magnetic flux involved in their generation (Jafarzadeh et al. 2017a). By exploring the formation and properties of various chromospheric diagnostics, Rutten (2017) showed that the dense canopies of long opaque fibrils in the upper chromosphere, seen in H\(\alpha \) line-core intensity images, could act as an 'umbrella', obscuring the dynamics underneath. Thus, this could perhaps explain the lack of 3-min oscillations in the high chromosphere (in addition to the magnetic shadows effect). In the case of ALMA observations, Rutten (2017) speculated that the same phenomena could also occur, though at those wavelengths the dense fibrillar structures might not be visible due to their reduced lateral contrast (i.e., an insensitivity to Doppler shifts; ALMA observes continuum emission, and as such cannot be used to derive Doppler velocities) We note that similarities between ALMA observations (at 3 mm) and H\(\alpha \) line-width images have been shown by Molnar et al. (2019). All in all, it is important to investigate the variation of the global p-modes with height, throughout the lower solar atmosphere, in greater details. This can hopefully clarify whether the characteristic periodicity reported in previous studies is constant through the photosphere and the chromosphere, or whether they vary with height and/or in various solar regions. Large-scale magnetic structures Large magnetic structures, in the form of sunspots and solar pores, are considered ideal laboratories for the study of the excitation and propagation of MHD waves. Modern high resolution observations have revealed an extremely complex physical scenario in which different wave modes simultaneously co-exist in the same magnetic structure, hampering an unambiguous identification of individual wave modes. This is even more the case for highly structured magnetic fields, where the wave propagation reflects the geometrical complexity of the field lines acting as waveguides. However, in recent years our understanding of MHD waves in large magnetic structures, and their corresponding role in the heating of the solar atmosphere, has dramatically changed thanks to the opportunity offered by high-resolution fast-cadence tomographic imaging and to the new spectropolarimetric diagnostic capabilities, which have progressively extended up to chromospheric heights thanks to the technological advances of modern instrumentation. In particular, the inference of the plasma and magnetic field parameters obtained by spectropolarimetric inversion techniques have enabled the investigation of the effects of the magnetic field geometry on the wave propagation itself. In addition, new spectropolarimetric diagnostics have started to provide additional information about the magnetic field fluctuations, which are expected from several MHD wave modes. The first oscillatory phenomena in the umbra of sunspots were observed by Beckers and Tallant (1969) and Beckers and Schultz (1972), where the spatially localized brightenings (so-called 'umbral flashes') were immediately associated with locally excited magnetoacoustic waves propagating upwards along the field lines. Today, after more than 50 years from the first discovery of these oscillations, our view of wave excitation and propagation of MHD waves in large magnetic structures has changed dramatically. From the observational point of view, in addition to the localized wave phenomena and disturbances in sunspots and pores (e.g., umbral flashes), the aforementioned instrumental advances have also allowed the identification of global eigenmodes of the magnetic structure (e.g., sausage modes, kink modes). These are generally mixed with other local disturbances, requiring specific filtering techniques (e.g., \(k{-}\omega \) filtering; see Sect. 2.3) for their identification. Although most of the literature on the subject mainly reflects this apparent dichotomy with the two classes of waves (i.e., global and local oscillations) addressed independently, most recent observations have started suggesting a superposition of locally excited magnetoacoustic waves resulting, for example, from p-mode absorption or residual local magneto-convection (Krishna Prasad et al. 2015), and global resonances of the magnetic structure. These two components can coexist, both of which contribute to the physical complexity of the observed velocity patterns; an aspect that was highlighted by Roberts (2019). In the following we wish to strike a balance between local disturbances and global resonances, and we will summarize the results from the most relevant studies in recent literature. Magnetoacoustic waves in large-scale magnetic structures Sunspots and other large magnetic structures, such as solar pores, typically display intensity and Doppler velocity power spectra that are dominated by 5-min (\(\sim 3\) mHz) oscillations in the photosphere, and 3-min (\(\sim 5\) mHz) oscillations in the chromosphere (see for instance, Centeno et al. 2006b, 2009; Felipe et al. 2010; Felipe 2020; Felipe and Sangeetha 2020, and references therein). Of course, it must be noted that the frequencies/periodicities found at photospheric and chromospheric heights are not universal values at precisely 3 mHz and 5 mHz, respectively. Indeed, windows of power are normally referred to when discussing the corresponding Fourier spectra (Centeno et al. 2006b; Heggland et al. 2011; Gupta et al. 2013; Khomenko and Collados 2015), for example, \(5\pm 0.5\) min (3.0–3.7 mHz) and \(3\pm 0.5\) min (4.8–6.7 mHz) for the photosphere and chromosphere, respectively. These spectral features are depicted in Fig. 43, which clearly shows the frequency transition of peak power between the photospheric and chromospheric layers of two sunspots. While some authors have interpreted this to be the combined action of an acoustic cut-off (\(\omega _{c} \sim 5.3\) mHz, allowing the upward propagation of magnetoacoustic waves with \(\omega > \omega _{c}\); Deubner and Gough 1984; Duvall et al.
CommonCrawl
Characterization of dynamic wireless body area network channels during walking Marshed Mohamed ORCID: orcid.org/0000-0003-0203-06361, Wout Joseph2, Günter Vermeeren2, Emmeric Tanghe2 & Michael Cheffena1 In this work, finite-difference time-domain was used for the investigation of dynamic wireless body area network channel characteristics during walking, thus accounting for dynamic aspects and body postures. This involves the study of on-body, off-body, and body-to-body communication in an empty environment, at the center frequency of 2.45 GHz. The channels were investigated in terms of fade variation and their corresponding amplitude distributions. For on-body channels, the fade variation was found to be periodic, with larger fade variations for the channels involving the nodes at the hand and thigh. For off-body and body-to-body channels, channels with the absence of line of sight experienced constructive and destructive interference as the distance between the end nodes changes, resulting in larger fade variations. For the amplitude distribution of the channels, a multivariate normal distribution was considered. The distribution has the capability of modeling channels jointly which makes it easier for network analysis and was considered because of the significant correlation between the channels. The resulting estimated multivariate distributions fit well with the simulated data, for on-body, off-body, and body-to-body channels. In recent years, there has been substantial research on wireless body area networks (WBANs) due to their potential applications in areas involving monitoring and transmission of human physiological data, such as health monitoring and sports activities. The communication could involve the transmission between nodes mounted on the human body realizing an on-body network, between an on-body network and a node away from the human body (external node) acting as an access point realizing an off-body network [1], and between two on-body networks realizing a body-to-body network. These kinds of networks require a low-power communication approach due to their need for long battery life and the close proximity of body surface nodes to human tissue. In addition to this challenge, the networks are subjected to varying signal shadowing caused by relative human body orientation between the communicating nodes which vary with the human body movement. These challenges make a study and modeling of WBAN channel characteristics important [2]. Most of the existing studies on on-body, off-body, and body-to-body channels are based on measurements [3–5]. For on-body channels, measurements were conducted in [3] at four locations using seven receivers and a transmitter attached on the body. Autocorrelation and cross-correlation functions were reported, and the data collected were used to develop models for fading in various on-body links. Similar studies were conducted in [6] in the frequency range of 3–5 GHz and in [7] for ultrawideband 3.1–10.6 GHz. In [8], the effect of antenna polarization with respect to the body surface was examined, while in [9], the comparison for three locations of the hub on the human was conducted. For off-body channels, path loss was modeled in [4] from the measurement conducted in an anechoic chamber, while in [2], the measurements were conducted in an indoor environment. The lognormal distribution proved to be a good fit in describing normalized signal amplitude in both. As in [8], the impact of antenna polarization on off-body channel characteristics was studied in [10], and the importance of off-body diversity gain for various offbody channels was investigated in [11]. Further in [12], measurement conducted using multiple-input-multiple-output antennas showed improvement on the reliability of the off-body channels. A methodology for determining the optimal positions of these antennas independent of the frequency was presented in [13]. For body-to-body channels, a study in [5] investigated the impact of users' body rotation, tilting, walking in the line of sight (LOS), and non-line-of-sight conditions on the outdoor body-to-body channels. A similar study in an indoor environment was conducted in [14, 15] and at the ultrawideband frequency of 2–8 GHz in [16]. The studies highlighted how the movement affected the channel dynamic properties. This was confirmed by the study in [17] which was conducted in both indoor and outdoor environments to obtain the mean path loss and the standard deviation for different body motion scenarios and antenna placements. As in [12], a diversity combining scheme was investigated in [18] and has shown some promising results. Since the aforementioned models are applicable for a particular measurement setup, other researchers have tried to use physical models instead [19, 20]. These models are based on calculation of time-varying shadowing effects of human body movement and the addition of multipath component using statistical distribution depending on the location. Another approach is the use of numerical simulations such as finite-difference time-domain (FDTD) [21–24]. In [24], a walking phantom created by animation software is used for simulating the time-varying on-body communication channel in which mean path loss was investigated. A similar study was conducted in [23] to represent in addition the delay properties of the channels, and a study focusing on ultrawideband was conducted in [22]. The studies which apply FDTD have so far been limited to static on-body network channels, using a homogeneous phantom [21, 24], with low time resolution [22]. This does not cover the dynamic behavior from the off-body and body-to-body channels, which require a high time resolution, and the use of heterogeneous phantom to be more realistic. The novelties of this study are as follows: The on-body, off-body, and body-to-body dynamic channels are investigated together in the same conditions so that a more complete picture of the overall network can be observed and compared. The study makes use of heterogeneous phantom and uses time resolution of 50 frames per second to increase the accuracy of the data obtained. The channel gain is separated into path loss and antenna gain; this cannot be achieved through measurements since the body is within the near field of the antenna. The study investigates the correlation between the channels and the application of multivariate normal distributions in the modeling of WBAN channels. This could enable the modeling of a network of multiple channels jointly, instead of using separate models for separate channels The rest of this paper is organized as follows: Section 2 describes the methodology used in configuration and data analysis, Section 3 discusses the obtained FDTD simulation results, and Section 4 concludes the paper. This section describes the scenarios used to analyze different aspects of on-body, off-body, and body-to-body communication channels. The surrounding environment was not included in the simulations, so that the effect of the body on the radio channels can be obtained independent to the other factors. It also describes the method to analyze the results. Employing FDTD, a heterogeneous human model of height 1.77 m and weight 70.2 kg (Duke Model) [25] was used to represent a male adult human body. A half wavelength antenna (dipole) with a resonance frequency of 2.45 GHz was used as a radiating element for a single tone signal at the same frequency. This frequency was chosen to fit with the Wi-Fi technology that is often present in the indoor environment. The antenna was positioned 5 mm away parallel to the body to reduce the effect of the body on the radiation pattern of the antenna [26]. This separation is small enough for realistic applications. The antenna type and configuration were the same for all simulation scenarios. We simulated 50 frames of Duke walking 1 m per 1 s (velocity 1 m/s), in order to grasp not only the slow fading caused by shadowing but also the fast fading effects caused by body reflection, diffraction, and scattering involved in these dynamic channels [27]. The orientation of the body parts in these frames was estimated using the Thalmann model [28] and was applied to the phantom using Poser software [29]. For each frame, software capable of conducting FDTD calculations (Sim4Life) was set to calculate for each antenna, radiation pattern, and the gain at their current location. Due to the close proximity of the antenna to the human body, the radiation pattern and hence the gain are not similar to those of free space. Further, the software was set to calculate S21 parameter between all the nodes available on a given scenario [30]. During calculation, a grid of 2 mm was used on the phantom which is equivalent to λ/60 at the 2.45-GHz center frequency. This makes the limitations imposed by discretization of the round surface of the phantom negligible. Three scenarios were considered as shown in Fig. 1. Scenario 1 (on-body), where the subject walks with on-body nodes attached (Fig. 1a) The investigated scenarios. a Location of on-body nodes on the human model. In scenario 1, a subject walks with on-body nodes attached. b Scenario 2 (off-body). A subject walking with on-body nodes attached, towards the off-body node at a velocity of 1 m/s. c Scenario 3 (body-to-body). Two subjects with on-body nodes attached, walking at a velocity of 1 m/s towards each other. The ear and hand nodes are not visible on the subject on the right Scenario 2 (off-body), where the subject walks towards an off-body node (Fig. 1b) Scenario 3 (body-to-body), where two subjects walk towards each other (Fig. 1c) The same wearable antenna placements (Fig. 1a) were considered in all three simulated scenarios. Table 1 summarizes the parameters used in the FDTD simulations of the given scenarios. Table 1 Simulation parameters for on-body, off-body, and body-to-body scenarios For the on-body scenario, on-body nodes were placed at five different locations on the human body (Fig. 1a). The node positions were chosen with regard to possible medical and non-medical applications. For example, the node on the ear emulates hearing aid instrument, the node on the thigh represents a possible location for a smartphone, the hand represents a fitness tracker, etc. [31]. In addition to this, the node at the back is added as a possible space diversity node for off-body communication where human body shadowing could be significant [18]. This represents the on-body part of the network. For the off-body scenario, an off-body node located at a height of 2.65 m was considered in addition to the on-body nodes (Fig. 1b). The location of the off-body node at such height represents Wi-Fi routers, commonly placed on the ceiling in an indoor environment. The simulation starts at a horizontal distance of 2 m between the subject and the off-body node to capture the effect of being in close proximity to such an elevated antenna. For the body-to-body scenario, two identical subjects (Duke) with identical on-body nodes were used for the simulations (Fig. 1c). The subjects were set to walk towards each other, starting from a distance of 3 to 1 m representing one walking circle for each. The study in this close proximity is important not only for typical body-to-body network applications [15], but also for the study of inter-on-body network interference [32] and inter-on-body network cooperation [33]. A total of 30 S21 parameters were obtained from the simulations, in which 10 were for on-body channels, 5 for off-body, and 15 for body-to-body channels. The parameters were then used to give insight on channel fading variations with time and distance. The parameters were also used to analyze the correlation of different channels and the use of multivariate normal distribution in describing the channel amplitudes. Fading variation The amount of fading a specific WBAN channel experiences depends on the location of its two end nodes on the human body, and the fading variation depends on relative movements of the body parts on which these nodes are located. The close proximity of the antennas to the body affects the overall radiation of the signal [26], and the movements of body parts may also lead to a periodic change of the direction of maximum radiation and hence the gain in the direction of communication [34]. In addition, radio wave propagation is significantly attenuated by human body tissue, and hence, penetration through the body is negligible. The movement of body parts may also cause periodic shadowing to the channel. The S21 parameter can be expressed in terms of antenna gain, and the propagation losses as $$ S_{21}\text{[dB]}= P_{L}\text{[dB]} + G_{TX}\text{[dBi]} + G_{RX}\text{[dBi]} $$ where PL is the propagation loss and GTX and GRX are the transmitting and receiving antenna gains in the direction of communication, taking into consideration the proximity to the human body. With no cables involved in the simulation, S21 parameter can be interpreted as the overall channel gain as it is the ratio between the received signal at the receiver port and the transmitted signal at the transmitter port. Amplitude distribution One of the approaches of modeling potential correlated channels is by using multivariate normal distribution given by [35] $$ {\begin{aligned} f(x,\mathbf{\mu},\mathbf{\Sigma}) = \frac{1}{\sqrt{|\mathbf{\Sigma}|(2\pi)^{N}}}\exp{\left\{-\frac{1}{2}(x-\mathbf{\mu})\mathbf{\Sigma}^{-1}(x-\mathbf{\mu})'\right\}} \end{aligned}} $$ where N is the number of channels being modeled together, μ is 1×N matrix containing the channels' means, and Σ is the N×N covariance matrix. Multivariate normal distributions are relatively easy to work with especially in a network with multiple channels. However, this is only applicable when the channels to be considered show a significant correlation which is indicated by the large correlation coefficient and can be confirmed with low p values (< 0.05). Pearson's method was used to find the linear correlation coefficient between the channels, and their corresponding p values were calculated using Student's t distribution [36]. Further, the estimated distribution was compared with the simulated data using two-sample Kolmogorov-Smirnov test (KS test) at 0.05 significant level to quantify the goodness of the fit [37]. Results and discussions In this section, the results obtained from the simulations described in Section 2 are presented. The results for each scenario are presented separately, and when appropriate, they are compared. The channels are described by their antenna location, for example, a channel between a ceiling node and a hand node is called ceiling-to-hand. On-body channels Fade variation The results of the fade variation for the considered on-body channels are summarized in Fig. 2 and Table 2. As expected, the time series (Fig. 2) show a larger variation for the channels involving the nodes located at the hand and the thigh, than for the other channels. This is related to variation in shadowing, as these body parts move from one location to the other during walking. The largest variation of channel gain is obtained on the chest-to-hand channel (7.11 dB), as the hand moves from the shadowed region to the LOS. The variations are contributed by both, propagation loss and antenna gain. The channel that is least effected by movement of body parts during walking is the chest-to-back channel as confirmed by the lowest standard deviation (STD) of 0.31 dB. The largest average channel gain is obtained for the hand-to-thigh channel (− 49.3 dB). The channel has the shortest distance between the end nodes and least body shadowing as confirmed by the propagation loss (− 15.89 dB). The smallest average channel gain is obtained for the back-to-thigh channel (− 96.77 dB) contributed by small propagation loss and antenna gain. Time series of simulated data for on-body channels. a Hand channels. b Channels which do not involve the hand or thigh antenna Table 2 Average channel gain and standard deviation of on-body channels To confirm the use of a multivariate normal distribution in the estimation of the amplitude distribution, the channels have to be tested for correlation significance. Figure 3a shows the p values of the considered on-body channels, with 80% of the channels having p<0.05, indicating a significant correlation between them. Since the channels show significant correlations, a multivariate normal distribution given by (2) can be used to estimate the amplitude distribution. The estimated covariance matrix is given in Table 3. To evaluate the goodness of the fit, a quantile-quantile plot of the estimated multivariate normal distribution of all 10 on-body channels against the simulation results is shown in Fig. 4a. The plot approximately coincides with the identity line, meaning that the estimated distribution is a good fit of the simulated data. The KS test confirms that the two datasets are from the same distribution with asymptotic p=0.2178 and test statistic ks2stat=0.0468. Correlation significance test. Dark blue indicates significant correlation with p<0.05. a On-body channels. b Off-body channels. c Body-to-body channels Quantile-quantile plot of estimated multivariate normal distribution and the simulated data. a On-body channels. b Off-body channels. c Body-to-body channels Table 3 Covariance matrix Σ for on-body channels (× 10−9) Off-body channels The results of the fade variation for the considered off-body channels are summarized in Fig. 5 and Table 4. The results show overall stable channels with exception from the ceiling-to-back channel (Fig. 5a). This is because in the absence of LOS, the resulting power received is a summation of reflected and diffracted fields, resulting in constructive and destructive interference as the subjects move from one location to another. We also notice the overall increase in the channel gain with time as back node moves out of the shadow region as the subject gets closer to the off-body node. This is not the case for the other channels as they are affected by the change in elevation angle and hence the antenna gain in the direction of communication. There is, however, an overall decrease in propagation loss for these channels (ceiling-to-chest, ceiling-to-thigh, ceiling-to-ear) as the subject gets closer to the off-body node as shown in Fig. 5b. Time series of simulated data for off-body channels. a Overall channel gain. b Propagation gain Table 4 Average channel gain and standard deviation of off-body channels Similar to the on-body channels, the off-body channels were tested for correlation significance so that a multivariate normal distribution could be used to estimate the amplitude distribution of the channels. Figure 3b shows the p values of the considered off-body channels, with 60% of the channels having p<0.05, indicating a significant correlation between them. The resulting estimated multivariate distribution's quantile-quantile plot with estimated covariance matrix given Table 5 shows a good fit with simulated data (Fig. 4b). The KS test confirms that the two datasets are from the same distribution with asymptotic p=0.5982 and test statistic ks2stat=0.0481. Table 5 Covariance matrix Σ for off-body channels (× 10−9) Body-to-body channels The results of the fade variation for the considered body-to-body channels are summarized in Table 6. Similar to the off-body channels, the body-to-body channels involving the node located at the back have shown the largest fade variations due to the absence of LOS as indicated by their standard deviation. The channels also have the lowest average channel gain for the same reason. Figure 6 shows the time series of the channels involving the nodes located at the back. The effect of constructive and destructive interference can be seen clearly especially for the chest-to-back and back-to-back channels. On the other hand, the channels which have constant LOS (chest-to-chest, chest-to-thigh, and thigh-to-thigh) have the largest average channel gain. Time series of simulated data for body-to-body channels Table 6 Average channel gain and standard deviation of body-to-body channels The p values of the considered body-to-body channels are shown in Fig. 3c. The results show that 68% of the channels have p<0.05, indicating a significant correlation between the channels; hence, multivariate normal distribution could be used in the estimation of the amplitude distribution of the channels. The resulting estimated multivariate normal distribution's quantile-quantile plot with covariance matrix given in Table 7 is shown in Fig. 4c. The plot approximately coincides with the identity line, meaning the distribution is a good fit of the simulated data. The KS test confirms that the two datasets are from the same distribution with asymptotic p=0.9037 and test statistic ks2stat=0.0206 Table 7 Covariance matrix Σ for body-to-body channels (× 10−9) Comparison of channels The fade variations for on-body channels have shown to depend on the periodic movement of the body part in which the channel nodes are attached. Channels such as chest-to-hand had the largest variation due to the movement of the hand from the shadowed region to the LOS, while chest-to-back channel (on-body) has the least. This is not the case, for off-body and body-to-body channels in which the largest fade variations are shown by those channels with the absence of LOS instead. The ceiling-to-back and chest-to-back channels have the largest fade variations for off-body and body-to-body scenarios respectively. This is because these channels are dominated by constructive and destructive interference between the reflected and diffracted fields, which vary significantly with distance. For the case of the average channel gain, larger values have been seen in channels with LOS and relatively short distance for on-body (hand-to-thigh), off-body (ceiling-to-chest), and body-to-body (chest-to-thigh). The channels for all scenarios showed a significant correlation with p<0.05 for 80% of on-body channels, 60% of off-body channels, and 68% of body-to-body channels. This justified the use of multivariate normal distribution in the estimation of the amplitude distribution of the channels. The resulting estimate of amplitude distribution showed a good fit with the simulated data with quantile-quantile plots coinciding with the identity line for all the scenarios. The antenna placements favorable for off-body and body-to-body communications are those that have the least fading and least fade variations. The chest node appears to be more favorable for the off-body communication due to its low and stable fade values (Fig. 5a). However, this is only applicable in the LOS situation when the subject is facing the off-body node. When the subject orientation changes as in the case of a subject moving away from the off-body node, the back node could be the better option. The best practice will then be to alternate between the chest and back node depending on the subject orientation. This can be confirmed by the negative linear correlation between the two channels as shown in Fig. 7a. A similar approach can be taken to exploit spatial diversity from other channels with negative linear correlation in Fig. 7. For the on-body communications, we also notice that the channels involving the back node experience the most fading (Table 2). This implies that in a star topology network, the back node is not a feasible option as the center node. Pearson correlation coefficient. a On-body channels. b Off-body channels. c Body-to-body channels In this work, FDTD was used in the investigation of dynamic WBAN channel characteristics during walking. More specifically, the channels considered were 10 on-body, 5 off-body, and 15 body-to-body channels. The channels were investigated in terms of fade variation and their corresponding amplitude distributions. For on-body channels, the fade variation was found to be periodic, with a larger fade variation shown by the channels involving the nodes at the hand and thigh. Off-body and body-to-body channels with the absence of LOS experienced constructive and destructive interference as the distance between the end node changed. This resulted in larger fade variation than the channels involving the nodes at the hand and thigh. For the amplitude distribution of the channels, the multivariate normal distribution was considered. The distribution was considered after the channels show significant correlation between each other. The resulting estimated multivariate distributions fit well the simulated data, for on-body, off-body, and body-to-body channels. This will reduce the complexity of performance analysis of the network. FDTD: Finite-difference time-domain LOS: STD: WBAN: Wireless body area network Q. H. Abbasi, M. U. Rehman, K. Qaraqe, A. Alomainy, Advances in Body-Centric Wireless Communication: Applications and State-of-the-Art (Institution of Engineering and Technology, Stevenage, 2016). D. B. Smith, D. Miniutti, L. W. Hanlen, Characterization of the body-area propagation channel for monitoring a subject sleeping. IEEE Trans. Antennas Propag.59(11), 4388–4392 (2011). S. L. Cotton, G. A. Conway, W. G. Scanlon, A time-domain approach to the analysis and modeling of on-body propagation characteristics using synchronized measurements at 2.45 GHz. IEEE Trans. Antennas Propag.57(4), 943–955 (2009). S. Cotton, A. McKernan, A. Ali, W. Scanlon, in Proceedings of the 5th European Conference on Antennas and Propagation (EUCAP). An experimental study on the impact of human body shadowing in off-body communications channels at 2.45 GHz (IEEE, 2011), pp. 3133–3137. S. L. Cotton, A. McKernan, W. G. Scanlon, in Proceedings of Loughborough Antennas and Propagation Conference (LAPC). Received signal characteristics of outdoor body-to-body communications channels at 2.45 GHz (IEEE, 2011), pp. 1–4. R. D'Errico, L. Ouvry, A statistical model for on-body dynamic channels. Int. J. Wirel. Inf. Netw.17(3-4), 92–104 (2010). D. Goswami, K. Sarma, A. Mahanta, in Proceedings of IEEE International Conference on Signal Processing, Informatics, Communication and Energy Systems (SPICES). Experimental determination of path loss and delay dispersion parameters for on-body UWB WBAN channel (IEEE, 2015), pp. 1–4. R. Rosini, R. D'Errico, in Proceedings of Loughborough Antennas and Propagation Conference (LAPC). Comparing on-body dynamic channels for two antenna designs (IEEE, 2012), pp. 1–4. V. Sipal, D. Gaetano, P. McEvoy, M. J. Ammann, Impact of hub location on the performance of wireless body area networks for fitness applications. IEEE Antennas Wirel. Propag. Lett.14:, 1522–1525 (2015). R. Rosini, R. D'Errico, in Proceedings of 6th European Conference on Antennas and Propagation (EUCAP). Off-body channel modelling at 2.45 GHz for two different antennas (IEEE, 2012), pp. 3378–3382. Q. H. Abbasi, M. U. Rehman, H. T. Chattha, K. Qaraqe, A. Alomainy, Y. Hao, C. Parini, Ultra wideband antenna diversity characterisation for off-body communications in an indoor environment. IET Microwaves Antennas Propag.8(14), 1161–1169 (2014). P. Van Torre, L. Vallozzi, C. Hertleer, H. Rogier, M. Moeneclaey, J. Verhaevert, Indoor off-body wireless MIMO communication with dual polarized textile antennas. IEEE Trans. Antennas Propag.59(2), 631–642 (2011). M. Marinova, A. Thielens, E. Tanghe, L. Vallozzi, G. Vermeeren, W. Joseph, H. Rogier, L. Martens, Diversity performance of off-body MB-OFDM UWB-MIMO. IEEE Trans. Antennas Propag.63(7), 3187–3197 (2015). R. Rosini, R. Verdone, R. D'Errico, Body-to-body indoor channel modeling at 2.45 GHz. IEEE Trans. Antennas Propag.62(11), 5807–5819 (2014). F. Mani, R. D'Errico, A spatially aware channel model for body-to-body communications. IEEE Trans. Antennas Propag.64(8), 3611–3618 (2016). T. Kumpuniemi, M. Hämäläinen, K. Y. Yazdandoost, J. Iinatti, in Proceedings of 9th European Conference on Antennas and Propagation (EuCAP). Measurements for body-to-body UWB WBAN radio channels (IEEE, 2015), pp. 1–5. S. J. Ambroziak, L. M. Correia, K. Turbic, in Proceedings of URSI Asia-Pacific Radio Science Conference (URSI AP-RASC). Radio channel measurements in body-to-body communications in different scenarios (IEEE, 2016), pp. 1376–1379. S. L. Cotton, W. G. Scanlon, A. McKernan, in Proceedings of 6th European Conference on Antennas and Propagation (EUCAP). Improving signal reliability in outdoor body-to-body communications using front and back positioned antenna diversity (IEEE, 2012), pp. 3393–3396. L. Liu, R. D'Errico, L. Ouvry, P. De Doncker, C. Oestges, Dynamic channel modeling at 2.4 GHz for on-body area networks. Adv. Electron. Telecommun.2(4), 18–27 (2011). M. Mohamed, M. Cheffena, A. Moldsvor, F. P. Fontan, Physical-statistical channel model for off-body area network. IEEE Antennas Wirel. Propag. Lett.16:, 1516–1519 (2017). T. Aoyagi, M. Kim, J. -i. Takada, K. Hamaguchi, R. Kohno, et al., in Proceedings of the 5th European Conference onAntennas and Propagation (EUCAP). Body motion and channel response of dynamic body area channel (IEEE, 2011), pp. 3138–3142. Q. Wang, T. Tayamachi, I. Kimura, J. Wang, An on-body channel model for UWB body area communications for various postures. IEEE Trans. Antennas Propag.57(4), 991–998 (2009). E. Reusens, W. Joseph, B. Latré, B. Braem, G. Vermeeren, E. Tanghe, L. Martens, I. Moerman, C. Blondia, Characterization of on-body communication channel and energy efficient topology design for wireless body area networks. IEEE Trans. Inf. Technol. Biomed.13(6), 933–945 (2009). M. Gallo, P. S. Hall, Y. I. Nechayev, M. Bozzetti, Use of animation software in simulation of on-body communications channels at 2.45 GHz. IEEE Antennas Wirel. Propag. Lett.7:, 321–324 (2008). I. Foundation, Duke. https://www.itis.ethz.ch/virtual-population/virtual-population/vip3/duke/. Accessed 7 May 2018. L. Roelens, S. Van den Bulcke, W. Joseph, G. Vermeeren, L. Martens, Path loss model for wireless narrowband communication above flat phantom. Electron. Lett.42(1), 10–11 (2006). D. Smith, J. Zhang, L. Hanlen, D. Miniutti, D. Rodda, B. Gilbert, Temporal correlation of dynamic on-body area radio channel. Electron. Lett.45(24), 1212–1213 (2009). P. van Dorp, F. Groen, Human walking estimation with radar. IEE Proceedings-Radar, Sonar Navig.150(5), 356–365 (2003). Modeling Modules Framework: POSER. https://zmt.swiss/sim4life/framework/poser/. Accessed 7 May 2018. ZMT. Z MedTech, Sim4Life 4.0 User Manual. ZMT Zurich MedTech, Zurich, Switzerland (2018). D. B. Smith, D. Miniutti, T. A. Lamahewa, L. W. Hanlen, Propagation models for body-area networks: a survey and new outlook. IEEE Antennas Propag. Mag.55(5), 97–117 (2013). M. Alasti, M. Barbi, K. Sayrafian, in Proceedings of 25th Annual International Symposium on Personal, Indoor, and Mobile Radio Communication (PIMRC). Uncoordinated strategies for inter-ban interference mitigation (IEEE, 2014), pp. 2150–2154. J. Dong, D. Smith, in Proceedings of 23rd International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC). Cooperative body-area-communications: enhancing coexistence without coordination between networks (IEEE, 2012), pp. 2269–2274. M. Mackowiak, L. M. Correia, A statistical model for the influence of body dynamics on the gain pattern of wearable antennas in off-body radio channels. Wirel. Pers. Commun.73(3), 381–399 (2013). P. Mukhopadhyay, Multivariate Statistical Analysis (World Scientific, Singapore, 2009). G. L. Shevlyakov, H. Oja, Robust Correlation: Theory and Applications, vol. 3 (Wiley, Hoboken, 2016). G. W. Corder, D. I. Foreman, Nonparametric Statistics: A Step-by-step Approach (Wiley, Hoboken, 2014). The authors would like to thank the Norwegian University of Science and Technology, Norway, and Ghent University/IMEC INTEC-WAVES for supporting this work. This work was supported by the Ministy of Education and Research, Norway. The dataset(s) supporting the conclusions of this article are available in the figshare repository https://figshare.com/articles/Characterization_of_Dynamic_Wireless_Body_Area_Network_Channels_During_Walking_ zip/6226949. Norwegian University of Science and Technology, Teknologiveien 22, Gjøvik, 2815, Norway Marshed Mohamed & Michael Cheffena Ghent University/IMEC INTEC-WAVES, iGent - Technologiepark 15, Gent, 9052, Belgium Wout Joseph , Günter Vermeeren & Emmeric Tanghe Search for Marshed Mohamed in: Search for Wout Joseph in: Search for Günter Vermeeren in: Search for Emmeric Tanghe in: Search for Michael Cheffena in: MM and WJ identified the channel to be modeled. MM conducted the simulations with GV, analyzed the data with ET, and wrote the manuscript. WJ supported and supervised the research. WJ, GV, ET, and MC proofread the manuscript. All authors read and approved the final manuscript. Correspondence to Marshed Mohamed. Mohamed, M., Joseph, W., Vermeeren, G. et al. Characterization of dynamic wireless body area network channels during walking. J Wireless Com Network 2019, 104 (2019). https://doi.org/10.1186/s13638-019-1415-3 Fading channels On-body communication Off-body communication Body-to-body communication
CommonCrawl
Analysis of shock wave unsteadiness using space and time correlations applied to shadowgraph flow visualization data S. Marko1 & P. M. Ligrani1 Advances in Aerodynamics volume 1, Article number: 2 (2019) Cite this article Unsteady flow characteristics of a normal shock wave, a lambda foot, and a separated turbulent boundary layer are investigated within a unique test section with supersonic inlet flow. The supersonic wind tunnel facility, containing this test section, provides a Mach number of approximately 1.54 at the test section entrance. Digitized shadowgraph flow visualization data are employed to visualize shock wave structure within the test section. These data are analyzed to determine shock wave unsteadiness characteristics, including grayscale spectral energy variations with frequency, as well as time and space correlations, which give coherence and time lag properties associated with perturbations associated with different flow regions. Results illustrate the complexity and unsteadiness of shock-wave-boundary-layer-interactions, including event frequencies from grayscale spectral energy distributions determined using a Lagrangian approach applied to shock wave location, and by grayscale spectral energy distributions determined using ensemble-averaging applied to multiple closely-located stationary pixel locations. Auto-correlation function results and two-point correlation functions (in the form of magnitude squared coherence) quantify the time-scales of periodic events, as well as the coherence of flow perturbations associated with different locations, over a range of frequencies. Associated time lag data provide information on the originating location of perturbation events, as well as the propagation direction and event sequence associated with different flow locations. Additional insight into spatial variations of time lag and flow coherence is provided by application of magnitude squared coherence analysis to multiple locations, relative to a single location associated with the normal shock wave. Shock wave boundary layer interactions occur in many aerospace applications. Some examples include ramjet isolator ducts, turbine blade tip gaps, and transonic wings. However, low frequency unsteadiness associated with shock wave boundary layer interactions is not well understood and continues to be a debated topic. This is because different studies concluded different and contradicting results. Some results indicated that shock wave unsteadiness originates from the turbulence in the upstream boundary layer. Other results indicated that a separation region formed as part of the shock wave boundary layer interaction generates the unsteadiness. Still other studies showed that both the upstream and downstream sources are responsible for shock wave unsteadiness. Clemens and Narayanaswamy [1] summarized some of the possibilities. Many methods and analysis approaches were used to collect and analyze associated data. For example, power spectral densities and cross correlations were often used, especially for wall pressure signal data. Many of these investigations also employed Schlieren and shadowgraph imaging to determine unsteady and spatially-varying shock wave structure. Erengil and Dolling [2], Handa et al. [3], Dupont et al. [4], and Bruce and Babinsky [5], among others, measured static pressure variations with time. Spatial pressure variations were then associated with shock wave position and motion. Power spectral densities were computed from time sequence pressure data at specific locations to determine the frequencies of the unsteadiness at various locations in the flow field. Bruce and Babinsky [5] also used schlieren imaging to determine the streamwise location of the shock wave. Gamba [6] used shock wave position with respect to time, in addition to the single location pressure measurements, to determine unsteadiness characteristics. Cross-spectrum and time lag were computed for pressure fluctuations and shock wave positions, such that pressure fluctuations were given relative to the position of the upstream most shock wave. Ganapathisubramani et al. [7] and Humble et al. [8] computed cross correlations between turbulent structures in the upstream boundary layer and the shock wave from particle image velocimetry data. They detected significant coherence between the turbulent structures in the upstream boundary layer and the shock wave motion. Other recent investigations which considered different methods of shock wave data analysis were described by Ganapathisubramani and Clemens [9], Wu and Martin [10], Piponniau et al. [11], Pirozzoli et al. [12], Touber and Sandham [13], and Grilli et al. [14]. Settles [15] also provided useful information in regard to schlieren and shadowgraph visualization techniques. The present investigation differs from many previous studies by providing greater detail of the analysis techniques used, and by using digitized shadowgraph data to visualize shock wave structure. Considered are unsteady flow and structural characteristics of a normal shock wave, a lambda foot, and a separated turbulent boundary layer within a unique research test section. The supersonic wind tunnel facility, containing this test section, provides a Mach number of approximately 1.54 at the test section entrance. The digitized shadowgraph flow visualization data are analyzed to provide grayscale spectral energy variations with frequency, as well as time and space correlations, which give coherence and time lag properties associated with perturbations associated with different flow regions. As such, the current investigation is aimed at improved analysis techniques for shock-wave-boundary-layer-interactions, for better physical understanding of these flows, and for development of improved techniques for control and management of flow unsteadiness associated with particular shock wave arrangements. Experimental apparatus and procedures Supersonic wind tunnel The supersonic wind tunnel (which is also referred to as the SS/TS/WT or SuperSonic/TranSonic/WindTunnel) is a blow down type facility, which is located within the Johnson Research Center on the University of Alabama in Huntsville campus. The low-pressure piping system used in these experiments (which is a portion of the overall piping system) consists of an air compressor, a vertical air supply tank, a series of pressure relief valves, a manual gate valve, a pneumatic flow control valve, a pressure regulating gate valve, and an air diverter plenum with 12 cubic meter volume. This plenum is then followed by the test section segment, an exhaust plenum with a volume of 2 cubic meters, and an exhaust piping system. The volume of the vertical air supply tank is 14 cubic meters. The tank pressure rating is 2.1 MPa. As the facility is prepared for testing, one of two compressors (either a Quincy QR 350, model BM350HPDT compressor with a ZEKS 75NDQCA100 air dryer, or a Bauer BP26-E3 compressor) fills the vertical supply tank with dried air. After the tank is pressurized to the appropriate supply pressure, the compressor is shut down, and the small Ham-Let H700SSL3/4TLD manual ball valve between the compressor and the supply tank is closed. As a blow down test is conducted, air flows through a 152.6 mm diameter pipe, which is located downstream of the vertical supply tank. A NEWCO N36726 manual gate valve, a Truline 330AITFM-CH-SQ pneumatic flow control ball valve, and a Fisher Valves Type 667-EWT-DVC6200 pressure regulating valve are located along this pipe. The manual gate valve is open fully during testing. The pneumatic flow control ball valve is also fully open during testing. The pressure regulating valve is controlled by a Fisher FIELDVUE DVC6200 Digital Valve Controller. The digital valve controller is set to maintain a constant downstream pressure of 413.7 kPa as an individual blow down test is underway. Downstream of the pressure regulating valve, the air flows into the air diverter plenum. Six Kunkle 913BFEM03 pressure relief valves provide means to vent over-pressurized air, both before and after the manual gate valve. Flow static pressure is measured before and after each control valve within the piping system, using both Ashcroft analog pressure gauges and Honeywell FPA 060-C860–15 digital pressure transducers. Three independent wind tunnel segments are connected to the downstream parts of the air diverter plenum. Three additional Kunkle 913BFEM03 pressure relief valves and a burst disk are additionally mounted on the air diverter plenum for pressure relief in the unlikely event of over-pressurization. A spectacle blind valve or a circular sliding gate valve is located at the inlet to each of the wind tunnel segments. To conduct a test with one such segment, the associated inlet valve is open, whereas valves at the inlets of the other segments are closed. The research test section, which is employed for the present study, is used as one of the wind tunnel segments. This segment includes a spectacle blind valve, which is followed by a straight inlet duct, a converging/diverging supersonic nozzle, the test section, an exhaust diffuser and plenum, and an exhaust piping system with three independent noise baffles. Each noise baffle bolts to the associated 203 mm exhaust duct by means of a flange, and is approximately 1 m in length, with an inner diameter of 203 mm. The converging diverging nozzle accelerates the flow to supersonic velocities. The flow then enters the test section. The test section is constructed with a flat bottom wall, a diverging top wall, and two side walls. The test section contains a shock wave holding plate and a choking flap. The angle of the choking flap, as well as the normal and streamwise positions of the shock wave holding plate, are all adjustable. The exhaust plenum includes four air exhaust duct vents. Two 203 mm diameter exhaust ducts vent the air outside of the laboratory through 7.6 m of straight 203 mm diameter pipe. A 38 mm diameter vent vents to the air inside of the laboratory. These three vents are always open. A Kunkle 913BFEM03 pressure relief valve and a burst disk are additionally mounted on the plenum for pressure relief in the unlikely event of over-pressurization. A 102 mm diameter vent is also installed on the exhaust plenum. This particular vent includes a custom made spectacle blind valve which is adjustable. For the present tests, the 102 mm diameter exhaust passage is half-open. Shadowgraph optical flow visualization system During wind tunnel tests, an Edmond Optics Corporation shadowgraph system records time-varying, shock wave flow features within the test section. The system consists of a white light source and two 152.4 mm diameter focusing mirrors. The light source produces a beam of light that is reflected off the first mirror. The beam is then columnated as it passes through the test section. Because each viewed image is generally invariant in the spanwise direction, the mirrors are aligned such that the light beam is parallel to the spanwise direction and orthogonal to the side walls. This alignment provides the images with the greatest contrast and visualization of density-varying flow features. The second mirror additionally focuses the resulting shadowgraph image into the NIKKOR 200 mm lens, which is connected to a Phantom v711 camera. This high speed camera captures a time sequence of digitized flow visualization images during each wind tunnel test. Phantom Camera Control Application 2.7 software then processes the images which are captured by the camera. The Phantom v711 Camera has a 20 μm resolution at all sampling rates, and can acquire a sequence of images at rates as high as 1400 kHz. Figure 1 shows an example instantaneous shadowgraph image, obtained with this apparatus, as part of the present investigation. Shadowgraph flow visualization image for data acquired on 05 April 2018, illustrating the presence of a normal shock wave, lambda foot, and separated turbulent boundary layer within the lower flow passage, and an oblique shock wave system within the upper flow passage Pressure measurements Surface static pressures are measured within the test section, the exhaust plenum, and downstream of the pressure regulating valve. The static pressure tap used for this purpose within the test section is located along the centerline of the bottom wall, upstream of the shock wave. Each pressure is measured using a Honeywell FPA 060-C54985172080 digital pressure transducer. Each transducer is connected to a National Instruments Voltage Input Module, NI 9209, mounted within a NI CompactDAQ USB Chassis, NI cDAQ-9174. These devices together convert the analog voltage signal from the transducers into a digital signal which is readable using LabVIEW 2015 Full Development System version 15.0f2 software. Experimental testing procedure Wind tunnel tests typically last approximately eight seconds from the time the pressure regulating valve is opened to the time it begins to close. For the majority of tests, data are captured from a time before the pressure regulating valve is opened, until after the flow stops. Flow conditions are typically established and maintained for about five seconds during each test. Only this portion of the wind tunnel test is considered during the subsequent analysis. The shadowgraph images captured during start-up and shut-down, prior and after flow is established, are generally discarded. Data analysis procedures The digitized time sequence of flow visualization images are processed to compute frequency spectra, and different correlations using National Instruments' LabVIEW 2015 Full Development System version 15.0f2 software, and Mathworks' MATLAB versions R2014b through R2018a software. Analysis of flow visualization time sequence data and determination of frequency spectra Frequency spectra are computed in six steps: extract data for a particular pixel location, filter the data, transform and normalize the data into the frequency domain, smooth the frequency domain results, remove the white noise, and ensemble average the data associated with multiple pixel locations. Extract pixel data For a selected collection of images, a MATLAB program extracts the time sequence of grayscale values at any specified pixel. During a wind tunnel test, flow conditions are established and maintained for a time interval. Only this portion of the wind tunnel test is considered during the analysis. The MATLAB code used to extract grayscale value data from specific pixels in the images has three necessary inputs. The user indicates which pixel to analyze, the file path to the folder of bitmap images during established flow conditions, and the output file name. MATLAB's "imread" function determines the grayscale value of the specified pixel for each image in the folder. The output is a time sequenced array of grayscale values. MATLAB's "xlswrite" function saves the associated array in a Microsoft Excel spreadsheet. Filter the time sequence The time sequence of grayscale values must be read into LabVIEW software in order to be filtered. The file path to the saved text file is specified within a subroutine in LabVIEW called the "Read from Measurements File" Express VI. The subroutine imports the time sequence of grayscale values into LabVIEW from the tab delimited text file. Once in LabVIEW, the data is converted from the dynamic data datatype to an array of double precision floating point values. A low-pass Butterworth filter is used to filter the time sequence data. A Butterworth filter is used, as opposed to another type of filter, because of its relatively uniform transfer function scaling over the frequencies of interest. The transfer function is the ratio of the filtered value to the unfiltered value as a function of frequency. Uniform scaling over the transfer function is important because it does not add extraneous frequency content to the data. This ensures that any peaks in the frequency domain data are from real events, not distortion from the filter. Figure 2 shows the magnitude of the transfer function with respect to frequency ratio for the first through fifth order Butterworth filters. The first through fifth order filters are labeled as A through E, respectively, in the figure. The frequency ratio on the abscissa is the frequency divided by the cut-off frequency. The figure shows how the transfer function is generally uniform but drops rapidly as the frequency approaches the cut-off frequency. The cut-off frequency is defined as the frequency where the filtered frequency content is 3 dB lower than the unfiltered frequency content. The drop-off of the transfer function's magnitude near the cut-off frequency is more abrupt for higher order filters than lower order filters. The fifth order filter is used in this analysis because of the abrupt decrease in magnitude of the transfer function near the cut-off frequency. The transform function of a Low Pass Butterworth Filter, where lines A through E represent orders 1 through 5 [17] The time sequence array of grayscale values is filtered with a fifth-order, low-pass Butterworth filter, using a cut-off frequency of 1 Hz less than the Nyquist folding frequency. The filtering is done with LabVIEW's "Butterworth Filter" VI subroutine. The data, the cut-off frequency, the sampling rate, the type of filter (low pass, for example), and the order of the filter are inputs to this subroutine. This output is the filtered data as an array of double precision floating point values. After being filtered, the data are converted to a dynamic data datatype. The resulting filtered data are saved to a tab delimited spreadsheet. Transformation to the frequency domain The transformation from the time domain to the frequency domain is accomplished using MATLAB software. This transformation gives the square root of the power spectra of grayscale values as it varies with frequency, also referred to as the grayscale spectral energy distribution. The filtered data is read into MATLAB from the spreadsheet using the "xlsread" function. This function's inputs are the file path and the range of cells in the spreadsheet that contain the data. The function's output is an array of the filtered data. The array of the filtered time sequence data undergoes a single sided discrete fast Fourier transform. This transform is done using the Fast Fourier Transform —"fft"— function in MATLAB software. The filtered time sequenced array of grayscale values is input into the Fast Fourier Transform function. The function gives the double sided frequency transform of that time sequenced array. Next, the second half of the frequency transform array is discarded. The resulting array is the single sided frequency transform. This transform is normalized such that it becomes the square root of the power spectra of the time domain function. With the exception of the first term in the frequency spectra array, this relationship is described by the equation given by $$ Y(f)=\frac{2}{N}\cdotp \left|\mathcal{F}\left(y(t)\right)\right| $$ Equation (1) indicates that the magnitude of each term in the single sided frequency transform is multiplied by 2 and divided by the number of indices in the time sequence array. The first term in this transform represents the time averaged grayscale value and corresponds to a frequency of zero. The value of this term is set equal to half of the value attained by using Eq. (1). Each term in the frequency transform corresponds to a unique frequency. The frequencies steadily increase from zero to the Nyquist folding frequency. The frequency resolution is the difference in frequency between any two consecutive terms in the frequency transform. As such, the frequency resolution is determined using $$ \varDelta f=\frac{Fs}{N} $$ The frequency resolution is defined as the sampling frequency divided by the number of points in the time sequence array. The frequency resolution is therefore a function of the sampling frequency and the amount of time that useful data are collected. The transformation preserves spectral energy content such that the variance of the time domain function is equal to half of the sum of the squares of the frequency domain function, excluding the steady state term. This relationship is described using the equation given by $$ \frac{1}{2}\cdotp \overline{Y{(f)}^2}=\overline{y(t){\prime}^2} $$ The variance formula used on the time domain data to attain the energy is expressed using $$ \overline{y(t){\prime}^2}=\frac{\sum_{k=1}^N{\left(y{(t)}_k-{\sum}_{j=1}^N\frac{y{(t)}_j}{N}\right)}^2}{N}=\overline{{\left(y{(t)}_k-\overline{y{(t)}_k}\right)}^2} $$ The first step in determining the variance is to calculate the average of the grayscale values in the time sequence. Next, the difference between each of the grayscale values in the time sequence and that average value is computed. Each of these differences is then squared. Finally, all of those squared differences are averaged together. That average gives a single value that is the variance of the time sequence. It is also equal to the total spectral energy. The overall energy from the frequency domain is evaluated using an equation of the form $$ \frac{1}{2}\cdotp \overline{Y{(f)}^2}=\frac{1}{2}\ast {\sum}_{k=2}^{\frac{N}{2}-1}Y{(f)}_k^2 $$ Equation (5) states that the total spectral energy is equal to half of the sum of the squared grayscale values in the single sided frequency spectra. The summation does not include the value of the first term where the frequency is zero. Associated results may be expressed with respect to frequency or with respect to the Strouhal Number. The Strouhal Number is determined using the equation given by $$ {St}_r=\frac{2\pi \cdotp f\cdotp {\delta}_o}{u_{\infty }} $$ The Strouhal Number is the frequency scaled by the boundary layer thickness and the incoming freestream velocity at the entrance to the test section. Similar to frequency, the Strouhal number has a resolution. The resolution of the Strouhal Number is determined using an equation given by $$ \varDelta {St}_r=\frac{2\pi \cdotp \varDelta f\cdotp {\delta}_o}{u_{\infty }} $$ The resolution of the Strouhal number is the dimensionless form of the frequency resolution. Smoothing the frequency transform The transformed arrays have large variations of grayscale spectral energy distribution values, given as Y(f), over small ranges of frequency. As mentioned, \( 1/2\ast \overline{Y{(f)}^2} \) is the spectral energy content. The variation of the grayscale spectral energy distribution varies with frequency. Figure 3 shows a typical example of the frequency transform result with respect to frequency, obtained using Eq. (1). Note that data smoothing is required to determine detailed frequency content. Grayscale spectral energy distribution before smoothing, for the data associated with pixel [736, 329] on the shock wave of the test on 13 November 2017. The spectrum is representative of unsmoothed grayscale spectral energy distribution data Smoothing is imposed using a simple running average which is applied to the normalized transform array data, such as are shown in Fig. 3. For a simple running average, each data point in an array is replaced by the average of the data within a specified interval centered on that data point. Table 1 summarizes the smoothing process used in the analysis. With this approach, the first 20 points in each array, corresponding to the points at the 20 lowest frequencies, are not averaged; they retain initial values. For data between the 21st point and the data point corresponding to 20 Hz, each data point is replaced by the value which is equal to the average of 3 data points comprised of the considered data point and the adjacent point on each side of the data point considered. A similar scheme is done for data corresponding to frequencies greater than 20 Hz, but the number of points per side employed to determine the smoothed value is given by $$ 0.05\cdotp \frac{f}{\varDelta f} $$ Table 1 Spectral local data averaging procedures over different frequency ranges This expression is equal to 5% of the array index of the data point considered. The array index is the frequency at the considered point divided by the frequency resolution. This expression is a function of the data point's frequency. Note that the expression 0.05 · f/Δf is inapplicable for very high frequency data because there are not 0.05 · f/Δf points at higher frequencies remaining in the array. The expression becomes invalid for frequencies that are greater than approximately 95% of the Nyquist folding frequency which equals Fs/2. This limit is given by the expression $$ f\le 0.95\cdotp \frac{Fs}{2} $$ As a result, averaging is not applied to the data points at frequencies greater than this limit. For the data corresponding to frequencies greater than approximately 95% of the Nyquist folding frequency, each value is replaced by a single value that is determined by averaging of all the non-smoothed high frequency data. This is the average of the frequency content over the high frequency range for a specific pixel during a specific test. Each of the points in this range is subsequently given that value. The data smoothing procedure is typically implemented one time for each spectral data set. Figure 4 is representative of the smoothed data. It shows the same frequency content as the result shown in Fig. 3, but after the smoothing process is complete. Note that spectral peaks in Fig. 4 are much more evident than in Fig. 3. Grayscale spectral energy distribution after smoothing, using the data from Fig. 3 for data acquired on 13 November 2017 Remove white noise The white noise is associated with the background electronic noise within the time series signal. The next step in the analysis is to subtract the white noise from the frequency domain data. The average value that replaces the grayscale spectral energy distribution data corresponding to frequencies greater than 95% of the Nyquist folding frequency during the smoothing process is considered the white noise energy content level. The same value of the white noise energy content level is subtracted from every spectral data point at each frequency within the smoothed frequency array. Note that each image pixel is associated with a unique frequency transform and therefore a unique white noise energy content level. Removing the white noise gives greater definition of the spectral variations on a log-log plot, especially at higher frequencies. Figure 5 shows a typical example of the smoothed data before and after the white noise is removed. Grayscale spectral energy distribution with and without removing the white noise. The data is from the pixel [736, 329] from data acquired on 13 November 2017. The spectrum is representative of data before and after removal of white noise Ensemble averaging the frequency data Grayscale spectral energy distribution results are also ensemble-averaged. Five pixels located within the same flow structure are selected for this purpose at specific positions. Grayscale spectral energy distribution data associated with the five pixel locations on the shock wave (from the test on 13 November 2017) are shown with respect to frequency in Fig. 6. Grayscale spectral energy distribution data for the five pixel locations are averaged together at each distinct frequency. The ensemble-averaged grayscale spectral energy distribution with respect to frequency for this data is then shown in Fig. 7. Grayscale spectral energy distributions with respect to frequency for the five pixel locations analyzed near the shock wave for data acquired on 13 November 2017 Ensemble-averaged grayscale spectral energy distribution with respect to frequency, determined from Fig. 6 data, for pixel locations near the shock wave from data acquired on 13 November 2017 Shock wave position tracking The MATLAB R2013a code is used to track the position of a shock wave in a time sequence of shadowgraph images. Shock waves generally appear as a dark line next to a bright line within shadowgraph images. Typically, the dark portion has the most contrast relative to the background. Hence, streamwise shock wave positions are represented by locations of the darkest pixels in a particular portion of the shadowgraph visualization images. In order to track the shock wave position, the user first specifies the file path containing the time sequence of shadowgraph images. The file path and name of the first image in the time sequence is passed into the "imshow" function in MATLAB. This function displays the image in a MATLAB figure. The user must select a location near the shock wave in a particular shadowgraph visualization figure for this purpose. The Get Points function, "getpts," determines the pixel coordinates at the location clicked by the user. Only the vertical coordinate is used. It is paired with all horizontal, i, pixel location coordinates, creating a horizontal line of pixels. Figure 8 shows a white horizontal line at the selected vertical pixel coordinate from data obtained on 05 April 2018. Shadowgraph flow visualization image, where horizontal white line indicates pixel locations where the grayscale data are analyzed to obtain results given in Figs. 9, 10, 11, and 13 for data acquired on 05 April 2018 The coordinates of each pixel along this line are input into the "imread" function. This "imread" function obtains the grayscale pixel value at every pixel along this line. The program determines the grayscale pixel value along the same horizontal line for each image in the time sequence array. An example of the variation of grayscale pixel values with respect to horizontal pixel coordinate, i, at one instant of time for one image, is shown in Fig. 9. This figure shows the grayscale pixel values for the last image in the time sequence associated with the 05 April 2018 test. The user must specify a region of streamwise, i, pixel coordinates where the shock wave is expected to appear. Doing this minimizes the possibility that the shock wave finding algorithm mistakes an image effect (not related to flow structure) for the shock wave. As an example, scratches on the side wall of the wind tunnel appear as dark spots within the shadowgraph images. A scratch is identified in Fig. 9, as are the edges of the viewing window. Microsoft Paint is used to determine the pixel range in which the shock wave is expected. The Minimum function, "min," in MATLAB then determines the pixel with the lowest grayscale value within the specified region which is present along the horizontal line. This is done for each image within the time sequence. This pixel is the darkest pixel. As such, it represents the location of the shock wave. The horizontal pixel number, i, is saved to an array. The darkest pixel in the range between i = 500 and 800 is identified as the shock wave in Fig. 9. Grayscale value variation with horizontal pixel location of the last frame in the time sequence for vertical pixel 277 for data acquired on 05 April 2018 With the shock wave location array in hand, the average pixel number representing the average shock wave position is then determined. This average value is subtracted from each value in the time sequence so that a value of 0 corresponds to the average shock wave location. Negative values indicate the shock wave is downstream of its average position. Positive values indicate the shock wave is upstream of its average position. The physical location corresponding to the pixel location is then determined. A scaling factor is used to convert the pixel coordinate of the shock wave into a physical distance in milllimeters. The scaling factor is determined by relating the physical length of a piece of tape in shadowgraph images to the number of pixels along its length. The number of pixels is found using Microsoft Paint. The length is measured using Mitutoyo 500–196-30 Absolute Solar digital calipers. The streamwise shock wave position, relative to its average position, is determined as it varies with time. Time is the index of the time sequence array associated with each shock wave position divided by the sampling frequency. Figure 10 shows an example of the resulting streamwise shock wave position with respect to time. Figure 11 shows a subset of these data that further illustrates the motion of the shock wave. Figure 13 then shows the associated grayscale spectral energy of shock wave position with respect to frequency. Normal shock wave location (with respect to the time-averaged shock wave position) as a function of time for vertical pixel number 277 for data acquired on 05 April 2018 A subset of the data from Fig. 10 showing the normal shock wave position (with respect to the time-averaged shock wave position) from 1.40 to 1.45 s, as a function of time for vertical pixel number 277 for data acquired on 05 April 2018 The MATLAB code is also used to determine different spatial and time correlations as they vary with location, frequency, phase lag magnitude, and time lag magnitude. Filtered grayscale value time-varying data, as described earlier, are used as inputs for correlation determination. Single point auto-correlation An auto-correlation of grayscale data from a single pixel is computed as it varies with time. The auto-correlation function is defined by $$ {ACF}_k=\frac{c_k}{c_o} $$ This gives the normalized correlation between the grayscale value time sequence data and the same data sequence at a time lag of k. The correlation is expressed by the equation $$ {c}_k=\frac{1}{N}{\sum}_{n=1}^{N-k}\left({y}_n-\overline{y}\right)\left({y}_{n+k}-\overline{y}\right) $$ The magnitude of ACFk given by Eq. (10) is normalized by the variance of the time sequence, which is determined using the equation given by $$ {c}_o=\frac{1}{N}{\sum}_{n=1}^N{\left({y}_n-\overline{y}\right)}^2 $$ Large positive values of the ACFk auto-correlation function indicate that the data are similar at both instances of time. Negative values outside of a confidence interval indicate that the data are strongly different, with less positive correlation. The confidence interval employed is 95%, as applied to the correlation to show time lags at which the auto-correlation is significant. MATLAB 2013a code is used to determine the auto-correlation of the grayscale time sequence data associated with a pixel located on the shock wave. The time sequence data is an input into the "autocorr" function. The number of lags to compute is the second input into the "autocorr" function. For this input, a number one less than the number of data points in the time sequence is used. Therefore, the auto-correlation is calculated over a range of possible time lag values. The auto-correlation, the time lag, and the confidence interval bounds are outputs to the "autocorr" function. The "autocorr" function assumes a sampling frequency of 1 Hz for determination of time lag values, which is not correct for the collected data. The time lag that is output by the "autocorr" function is then divided by the sampling frequency to determine actual time lag values in seconds. Squared coherence magnitude Data associated with two separate pixel locations, each from a different region, are inputs to a magnitude squared coherence calculation. The magnitude squared coherence is the ratio of the cross power spectral density, Py1y2, to the product of the power spectral densities of the two functions, Py1y1 and Py2y2, expressed as $$ {C}_{y_1{y}_2}=\frac{{\left|{P}_{y_1{y}_2}\right|}^2}{P_{y_1{y}_1}\ast {P}_{y_2{y}_2}} $$ The power spectral density is represented by the equation $$ {P}_{y_1{y}_1}=Y{(f)}^2 $$ The cross power spectral density is then given by $$ {P}_{y_1{y}_2}={\sum}_{m=-\infty}^{\infty }E\left\{{\left({y}_1\right)}_{n+m}{\left({y}_2\right)}_n^{\ast}\right\}\cdotp {e}^{-j2\pi f\cdotp m} $$ Equation (15) represents the Fourier transform of the cross correlation of the time sequences of grayscale data associated with two different pixel locations, designated y1 and y2. Larger values of magnitude squared coherence indicate greater coherence between the grayscale data associated with the two pixel locations. Magnitude squared coherence, power spectral density, and cross power spectral density all vary with frequency. This is because unsteadiness is more prominent at certain frequencies than at others, and prominent frequencies change with spatial position within the flow field. The magnitude squared coherence is determined using the "mscohere" subroutine in MATLAB. Filtered time sequence data from two separate pixel locations are inputs to the subroutine. The sampling frequency is also an input to the "mscohere" subroutine. This subroutine uses Welch's Overlapped Segment Averaging procedure. Use of this procedure requires input information related to window type, window size, and size of overlapped region. The averaging scheme employs Hanning windowing. The window size is equivalent to one quarter of the length of the filtered time sequence array of grayscale data. This is the same as a quarter of the number of shadowgraph flow visualization images acquired and analyzed. For example, there are 21,620 images captured during the test on 05 April 2018, so each window consists of 5405 data points (21,620 divided by 4). The default size of the overlapped region is also required. This default size is half of the size of the window length, rounded down. Hence, the windows overlap by 2702 data points. Magnitude squared coherence for five pairs of pixel locations within the same regions are ensemble-averaged. For example, data from five pixel locations in the downstream boundary layer are correlated, respectively, with data from five pixels locations on the shock wave. Those five coherence results are ensemble-averaged. Next, the resulting ensemble-averaged data are smoothed. However, smoothing begins at the sixth data point, not the twentieth. Table 2 summarizes this procedure. Table 2 Magnitude squared coherence local data averaging procedures over different frequency ranges Cross power spectral density, phase lag, and time lag The cross power spectral density, which is given by Eq. (15), is used to calculate the phase lag and the time lag between time sequences of grayscale data associated with two different pixel locations. The time lag is a function of frequency. Perturbations occur at a location and propagate to other locations. Time is required for flow perturbations to travel, so the signals detected at two locations are similar, but differ by some time lag. All the perturbations do not originate in the same location, nor do they propagate along the same path at the same rate. Hence, the perturbations at unique frequencies have unique time and phase lags when considered at the same two pixel locations. Time-sequences of grayscale values associated with individual pixels in different regions are inputs to the cross power spectral density, "cpsd," function in MATLAB. For example, one way used for such correlation determination is the variable containing the data associated with the boundary layers is entered as the first parameter, and the variable containing the data associated with the shock wave or lambda foot is entered as the second parameter. The "cpsd" function also uses Hanning windowing. The window size is equivalent to one quarter of the length of the filtered time sequence array of grayscale data. The default overlap is used, which is half of the window length. The sampling frequency is the last input parameter in the "cpsd" function. In regard to cross power spectral density results, the angle of the phase lag is calculated based on the real and imaginary parts of the complex result at each frequency. This is done using the negative of the "angle" function in MATLAB. The resulting phase angles are smoothed using the smoothing process given in Table 2. Phase angle results at different frequencies associated with five pairs of pixel locations from the same regions are ensemble-averaged. Time lag magnitudes are determined by dividing phase lag values by associated frequency values, as given by $$ \varDelta t=\frac{\phi }{2\pi f} $$ Although the phase and time lags are calculated at each discrete frequency between 0 and the Nyquist Folding frequency, the only meaningful phase lag results are at frequencies associated with strong coherence. Because the boundary layer signal is represented by the first term in the cross-power spectral density function, positive times and positive phase lag values indicate that the perturbation in the boundary layer occurs prior to the perturbation in the shock wave. Negative values indicate that the boundary layer disturbance occurs after the shock wave disturbance at the same frequencies. Note that the range of phase lag in this analysis is limited to values between -π and π. Spurious results may arise if the actual time lag corresponds to a phase lag that is outside of these limits (at a value of 3π/2, for example). Spatial variations The magnitude squared coherence and the time lag are determined for different pixel locations for certain frequencies. The magnitude squared coherence and the time lag are determined between pixels along a line, relative to a single pixel location. In this process, the magnitude squared coherence and time lag are calculated and smoothed, but are not ensemble-averaged. As such, the magnitude squared coherence and time lag results at a frequency of 40 Hz are considered with respect to spatial position. With this approach, the magnitude squared coherence and time lag at this frequency are determined for a single location associated with the normal shock wave, relative to other locations near and adjacent to the shock wave. Experimental results and discussion Test section flow characteristics The main flow Mach number at the test section inlet is approximately 1.54. This value is determined from the inlet nozzle design, and is confirmed by measurements of local static pressure and local stagnation pressure. The associated test section mass flow rate is approximately 12.5 kg/s. Local boundary layer thickness is estimated to be approximately 10.5 mm on the top and bottom walls of the test section at a streamwise location which is just upstream of the leading edge of the shock wave holding plate. When steady flow conditions are established within the test section, the absolute pressure downstream of the pressure regulator is approximately 380 kPa. Static absolute pressure upstream of the shock wave in the test section is approximately 97 to 98 kPa, and the absolute pressure within the exhaust plenum is 130 kPa.. Overall flow structure Figure 1 presents an instantaneous shadowgraph flow visualization image for the 05 April 2018 experimental result, illustrating the presence of a normal shock wave, a lambda foot, and a separated turbulent boundary layer located near the entrance of the lower flow passage, and an oblique shock wave system within the upper flow passage. Flow within this image is from right to left, with the shock wave holding plate evident within the left-hand side of the image. As such, the present experimental arrangement and resulting flow structure are similar to the ones described by Ogawa and Babinsky [16]. Flow visualization data are captured at an acquisition rate of 10.0 kHz during the test on 05 April 2018. A total of 21,619 images are collected, as steady flow conditions are maintained during this test. The frequency resolution is 0.4625 Hz. Each image size is 1024 × 512 pixels. To achieve the instantaneous result presented in Fig. 1, flow at the test section inlet must be uniform and steady, with relatively low freestream turbulence intensity, and no significant flow disturbances or non-uniformities. Note that a few oblique lines are evident within Fig. 1 on the right-hand side of the image, which are located just downstream of the test section inlet. These lines are due to the presence of Mach waves. According to Ogawa and Babinsky [16], "faint oblique lines represent Mach waves stemming from joints of parts or small disturbances on the tunnel walls, which have negligible effects on the flow." Also evident in Fig. 1 are a few dark spots, which are due to imperfections on the outside surfaces of the side walls of the wind tunnel. These spots have no effect on the flow. Shock wave Streamwise position grayscale spectral energy result A Lagrangian approach is used to determine the spectral energy distribution associated with the streamwise location of the normal shock wave, as discussed earlier. At a vertical location marked by the white line in Fig. 12, the shock wave position changes approximately 6 mm in either direction, relative to its average position, due to shock wave unsteadiness. Note that the shock wave is positioned slightly downstream of its average position in this shadowgraph image. The resulting grayscale energy spectrum is shown in Fig. 13. This is compared with the ensemble-averaged grayscale spectral energy distribution (determined for five closely-located pixel locations) near the shock wave, shown in Fig. 14. The grayscale spectral energy distributions at five pixel locations are ensemble-averaged to generate these results. The region in which these pixels are located is denoted by a white rectangle in Fig. 12. Note that Figs. 13 and 14 show peaks at similar frequencies. For example, both data sets show spectral peaks at approximately 40 Hz and between 2 Hz and 9 Hz. Figures 13 and 14 also show that the spectral energy decreases significantly for frequencies greater than 200 Hz. Shadowgraph flow visualization image, where short horizontal white line indicates pixel locations where the grayscale data are analyzed to obtain results given in Fig. 13, and where white rectangle indicates pixel locations where grayscale data are analyzed to obtain the ensemble-averaged results given in Fig. 14, for data acquired on 05 April 2018 Spectral energy distribution associated with the shock wave position (relative to average shock wave position) as a function of frequency for data acquired on 05 April 2018 Ensemble-averaged grayscale spectral energy distribution for locations near the shock wave for data acquired on 05 April 2018 Auto-correlation function result The auto-correlation is calculated for data associated with a specific pixel location. For the present result, a pixel location associated with the shock wave is considered. The auto-correlation of this time sequence data is shown in Fig. 15. Significant positive values are evident at 0.0 s, 1.4 s, and 0.75 s. Values for approximately 0.75 s indicate that the associated correlation is relatively weaker, since the auto-correlation magnitude barely exceeds the 95% confidence interval. Negative values of the auto-correlation indicate that the data are strongly dissimilar at time lags of 0.4 s and 0.9 s. Overall, the data in Fig. 15 show that gray scale time sequence data are roughly cyclical, with the most pronounced oscillation time period equal to approximately 1.4 s. Single point auto-correlation variation with time, determined from data from a single pixel near the shock wave. Horizontal blue lines indicate the range of a 95% confidence level relative to the zero value, for data acquired on 05 April 2018 Correlation function variations between shock wave and downstream boundary layer regions Magnitude squared coherence and time lag variations are determined between data associated with the shock wave and the downstream boundary layer. The regions containing the locations of the pixels that are analyzed are shown by white rectangles in Fig. 16. Figure 17 shows the magnitude squared coherence of the time sequence grayscale signals in these two regions with respect to frequency. Associated data values evidence significant coherence between the shock wave, and downstream boundary layer regions, at frequencies of approximately 6 Hz, 20 Hz, 40 Hz, and 100 Hz, which correspond to respective Strouhal numbers of 0.00079, 0.00263, 0.00526, and 0.0132. Note that the Hanning windowing creates some fluctuations in the results at high frequencies, which are not representative of true coherence. Shadowgraph flow visualization image, where rectangles mark pixel locations in the downstream boundary layer and near the normal shock wave, where time-varying data are analyzed to obtain the results shown in Figs. 17 and 18, for data acquired on 05 April 2018 Magnitude squared coherence variation with frequency of data associated with locations near the shock wave and in the downstream boundary layer, for data acquired on 05 April 2018 Time lag values from grayscale flow visualization results are shown in Fig. 18, also for perturbations between the downstream boundary layer and the shock wave. Perturbations of approximately 20 Hz, 30 Hz, and 70 to 100 Hz occur in the downstream boundary layer prior to the same frequency events in the shock wave. This is determined because values of time lag are positive for these different frequencies. At 100 Hz, perturbations in the downstream boundary layer occur approximately 1 ms before they do in the shock wave. However, for events at frequencies of approximately 5 Hz, 10 Hz, 40 Hz, and 60 Hz, time lag values are negative, indicating that the perturbations in the shock wave occur prior to the ones in the downstream boundary layer. Note that all of these time lags are approximately an order of magnitude longer than time lags associated with the estimated advection speed of the fluid. Time lag magnitude variation with frequency of data associated with locations near the shock wave and in the downstream boundary layer, for data acquired on 05 April 2018. Note that positive time lag values indicate the signal in the boundary layer (for a particular frequency) occurs prior to the signal (at the same frequency) which is associated with the normal shock wave Correlation function and time lag results to illustrate spatial variations The magnitude squared coherence and time lag at certain frequencies are determined for one location, relative to a range of other spatial locations. The locations are represented by the vertical white line within the flow visualization image of Fig. 19. Note that coordinate scale locations along this line are included. The magnitude squared coherence and time lag are calculated between these locations and a location on the shock wave, which is indicated by a white dot in Fig. 19. The origin is chosen to be the location of the pixel on the shock wave. The other pixel locations are measured relative to that location. Shadowgraph flow visualization image, where white dot and white line show locations where magnitude squared coherence and time lag are determined and analyzed at a frequency of 40 Hz to obtain the results shown in Fig. 20, for data acquired on 05 April 2018 Magnitude squared coherence values and magnitudes of time lag between the same shock wave pixel location and the pixels along the vertical line on the shock wave are given in Fig. 20 for a frequency of 40 Hz. This figure shows a magnitude squared coherence of 1.0, and a time lag of 0 s, at the location where y1 is the same as y2. This is because any signal is completely coherent with itself. For a frequency of 40 Hz, a peak in the coherence is evident approximately 12.7 mm beneath the location of y2. Note that this location approximately corresponds to the intersection point between the normal shock wave and the two oblique shock wave legs of the lambda foot. Associated time lag results in Fig. 20 show that little coherence is generally present for locations where time lag values are notable. The most significant time lag value is negative and is evident at a location about 7 mm below the single shock wave pixel location. This means that events for the single shock wave position y2 occur prior to events associated with the location 7 mm below, which is associated with y1. With larger values of coordinate z, time lag values in Fig. 20 are then either near zero or show large amounts of data scatter. Magnitude squared coherence and time lag value, as they vary with location associated with y1, at frequency of 40 Hz, where y1 is located on a line and y2 is positioned at one shock wave location, for data acquired on 05 April 2018. Positive values of time lag indicate that perturbations which originate at line locations occur prior to events which occur at the single shock wave location Within the present study, investigated are unsteady flow characteristics of a normal shock wave, a lambda foot, and a separated turbulent boundary layer within a unique research test section. The supersonic wind tunnel facility, containing this test section, provides a Mach number of approximately 1.54 at the test section entrance. A shadowgraph optical system is employed to visualize shock wave structure within the test section. Digitized shadowgraph flow visualization data are analyzed to determine shock wave structure and unsteadiness characteristics, including grayscale spectral energy variations with frequency, as well as time and space correlations, which give coherence and time lag properties associated with perturbations associated with different flow regions. A Lagrangian approach is used to determine the spectral energy distribution associated with the streamwise location of the normal shock wave, The resulting grayscale energy spectrum is compared with an ensemble-averaged grayscale spectral energy distribution (determined for five closely-located pixel locations), which shows that both results evidence peaks at similar frequencies at approximately 40 Hz and between 2 Hz and 9 Hz. Auto-correlation function results show that the time sequence of grayscale values, for particular shock wave locations, is approximately cyclical, with a period of approximately 1.4 s. Two-point correlation functions, as they vary with frequency, indicate that perturbations at a frequency of 100 Hz (Str = 0.0130) originate downstream of the shock wave and propagate upstream. Magnitudes of time lag between one stationary shock wave pixel location, and the pixels along the vertical line along and near to the shock wave, show that the most significant time lag value is negative and is evident at a location about 7 mm below the single shock wave pixel location. This means that events for the single shock wave position occur prior to events associated with the location 7 mm below that position. ACF k : Auto-correlation of grayscale value varying with time, normalized c m : Correlation of a time sequence with itself at an instant in time c o : Variance of the grayscale value at a certain pixel C y1y2 : Magnitude squared coherence estimate Expected value function f : \( \mathcal{F} \) : MATLAB's Fast Fourier Transform function operator f o : Cut-off Frequency Fs : Sampling Frequency GS : Grayscale value i : Horizontal pixel number counted from the left side of the frame j : Imaginary number N : Number of frames P y1 y2 : Cross power spectral density of two functions P yy : Power spectral density of a function St r : Strouhal Number t : u ∞ : Velocity of the free stream flow x : Streamwise location relative to the average position of the shock wave Y(f) : Normalized Fourier Transform of y(t) y(t) : Digitized filtered time sequence data Vertical location relative to a reference point on the shock wave Δf : Frequency resolution δ o : Boundary layer thickness, inches ΔSt r : Resolution of the Strouhal Number Δt: ζ : Magnitude of the Butterworth filter's transfer function ϕ: Phase lag Clemens NT, Narayanaswamy V (2014) Low-frequency unsteadiness of shock wave turbulent boundary layer interactions. Ann Rev Fluid Mech 46:469–492 MathSciNet Article Google Scholar Erengil ME, Dolling DS (1991) Correlation of separation shock motion with pressure fluctuations in the incoming boundary layer. AIAA J 29:1868–1877 Handa T, Masudo M, Matsuo K (2003) Mechanism of shock wave oscillation in transonic diffusers. AIAA J 41:64–70 Dupont P, Haddad C, Ardissone JP (2005) Space and time organization in a shock wave/turbulent boundary layer interaction. Aerosp Sci Technol 9:561–572 Bruce PJK, Babinsky H (2008) Unsteady shock wave dynamics. J Fluid Mech 603:463–473 Gamba M (2016) Roles of flows in a corner on the orderly response of a Mach 2 shock train. 10th Annual SWBLI TIM, Dayton, Ohio, USA Ganapathisubramani B, Clemens NT (2009) Low frequency dynamics of shock induced separation in a compression ramp interaction. J Fluid Mech 636:397–436 Humble RA, Elsinga GE, Scarano F, van Oudheusden BW (2009) Three-dimensional instantaneous structure of a shock wave/turbulent boundary layer interaction. J Fluid Mech 622:33–62 Ganapathisubramani B, Clemens NT (2007) Effects of upstream boundary layer on the unsteadiness of shock induced separation. J Fluid Mech 585:369–394 Wu M, Martín MP (2008) Analysis of shock motion in shockwave and turbulent boundary layer interaction using direct numerical simulation data. J Fluid Mech 594:71–83 Piponniau S, Dussauge JP, Debieve JF, Dupont P (2009) A simple model for low-frequency unsteadiness in shock-induced separation. J Fluid Mech 629:87–108 Pirozzoli S, Larsson J, Nichols JW, Bernardini M, Mogan BE, Lele SK (2010) Analysis of unsteady effects in shock/boundary layer interactions. Proceedings of the Summer Program 2010. Center for Turbulence Research, Stanford University, Stanford, pp 153–164 Touber E, Sandham ND (2011) Low-order stochastic modelling of low frequency motions in reflected shock-wave/boundary-layer interactions. J Fluid Mech 671:417–465 Grilli M, Schmid PJ, Hickel S, Adams NA (2012) Analysis of unsteady behavior in shockwave turbulent boundary layer interaction. J Fluid Mech 700:16–28 Settles GS (2001) Schlieren and shadowgraph techniques: visualizing phenomena in transparent media. Springer publishing Corp, New York, New York Ogawa H, Babinsky H (2006) Wind-tunnel setup for investigations of normal shock wave/boundary-layer interaction control. AIAA J 44:2803–2805 Butterworth S (1930) Theory of filter amplifiers. Exp Wirel Wirel Engr 7:536–541 The following individuals are also acknowledged for their assistance with laboratory development, laboratory testing, laboratory instrumentation and apparatus, and/or data analysis: Tony Hall, David Lineberry, Andrew Miller, Melissa Anderson, Robert Pertrimoulx, Daniel Corey, Soo Rhee, Conner Gisburne, Kaylee Hall, Sarah Stearman, Benjamin Shea, Benjamin Lund, Connor Pierce, Patrick McInturff, and Warren Buzzard. The Alabama Innovation Fund (Contract ID No. 61070000002), the University of Alabama Endowment Fund, and the Office of the Vice President for Research and Economic Development of the University of Alabama in Huntsville are acknowledged for financial support for the research which is reported within the present paper. Additional information on data analysis procedures can be obtained from the corresponding author. Propulsion Research Center, Department of Mechanical and Aerospace Engineering, University of Alabama in Huntsville, 5000 Technology Drive, Olin B. King Technology Hall S236, Huntsville, AL, 35899, USA S. Marko & P. M. Ligrani S. Marko P. M. Ligrani Both authors contributed to laboratory development, laboratory testing, laboratory instrumentation and apparatus, and data analysis which are associated with results presented within the present paper. Both authors read and approved the final manuscript. Correspondence to P. M. Ligrani. Marko, S., Ligrani, P.M. Analysis of shock wave unsteadiness using space and time correlations applied to shadowgraph flow visualization data. Adv. Aerodyn. 1, 2 (2019). https://doi.org/10.1186/s42774-019-0002-y Supersonic flow Time correlation Spatial correlation Autocorrelation Two-point correlation Frequency spectra Magnitude squared coherence Lagrangian flow analysis Unsteady flow analysis
CommonCrawl
Epistatic Net allows the sparse spectral regularization of deep neural networks for inferring fitness functions From systems to structure — using genetic data to model protein structures Hannes Braberg, Ignacia Echeverria, … Nevan J. Krogan Low-N protein engineering with data-efficient deep learning Surojit Biswas, Grigory Khimulya, … George M. Church Unified rational protein engineering with sequence-based deep representation learning Ethan C. Alley, Grigory Khimulya, … George M. Church ECNet is an evolutionary context-integrated deep learning framework for protein engineering Yunan Luo, Guangde Jiang, … Jian Peng Cluster learning-assisted directed evolution Yuchi Qiu, Jian Hu & Guo-Wei Wei NMR-guided directed evolution Sagar Bhattacharya, Eleonora G. Margheritis, … Ivan V. Korendovych Genome-wide functional screens enable the prediction of high activity CRISPR-Cas9 and -Cas12a guides in Yarrowia lipolytica Dipankar Baisya, Adithya Ramesh, … Ian Wheeldon Interpreting neural networks for biological sequences by learning stochastic masks Johannes Linder, Alyssa La Fleur, … Georg Seelig Machine-learning-guided directed evolution for protein engineering Kevin K. Yang, Zachary Wu & Frances H. Arnold Amirali Aghazadeh ORCID: orcid.org/0000-0003-0223-08731, Hunter Nisonoff2, Orhan Ocal1, David H. Brookes3, Yijie Huang1, O. Ozan Koyluoglu1, Jennifer Listgarten ORCID: orcid.org/0000-0002-6600-14311,2 & Kannan Ramchandran ORCID: orcid.org/0000-0002-4567-328X1 Nature Communications volume 12, Article number: 5225 (2021) Cite this article Protein function predictions Despite recent advances in high-throughput combinatorial mutagenesis assays, the number of labeled sequences available to predict molecular functions has remained small for the vastness of the sequence space combined with the ruggedness of many fitness functions. While deep neural networks (DNNs) can capture high-order epistatic interactions among the mutational sites, they tend to overfit to the small number of labeled sequences available for training. Here, we developed Epistatic Net (EN), a method for spectral regularization of DNNs that exploits evidence that epistatic interactions in many fitness functions are sparse. We built a scalable extension of EN, usable for larger sequences, which enables spectral regularization using fast sparse recovery algorithms informed by coding theory. Results on several biological landscapes show that EN consistently improves the prediction accuracy of DNNs and enables them to outperform competing models which assume other priors. EN estimates the higher-order epistatic interactions of DNNs trained on massive sequence spaces-a computational problem that otherwise takes years to solve. Recent advances in next-generation sequencing have enabled the design of high-throughput combinatorial mutagenesis assays that measure molecular functionality for tens of thousands to millions of sequences simultaneously. These assays have been applied to many different sequences in biology, including protein-coding sequences1,2,3, RNAs4,5,6, bacterial genes7,8,9,10, and the Cas9 target sites11,12,13. The labeled sequences collected from these assays have been used to train supervised machine learning (ML) models to predict functions (e.g., fluorescence, binding, repair outcome, etc.) from the sequence—a key step in the rational design of molecules using ML-assisted directed evolution14. However, due to the limitations in techniques for library preparation, these assays can only uncover a small subset of all the possible combinatorial sequences. This raises an important question in learning fitness functions: how can we enable supervised ML models to infer fitness functions using only a small number of labeled sequences? Inferring fitness functions is a challenging task since mutational sites interact nonlinearly to form the function, a phenomenon known as epistasis in genetics15,16. As a result, linear regression models which assume site-independent interactions achieve poor accuracy in predicting nonlinear functions. Augmenting linear models with pairwise, second-order epistatic interactions improves their prediction accuracy3; however, there is now increasing evidence that a large fraction of the variance in the fitness functions can be explained only by higher-order epistatic interactions, which contribute to the ruggedness of fitness landscapes17,18. Modeling rugged fitness landscapes is a hard task since the total number of possible higher-order interactions grows exponentially with the number of mutational sites. As a result, the number of parameters to be estimated (i.e., the problem dimension) also grows with the same exponential rate, which creates statistical challenges in inferring the fitness function since the number of labeled sequences does not scale with the problem dimension. In response, nonlinear ML models constrain the problem dimension by introducing various forms of inductive biases to capture hidden structures in the fitness functions. Random forests, for example, impose a tree structure over sites which favor tree-like hierarchical epistatic interactions. While these inductive biases are effective in some fitness functions19, they are too restrictive to capture the underlying higher-order epistatic interactions in other fitness functions3. Overparameterized models in deep learning (DL), such as deep neural networks (DNNs), are expressive enough to model high-order epistatic interactions given a large number of labeled training sequences; however, when the number of labeled sequences is small, they often overfit to the training data and compromise prediction accuracy. It has been observed that regularizing DNNs to induce domain-specific biases improves their prediction accuracy for various tasks in computer vision and natural language processing20. This opens up the question of whether there exists an inductive bias for DNNs trained on biological fitness landscapes that can be imposed using a computationally tractable regularization scheme. Recent studies in biological landscapes3,13,21 have reported that a large fraction of the variance in many fitness functions can be explained by only a few number of (high-order) interactions between the mutational sites. The epistatic interactions in these functions are a mixture of a small number of interactions with large coefficients, and a larger number of interactions with small coefficients; in other words, their epistatic interactions are highly sparse. Promoting sparsity among epistatic interactions is a powerful inductive bias for predictive modeling because it reduces the problem dimension without biasing the model towards a subset of (low-order) interactions. Despite its benefits, promoting sparsity among epistatic interactions has not been studied in DNNs as an inductive bias. The roadblock is in finding a method to promote epistatic sparsity in DNNs. Unfortunately, directly penalizing all or some of the parameters (weights) of DNNs with sparsity-promoting priors is not likely to result in sparse epistatic regularization since the epistatic coefficients are a complex nonlinear function of the weights in DNNs. Here, we develop a method for sparse epistatic regularization of DNNs. We call our method Epistatic Net (EN) because it resembles a fishing net which catches the epistatic interactions among all the combinatorially possible interactions in DNNs, without any restriction to a subset of (low-order) interactions. In order to find the epistatic interaction as a function of the weights in DNN, we find its spectral representation (also called the Walsh-Hadamard (WH) transform for binary sequences) by evaluating the DNN on the entire combinatorial space of mutations, and then take the WH spectral transform of the resulting landscape using the Fast WH Transform (FWHT). The resulting function of the weights in DNN is penalized to promote epistatic sparsity. For larger sequences this approach for epistatic regularization becomes computationally intractable due to the need to enumerate all possible mutations in DNN. Therefore, we leverage the fast sparsity-enabled algorithms in signal processing and coding theory in order to develop a greedy optimization method to regularize DNNs at scale. Our scalable regularization method, called EN-S, regularizes DNNs by sampling only a small subset of the combinatorial sequence space by choosing sequences that induce a specific sparse graph structure. The uniform sampling scheme allows us to find the WH transform of the combinatorial DNN landscape efficiently using a fast peeling algorithm over the induced sparse graph22. Results on several biological landscapes, from bacterial to protein fitness functions, shows that EN(-S) enables DNNs to achieve consistently higher prediction accuracy compared to competing models and estimate all the higher-order predictive interactions on massive combinatorial sequence space—a computational problem that takes years to solve without leveraging the epistatic sparsity structure in the fitness landscapes. Regularization using the Epistatic Net (EN) EN is a novel regularization scheme (Fig. 1b) which evaluates the DNN on all the possible combinatorial mutations of the input sequence; we call the resulting high-dimensional vector the DNN landscape. EN takes the WH transform of the DNN landscape and adds the sparsity-promoting ℓ1-norm (i.e., the sum of the absolute values) of the WH coefficients (or total sum of the magnitude of epistasis) to the log-likelihood loss. The resulting WH loss is a differentiable function (except at zero) of the weights in DNN and is weighted by a scalar which strikes a balance between the fidelity of DNN to the labeled sequences and sparsity among epistatic interactions (see "Methods", Supplementary Notes, and Supplementary Fig. 1). We use the stochastic gradient descent (SGD) algorithm to minimize the aggregate loss and update the weights of DNN in every iteration. Fig. 1: Schematic illustration of our sparse epistatic regularization method, called Epistatic Net (EN). a Conventional deep neural network (DNN) training is depicted, where the log-likelihood loss (computed over n labeled training sequences encoded into binary sequences of length d) is minimized using the stochastic gradient descent (SGD) algorithm. b In every iteration, EN queries DNN for all the 2d possible binary input sequences, finds the Walsh Hadamard (WH) spectral transform of the resulting landscape using the Fast WH Transform (FWHT), and then adds the ℓ1-norm of the WH transform to the log-likelihood loss from panel (a). c In the scalable version of EN, EN-S regularizes DNN using only a few number of uniformly subsampled sequences from the combinatorial input space that casts the sparse WH recovery problem on an induced sparse-graph code. EN-S iterates between these two subproblems until convergence: (1) finding the sparse WH transform of DNN (using sublinear samples and in sublinear time) through peeling over the induced sparse-graph codes, and (2) minimizing the sum of the log-likelihood loss and the WH loss using SGD. For larger sequences (of size d > 25), EN regularization becomes intractable in time and space complexity. This is because EN needs to query the DNN p = 2d times to form the DNN landscape (exponential time complexity in d) and then find the WH transform of the queried DNN landscape (exponential time and space complexity in d). To overcome this, EN-S leverages the sparsity in the WH spectral domain to regularize DNN using only a small number of uniformly subsampled sequences from the combinatorial input space (Fig. 1c). EN-S decouples the DNN training, following the alternating direction method of multipliers (ADMM) framework23, into two subproblems: (1) finding the k-sparse WH spectral transform of DNN in a sample and time efficient manner, and (2) minimizing the sum of the log-likelihood loss and the WH loss. The WH loss penalizes the distance between DNN and a function constructed using the top-k WH coefficients recovered in the first subproblem. In order to solve the first subproblem, we design a careful subsampling of the input sequence space22 that induces a linear mixing of the WH coefficients such that a greedy belief propagation algorithm (peeling-decoding) over a sparse-graph code recovers the noisy DNN landscape in sublinear sample (i.e., \({{{{{{{\mathcal{O}}}}}}}}(k{{{{{{{\mathrm{log}}}}}}}\,}^{2}p)\)) and time (i.e., \({{{{{{{\mathcal{O}}}}}}}}(k{{{{{{{\mathrm{log}}}}}}}\,}^{3}p)\)) complexity in p (with high probability)13,22,24,25. Briefly, the peeling-decoding algorithm identifies the nodes on the induced sparse-graph code that are connected to only a single WH coefficient and peels off the edges connected to those nodes and their contributions on the overall graph. The algorithm repeats these steps until all the edges are removed. We solve the second subproblem using the SGD algorithm. EN-S alternates between these two steps until convergence (see "Methods" and Supplementary Notes). Inferring four canonical functions in bacterial fitness We collected four canonical bacterial fitness functions, whose combinatorial landscapes have been measured experimentally in previously published works (see Supplementary Table 1). Figure 2a shows the sparsity level in epistatic interactions of these bacterial fitness functions. We found the coefficients for epistatic interactions by taking the WH transform of the measured combinatorial landscape (see "Methods" section for various ways to preprocess the landscapes). Figure 2a plots the fraction of variance explained as a function of the top WH coefficients. Sparsity levels can be assessed by the proximity of the resulting curve towards the top-left corner of the plot. For comparison, we also plotted synthetic fitness functions that have all possible epistatic interactions up to a certain order of interactions in Fig. 2a. While the sparsity levels vary across fitness functions, the top-5 WH coefficients consistently explain more than 80% of the variance across all the landscapes. Fig. 2: Predicting bacterial fitness and inferring epistatic interactions in four canonical landscapes. a Fraction of variance explained by the top WH coefficients revealing the sparsity in the bacterial fitness functions. b Prediction accuracy of deep neural network (DNN) with Epistatic Net (EN) regularization against competing models in ML. The error bars show the standard error of the mean (SEM) across 20 independent repeats of the experiments with random split of the data into training, validation, and test sets. c Visualization of the epistatic interactions of DNN with and without EN regularization and the baseline models after training on Escherichia coli fitness landscape of Khan et al.9. R2 values show the correlation of the recovered epistatic interaction with the interactions in the measured combinatorial Escherichia coli fitness landscape. Figure 2b shows the prediction performance of DNN with EN regularization on the bacterial landscapes compared to various competing models. All the models are trained on the same randomly sampled subset (i.e., 31%) of the sequences from the measured combinatorial landscapes and tested on a subset of unseen sequences (see Supplementary Notes for more details). The prediction accuracy is reported in terms of the coefficient of determination, R2 (i.e., the fraction of the variance in the test set explained from the sequence). DNN with EN regularization consistently outperforms the baseline models in all the landscapes. In particular, DNN with EN regularization performs significantly better than the EN-unregularized variant consistently across all data sets (ΔR2 > 0.1, P < 0.033), even though DNN is optimized (in terms of architecture) for best validation performance in isolation (i.e., without epistatic regularization) and has been subjected to other forms of common sparsity-promoting regularization techniques applied directly to the weights of the DNN ("Methods", Supplementary Data 1, and Supplementary Fig. 2). Figure 2c shows the WH transform of the DNN landscape with and without EN regularization, as well as the WH transform of the landscapes corresponding to the rest of the competing models trained on a training set sampled from the Escherichia coli fitness landscape of Khan et al.9 (see Supplementary Figs. 3, 4 for a detailed analysis of the landscapes in the spectral domain). In order to find these landscapes, we queried each model for all the combinatorial mutations. In this plot, the epistatic coefficient indexed by 10100, as an example, shows an order 2 interaction between the mutational sites 1 and 3. The rest of the indices can be interpreted similarly. The WH coefficients in the measured Escherichia coli fitness function show three first-order interactions with higher magnitude and several higher-order interactions with lower magnitude. The interactions recovered by DNN with EN regularization closely match the epistatic interactions of the measured Escherichia coli fitness function (R2 = 0.67), a considerable improvement over DNN without EN regularization (R2 = 0.41). EN regularization effectively denoises the WH spectrum of DNN by removing spurious higher-order interactions; nevertheless, given a larger training set, EN would have accepted a larger number of higher-order interactions. The WH coefficients of gradient boosted trees (R2 = 0.51) and random forests (R2 = 0.36) also show several spurious high-order interactions. Lasso regression finds two of the three measured interactions with higher magnitude, however, recovers a spurious third-order interaction which results in a low epistatic correlation coefficient (R2 = 0.18). When restricted to up to order 2 interactions, the performance of Lasso improves; it recovers the two interactions with higher coefficients, however, misses the third coefficient and the rest of the small epistatic interactions (R2 = 0.53). Entacmaea quadricolor fluorescent protein A comprehensive experimental study has reported all the combinatorial mutants that link two phenotypically distinct variants of the Entacmaea quadricolor fluorescent protein3. The variants are different in d = 13 mutational sites. The study shows the existence of several high-order epistatic interactions between the sites, but also reveals extraordinary sparsity in the interactions. We used this protein landscape to assess EN in regularizing DNN for predicting protein function. We split the 213 = 8192 labeled proteins randomly into three sets: training, validation, and test. The size of the test set was fixed to 3000 and the validation set size was set equal to the training set size. We varied the training set size from a minimum of n = 20 proteins to a maximum of n = 100 proteins and evaluated the accuracy of the models in (1) predicting fitness in Fig. 3a in terms of R2 (Supplementary Data 2) and (2) recovering the experimentally measured epistatic interactions in Fig. 3b in terms of normalized mean squared error (NMSE) (Supplementary Data 3). Fig. 3: Inferring the sparse epistatic protein landscape of Poelwijk et al.3. a Deep neural network (DNN) with Epistatic Net (EN) regularization outperforms the baselines in terms of prediction accuracy. To achieve the same prediction accuracy, DNN with EN regularization needs up to 3 times less number of samples compared to DNN without EN regularization. b DNN with EN regularization recovers the experimentally measured (higher-order) epistatic interactions with significantly lower normalized mean squared error (NMSE). c The prediction performance of DNN with EN-S regularization is plotted when EN-S subsamples DNN at progressively smaller fractions of the combinatorial sequence space of proteins, that is, 100% (no subsampling), 54%, and 20%. DNN with EN-S regularization outperforms DNN without the regularization despite restricting EN-S to only sample 20% of the protein sequence space to induce a sparse-graph code. Error bars in all the plots show the standard error of the mean (SEM) in 20 independent repeats of the experiments with random splits of the data into training, validation, and test sets. d Scatter plot of the DNN-predicted fluorescence values trained on n = 60 labeled proteins. e Scatter plot of the predicted fluorescence values by the EN-regularized variant of the same DNN. f Comparison of the recovered epistatic interactions of the EN-regularized and unregularized DNNs. DNN with EN regularization significantly outperforms DNN without EN regularization in terms of prediction accuracy (ΔR2 > 0.1, P < 10−5), consistently across all training sizes. Moreover, DNN with EN regularization recovers the experimentally measured epistatic interactions with significantly lower error (ΔNMSE > 0.07, P < 9 × 10−5), consistently across all training sizes. Applying various forms of ℓ1 and ℓ2-norm regularization on the weights of different layers of the DNN does not change the performance gap between DNN with and without EN regularization (see Supplementary Fig. 5). In particular, in order to achieve the same level of prediction accuracy (R2 = 0.7), DNN without EN regularization requires up to 3 times more training samples compared to DNN with EN regularization. Figures 3d, e show the scatter plots of the predicted fluorescence values of DNN and its EN-regularized variant, respectively, when both models are trained on n = 60 labeled proteins. The performance gap naturally reduces for larger training sets, however, it stays consistently positive even up to n = 200 (i.e., 2.5% of the entire combinatorial landscape), which is typically larger than the number of available labeled sequences in protein function prediction problems (Supplementary Fig. 5). Our analysis also reveals the improved performance of the epistatic interactions recovered by DNN with EN regularization in predicting the pairwise contacts (residues with smaller than 4.5 Å distance26) and triplet contacts (group of three residues with smaller than 4.5 Å pairwise distances) in the 3D structure of the protein—even though the networks are not trained for protein structure prediction task (Supplementary Fig. 6). DNN with EN regularization predicts contacts with \({\,{{\mbox{F}}}}_{1}^{{{\mbox{order 2}}}\,}=0.76\) and \({\,{{\mbox{F}}}}_{1}^{{{\mbox{order 3}}}\,}=0.68\) compared to DNN without EN regularization with \({\,{{\mbox{F}}}}_{1}^{{{\mbox{order 2}}}\,}=0.67\) and \({\,{{\mbox{F}}}}_{1}^{{{\mbox{order 3}}}\,}=0.66\) (F1 score takes the harmonic mean of the precision and recall rates). The dimension of the fluorescent landscape of Entacmaea quadricolor protein enabled us to use the data set to compare the performance of DNN under EN regularization with its scalable version, EN-S. The prediction performance of DNN with EN-S regularization showed a slight drop in accuracy due to the approximations made by the ADMM decoupling (Fig. 3c, Supplementary Data 4, and "Methods"). EN-S stayed fairly consistent when we decreased the number of proteins sampled from DNN to induce a sparse-graph code. Using as low as 1678 samples (out of the total of 8192 combinatorial proteins, i.e., 20% subsampling) enabled successful regularization of DNN, resulting in a significant performance gap compared to DNN without EN regularization. Green fluorescent protein from Aequorea victoria (avGFP) The local fitness landscape of the green fluorescent protein from Aequorea victoria (avGFP) has been investigated in a comprehensive study2. The authors estimated the fluorescence levels of genotypes obtained by random mutagenesis of the avGFP protein sequence at 236 amino acid mutational sites. The final data set included 56,086 unique nucleotide sequences coding for 51,715 different protein sequences. Considering the absence or presence of a mutation at a site, created a data set with input sequence size of d = 236. Regularization in the resulting p = 2236-dimensional space was impossible using EN, illustrating the need for EN-S. We first analyzed the peeling algorithm by inspecting the WH spectral representation of DNN once trained on the avGFP landscape. Figure 4a shows the first-order WH coefficients of DNN, recovered using peeling after sampling DNN at 5,074,944 (out of 2236 ≈ 1071) proteins following uniform patterns that induce a sparse-graph code. We repeated the same procedure with an independent set of uniformly subsampled sequences (with random offset) and visualized the recovered first-order WH coefficients in a scatter plot as a function of the recovered coefficients using the first set of proteins in Fig. 4b. When sampled at two different relatively tiny subsets of this massive p = 2236-dimensional space, the peeling algorithm recovered similar first-order coefficients (with R2 = 0.99), without assuming any prior knowledge on the WH coefficients of avGFP being low-order (also see Supplementary Fig. 7). The higher variance of the scatter plot around the center shows the small number of coefficients (30 out of 236) that are differentially recovered under the two subsamplings. The peeling algorithm associated 3.2% and 2.9% of the variation of DNN to higher-order interactions, respectively for the first and second subsampling. We compared the second-order interactions recovered under these subsamplings (Supplementary Fig. 8). Despite the small variation associated with higher-order epistasis, 10% of the recovered second-order interactions were exactly equal, and the rest of the interactions were locally correlated (R2 = 0.60 correlation within blocks of three neighboring interactions). Fig. 4: Inferring epistatic interactions in two large canonical protein landscapes using the scalable Epistatic Net (EN-S) regularizer. a The first-order Walsh-Hadamard (WH) coefficients of unregularized DNN trained on the Aequorea victoria (avGFP) landscape of Sarkisyan et al.2 recovered by the peeling algorithm using a set of 5, 074, 944 uniformly subsampled proteins (out of 2236). b The scatter plot of the first-order WH recovered by EN-S using two independent sets of 5, 074, 944 uniformly subsampled proteins. The recovered coefficients are highly consistent (R2 = 0.99). The higher variance of the scatter plot around the center shows the small number (20 out of 236) of coefficients that are differentially recovered. c DNN with EN-S regularization outperforms the baselines in terms of prediction accuracy in avGFP. d Histogram of the order of epistatic interactions recovered while training the EN-S regularized DNN. e The prediction accuracy gained by the higher-order epistatic interactions when added to a purely linear model. f DNN with EN-S regularization outperforms the baselines in terms of prediction accuracy in the GB1 landscape of Wu et al.1. Error bars show the standard error of the mean (SEM) in 3 independent repeats of the experiments with random splits of the data into training, validation, and test sets. Next, we trained the same DNN architecture with EN-S regularization. Figure 4c shows that the prediction accuracy of DNN with EN-S regularization is higher than the baseline algorithms (Supplementary Data 5). The gap between DNN with and without EN-S regularization is smaller compared to the previously described protein landscapes. We speculate that this is due to the nature of the local landscape of avGFP around the wild-type protein, where most of the variance can be explained by first-order interactions and the rest can be explained by higher-order interactions that are spread throughout the WH spectrum. Figure 4d illustrates the histogram of the order of epistatic interactions recovered by invoking the peeling algorithm in every iteration of the EN-S regularization scheme. Figure 4e depicts the gain in prediction accuracy after adding the recovered interactions to a purely linear model, suggesting that the difference in prediction accuracy of DNN with and without regularization can be explained (approximately) by a collection of a large number of WH coefficients with small magnitude—this analysis further demonstrates the computational power of EN-S in recovering higher-order interactions in such massively large combinatorial space of interactions. Immunoglobulin-binding domain of protein G (GB1) A recent study investigated the fitness landscape of all the 204 = 160,000 variants at four amino acid sites (V39, D40, G41, and V54) in an epistatic region of protein G domain B1, an immunoglobulin-binding protein expressed in Streptococcal bacteria1. One-hot binary encoding of the amino acids results in binary sequences of length d = 80. As EN does not scale to regularize DNNs trained on this landscape, we relied on EN-S. Figure 4f shows the prediction performance of DNN with EN-S regularization compared to the baseline models that were scaled to such a dimension. All the models were trained on a random subset of n = 2000 proteins. EN-S subsamples DNN at 215,040 proteins in order to perform the sparse epistatic regularization, which is about 1018 times smaller than the entire sequence space. Despite such an enormous level of undersampling, the DNN regularized with EN-S consistently outperforms the competing baselines and the EN-S unregularized DNN (ΔR2 > 0.035, P < 0.05, Supplementary Data 5, and Supplementary Fig. 9). The performance gap between the DNNs with and without EN-S regularization is naturally smaller compared to the same gap in the Entacmaea quadricolor fluorescent protein landscape. This is because the protein landscape of Entacmaea quadricolor is defined over 13 mutational sites (with 8192 possible positional interactions and two possible amino acids for each site) while the protein landscape of GB1 is defined over 4 mutational sites (with 16 possible positional interactions and 20 possible amino acids for each site); the former benefits more from promoting sparsity among a larger number of biologically meaningful positional interactions. We showed that several of the functional landscapes in biology have common structures (i.e., inductive bias) in their epistatic interactions that manifest as sparsity in the spectral Walsh-Hadamard (WH) domain. Sparse epistatic regularization of deep neural networks (DNNs) is an effective method to improve their prediction accuracy, especially when the number of available training samples is small compared to the vastness of sequence space. To this end, our Epistatic Net (EN) regularization method combined the advantages offered by the sparsity of biological landscapes with sublinear algorithms in signal processing and coding theory for epistatic regularization of DNNs in the combinatorial space of interactions. Analysis of the recovered higher-order epistatic interactions by the DNNs with and without regularization also revealed the power of EN in finding biologically relevant epistatic interactions. The superior prediction performance of DNNs with EN regularization comes with the additional computational cost of finding the WH transform of the DNN landscape, which increases the computational complexity of the training algorithm by only a linear factor in the product of the length of the sequence and the epistatic sparsity level. While training can be done offline (e.g., on a server) there are avenues for making the algorithm even more efficient such as using the prior knowledge on the maximum order of interaction to constrain the regularization space. In addition, EN regularization can be extended using generalized Fourier transform to more efficiently encode amino acids compared to the more conventional one-hot binary encoding strategies27. Moreover, while this work laid out the algorithmic principles of sparse epistatic regularization in supervised models, unsupervised models, such as Potts model28, Ising model29, and Variational Autoencoders (VAEs)30 can benefit from such regularization scheme as well; it would be tempting to hypothesize that these energy landscapes also have structures that appear as high-order sparse coefficients in WH basis. Overall, our sparse epistatic regularization method expands the machine learning toolkit for inferring and understanding fitness functions in biology. It helps us to visualize, analyze, and regularize the powerful, however less interpretable black-box models in deep learning in terms of their higher-order interactions in the sequence space. We believe that our work will initiate new research directions towards developing hybrid methodologies that draws power from statistical learning, signal processing, coding theory, and physics-inspired deep learning for protein design and engineering. Notation and background Suppose we are given n (experimental) samples \({({{{{{{{{\bf{x}}}}}}}}}_{i},{y}_{i})}_{i = 1}^{n}\), that is, (sequence, value) pairs from a biological landscape, where xi ∈ {−1, +1}d denotes the binary encoding of d mutational sites in a variant and \({y}_{i}\in {\mathbb{R}}\) is its associated fitness value. We are interested in learning a function f(x) that maps all subsets of mutations to fitness values. In other words, we seek to learn a set function \(f({{{{{{{\bf{x}}}}}}}}):{{\mathbb{F}}}^{d}\to {\mathbb{R}}\), where \({{\mathbb{F}}}^{d}\) denotes the space of all the binary vectors of length d. A key theorem31 in mathematics states that any set function (also known as pseudo-Boolean function) f(x) = f(x1, x2, …, xd) can be represented uniquely by a multi-linear polynomial over the hypercube (x1, x2, …, xd) ∈ {−1, +1}d: $$f({x}_{1},{x}_{2},\ldots ,{x}_{d})=\mathop{\sum}\limits_{{{{{{{{\mathcal{S}}}}}}}}\subseteq [d]}{\alpha }_{{{{{{{{\mathcal{S}}}}}}}}}\mathop{\prod}\limits_{i\in {{{{{{{\mathcal{S}}}}}}}}}{x}_{i},$$ where \({{{{{{{\mathcal{S}}}}}}}}\) is a subset of {1, 2, 3, …, d} = [d] and \({\alpha }_{{{{{{{{\mathcal{S}}}}}}}}}\in {\mathbb{R}}\) is the WH transform coefficient (or equivalently the epistatic coefficient) associated with the monomial (interaction) \({\prod }_{i\in {{{{{{{\mathcal{S}}}}}}}}}{x}_{i}\). For example, the pseudo-Boolean function $$f({x}_{1},{x}_{2},{x}_{3},{x}_{4},{x}_{5})=12{x}_{1}{x}_{4}-3{x}_{3}+6{x}_{1}{x}_{2}{x}_{5},$$ defined over d = 5 mutational sites, has three monomials with orders 2, 1, and 3 and WH coefficients 12, − 3, and 6, respectively. The WH transform of this function is sparse with k = 3 non-zero coefficients out of a total of 25 = 32 coefficients. Each monomial can be easily explained, for example, the first monomial in the WH transform, that is 12x1x4, indicates that mutation sites 1 and 4 are interacting and the interaction enriches fitness because the sign of the coefficient is positive. On the hand, the second monomial − 3x3 shows that a mutation at site 3 depletes fitness. The last monomial 6x1x2x5 shows a third-order interaction between mutational sites 1, 2, and 5 which also enrich fitness. If the fitness function is measured (known) for all the combinatorial p = 2d inputs xi, then we can use the Fast WH Transform (FWHT)32 to find the WH coefficients in \({{{{{{{\mathcal{O}}}}}}}}(p{{{{{{\mathrm{log}}}}}}}\,p)\) time complexity. The problem is so-called fully determined in such a scenario. However, as discussed in the introduction, in inferring fitness functions, we typically face problems where the number of observed samples (sequences) n is much smaller than the total number of possible sequences, that is, n ≪ p = 2d; in other words, we are in an underdetermined regime. In full generality, we assume that the data is generated according to a noisy nonlinear model $${y}_{i}={f}_{\theta }({{{{{{{{\bf{x}}}}}}}}}_{i})+{\varepsilon }_{e},$$ where θ are the parameters of the model, εe is a random variable drawn from a Gaussian distribution with zero mean and variance \({\sigma }_{e}^{2}\). Under this setting the maximum likelihood estimate is $${\theta }_{MLE}=\mathop{{{{{{{{\rm{arg}}}}}}}}\ {{{{{{{\rm{max}}}}}}}}}\limits_{\theta }\mathop{\sum }\limits_{i=1}^{n}{({y}_{i}-{f}_{\theta }({{{{{{{{\bf{x}}}}}}}}}_{i}))}^{2}.$$ We denote a deep neural network (DNN) by gθ(x), where θ is a vector of all the weights in DNN. The DNN, gθ(x), takes in a binary input vector xi and predicts the output \(\hat{{y}_{i}}\). Let \({{{{{{{\bf{X}}}}}}}}\in {{\mathbb{R}}}^{p\times d}\) denote a matrix which comprises all the p = 2d enumeration of the binary sequence xi of length d in its rows. We slightly abuse the notation and let \({{{{{{{{\bf{g}}}}}}}}}_{\theta }({{{{{{{\bf{X}}}}}}}})\in {{\mathbb{R}}}^{p}\) denote the real-valued vector of DNN outputs over all these binary sequences. We call this high-dimensional vector the DNN landscape. In order to find the WH transform of the DNN we can multiply the DNN landscape, gθ(X), by the WH matrix, \({{{{{{{\bf{H}}}}}}}}\in {{\mathbb{R}}}^{p\times p}\). The WH matrix H can be defined using the recursive equation $${{{{{{{{\bf{H}}}}}}}}}^{{2}^{d}}={{{{{{{{\bf{H}}}}}}}}}^{2}\otimes {{{{{{{{\bf{H}}}}}}}}}^{{2}^{d-1}},$$ where H2 is the 2 × 2 mother WH matrix defined as \({{{{{{{{\bf{H}}}}}}}}}^{2}=\left[\begin{array}{ll}1&1\\ 1&-1\end{array}\right]\) and ⊗ denotes the Kronecker product. The WH matrix is a symmetric unitary matrix; in other words, (1/2d)HH = I. Each of the 2d columns of H corresponds to a monomial (\({\prod }_{i\in {{{{{{{\mathcal{S}}}}}}}}}{x}_{i}\)) in the pseudo-Boolean representation of set functions and equivalently corresponds to one of the terms in WH transform. In biology literature, this coefficient is known as an epistatic interaction when \(| {{{{{{{\mathcal{S}}}}}}}}| \ge 2\). The WH transform of the DNN can be calculated as \({{{{{{{\bf{H}}}}}}}}{{{{{{{{\bf{g}}}}}}}}}_{\theta }({{{{{{{\bf{X}}}}}}}})\in {{\mathbb{R}}}^{p}\). Note that in this manuscript we assume H is properly normalized to be unitary. Epistatic net (EN) EN regularizes the epistatic interactions in gθ(X) by adding a new WH loss term to the original log-likelihood loss, $$\mathop{\min }\limits_{\theta }\mathop{\sum }\limits_{i=1}^{n}{({y}_{i}-{g}_{\theta }({{{{{{{{\bf{x}}}}}}}}}_{i}))}^{2}+\alpha \parallel{\!\!}{{{{{{{\bf{H}}}}}}}}{{{{{{{{\bf{g}}}}}}}}}_{\theta }({{{{{{{\bf{X}}}}}}}}){\parallel }_{0},$$ where \({{{{{{{\bf{H}}}}}}}}\in {{\mathbb{R}}}^{p\times p}\) is the WH matrix, the ℓ0-norm ∥.∥0 counts the number of non-zero values in the WH transform of the DNN (i.e., Hgθ(X)), and α is a scalar which strikes balance between the log-likelihood loss and the regularization term. The scalar α is set using cross-validation. The ℓ0-norm is a non-convex and non-differentiable term and is not suitable for optimization using the SGD algorithm since the gradient is not well-defined for this term; therefore, following the common practice in convex optimization, we relaxed the ℓ0-norm and approximated it by a convex and differentiable (except at zero) sparsity promoting ℓ1-norm in EN. We will discuss in the next section that in the scalable version of EN, it is more efficient to approximately solve the ℓ0-norm minimization problem using the greedy peeling-decoding algorithm from coding theory, which does not rely on gradient descent optimization. EN approximately solves the following relaxed optimization problem using the SGD algorithm: Note that despite our convex relaxation, this optimization problem is still non-convex since both the log-likelihood loss and the DNN landscape are non-convex (still differentiable) functions. In general, convergence to the global minimum can not be guaranteed due to non-convexity of DNN, however, in practice we observe that SGD converges smoothly to a useful stationary locally optimal point. To avoid convergence to locally optimal points with poor generalization performance, the DNN can be trained multiple times with several random initialization, however, as we have elaborated in the experimental section, for most of the experiments in this paper random Xavier initialization resulted in good generalization using a single initialization (no need for multiple initializations). Scalable Epistatic Net (EN-S) For larger sequences (i.e., d > 25), the optimization algorithm in EN does not scale well with d. There are two factors that prevent EN from scaling to larger sequences: time and space complexity. We elaborate on these two factors. (1) In order to find the DNN landscape, we need to query the DNN p = 2d times. Regardless of how fast DNN inference is, the time complexity of this task grows exponentially with d. For example, it would take years to query the DNN with the simplest structure on all the binary sequences of length d = 236 in the avGFP protein landscape. Furthermore, finding the WH transform of the DNN landscape, even using FWHT with \({{{{{{{\mathcal{O}}}}}}}}(p{{{{{{\mathrm{log}}}}}}}\,p)\) computational cost, will not be possible since the computational cost grows exponentially with d. 2) The WH matrix H is a p × p matrix and the DNN landscape gθ(X) is a p-dimensional vector. Regardless of the time required to find those matrices, they need exponential memory to store, which again becomes infeasible for even moderate values of d. We need a method that scales sublinear in p (i.e., \({{{{{{{\mathcal{O}}}}}}}}(\,{{\mbox{polylog}}}\,p)\)) both in time and space complexity. Here, we develop EN-S to approximately solve our optimization problem efficiently. We first perform a change of variables and define the WH transform of the DNN landscape as u = Hgθ(X) and set it as an explicit constraint in the optimization problem. Following this change of variable, we reformulate the optimization problem in equation (7) as, $$\mathop{\min }\limits_{\theta ,{{{{{{{\bf{u}}}}}}}}}\mathop{\sum }\limits_{i=1}^{n}{({y}_{i}-{g}_{\theta }({{{{{{{{\bf{x}}}}}}}}}_{i}))}^{2}+\alpha \parallel{\!\!}{{{{{{{\bf{u}}}}}}}}{\parallel }_{1}\ \,{{\mbox{subject to}}}\,\ {{{{{{{\bf{u}}}}}}}}={{{{{{{\bf{H}}}}}}}}{{{{{{{{\bf{g}}}}}}}}}_{\theta }({{{{{{{\bf{X}}}}}}}}).$$ This change of variable enables us to use an augmented Lagrangian method to decouple the optimization problem in equation (7) into two subproblems: (1) updating the weights of DNN using SGD, and, 2) finding the WH transform of DNN using a fast greedy algorithm based on sparse-graph codes. The alternating direction method of the multipliers (ADMM) is a variant of the augmented Lagrangian methods that use partial updates for the dual variables and provides a principled framework to decouple the optimization problem above. Following the scaled-dual form of ADMM23, we decoupled the optimization problem above into two separate minimization problems and a dual update. At iteration t, we first fix \({{{{{{{{\bf{u}}}}}}}}}_{t}\in {{\mathbb{R}}}^{p}\) and solve a θ-minimization problem, then fix \({\theta }_{t}\in {{\mathbb{R}}}^{p}\) and solve a u-optimization problem, and finally update the dual variable \(\lambda \in {{\mathbb{R}}}^{p}\) as follows: \(\theta -\,{{\mbox{minimization}}}\,\quad \ {\theta }^{t+1}={{{{{{{\rm{arg}}}}}}}}\ {{{{{{{{\rm{min}}}}}}}}}_{\theta }\mathop{\sum }\nolimits_{i = 1}^{n}\scriptstyle{({y}_{i}-{g}_{\theta }({{{{{{{{\bf{x}}}}}}}}}_{i}))}^{2}+\frac{\rho }{2}\parallel {{{{{{{\bf{H}}}}}}}}{{{{{{{{\bf{g}}}}}}}}}_{\theta }({{{{{{{\bf{X}}}}}}}})-{{{{{{{{\bf{u}}}}}}}}}^{t}+{\lambda }^{t}{\parallel }_{2}^{2}\) \({{{{{{{\bf{u}}}}}}}}-\,{{\mbox{minimization}}}\,\quad {{{{{{{{\bf{u}}}}}}}}}^{t+1}={{{{{{{\rm{arg}}}}}}}}\ {{{{{{{{\rm{min}}}}}}}}}_{{{{{{{{\bf{u}}}}}}}}}\alpha \parallel{\!\!}{{{{{{{\bf{u}}}}}}}}{\parallel }_{1}+\frac{\rho }{2}\parallel{\!\!}{{{{{{{\bf{H}}}}}}}}{{{{{{{{\bf{g}}}}}}}}}_{{\theta }^{t+1}}({{{{{{{\bf{X}}}}}}}})-{{{{{{{\bf{u}}}}}}}}+{\lambda }^{t}{\parallel }_{2}^{2}\) \({{\mbox{dual update}}}\, \ \ \ {\lambda }^{t+1}={\lambda }^{t}+{{{{{{{\bf{H}}}}}}}}{{{{{{{{\bf{g}}}}}}}}}_{{\theta }^{t+1}}({{{{{{{\bf{X}}}}}}}})-{{{{{{{{\bf{u}}}}}}}}}^{t+1}\), where \(\rho \in {\mathbb{R}}\) is a hyperparameter set using cross-validation. Note that the time and space scaling issues remain here and will be addressed momentarily. Assuming an infinite time and space budget, the θ-minimization problem can be tackled using SGD and the u-minimization problem can be solved by projecting \({{{{{{{{\bf{w}}}}}}}}}^{t+1}:={{{{{{{\bf{H}}}}}}}}{{{{{{{{\bf{g}}}}}}}}}_{{\theta }^{t+1}}({{{{{{{\bf{X}}}}}}}})+{\lambda }^{t}\) onto the ℓ1-norm ball of radius ρ/α. This projection can be solved using the soft-thresholding operator in Lasso33: $${{{{{{{{\bf{u}}}}}}}}}_{i}^{t+1}=\left\{\begin{array}{ll}{{{{{{{{\bf{w}}}}}}}}}_{i}^{t+1}-\rho /2\alpha & {{\mbox{if}}}\; {{{{{{{{\bf{w}}}}}}}}}_{i}^{t+1} \, > \, \rho /2\alpha \hfill\\ 0\hfill& {{\mbox{if}}}\; \rho /2\alpha\, \le\, {{{{{{{{\bf{w}}}}}}}}}_{i}^{t+1}\,\le\, \rho /2\alpha \\ {{{{{{{{\bf{w}}}}}}}}}_{i}^{t+1}+\rho /2\alpha & {{\mbox{if}}}\; {{{{{{{{\bf{w}}}}}}}}}_{i}^{t+1}\, < \, \rho /2\alpha .\hfill\end{array}\right.$$ Unfortunately, all the three steps above still have exponential time and space scaling with d. In what follows we will show how to exploit the sparsity of the WH transform of the DNN landscape u = Hgθ(X) to reformulate new minimization steps such that we need to subsample only a logarithmic factor \({{{{{{{\mathcal{O}}}}}}}}(\,{{\mbox{polylog}}}\,p)\) of rows in H and approximately solve these steps in sublinear time and space complexity in p (i.e., at most polynomial in d). We call this regularization scheme EN-S. The first step to arrive at the EN-S regularization scheme is to reformulate the optimizations above such that the WH matrix H appears as a multiplicative term behind the dual variable λ and u. This enables us to convert the u-minimization problem from a ℓ1-norm ball projection to a sparse WH recovery problem with H as the basis, for which we have fast solvers from signal processing and coding theory. Note that \(\parallel{\!\!}{{{{{{{\bf{H}}}}}}}}{{{{{{{{\bf{g}}}}}}}}}_{\theta }({{{{{{{\bf{X}}}}}}}})-{{{{{{{{\bf{u}}}}}}}}}^{t}+{\lambda }^{t}{\parallel }_{2}^{2}=\parallel{\!\!}{{{{{{{{\bf{g}}}}}}}}}_{\theta }({{{{{{{\bf{X}}}}}}}})-{{{{{{{\bf{H}}}}}}}}{{{{{{{{\bf{u}}}}}}}}}^{t}+{{{{{{{\bf{H}}}}}}}}{\lambda }^{t}{\parallel }_{2}^{2}\) and \(\parallel{\!\!}{{{{{{{\bf{H}}}}}}}}{{{{{{{{\bf{g}}}}}}}}}_{{\theta }^{t+1}}({{{{{{{\bf{X}}}}}}}})-{{{{{{{\bf{u}}}}}}}}+{\lambda }^{t}{\parallel }_{2}^{2}=\parallel{\!\!}[{{{{{{{{\bf{g}}}}}}}}}_{{\theta }^{t+1}}({{{{{{{\bf{X}}}}}}}})+{{{{{{{\bf{H}}}}}}}}{\lambda }^{t}]-{{{{{{{\bf{H}}}}}}}}{{{{{{{\bf{u}}}}}}}}{\parallel }_{2}^{2}\) because H is a unitary matrix. Therefore, we can write the optimization steps above as \(\theta -\,{{\mbox{minimization}}}\,\quad \ \scriptstyle{\theta }^{t+1}={{{{{{{\rm{arg}}}}}}}}\ {{{{{{{{\rm{min}}}}}}}}}_{\theta }\mathop{\sum }\nolimits_{i = 1}^{n}{({y}_{i}-{g}_{\theta }({{{{{{{{\bf{x}}}}}}}}}_{i}))}^{2}+\frac{\rho }{2}\parallel {{{{{{{{\bf{g}}}}}}}}}_{\theta }({{{{{{{\bf{X}}}}}}}})-{{{{{{{\bf{H}}}}}}}}{{{{{{{{\bf{u}}}}}}}}}^{t}+{{{{{{{\bf{H}}}}}}}}{\lambda }^{t}{\parallel }_{2}^{2}\) \({{{{{{{\bf{u}}}}}}}}-\,{{\mbox{minimization}}}\,\quad {{{{{{{{\bf{u}}}}}}}}}^{t+1}={{{{{{{\rm{arg}}}}}}}}\ {{{{{{{{\rm{min}}}}}}}}}_{{{{{{{{\bf{u}}}}}}}}}\scriptstyle\alpha \parallel {{{{{{{\bf{u}}}}}}}}{\parallel }_{1}+\frac{\rho }{2}\parallel [{{{{{{{{\bf{g}}}}}}}}}_{{\theta }^{t+1}}({{{{{{{\bf{X}}}}}}}})+{{{{{{{\bf{H}}}}}}}}{\lambda }^{t}]-{{{{{{{\bf{H}}}}}}}}{{{{{{{\bf{u}}}}}}}}{\parallel }_{2}^{2}\) \({{\mbox{dual update}}}\, \ {{{{{{{\bf{H}}}}}}}}{\lambda }^{t+1}={{{{{{{\bf{H}}}}}}}}{\lambda }^{t}+{{{{{{{{\bf{g}}}}}}}}}_{{\theta }^{t+1}}({{{{{{{\bf{X}}}}}}}})-{{{{{{{\bf{H}}}}}}}}{{{{{{{{\bf{u}}}}}}}}}^{t+1}\). Now, the u-minimization problem is to find the WH transform of \({{{{{{{{\bf{g}}}}}}}}}_{{\theta }^{t+1}}({{{{{{{\bf{X}}}}}}}})+{{{{{{{\bf{H}}}}}}}}{\lambda }^{t}\) with an ℓ1-norm sparsity prior. In order to solve this u-minimization problem, we resort to the fast sparsity-enabled tools in signal processing and coding theory. This class of greedy algorithms solves the original ℓ0-norm minimization problem and finds the k-WH sparse landscape (for the specific value of k) in a time and space efficient manner (\({{{{{{{\mathcal{O}}}}}}}}(k\,{{\mbox{polylog}}}\,p)\), i.e., \({{{{{{{\mathcal{O}}}}}}}}(k\,{{\mbox{poly}}}\,d)\)) using sparse-graph codes (see Supplementary Notes for an overview of these methods). To this end, we leverage subsampling of input sequences based on patterns that induce sparse-graph codes22. We denote the rows corresponding to these subsampled sequences as XT, where \(| T| \sim {{{{{{{\mathcal{O}}}}}}}}(k{{{{{{{\mathrm{log}}}}}}}\,}^{2}p)\). The subsampling induces a linear mixing of WH coefficients such that a belief propagation algorithm (peeling-decoding) over a sparse-graph code recovers a p-dimensional noisy landscape with k non-zero WH coefficients in the sublinear sample (i.e., \({{{{{{{\mathcal{O}}}}}}}}(k{{{{{{{\mathrm{log}}}}}}}\,}^{2}p)\)) and time complexity (i.e., \({{{{{{{\mathcal{O}}}}}}}}(k{{{{{{{\mathrm{log}}}}}}}\,}^{3}p)\)) with high probability13,22,24,25 (see Supplementary Materials for a full discussion). This addresses both the time and space scalability issues in solving the u-minimization problem. In order to resolve the time and space scalability issues in the θ-minimization problem and the dual update we introduce a novel approximation. We follow the subsampling patterns dictated by the sparse-graph codes in solving the u-minimization problem, and restrict both the θ-minimization problem and the dual update to those subsamples as well to arrive at, \(\theta -\,{{\mbox{minimization}}}\,\quad \ \scriptstyle{\theta }^{t+1}\;=\;{{{{{{{\rm{arg}}}}}}}}\ {{{{{{{{\rm{min}}}}}}}}}_{\theta }\mathop{\sum }\nolimits_{i = 1}^{n}{({y}_{i}-{g}_{\theta }({{{{{{{{\bf{x}}}}}}}}}_{i}))}^{2}+\frac{\rho }{2}\parallel {{{{{{{{\bf{g}}}}}}}}}_{\theta }({{{{{{{{\bf{X}}}}}}}}}_{T})-{{{{{{{{\bf{H}}}}}}}}}_{T}{{{{{{{{\bf{u}}}}}}}}}^{t}+{\gamma }^{t}{\parallel }_{2}^{2}\) \({{{{{{{\bf{u}}}}}}}}-\,{{\mbox{minimization}}}\,\quad {{{{{{{{\bf{u}}}}}}}}}^{t+1} \scriptstyle\; =\, {{{{{{{\rm{arg}}}}}}}}\ {{{{{{{{\rm{min}}}}}}}}}_{{{{{{{{\bf{u}}}}}}}}}\alpha \parallel {{{{{{{\bf{u}}}}}}}}{\parallel }_{0}+\frac{\rho }{2}\parallel [{{{{{{{{\bf{g}}}}}}}}}_{{\theta }^{t+1}}({{{{{{{{\bf{X}}}}}}}}}_{T})+{\gamma }^{t}]-{{{{{{{{\bf{H}}}}}}}}}_{T}{{{{{{{\bf{u}}}}}}}}{\parallel }_{2}^{2}\) \({{\mbox{dual update}}}\,\ {\gamma }^{t+1}={\gamma }^{t}+{{{{{{{{\bf{g}}}}}}}}}_{{\theta }^{t+1}}({{{{{{{{\bf{X}}}}}}}}}_{T})-{{{{{{{{\bf{H}}}}}}}}}_{T}{{{{{{{{\bf{u}}}}}}}}}^{t+1}\), where \({\gamma }^{t}:={{{{{{{{\bf{H}}}}}}}}}_{T}{\lambda }^{t+1}\in {{\mathbb{R}}}^{| T| }\) and HT comprises the rows of H that are in T. Note that the change of dual variable γt = HTλt+1 is only possible because in all the three steps the dual variable λt+1 appears in the WH basis. Note that while the columns of the subsampled WH matrix HT still live in a p-dimensional space, this matrix is never instantiated in memory because it only appears as HTu, where u is a k-sparse vector. Therefore, HTu is computed on the fly by only finding the columns of the (row-subsampled) WH matrix HT that corresponds to the non-zero values in u. The final EN-S method iterates over these three steps to train the DNN until convergence. We indicate the algorithm to solve each step in brackets: EN-S \(\scriptstyle\theta -\,{{\mbox{minimization}}}\,\quad \ {\theta }^{t+1}={{{{{{{\rm{arg}}}}}}}}\ {{{{{{{{\rm{min}}}}}}}}}_{\theta }\mathop{\sum }\nolimits_{i = 1}^{n}{({y}_{i}-{g}_{\theta }({{{{{{{{\bf{x}}}}}}}}}_{i}))}^{2}+\frac{\rho }{2}\parallel {{{{{{{{\bf{g}}}}}}}}}_{\theta }({{{{{{{{\bf{X}}}}}}}}}_{T})-{{{{{{{{\bf{H}}}}}}}}}_{T}{{{{{{{{\bf{u}}}}}}}}}^{t}+{\gamma }^{t}{\parallel }_{2}^{2}\, [{{{{{\rm{SGD}}}}}}]\) \(\scriptstyle{{{{{{{\bf{u}}}}}}}}-\,{{\mbox{minimization}}}\,\quad {{{{{{{{\bf{u}}}}}}}}}^{t+1}={{{{{{{\rm{arg}}}}}}}}\ {{{{{{{{\rm{min}}}}}}}}}_{{{{{{{{\bf{u}}}}}}}}}\alpha \parallel {{{{{{{\bf{u}}}}}}}}{\parallel }_{0}+\frac{\rho }{2}\parallel [{{{{{{{{\bf{g}}}}}}}}}_{{\theta }^{t+1}}({{{{{{{{\bf{X}}}}}}}}}_{T})+{\gamma }^{t}]-{{{{{{{{\bf{H}}}}}}}}}_{T}{{{{{{{\bf{u}}}}}}}}{\parallel }_{2}^{2}\,[{{{{{\rm{Peeling}}}}}}]\) \({{\mbox{dual update}}}\, \ {\gamma }^{t+1}={\gamma }^{t}+{{{{{{{{\bf{g}}}}}}}}}_{{\theta }^{t+1}}({{{{{{{{\bf{X}}}}}}}}}_{T})-{{{{{{{{\bf{H}}}}}}}}}_{T}{{{{{{{{\bf{u}}}}}}}}}^{t+1}\,[{{{{{\rm{Directly}}}}}}\, {{{{{\rm{computed}}}}}}]\) All the three steps above in the EN-S method scale sublinearly with p (i.e., at most polynomial with d) both in terms of time and space complexity. Experimental setup The architecture of DNN was selected in isolation (i.e., without any WH regularization). In our architecture search, we considered a four-layer fully connected DNN with batch normalization and leaky ReLU as the activation function. The dimension of the layers was set to d × fd, fd × fd, fd × d, and the dimension of the final layer was d × 1, where f is an expansion factor. We searched for a value of f that resulted in the best generalization accuracy on an independent data set—a prediction task on DNA repair landscapes13 which we did not use for evaluation in this paper. DNN prediction performance was stable around f = 10 with the highest validation accuracy on the independent data set. We selected f = 10 in all our experiments, except for the experiments done on the avGFP landscape2, where due to the sheer dimensionality of the problem (i.e., d = 236), we set f = 1 (on limited independent tests with f = 10 on the same landscape, we observed no considerable difference in prediction accuracy). The weights of the DNN were always initialized with the Xavier uniform initialization34. We used the exact same initialization (random seed) for the baseline DNN with and without EN(-S) regularization to ensure that we solely capture the effect of regularization and not the variations due to the initialization of DNN. We used the Adam optimizer in all the steps of the methods requiring SGD and a learning rate of 0.001, which resulted in the best validation accuracy. We set α = 0.1 in EN. For the DNN with EN(-S) regularization, a learning rate of 0.01 resulted in the best validation accuracy. In EN-S, the hyperparameters α and ρ have to be jointly set since they are dependent. We set α = 1 and ρ = 0.01 in EN-S although other value pairs could have resulted in the same accuracy. The validation accuracy of DNN was monitored and used for early stopping to avoid over-fitting based on the performance on a hold-out validation set (with a maximum of 1000 epochs). We used the exact same validation set to perform hyperparameter tuning of the baseline algorithms, including the Lasso family, random forest, and gradient boosted trees. For the family of Lasso regression, we performed an extra step to improve the prediction performance. We selected the top most recovered coefficients and performed ordinary least squares (OLS) on the reduced problem. This step improves the robustness and the prediction accuracy of Lasso35. Therefore, in addition to the standard λ regularization parameter, which strikes a balance between sparsity and the fidelity term (i.e., the mean squared error), we also did hyperparameter tuning for the number of top coefficients in the OLS (note that the regular Lasso is included in our hyperparameter search and appears when all the non-zero coefficients are selected to perform OLS). We did a grid search over the hyperparameter λ and the number of top coefficients in Lasso. For λ we considered 50 values spanning the range [10−7, 1]. Overall, this comprised of an exhaustive hyperparameter search to make sure the best performance of Lasso is being captured. For training gradient boosted trees and random forests baselines, we used packages from sklearn in python. We did hyperparameter tuning for max depth and the number of estimators, using the default values for all other parameters. For max depth, we considered parameters ranging from 1 to a constant times the maximum number of mutations in the fitness function (i.e., d), for the number of estimators we considered values in {10, 50, 100, 200, 300, 400, 500, 1000, 2000, 3000}, and chose the pair that resulted in best validation accuracy. As a general trend, we observed that larger numbers of estimators result in higher validation accuracies before they saturate. Here, we report the hyperparameters that resulted in the highest validation accuracy, that is, the ones we selected in our experiments. For the avGFP landscape, we set the number of estimators to 300 and max depth to 11 for gradient boosted trees and set the number of estimators to 100 and max depth to 55 for random forests. We set λ = 1 × 10−4 for Lasso regression when considering up to first-order interactions and λ = 1 × 8−4 when considering up to second-order interactions. For the GB1 landscape, we set the number of estimators to 100 and max depth to 2 for both gradient boosted trees and random forests. We set λ = 7 × 10−3 for Lasso regression when considering up to first-order interactions and λ = 2.5 × 10−2 when considering up to second-order interactions. For the protein landscape in Fig. 3, we set the number of estimators to 3000 and the max depth varied between the values in the sets {1, 2, 3, 4} and {1, 2, …, 15} across the random repeats of the experiments with different train, test, and validation set, respectively for gradient boosted trees and random forest; the value with the best validation performance was selected for each repeat. For the bacterial landscapes in Fig. 2, we set the number of estimators to 300 and the max depth varied between the values in the set {1, 2, 3} across the random repeats of the experiments with different train, test, and validation set; the value with the best validation performance was selected for each repeat. In all the relevant protein and biological data sets, we performed a two-sided T-test for the null hypothesis that the independent prediction from DNN with and without EN regularization (across random Xavier initialization) has identical average (expected) values and reported the p-values. Prepossessing the fitness landscapes For some of the landscapes tested in this paper, we followed the Box-Cox power transform method as described in ref. 18 to remove possible global nonlinearities from the landscape. Although the effect of removing such nonlinearities was small in our analysis, global nonlinearities in general can produce high-order epistatic interactions that are not truly needed. Removing these nonlinearities can reduce noise and increase epistatic sparsity. Nevertheless, one can completely ignore this preprocessing step and rely on DNN with EN regularization to capture the global nonlinearites and infer the fitness landscape for prediction purposes. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. The canonical bacterial fitness data used in Fig. 2 are available in the github repository associated with the Ref. 18. The E. quadricolor fluorescent protein data used in Fig. 3 are available in Supplementary Data 3 of https://doi.org/10.1038/s41467-019-12130-8 Ref. 3. The avGFP protein data used in Fig. 4 are available in the figshare data repository under accession code 3102154. The GB1 protein data used in Fig. 4 are available in Supplementary file 1 and Supplementary file 2 of https://elifesciences.org/articles/16965/figures Ref. 1. All other data generated or analyzed in this study are included in this published article (and in its accompanying Supplementary Information and Supplementary Data). A software for the Epistatic Net regularization algorithms has been developed in Python and is publicly available in our github repository at https://github.com/amirmohan/epistatic-net36. Wu, N. C., Dai, L., Olson, C. A., Lloyd-Smith, J. O. & Sun, R. Adaptation in protein fitness landscapes is facilitated by indirect paths. eLife 5, e16965 (2016). Sarkisyan, K. et al. Local fitness landscape of the green fluorescent protein. Nature 533, 397–401 (2016). Poelwijk, F., Socolich, M. & Ranganathan, R. Learning the pattern of epistasis linking genotype and phenotype in a protein. Nat. Commun. 10, 1–11 (2019). Kopsidas, G. et al. RNA mutagenesis yields highly diverse mRNA libraries for in vitroprotein evolution. BMC Biotechnol. 7, 18 (2007). Sanjuán, R. Mutational fitness effects in RNA and single-stranded DNA viruses: common patterns revealed by site-directed mutagenesis studies. Philos. Trans. R. Soc. B: Biol. Sci. 365, 1975–1982 (2010). Takata, M. A. et al. Global synonymous mutagenesis identifies cis-acting RNA elements that regulate HIV-1 splicing and replication. PLoS Pathogens 14, e1006824 (2018). Weinreich, D., Delaney, N., DePristo, M. & Hartl, D. Darwinian evolution can follow only very few mutational paths to fitter proteins. Science 312, 111–114 (2006). Hall, D., Agan, M. & Pope, S. Fitness epistasis among 6 biosynthetic loci in the budding yeast Saccharomyces cerevisiae. J. Heredity 101, S75–S84 (2010). Khan, A., Dinh, D., Schneider, D., Lenski, R. & Cooper, T. Negative epistasis between beneficial mutations in an evolving bacterial population. Science 332, 1193–1196 (2011). De Visser, J. A. G. & Krug, J. Empirical fitness landscapes and the predictability of evolution. Nat. Rev. Genet. 15, 480–490 (2014). Listgarten, J. et al. Prediction of off-target activities for the end-to-end design of CRISPR guide RNAs. Nat. Biomed. Eng. 2, 38–47 (2018). Leenay, R. T. et al. Large dataset enables prediction of repair after CRISPR-Cas9 editing in primary T cells. Nat. Biotechnol. 37, 1034–1037 (2019). Aghazadeh, A., Ocal, O. & Ramchandran, K. CRISPRLand: Interpretable large-scale inference of DNA repair landscape based on a spectral approach. Bioinformatics 36, i560–i568 (2020). Wu, Z., Kan, S. J., Lewis, R. D., Wittmann, B. J. & Arnold, F. H. Machine learning-assisted directed protein evolution with combinatorial libraries. Proc. Natl Acad. Sci. 116, 8852–8858 (2019). Wolf, J. B., Brodie, E. D. & Wade, M. J., Epistasis and the Evolutionary Process. Oxford University Press (2000). Cordell, H. J. Epistasis: what it means, what it doesn't mean, and statistical methods to detect it in humans. Human Mol. Genet. 11, 2463–2468 (2002). Sailer, Z. R. & Harms, M. J. High-order epistasis shapes evolutionary trajectories. PLoS Comput. Biol. 13, e1005541 (2017). Sailer, Z. & Harms, M. Detecting high-order epistasis in nonlinear genotype-phenotype maps. Genetics 205, 1079–1088 (2017). Jiang, R., Tang, W., Wu, X. & Fu, W. A random forest approach to the detection of epistatic interactions in case-control studies. BMC Bioinform. 10, 1–12 (2009). Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learning Res. 15, 1929–1958 (2014). Ballal, A. et al. Sparse epistatic patterns in the evolution of Terpene synthases. Mol. Biol. Evol. 37, 7, 1907–1924 (2020). Xiao, L., Bradley, J., Pawar, S. & Ramchandran, K. The SPRIGHT algorithm for robust sparse Hadamard transforms. IEEE Intl. Symp. Info. Theory, 1857–1861 (2014). Boyd, S., Parikh, N. & Chu, E. Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers. now Publishers Inc. (2011). Li, X., Bradley, J., Pawar, S. & Ramchandran, K. SPRIGHT: A fast and robust framework for sparse Walsh-Hadamard transform. Preprint at https://arxiv.org/abs/1508.06336 (2015). Li, X. & Ramchandran, K. An active learning framework using sparse-graph codes for sparse polynomials and graph sketching. Proc. 28th Intl. Conf. Neural Info. Process. Syst. 2, 2170–2178 (2015). Romero, P. A., Krause, A. & Arnold, F. H. Navigating the protein fitness landscape with gaussian processes. Proc. Natl Acad. Sci. 110, E193–E201 (2013). Article ADS MathSciNet CAS Google Scholar Brookes, D. H., Aghazadeh, A. and Listgarten, J. On the sparsity of fitness functions and implications for learning. bioRxiv https://doi.org/10.1101/2021.05.24.445506 (2021). Wu, F.-Y. The Potts model. Rev. Mod. Phys. 54, 235 (1982). Article ADS MathSciNet Google Scholar McCoy, B. M. & Wu, T. T. The two-dimensional Ising model. Harvard University Press (2014). Doersch, C. Tutorial on variational autoencoders. https://arxiv.org/abs/1606.05908 (2016). Boros, E. & Hammer, P. Pseudo-Boolean optimization. Discrete Appl. Math. 123, 155–225 (2002). Cochran, W. T. et al. What is the fast Fourier transform? Proc. IEEE 55, 1664–1674 (1967). Figueiredo, M., Nowak, R. & Wright, S. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 1, 586–597 (2007). Boris, H. & Rolnick, D. How to start training: the effect of initialization and architecture. Proc. 32th Intl. Conf. Neural Info. Process. Syst. 1, 569–579 (2018). Friedman, J., Hastie, T. & Tibshirani, R. The Elements of Statistical Learning. Springer Series in Statistics New York (2001). Aghazadeh, A. et al. Epistatic Net Software https://doi.org/10.5281/zenodo.5111310 (2021). A.A., O.O., and K.R. were supported by the NSF (1703678) and ARO (W911NF2110117). H.N. was supported by the National Library of Medicine of the NIH (T32LM012417); the content is solely the responsibility of the author and does not necessarily represent the official views of the NIH. O.O.K. was supported by the NSF (1748692). D.H.B and J.L. were supported by the DOE, Office of Biological and Environmental Research, Genomic Science Program Lawrence Livermore National Laboratory's Secure Biosystems Design Scientific Focus Area (SCW1710). The authors thank Clara Wong-Fannjiang for insightful discussions. Department of Electrical Engineering and Computer Sciences, Berkeley, CA, USA Amirali Aghazadeh, Orhan Ocal, Yijie Huang, O. Ozan Koyluoglu, Jennifer Listgarten & Kannan Ramchandran Center for Computational Biology, Berkeley, CA, USA Hunter Nisonoff & Jennifer Listgarten Biophysics Graduate Group, University of California, Berkeley, CA, USA David H. Brookes Amirali Aghazadeh Hunter Nisonoff Orhan Ocal Yijie Huang O. Ozan Koyluoglu Jennifer Listgarten Kannan Ramchandran A.A., H.N., and O.O. designed research; A.A., H.N., O.O., D.B., and Y.H. conducted research; A.A., O.O., O.O.K., J.L., and K.R. wrote the manuscript. Correspondence to Kannan Ramchandran. Jennifer Listgarten is on the Scientific Advisory Board for Foresite Labs and Patch Biosciences. Other authors declare no competing interests. Peer review information Nature Communications thanks Raghav Shroff and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Peer Review File Description of Additional Supplementary Files Supplementary Data 1 Aghazadeh, A., Nisonoff, H., Ocal, O. et al. Epistatic Net allows the sparse spectral regularization of deep neural networks for inferring fitness functions. Nat Commun 12, 5225 (2021). https://doi.org/10.1038/s41467-021-25371-3 Current progress and open challenges for applying deep learning across the biosciences Nicolae Sapoval Todd J. Treangen Learning protein fitness models from evolutionary and assay-labeled data Chloe Hsu Nature Biotechnology (2022) Experimental exploration of a ribozyme neutral network using evolutionary algorithm and deep learning Rachapun Rotrattanadumrong Yohei Yokobayashi Editors' Highlights Nature Communications (Nat Commun) ISSN 2041-1723 (online)
CommonCrawl
Introduction to Theoretical Computer Science p Preface p.1 To the student p.1.1 Is the effort worth it? p.2 To potential instructors p.3 Acknowledgements 0.1 Integer multiplication: an example of an algorithm 0.2 Extended Example: A faster way to multiply (optional) 0.3 Algorithms beyond arithmetic 0.4 On the importance of negative results 0.5 Roadmap to the rest of this book 0.5.1 Dependencies between chapters 0.6 Exercises 0.7 Bibliographical notes 1 Mathematical Background 1.1 This chapter: a reader's manual 1.2 A quick overview of mathematical prerequisites 1.3 Reading mathematical texts 1.3.1 Definitions 1.3.2 Assertions: Theorems, lemmas, claims 1.3.3 Proofs 1.4 Basic discrete math objects 1.4.1 Sets 1.4.2 Special sets 1.4.3 Functions 1.4.4 Graphs 1.4.5 Logic operators and quantifiers 1.4.6 Quantifiers for summations and products 1.4.7 Parsing formulas: bound and free variables 1.4.8 Asymptotics and Big-O notation 1.4.9 Some "rules of thumb" for Big-O notation 1.5 Proofs 1.5.1 Proofs and programs 1.5.2 Proof writing style 1.5.3 Patterns in proofs 1.6 Extended example: Topological Sorting 1.6.1 Mathematical induction 1.6.2 Proving the result by induction 1.6.3 Minimality and uniqueness 1.7 This book: notation and conventions 1.7.1 Variable name conventions 1.7.2 Some idioms 2 Computation and Representation 2.1 Defining representations 2.1.1 Representing natural numbers 2.1.2 Meaning of representations (discussion) 2.2 Representations beyond natural numbers 2.2.1 Representing (potentially negative) integers 2.2.2 Two's complement representation (optional) 2.2.3 Rational numbers and representing pairs of strings 2.3 Representing real numbers 2.4 Cantor's Theorem, countable sets, and string representations of the real numbers 2.4.1 Corollary: Boolean functions are uncountable 2.4.2 Equivalent conditions for countability 2.5 Representing objects beyond numbers 2.5.1 Finite representations 2.5.2 Prefix-free encoding 2.5.3 Making representations prefix-free 2.5.4 "Proof by Python" (optional) 2.5.5 Representing letters and text 2.5.6 Representing vectors, matrices, images 2.5.7 Representing graphs 2.5.8 Representing lists and nested lists 2.5.9 Notation 2.6 Defining computational tasks as mathematical functions 2.6.1 Distinguish functions from programs! 3 Defining computation 3.1 Defining computation 3.2 Computing using AND, OR, and NOT. 3.2.1 Some properties of AND and OR 3.2.2 Extended example: Computing \ensuremath{\mathit{XOR}} from \ensuremath{\mathit{AND}}, \ensuremath{\mathit{OR}}, and \ensuremath{\mathit{NOT}} 3.2.3 Informally defining "basic operations" and "algorithms" 3.3 Boolean Circuits 3.3.1 Boolean circuits: a formal definition 3.4 Straight-line programs 3.4.1 Specification of the AON-CIRC programming language 3.4.2 Proving equivalence of AON-CIRC programs and Boolean circuits 3.5 Physical implementations of computing devices (digression) 3.5.1 Transistors 3.5.2 Logical gates from transistors 3.5.3 Biological computing 3.5.4 Cellular automata and the game of life 3.5.5 Neural networks 3.5.6 A computer made from marbles and pipes 3.6 The NAND function 3.6.1 NAND Circuits 3.6.2 More examples of NAND circuits (optional) 3.6.3 The NAND-CIRC Programming language 3.7 Equivalence of all these models 3.7.1 Circuits with other gate sets 3.7.2 Specification vs. implementation (again) 3.9 Biographical notes 4 Syntactic sugar, and computing every function 4.1 Some examples of syntactic sugar 4.1.1 User-defined procedures 4.1.2 Proof by Python (optional) 4.1.3 Conditional statements 4.2 Extended example: Addition and Multiplication (optional) 4.3 The LOOKUP function 4.3.1 Constructing a NAND-CIRC program for \ensuremath{\mathit{LOOKUP}} 4.4 Computing every function 4.4.1 Proof of NAND's Universality 4.4.2 Improving by a factor of n (optional) 4.5 Computing every function: An alternative proof 4.6 The class \ensuremath{\mathit{SIZE}}_{n,m}(s) 5 Code as data, data as code 5.1 Representing programs as strings 5.2 Counting programs, and lower bounds on the size of NAND-CIRC programs 5.2.1 Size hierarchy theorem (optional) 5.3 The tuples representation 5.3.1 From tuples to strings 5.4 A NAND-CIRC interpreter in NAND-CIRC 5.4.1 Efficient universal programs 5.4.2 A NAND-CIRC interpeter in "pseudocode" 5.4.3 A NAND interpreter in Python 5.4.4 Constructing the NAND-CIRC interpreter in NAND-CIRC 5.5 A Python interpreter in NAND-CIRC (discussion) 5.6 The physical extended Church-Turing thesis (discussion) 5.6.1 Attempts at refuting the PECTT 5.7 Recap of Part I: Finite Computation 6 Functions with Infinite domains, Automata, and Regular expressions 6.1 Functions with inputs of unbounded length 6.1.1 Varying inputs and outputs 6.1.2 Formal Languages 6.1.3 Restrictions of functions 6.2 Deterministic finite automata (optional) 6.2.1 Anatomy of an automaton (finite vs. unbounded) 6.2.2 DFA-computable functions 6.3 Regular expressions 6.3.1 Algorithms for matching regular expressions 6.4 Efficient matching of regular expressions (optional) 6.4.1 Matching regular expressions using DFAs 6.4.2 Equivalence of regular expressions and automata 6.4.3 Closure properties of regular expressions 6.5 Limitations of regular expressions and the pumping lemma 6.6 Answering semantic questions about regular expressions 7 Loops and infinity 7.1 Turing Machines 7.1.1 Extended example: A Turing machine for palindromes 7.1.2 Turing machines: a formal definition 7.1.3 Computable functions 7.1.4 Infinite loops and partial functions 7.2 Turing machines as programming languages 7.2.1 The NAND-TM Programming language 7.2.2 Sneak peak: NAND-TM vs Turing machines 7.2.3 Examples 7.3 Equivalence of Turing machines and NAND-TM programs 7.3.1 Specification vs implementation (again) 7.4 NAND-TM syntactic sugar 7.4.1 "GOTO" and inner loops 7.5 Uniformity, and NAND vs NAND-TM (discussion) 8 Equivalent models of computation 8.1 RAM machines and NAND-RAM 8.2 The gory details (optional) 8.2.1 Indexed access in NAND-TM 8.2.2 Two dimensional arrays in NAND-TM 8.2.3 All the rest 8.3 Turing equivalence (discussion) 8.3.1 The "Best of both worlds" paradigm 8.3.2 Let's talk about abstractions 8.3.3 Turing completeness and equivalence, a formal definition (optional) 8.4 Cellular automata 8.4.1 One dimensional cellular automata are Turing complete 8.4.2 Configurations of Turing machines and the next-step function 8.5 Lambda calculus and functional programming languages 8.5.1 Applying functions to functions 8.5.2 Obtaining multi-argument functions via Currying 8.5.3 Formal description of the λ calculus 8.5.4 Infinite loops in the λ calculus 8.6 The "Enhanced" λ calculus 8.6.1 Computing a function in the enhanced λ calculus 8.6.2 Enhanced λ calculus is Turing-complete 8.7 From enhanced to pure λ calculus 8.7.1 List processing 8.7.2 The Y combinator, or recursion without recursion 8.8 The Church-Turing Thesis (discussion) 8.8.1 Different models of computation 8.10 Bibliographical notes 9 Universality and uncomputability 9.1 Universality or a meta-circular evaluator 9.1.1 Proving the existence of a universal Turing Machine 9.1.2 Implications of universality (discussion) 9.2 Is every function computable? 9.3 The Halting problem 9.3.1 Is the Halting problem really hard? (discussion) 9.3.2 A direct proof of the uncomputability of \ensuremath{\mathit{HALT}} (optional) 9.4 Reductions 9.4.1 Example: Halting on the zero problem 9.5 Rice's Theorem and the impossibility of general software verification 9.5.1 Rice's Theorem 9.5.2 Halting and Rice's Theorem for other Turing-complete models 9.5.3 Is software verification doomed? (discussion) 10 Restricted computational models 10.1 Turing completeness as a bug 10.2 Context free grammars 10.2.1 Context-free grammars as a computational model 10.2.2 The power of context free grammars 10.2.3 Limitations of context-free grammars (optional) 10.3 Semantic properties of context free languages 10.3.1 Uncomputability of context-free grammar equivalence (optional) 10.4 Summary of semantic properties for regular expressions and context-free grammars 10.5 Exercises 10.6 Bibliographical notes 11 Is every theorem provable? 11.1 Hilbert's Program and Gödel's Incompleteness Theorem 11.1.1 Defining "Proof Systems" 11.2 Gödel's Incompleteness Theorem: Computational variant 11.3 Quantified integer statements 11.4 Diophantine equations and the MRDP Theorem 11.5 Hardness of quantified integer statements 11.5.1 Step 1: Quantified mixed statements and computation histories 11.5.2 Step 2: Reducing mixed statements to integer statements 12 Efficient computation: An informal introduction 12.1 Problems on graphs 12.1.1 Finding the shortest path in a graph 12.1.2 Finding the longest path in a graph 12.1.3 Finding the minimum cut in a graph 12.1.4 Min-Cut Max-Flow and Linear programming 12.1.5 Finding the maximum cut in a graph 12.1.6 A note on convexity 12.2 Beyond graphs 12.2.1 SAT 12.2.2 Solving linear equations 12.2.3 Solving quadratic equations 12.3 More advanced examples 12.3.1 Determinant of a matrix 12.3.2 Permanent of a matrix 12.3.3 Finding a zero-sum equilibrium 12.3.4 Finding a Nash equilibrium 12.3.5 Primality testing 12.3.6 Integer factoring 12.4 Our current knowledge 12.7 Further explorations 13 Modeling running time 13.1 Formally defining running time 13.1.1 Polynomial and Exponential Time 13.2 Modeling running time using RAM Machines / NAND-RAM 13.3 Extended Church-Turing Thesis (discussion) 13.4 Efficient universal machine: a NAND-RAM interpreter in NAND-RAM 13.4.1 Timed Universal Turing Machine 13.5 The time hierarchy theorem 13.6 Non-uniform computation 13.6.1 Oblivious NAND-TM programs 13.6.2 "Unrolling the loop": algorithmic transformation of Turing Machines to circuits 13.6.3 Can uniform algorithms simulate non-uniform ones? 13.6.4 Uniform vs. Non-uniform computation: A recap 14 Polynomial-time reductions 14.1 Formal definitions of problems 14.2 Polynomial-time reductions 14.2.1 Whistling pigs and flying horses 14.3 Reducing 3SAT to zero one and quadratic equations 14.3.1 Quadratic equations 14.4 The independent set problem 14.5 Some exercises and anatomy of a reduction. 14.5.1 Dominating set 14.5.2 Anatomy of a reduction 14.6 Reducing Independent Set to Maximum Cut 14.7 Reducing 3SAT to Longest Path 14.7.1 Summary of relations 15 NP, NP completeness, and the Cook-Levin Theorem 15.1 The class \mathbf{NP} 15.1.1 Examples of functions in \mathbf{NP} 15.1.2 Basic facts about \mathbf{NP} 15.2 From \mathbf{NP} to 3SAT: The Cook-Levin Theorem 15.2.1 What does this mean? 15.2.2 The Cook-Levin Theorem: Proof outline 15.3 The \ensuremath{\mathit{NANDSAT}} Problem, and why it is \mathbf{NP} hard 15.4 The 3\ensuremath{\mathit{NAND}} problem 15.5 From 3\ensuremath{\mathit{NAND}} to 3\ensuremath{\mathit{SAT}} 15.6 Wrapping up 16 What if P equals NP? 16.1 Search-to-decision reduction 16.2 Optimization 16.2.1 Example: Supervised learning 16.2.2 Example: Breaking cryptosystems 16.3 Finding mathematical proofs 16.4 Quantifier elimination (advanced) 16.4.1 Application: self improving algorithm for 3\ensuremath{\mathit{SAT}} 16.5 Approximating counting problems and posterior sampling (advanced, optional) 16.6 What does all of this imply? 16.7 Can \mathbf{P} \neq \mathbf{NP} be neither true nor false? 16.8 Is \mathbf{P}=\mathbf{NP} "in practice"? 16.9 What if \mathbf{P} \neq \mathbf{NP}? 16.10 Exercises 16.11 Bibliographical notes 17 Space bounded computation 18 Probability Theory 101 18.1 Random coins 18.1.1 Random variables 18.1.2 Distributions over strings 18.1.3 More general sample spaces 18.2 Correlations and independence 18.2.1 Independent random variables 18.2.2 Collections of independent random variables 18.3 Concentration and tail bounds 18.3.1 Chebyshev's Inequality 18.3.2 The Chernoff bound 18.3.3 Application: Supervised learning and empirical risk minimization 19 Probabilistic computation 19.1 Finding approximately good maximum cuts 19.1.1 Amplifying the success of randomized algorithms 19.1.2 Success amplification 19.1.3 Two-sided amplification 19.1.5 Solving SAT through randomization 19.1.6 Bipartite matching 19.4 Acknowledgements 20 Modeling randomized computation 20.1 Modeling randomized computation 20.1.1 An alternative view: random coins as an "extra input" 20.1.2 Success amplification of two-sided error algorithms 20.2 \mathbf{BPP} and \mathbf{NP} completeness 20.3 The power of randomization 20.3.1 Solving \mathbf{BPP} in exponential time 20.3.2 Simulating randomized algorithms by circuits 20.4 Derandomization 20.4.1 Pseudorandom generators 20.4.2 From existence to constructivity 20.4.3 Usefulness of pseudorandom generators 20.5 \mathbf{P}=\mathbf{NP} and \mathbf{BPP} vs \mathbf{P} 20.6 Non-constructive existence of pseudorandom generators (advanced, optional) 21 Cryptography 21.1 Classical cryptosystems 21.2 Defining encryption 21.3 Defining security of encryption 21.4 Perfect secrecy 21.4.1 Example: Perfect secrecy in the battlefield 21.4.2 Constructing perfectly secret encryption 21.5 Necessity of long keys 21.6 Computational secrecy 21.6.1 Stream ciphers or the "derandomized one-time pad" 21.7 Computational secrecy and \mathbf{NP} 21.8 Public key cryptography 21.8.1 Defining public key encryption 21.8.2 Diffie-Hellman key exchange 21.9 Other security notions 21.10 Magic 21.10.1 Zero knowledge proofs 21.10.2 Fully homomorphic encryption 21.10.3 Multiparty secure computation 22 Proofs and algorithms 23 Quantum computing 23.1 The double slit experiment 23.2 Quantum amplitudes 23.2.1 Linear algebra quick review 23.3 Bell's Inequality 23.4 Quantum weirdness 23.5 Quantum computing and computation - an executive summary 23.6 Quantum systems 23.6.1 Quantum amplitudes 23.6.2 Quantum systems: an executive summary 23.7 Analysis of Bell's Inequality (optional) 23.8 Quantum computation 23.8.1 Quantum circuits 23.8.2 QNAND-CIRC programs (optional) 23.8.3 Uniform computation 23.9 Physically realizing quantum computation 23.10 Shor's Algorithm: Hearing the shape of prime factors 23.10.1 Period finding 23.10.2 Shor's Algorithm: A bird's eye view 23.11 Quantum Fourier Transform (advanced, optional) 23.11.1 Quantum Fourier Transform over the Boolean Cube: Simon's Algorithm 23.11.2 From Fourier to Period finding: Simon's Algorithm (advanced, optional) 23.11.3 From Simon to Shor (advanced, optional) Modeling running time See any bugs/typos/confusing explanations? Open a GitHub issue. You can also comment below ★ See also the PDF version of this chapter (better formatting/references) ★ Formally modeling running time, and in particular notions such as \(O(n)\) or \(O(n^3)\) time algorithms. The classes \(\mathbf{P}\) and \(\mathbf{EXP}\) modelling polynomial and exponential time respectively. The time hierarchy theorem, that in particular says that for every \(k \geq 1\) there are functions we can compute in \(O(n^{k+1})\) time but can not compute in \(O(n^k)\) time. The class \(\mathbf{P_{/poly}}\) of non-uniform computation and the result that \(\mathbf{P} \subseteq \mathbf{P_{/poly}}\) "When the measure of the problem-size is reasonable and when the sizes assume values arbitrarily large, an asymptotic estimate of … the order of difficulty of [an] algorithm .. is theoretically important. It cannot be rigged by making the algorithm artificially difficult for smaller sizes", Jack Edmonds, "Paths, Trees, and Flowers", 1963 Max Newman: It is all very well to say that a machine could … do this or that, but … what about the time it would take to do it? Alan Turing: To my mind this time factor is the one question which will involve all the real technical difficulty. BBC radio panel on "Can automatic Calculating Machines Be Said to Think?", 1952 In Chapter 12 we saw examples of efficient algorithms, and made some claims about their running time, but did not give a mathematically precise definition for this concept. We do so in this chapter, using the models of Turing machines and RAM machines (or equivalently NAND-TM and NAND-RAM) we have seen before. The running time of an algorithm is not a fixed number since any non-trivial algorithm will take longer to run on longer inputs. Thus, what we want to measure is the dependence between the number of steps the algorithms takes and the length of the input. In particular we care about the distinction between algorithms that take at most polynomial time (i.e., \(O(n^c)\) time for some constant \(c\)) and problems for which every algorithm requires at least exponential time (i.e., \(\Omega(2^{n^c})\) for some \(c\)). As mentioned in Edmond's quote in Chapter 12, the difference between these two can sometimes be as important as the difference between being computable and uncomputable. 13.1: Overview of the results of this chapter. In this chapter we formally define what it means for a function to be computable in a certain number of steps. As discussed in Chapter 12, running time is not a number, rather what we care about is the scaling behevaiour of the number of steps as the input size grows. We can use either Turing machines or RAM machines to give such a formal definition - it turns out that this doesn't make a difference at the resolution we care about. We make several important definitions and prove some important theorems in this chapter. We will define the main time complexity classes we use in this book, and also show the Time Hierarchy Theorem which states that given more resources (more time steps per input size) we can compute more functions. To put this in more "mathy" language, in this chapter we define what it means for a function \(F:\{0,1\}^* \rightarrow \{0,1\}^*\) to be computable in time \(T(n)\) steps, where \(T\) is some function mapping the length \(n\) of the input to the number of computation steps allowed. Using this definition we will do the following (see also Figure 13.1): We define the class \(\mathbf{P}\) of Boolean functions that can be computed in polynomial time and the class \(\mathbf{EXP}\) of functions that can be computed in exponential time. Note that \(\mathbf{P} \subseteq \mathbf{EXP}\) if we can compute a function in polynomial time, we can certainly compute it in exponential time. We show that the times to compute a function using a Turing machine and using a RAM machine (or NAND-RAM program) are polynomially related. In particular this means that the classes \(\mathbf{P}\) and \(\mathbf{EXP}\) are identical regardless of whether they are defined using Turing machines or RAM machines / NAND-RAM programs. We give an efficient universal NAND-RAM program and use this to establish the time hierarchy theorem that in particular implies that \(\mathbf{P}\) is a strict subset of \(\mathbf{EXP}\). We relate the notions defined here to the non-uniform models of Boolean circuits and NAND-CIRC programs defined in Chapter 3. We define \(\mathbf{P_{/poly}}\) to be the class of functions that can be computed by a sequence of polynomial-sized circuits. We prove that \(\mathbf{P} \subseteq \mathbf{P_{/poly}}\) and that \(\mathbf{P_{/poly}}\) contains uncomputable functions. Formally defining running time Our models of computation such Turing machines, NAND-TM and NAND-RAM programs and others all operate by executing a sequence of instructions on an input one step at a time. We can define the running time of an algorithm \(M\) in one of these models by measuring the number of steps \(M\) takes on input \(x\) as a function of the length \(|x|\) of the input. We start by defining running time with respect to Turing machines: Let \(T:\N \rightarrow \N\) be some function mapping natural numbers to natural numbers. We say that a function \(F:\{0,1\}^* \rightarrow \{0,1\}^*\) is computable in \(T(n)\) Turing-Machine time (TM-time for short) if there exists a Turing machine \(M\) such that for every sufficiently large \(n\) and every \(x\in \{0,1\}^n\), when given input \(x\), the machine \(M\) halts after executing at most \(T(n)\) steps and outputs \(F(x)\). We define \(\ensuremath{\mathit{TIME}}_{\mathsf{TM}}(T(n))\) to be the set of Boolean functions (functions mapping \(\{0,1\}^*\) to \(\{0,1\}\)) that are computable in \(T(n)\) TM time. For a function \(F:\{0,1\}^* \rightarrow \{0,1\}\) and \(T:\N \rightarrow \N\), we can formally define what it means for \(F\) to be computable in time at most \(T(n)\) where \(n\) is the size of the input. Definition 13.1 is not very complicated but is one of the most important definitions of this book. As usual, \(\ensuremath{\mathit{TIME}}_{\mathsf{TM}}(T(n))\) is a class of functions, not of machines. If \(M\) is a Turing machine then a statement such as "\(M\) is a member of \(\ensuremath{\mathit{TIME}}_{\mathsf{TM}}(n^2)\)" does not make sense. The concept of TM-time as defined here is sometimes known as "single-tape Turing machine time" in the literature, since some texts consider Turing machines with more than one working tape. The relaxation of considering only "sufficiently large" \(n\)'s is not very important but it is convenient since it allows us to avoid dealing explicitly with un-interesting "edge cases". We will mostly anyway be interested in determining running time only up to constant and even polynomial factors. While the notion of being computable within a certain running time can be defined for every function, the class \(\ensuremath{\mathit{TIME}}_{\mathsf{TM}}(T(n))\) is a class of Boolean functions that have a single bit of output. This choice is not very important, but is made for simplicity and convenience later on. In fact, every non-Boolean function has a computationally equivalent Boolean variant, see Exercise 13.3. Prove that \(\ensuremath{\mathit{TIME}}_{\mathsf{TM}}(10\cdot n^3) \subseteq \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(2^n)\). 13.2: Comparing \(T(n)=10n^3\) with \(T'(n) = 2^n\) (on the right figure the Y axis is in log scale). Since for every large enough \(n\), \(T'(n) \geq T(n)\), \(\ensuremath{\mathit{TIME}}_{\mathsf{TM}}(T(n)) \subseteq \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(T'(n))\). The proof is illustrated in Figure 13.2. Suppose that \(F\in \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(10\cdot n^3)\) and hence there some number \(N_0\) and a machine \(M\) such that for every \(n> N_0\), and \(x\in \{0,1\}^*\), \(M(x)\) outputs \(F(x)\) within at most \(10\cdot n^3\) steps. Since \(10\cdot n^3 = o(2^n)\), there is some number \(N_1\) such that for every \(n>N_1\), \(10\cdot n^3 < 2^n\). Hence for every \(n > \max\{ N_0, N_1 \}\), \(M(x)\) will output \(F(x)\) within at most \(2^n\) steps, just demonstrating that \(F \in \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(2^n)\). Polynomial and Exponential Time Unlike the notion of computability, the exact running time can be a function of the model we use. However, it turns out that if we only care about "coarse enough" resolution (as will most often be the case) then the choice of the model, whether Turing machines, RAM machines, NAND-TM/NAND-RAM programs, or C/Python programs, does not matter. This is known as the extended Church-Turing Thesis. Specifically we will mostly care about the difference between polynomial and exponential time. The two main time complexity classes we will be interested in are the following: Polynomial time: A function \(F:\{0,1\}^* \rightarrow \{0,1\}\) is computable in polynomial time if it is in the class \(\mathbf{P} = \cup_{c\in \{1,2,3,\ldots \}} \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(n^c)\). That is, \(F\in \mathbf{P}\) if there is an algorithm to compute \(F\) that runs in time at most polynomial (i.e., at most \(n^c\) for some constant \(c\)) in the length of the input. Exponential time: A function \(F:\{0,1\}^* \rightarrow \{0,1\}\) is computable in exponential time if it is in the class \(\mathbf{EXP} = \cup_{c\in \{1,2,3,\ldots \}} \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(2^{n^c})\). That is, \(F\in \mathbf{EXP}\) if there is an algorithm to compute \(F\) that runs in time at most exponential (i.e., at most \(2^{n^c}\) for some constant \(c\)) in the length of the input. In other words, these are defined as follows: Let \(F:\{0,1\}^* \rightarrow \{0,1\}\). We say that \(F\in \mathbf{P}\) if there is a polynomial \(p:\N \rightarrow \R\) and a Turing machine \(M\) such that for every \(x\in \{0,1\}^*\), when given input \(x\), the Turing machine halts within at most \(p(|x|)\) steps and outputs \(F(x)\). We say that \(F\in \mathbf{EXP}\) if there is a polynomial \(p:\N \rightarrow \R\) and a Turing machine \(M\) such that for every \(x\in \{0,1\}^*\), when given input \(x\), \(M\) halts within at most \(2^{p(|x|)}\) steps and outputs \(F(x)\). Please take the time to make sure you understand these definitions. In particular, sometimes students think of the class \(\mathbf{EXP}\) as corresponding to functions that are not in \(\mathbf{P}\). However, this is not the case. If \(F\) is in \(\mathbf{EXP}\) then it can be computed in exponential time. This does not mean that it cannot be computed in polynomial time as well. Prove that \(\mathbf{P}\) as defined in Definition 13.2 is equal to \(\cup_{c\in \{1,2,3,\ldots \}} \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(n^c)\) To show these two sets are equal we need to show that \(\mathbf{P} \subseteq \cup_{c\in \{1,2,3,\ldots \}} \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(n^c)\) and \(\cup_{c\in \{1,2,3,\ldots \}} \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(n^c) \subseteq \mathbf{P}\). We start with the former inclusion. Suppose that \(F \in \mathbf{P}\). Then there is some polynomial \(p:\N \rightarrow \R\) and a Turing machine \(M\) such that \(M\) computes \(F\) and \(M\) halts on every input \(x\) within at most \(p(|x|)\) steps. We can write the polynomial \(p:\N \rightarrow \R\) in the form \(p(n) = \sum_{i=0}^d a_i n^i\) where \(a_0,\ldots,a_d \in \R\), and we assume that \(a_d\) is non-zero (or otherwise we just let \(d\) correspond to the largest number such that \(a_d\) is non-zero). The degree if \(p\) the number \(d\). Since \(n^d = o(n^{d+1})\), no matter what is the coefficient \(a_d\), for large enough \(n\), \(p(n) < n^{d+1}\) which means that the Turing machine \(M\) will halt on inputs of length \(n\) within fewer than \(n^{d+1}\) steps, and hence \(F \in \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(n^{d+1}) \subseteq \cup_{c\in \{1,2,3,\ldots \}} \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(n^c)\). For the second inclusion, suppose that \(F \in \cup_{c\in \{1,2,3,\ldots \}} \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(n^c)\). Then there is some positive \(c \in \N\) such that \(F \in \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(n^c)\) which means that there is a Turing machine \(M\) and some number \(N_0\) such that \(M\) computes \(F\) and for every \(n>N_0\), \(M\) halts on length \(n\) inputs within at most \(n^c\) steps. Let \(T_0\) be the maximum number of steps that \(M\) takes on inputs of length at most \(N_0\). Then if we define the polynomial \(p(n) = n^c + T_0\) then we see that \(M\) halts on every input \(x\) within at most \(p(|x|)\) steps and hence the existence of \(M\) demonstrates that \(F\in \mathbf{P}\). Since exponential time is much larger than polynomial time, \(\mathbf{P}\subseteq \mathbf{EXP}\). All of the problems we listed in Chapter 12 are in \(\mathbf{EXP}\), but as we've seen, for some of them there are much better algorithms that demonstrate that they are in fact in the smaller class \(\mathbf{P}\). \(\mathbf{P}\) \(\mathbf{EXP}\) (but not known to be in \(\mathbf{P}\)) Shortest path Longest Path Min cut Max cut 2SAT 3SAT Linear eqs Quad. eqs Zerosum Nash Determinant Permanent Primality Factoring Table : A table of the examples from Chapter 12. All these problems are in \(\mathbf{EXP}\) but only the ones on the left column are currently known to be in \(\mathbf{P}\) as well (i.e., they have a polynomial-time algorithm). See also Figure 13.3. 13.3: Some examples of problems that are known to be in \(\mathbf{P}\) and problems that are known to be in \(\mathbf{EXP}\) but not known whether or not they are in \(\mathbf{P}\). Since both \(\mathbf{P}\) and \(\mathbf{EXP}\) are classes of Boolean functions, in this figure we always refer to the Boolean (i.e., Yes/No) variant of the problems. Many of the problems defined in Chapter 12 correspond to non-Boolean functions (functions with more than one bit of output) while \(\mathbf{P}\) and \(\mathbf{EXP}\) are sets of Boolean functions. However, for every non-Boolean function \(F\) we can always define a computationally-equivalent Boolean function \(G\) by letting \(G(x,i)\) be the \(i\)-th bit of \(F(x)\) (see Exercise 13.3). Hence the table above, as well as Figure 13.3, refer to the computationally-equivalent Boolean variants of these problems. Modeling running time using RAM Machines / NAND-RAM Turing machines are a clean theoretical model of computation, but do not closely correspond to real-world computing architectures. The discrepancy between Turing machines and actual computers does not matter much when we consider the question of which functions are computable, but can make a difference in the context of efficiency. Even a basic staple of undergraduate algorithms such as "merge sort" cannot be implemented on a Turing machine in \(O(n\log n)\) time (see Section 13.8). RAM machines (or equivalently, NAND-RAM programs) match more closely actual computing architecture and what we mean when we say \(O(n)\) or \(O(n \log n)\) algorithms in algorithms courses or whiteboard coding interviews. We can define running time with respect to NAND-RAM programs just as we did for Turing machines. Let \(T:\N \rightarrow \N\) be some function mapping natural numbers to natural numbers. We say that a function \(F:\{0,1\}^* \rightarrow \{0,1\}^*\) is computable in \(T(n)\) RAM time (RAM-time for short) if there exists a NAND-RAM program \(P\) such that for every sufficiently large \(n\) and every \(x\in \{0,1\}^n\), when given input \(x\), the program \(P\) halts after executing at most \(T(n)\) lines and outputs \(F(x)\). We define \(\ensuremath{\mathit{TIME}}_{\mathsf{RAM}}(T(n))\) to be the set of Boolean functions (functions mapping \(\{0,1\}^*\) to \(\{0,1\}\)) that are computable in \(T(n)\) RAM time. Because NAND-RAM programs correspond more closely to our natural notions of running time, we will use NAND-RAM as our "default" model of running time, and hence use \(\ensuremath{\mathit{TIME}}(T(n))\) (without any subscript) to denote \(\ensuremath{\mathit{TIME}}_{\mathsf{RAM}}(T(n))\). However, it turns out that as long as we only care about the difference between exponential and polynomial time, this does not make much difference. The reason is that Turing machines can simulate NAND-RAM programs with at most a polynomial overhead (see also Figure 13.4): Let \(T:\N \rightarrow \N\) be a function such that \(T(n) \geq n\) for every \(n\) and the map \(n \mapsto T(n)\) can be computed by a Turing machine in time \(O(T(n))\). Then \[ \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(T(n)) \subseteq \ensuremath{\mathit{TIME}}_{\mathsf{RAM}}(10\cdot T(n)) \subseteq \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(T(n)^4) \;. \;\;(13.1) \] The technical details of Theorem 13.5, such as the condition that \(n \mapsto T(n)\) is computable in \(O(T(n))\) time or the constants \(10\) and \(4\) in Equation 13.1 (which are not tight and can be improved), are not very important. In particular, all non-pathological time bound functions we encounter in practice such as \(T(n)=n\), \(T(n)=n\log n\), \(T(n)=2^n\) etc. will satisfy the conditions of Theorem 13.5, see also Remark 13.6. The main message of the theorem is Turing machines and RAM machines are "roughly equivalent" in the sense that one can simulate the other with polynomial overhead. Similarly, while the proof involves some technical details, it's not very deep or hard, and merely follows the simulation of RAM machines with Turing machines we saw in Theorem 8.1 with more careful "book keeping". 13.4: The proof of Theorem 13.5 shows that we can simulate \(T\) steps of a Turing machine with \(T\) steps of a NAND-RAM program, and can simulate \(T\) steps of a NAND-RAM program with \(o(T^4)\) steps of a Turing machine. Hence \(\ensuremath{\mathit{TIME}}_{\mathsf{TM}}(T(n)) \subseteq \ensuremath{\mathit{TIME}}_{\mathsf{RAM}}(10\cdot T(n)) \subseteq \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(T(n)^4)\). For example, by instantiating Theorem 13.5 with \(T(n)=n^a\) and using the fact that \(10n^a = o(n^{a+1})\), we see that \(\ensuremath{\mathit{TIME}}_{\mathsf{TM}}(n^a) \subseteq \ensuremath{\mathit{TIME}}_{\mathsf{RAM}}(n^{a+1}) \subseteq \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(n^{4a+4})\) which means that (by Solved Exercise 13.2) \[ \mathbf{P} = \cup_{a = 1,2,\ldots} \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(n^a) = \cup_{a = 1,2,\ldots} \ensuremath{\mathit{TIME}}_{\mathsf{RAM}}(n^a) \;. \] That is, we could have equally well defined \(\mathbf{P}\) as the class of functions computable by NAND-RAM programs (instead of Turing machines) that run in time polynomial in the length of the input. Similarly, by instantiating Theorem 13.5 with \(T(n)=2^{n^a}\) we see that the class \(\mathbf{EXP}\) can also be defined as the set of functions computable by NAND-RAM programs in time at most \(2^{p(n)}\) where \(p\) is some polynomial. Similar equivalence results are known for many models including cellular automata, C/Python/Javascript programs, parallel computers, and a great many other models, which justifies the choice of \(\mathbf{P}\) as capturing a technology-independent notion of tractability. (See Section 13.3 for more discussion of this issue.) This equivalence between Turing machines and NAND-RAM (as well as other models) allows us to pick our favorite model depending on the task at hand (i.e., "have our cake and eat it too") even when we study questions of efficiency, as long as we only care about the gap between polynomial and exponential time. When we want to design an algorithm, we can use the extra power and convenience afforded by NAND-RAM. When we want to analyze a program or prove a negative result, we can restrict our attention to Turing machines. All "reasonable" computational models are equivalent if we only care about the distinction between polynomial and exponential. The adjective "reasonable" above refers to all scalable computational models that have been implemented, with the possible exception of quantum computers, see Section 13.3 and Chapter 23. The direction \(\ensuremath{\mathit{TIME}}_{\mathsf{TM}}(T(n)) \subseteq \ensuremath{\mathit{TIME}}_{\mathsf{RAM}}(10 \cdot T(n))\) is not hard to show, since a NAND-RAM program \(P\) can simulate a Turing machine \(M\) with constant overhead by storing the transition table of \(M\) in an array (as is done in the proof of Theorem 9.1). Simulating every step of the Turing machine can be done in a constant number \(c\) of steps of RAM, and it can be shown this constant \(c\) is smaller than \(10\). Thus the heart of the theorem is to prove that \(\ensuremath{\mathit{TIME}}_{\mathsf{RAM}}(T(n)) \subseteq \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(T(n)^4)\). This proof closely follows the proof of Theorem 8.1, where we have shown that every function \(F\) that is computable by a NAND-RAM program \(P\) is computable by a Turing machine (or equivalently a NAND-TM program) \(M\). To prove Theorem 13.5, we follow the exact same proof but just check that the overhead of the simulation of \(P\) by \(M\) is polynomial. The proof has many details, but is not deep. It is therefore much more important that you understand the statement of this theorem than its proof. We only focus on the non-trivial direction \(\ensuremath{\mathit{TIME}}_{\mathsf{RAM}}(T(n)) \subseteq \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(T(n)^4)\). Let \(F\in \ensuremath{\mathit{TIME}}_{\mathsf{RAM}}(T(n))\). \(F\) can be computed in time \(T(n)\) by some NAND-RAM program \(P\) and we need to show that it can also be computed in time \(T(n)^4\) by a Turing machine \(M\). This will follow from showing that \(F\) can be computed in time \(T(n)^4\) by a NAND-TM program, since for every NAND-TM program \(Q\) there is a Turing machine \(M\) simulating it such that each iteration of \(Q\) corresponds to a single step of \(M\). As mentioned above, we follow the proof of Theorem 8.1 (simulation of NAND-RAM programs using NAND-TM programs) and use the exact same simulation, but with a more careful accounting of the number of steps that the simulation costs. Recall, that the simulation of NAND-RAM works by "peeling off" features of NAND-RAM one by one, until we are left with NAND-TM. We will not provide the full details but will present the main ideas used in showing that every feature of NAND-RAM can be simulated by NAND-TM with at most a polynomial overhead: Recall that every NAND-RAM variable or array element can contain an integer between \(0\) and \(T\) where \(T\) is the number of lines that have been executed so far. Therefore if \(P\) is a NAND-RAM program that computes \(F\) in \(T(n)\) time, then on inputs of length \(n\), all integers used by \(P\) are of magnitude at most \(T(n)\). This means that the largest value i can ever reach is at most \(T(n)\) and so each one of \(P\)'s variables can be thought of as an array of at most \(T(n)\) indices, each of which holds a natural number of magnitude at most \(T(n)\). We let \(\ell = \ceil{\log T(n)}\) be the number of bits needed to encode such numbers. (We can start off the simulation by computing \(T(n)\) and \(\ell\).) We can encode a NAND-RAM array of length \(\leq T(n)\) containing numbers in \(\{0,\ldots, T(n)-1 \}\) as an Boolean (i.e., NAND-TM) array of \(T(n)\ell =O(T(n)\log T(n))\) bits, which we can also think of as a two dimensional array as we did in the proof of Theorem 8.1. We encode a NAND-RAM scalar containing a number in \(\{0,\ldots, T(n)-1 \}\) simply by a shorter NAND-TM array of \(\ell\) bits. We can simulate the two dimensional arrays using one-dimensional arrays of length \(T(n)\ell = O(T(n) \log T(n)\). All the arithmetic operations on integers use the grade-school algorithms, that take time that is polynomial in the number \(\ell\) of bits of the integers, which is \(poly(\log T(n))\) in our case. Hence we can simulate \(T(n)\) steps of NAND-RAM with \(O(T(n)poly(\log T(n))\) steps of a model that uses random access memory but only Boolean-valued one-dimensional arrays. The most expensive step is to translate from random access memory to the sequential memory model of NAND-TM/Turing machines. As we did in the proof of Theorem 8.1 (see Section 8.2), we can simulate accessing an array Foo at some location encoded in an array Bar by: Copying Bar to some temporary array Temp Having an array Index which is initially all zeros except \(1\) at the first location. Repeating the following until Temp encodes the number \(0\): (Number of repetitions is at most \(T(n)\).) Decrease the number encoded temp by \(1\). (Take number of steps polynomial in \(\ell = \ceil{\log T(n)}\).) Decrease i until it is equal to \(0\). (Take \(O(T(n)\) steps.) Scan Index until we reach the point in which it equals \(1\) and then change this \(1\) to \(0\) and go one step further and write \(1\) in this location. (Takes \(O(T(n))\) steps.) When we are done we know that if we scan Index until we reach the point in which Index[i]\(=1\) then i contains the value that was encoded by Bar (Takes \(O(T(n)\) steps.) The total cost for each such operation is \(O(T(n)^2 + T(n)poly(\log T(n))) = O(T(n)^2)\) steps. In sum, we simulate a single step of NAND-RAM using \(O(T(n)^2 poly(\log T(n)))\) steps of NAND-TM, and hence the total simulation time is \(O(T(n)^3 poly(\log T(n)))\) which is smaller than \(T(n)^4\) for sufficiently large \(n\). When considering general time bounds such we need to make sure to rule out some "pathological" cases such as functions \(T\) that don't give enough time for the algorithm to read the input, or functions where the time bound itself is uncomputable. We say that a function \(T:\N \rightarrow \N\) is a nice time bound function (or nice function for short) if for every \(n\in \N\), \(T(n) \geq n\) (i.e., \(T\) allows enough time to read the input), for every \(n' \geq n\), \(T(n') \geq T(n)\) (i.e., \(T\) allows more time on longer inputs), and the map \(F(x) = 1^{T(|x|)}\) (i.e., mapping a string of length \(n\) to a sequence of \(T(n)\) ones) can be computed by a NAND-RAM program in \(O(T(n))\) time. All the "normal" time complexity bounds we encounter in applications such as \(T(n)= 100 n\), \(T(n) = n^2 \log n\),\(T(n) = 2^{\sqrt{n}}\), etc. are "nice". Hence from now on we will only care about the class \(\ensuremath{\mathit{TIME}}(T(n))\) when \(T\) is a "nice" function. The computability condition is in particular typically easily satisfied. For example, for arithmetic functions such as \(T(n) = n^3\), we can typically compute the binary representation of \(T(n)\) in time polynomial in the number of bits of \(T(n)\) and hence poly-logarithmic in \(T(n)\). Hence the time to write the string \(1^{T(n)}\) in such cases will be \(T(n) + poly(\log T(n)) = O(T(n))\). Extended Church-Turing Thesis (discussion) Theorem 13.5 shows that the computational models of Turing machines and RAM machines / NAND-RAM programs are equivalent up to polynomial factors in the running time. Other examples of polynomially equivalent models include: All standard programming languages, including C/Python/JavaScript/Lisp/etc. The \(\lambda\) calculus (see also Section 13.8). Parallel computers Biological computing devices such as DNA-based computers. The Extended Church Turing Thesis is the statement that this is true for all physically realizable computing models. In other words, the extended Church Turing thesis says that for every scalable computing device \(C\) (which has a finite description but can be in principle used to run computation on arbitrarily large inputs), there is some constant \(a\) such that for every function \(F:\{0,1\}^* \rightarrow \{0,1\}\) that \(C\) can compute on \(n\) length inputs using an \(S(n)\) amount of physical resources, \(F\) is in \(\ensuremath{\mathit{TIME}}(S(n)^a)\). This is a strengthening of the ("plain") Church-Turing Thesis, discussed in Section 8.8, which states that the set of computable functions is the same for all physically realizable models, but without requiring the overhead in the simulation between different models to be at most polynomial. All the current constructions of scalable computational models and programming languages conform to the Extended Church-Turing Thesis, in the sense that they can be simulated with polynomial overhead by Turing machines (and hence also by NAND-TM or NAND-RAM programs). Consequently, the classes \(\mathbf{P}\) and \(\mathbf{EXP}\) are robust to the choice of model, and we can use the programming language of our choice, or high level descriptions of an algorithm, to determine whether or not a problem is in \(\mathbf{P}\). Like the Church-Turing thesis itself, the extended Church-Turing thesis is in the asymptotic setting and does not directly yield an experimentally testable prediction. However, it can be instantiated with more concrete bounds on the overhead, yielding experimentally-testable predictions such as the Physical Extended Church-Turing Thesis we mentioned in Section 5.6. In the last hundred+ years of studying and mechanizing computation, no one has yet constructed a scalable computing device that violates the extended Church Turing Thesis. However, quantum computing, if realized, will pose a serious challenge to the extended Church-Turing Thesis (see Chapter 23). However, even if the promises of quantum computing are fully realized, the extended Church-Turing thesis is "morally" correct, in the sense that, while we do need to adapt the thesis to account for the possibility of quantum computing, its broad outline remains unchanged. We are still able to model computation mathematically, we can still treat programs as strings and have a universal program, we still have time hierarchy and uncomputability results, and there is still no reason to doubt the ("plain") Church-Turing thesis. Moreover, the prospect of quantum computing does not seem to make a difference for the time complexity of many (though not all!) of the concrete problems that we care about. In particular, as far as we know, out of all the example problems mentioned in Chapter 12 the complexity of only one— integer factoring— is affected by modifying our model to include quantum computers as well. Efficient universal machine: a NAND-RAM interpreter in NAND-RAM We have seen in Theorem 9.1 the "universal Turing machine". Examining that proof, and combining it with Theorem 13.5 , we can see that the program \(U\) has a polynomial overhead, in the sense that it can simulate \(T\) steps of a given NAND-TM (or NAND-RAM) program \(P\) on an input \(x\) in \(O(T^4)\) steps. But in fact, by directly simulating NAND-RAM programs we can do better with only a constant multiplicative overhead. That is, there is a universal NAND-RAM program \(U\) such that for every NAND-RAM program \(P\), \(U\) simulates \(T\) steps of \(P\) using only \(O(T)\) steps. (The implicit constant in the \(O\) notation can depend on the program \(P\) but does not depend on the length of the input.) There exists a NAND-RAM program \(U\) satisfying the following: (\(U\) is a universal NAND-RAM program.) For every NAND-RAM program \(P\) and input \(x\), \(U(P,x)=P(x)\) where by \(U(P,x)\) we denote the output of \(U\) on a string encoding the pair \((P,x)\). (\(U\) is efficient.) There are some constants \(a,b\) such that for every NAND-RAM program \(P\), if \(P\) halts on input \(x\) after most \(T\) steps, then \(U(P,x)\) halts after at most \(C\cdot T\) steps where \(C \leq a |P|^b\). As in the case of Theorem 13.5, the proof of Theorem 13.7 is not very deep and so it is more important to understand its statement. Specifically, if you understand how you would go about writing an interpreter for NAND-RAM using a modern programming language such as Python, then you know everything you need to know about the proof of this theorem. 13.5: The universal NAND-RAM program \(U\) simulates an input NAND-RAM program \(P\) by storing all of \(P\)'s variables inside a single array Vars of \(U\). If \(P\) has \(t\) variables, then the array Vars is divided into blocks of length \(t\), where the \(j\)-th coordinate of the \(i\)-th block contains the \(i\)-th element of the \(j\)-th array of \(P\). If the \(j\)-th variable of \(P\) is scalar, then we just store its value in the zeroth block of Vars. To present a universal NAND-RAM program in full we would need to describe a precise representation scheme, as well as the full NAND-RAM instructions for the program. While this can be done, it is more important to focus on the main ideas, and so we just sketch the proof here. A specification of NAND-RAM is given in the appendix, and for the purposes of this simulation, we can simply use the representation of the code NAND-RAM as an ASCII string. The program \(U\) gets as input a NAND-RAM program \(P\) and an input \(x\) and simulates \(P\) one step at a time. To do so, \(U\) does the following: \(U\) maintains variables program_counter, and number_steps for the current line to be executed and the number of steps executed so far. \(U\) initially scans the code of \(P\) to find the number \(t\) of unique variable names that \(P\) uses. It will translate each variable name into a number between \(0\) and \(t-1\) and use an array Program to store \(P\)'s code where for every line \(\ell\), Program[\(\ell\)] will store the \(\ell\)-th line of \(P\) where the variable names have been translated to numbers. (More concretely, we will use a constant number of arrays to separately encode the operation used in this line, and the variable names and indices of the operands.) \(U\) maintains a single array Vars that contains all the values of \(P\)'s variables. We divide Vars into blocks of length \(t\). If \(s\) is a number corresponding to an array variable Foo of \(P\), then we store Foo[0] in Vars[\(s\)], we store Foo[1] in Var_values[\(t+s\)], Foo[2] in Vars[\(2t + s\)] and so on and so forth (see Figure 13.5). Generally,if the \(s\)-th variable of \(P\) is a scalar variable, then its value will be stored in location Vars[\(s\)]. If it is an array variable then the value of its \(i\)-th element will be stored in location Vars[\(t\cdot i + s\)]. To simulate a single step of \(P\), the program \(U\) recovers from Program the line corresponding to program_counter and executes it. Since NAND-RAM has a constant number of arithmetic operations, we can implement the logic of which operation to execute using a sequence of a constant number of if-then-else's. Retrieving from Vars the values of the operands of each instruction can be done using a constant number of arithmetic operations. The setup stages take only a constant (depending on \(|P|\) but not on the input \(x\)) number of steps. Once we are done with the setup, to simulate a single step of \(P\), we just need to retrieve the corresponding line and do a constant number of "if elses" and accesses to Vars to simulate it. Hence the total running time to simulate \(T\) steps of the program \(P\) is at most \(O(T)\) when suppressing constants that depend on the program \(P\). Timed Universal Turing Machine One corollary of the efficient universal machine is the following. Given any Turing machine \(M\), input \(x\), and "step budget" \(T\), we can simulate the execution of \(M\) for \(T\) steps in time that is polynomial in \(T\). Formally, we define a function \(\ensuremath{\mathit{TIMEDEVAL}}\) that takes the three parameters \(M\), \(x\), and the time budget, and outputs \(M(x)\) if \(M\) halts within at most \(T\) steps, and outputs \(0\) otherwise. The timed universal Turing machine computes \(\ensuremath{\mathit{TIMEDEVAL}}\) in polynomial time (see Figure 13.6). (Since we measure time as a function of the input length, we define \(\ensuremath{\mathit{TIMEDEVAL}}\) as taking the input \(T\) represented in unary: a string of \(T\) ones.) Let \(\ensuremath{\mathit{TIMEDEVAL}}:\{0,1\}^* \rightarrow \{0,1\}^*\) be the function defined as \[\ensuremath{\mathit{TIMEDEVAL}}(M,x,1^T) = \begin{cases} M(x) & M \text{ halts within $\leq T$ steps on $x$} \\ 0 & \text{otherwise}\end{cases} \;.\] Then \(\ensuremath{\mathit{TIMEDEVAL}} \in \mathbf{P}\). 13.6: The timed universal Turing machine takes as input a Turing machine \(M\), an input \(x\), and a time bound \(T\), and outputs \(M(x)\) if \(M\) halts within at most \(T\) steps. Theorem 13.8 states that there is such a machine that runs in time polynomial in \(T\). We only sketch the proof since the result follows fairly directly from Theorem 13.5 and Theorem 13.7. By Theorem 13.5 to show that \(\ensuremath{\mathit{TIMEDEVAL}} \in \mathbf{P}\), it suffices to give a polynomial-time NAND-RAM program to compute \(\ensuremath{\mathit{TIMEDEVAL}}\). Such a program can be obtained as follows. Given a Turing machine \(M\), by Theorem 13.5 we can transform it in time polynomial in its description into a functionally-equivalent NAND-RAM program \(P\) such that the execution of \(M\) on \(T\) steps can be simulated by the execution of \(P\) on \(c\cdot T\) steps. We can then run the universal NAND-RAM machine of Theorem 13.7 to simulate \(P\) for \(c\cdot T\) steps, using \(O(T)\) time, and output \(0\) if the execution did not halt within this budget. This shows that \(\ensuremath{\mathit{TIMEDEVAL}}\) can be computed by a NAND-RAM program in time polynomial in \(|M|\) and linear in \(T\), which means \(\ensuremath{\mathit{TIMEDEVAL}} \in \mathbf{P}\). The time hierarchy theorem Some functions are uncomputable, but are there functions that can be computed, but only at an exorbitant cost? For example, is there a function that can be computed in time \(2^n\), but can not be computed in time \(2^{0.9 n}\)? It turns out that the answer is Yes: For every nice function \(T:\N \rightarrow \N\), there is a function \(F:\{0,1\}^* \rightarrow \{0,1\}\) in \(\ensuremath{\mathit{TIME}}(T(n)\log n) \setminus \ensuremath{\mathit{TIME}}(T(n))\). There is nothing special about \(\log n\), and we could have used any other efficiently computable function that tends to infinity with \(n\). If we have more time, we can compute more functions. The generality of the time hierarchy theorem can make its proof a little hard to read. It might be easier to follow the proof if you first try to prove by yourself the easier statement \(\mathbf{P} \subsetneq \mathbf{EXP}\). You can do so by showing that the following function \(F:\{0,1\}^* :\rightarrow \{0,1\}\) is in \(\mathbf{EXP} \setminus \mathbf{P}\): for every Turing machine \(M\) and input \(x\), \(F(M,x)=1\) if and only if \(M\) halts on \(x\) within at most \(|x|^{\log |x|}\) steps. One can show that \(F \in \ensuremath{\mathit{TIME}}(n^{O(\log n)}) \subseteq \mathbf{EXP}\) using the universal Turing machine (or the efficient universal NAND-RAM program of Theorem 13.7). On the other hand, we can use similar ideas to those used to show the uncomputability of \(\ensuremath{\mathit{HALT}}\) in Section 9.3.2 to prove that \(F \not\in \mathbf{P}\). 13.7: The Time Hierarchy Theorem (Theorem 13.9) states that all of these classes are distinct. In the proof of Theorem 9.6 (the uncomputability of the Halting problem), we have shown that the function \(\ensuremath{\mathit{HALT}}\) cannot be computed in any finite time. An examination of the proof shows that it gives something stronger. Namely, the proof shows that if we fix our computational budget to be \(T\) steps, then not only we can't distinguish between programs that halt and those that do not, but cannot even distinguish between programs that halt within at most \(T'\) steps and those that take more than that (where \(T'\) is some number depending on \(T\)). Therefore, the proof of Theorem 13.9 follows the ideas of the uncomputability of the halting problem, but again with a more careful accounting of the running time. Our proof is inspired by the proof of the uncomputability of the halting problem. Specifically, for every function \(T\) as in the theorem's statement, we define the Bounded Halting function \(\ensuremath{\mathit{HALT}}_T\) as follows. The input to \(\ensuremath{\mathit{HALT}}_T\) is a pair \((P,x)\) such that \(|P| \leq \log \log |x|\) encodes some NAND-RAM program. We define \[ \ensuremath{\mathit{HALT}}_T(P,x) = \begin{cases}1, & P \text{ halts on } x \text{ within } \leq 100\cdot T(|P|+|x|) \text{ steps} \\ 0, & \text{otherwise} \end{cases} \;. \] (The constant \(100\) and the function \(\log \log n\) are rather arbitrary, and are chosen for convenience in this proof.) Theorem 13.9 is an immediate consequence of the following two claims: Claim 1: \(\ensuremath{\mathit{HALT}}_T \in \ensuremath{\mathit{TIME}}(T(n)\cdot \log n)\) Claim 2: \(\ensuremath{\mathit{HALT}}_T \not\in \ensuremath{\mathit{TIME}}(T(n))\). Please make sure you understand why indeed the theorem follows directly from the combination of these two claims. We now turn to proving them. Proof of claim 1: We can easily check in linear time whether an input has the form \(P,x\) where \(|P| \leq \log\log |x|\). Since \(T(\cdot)\) is a nice function, we can evaluate it in \(O(T(n))\) time. Thus, we can compute \(\ensuremath{\mathit{HALT}}_T(P,x)\) as follows: Compute \(T_0=T(|P|+|x|)\) in \(O(T_0)\) steps. Use the universal NAND-RAM program of Theorem 13.7 to simulate \(100\cdot T_0\) steps of \(P\) on the input \(x\) using at most \(poly(|P|)T_0\) steps. (Recall that we use \(poly(\ell)\) to denote a quantity that is bounded by \(a\ell^b\) for some constants \(a,b\).) If \(P\) halts within these \(100\cdot T_0\) steps then output \(1\), else output \(0\). The length of the input is \(n=|P|+|x|\). Since \(|x| \leq n\) and \((\log \log |x|)^b = o(\log |x|)\) for every \(b\), the running time will be \(o(T(|P|+|x|) \log n)\) and hence the above algorithm demonstrates that \(\ensuremath{\mathit{HALT}}_T \in \ensuremath{\mathit{TIME}}(T(n)\cdot \log n)\), completing the proof of Claim 1. Proof of claim 2: This proof is the heart of Theorem 13.9, and is very reminiscent of the proof that \(\ensuremath{\mathit{HALT}}\) is not computable. Assume, for the sake of contradiction, that there is some NAND-RAM program \(P^*\) that computes \(\ensuremath{\mathit{HALT}}_T(P,x)\) within \(T(|P|+|x|)\) steps. We are going to show a contradiction by creating a program \(Q\) and showing that under our assumptions, if \(Q\) runs for less than \(T(n)\) steps when given (a padded version of) its own code as input then it actually runs for more than \(T(n)\) steps and vice versa. (It is worth re-reading the last sentence twice or thrice to make sure you understand this logic. It is very similar to the direct proof of the uncomputability of the halting problem where we obtained a contradiction by using an assumed "halting solver" to construct a program that, given its own code as input, halts if and only if it does not halt.) We will define \(Q^*\) to be the program that on input a string \(z\) does the following: If \(z\) does not have the form \(z=P1^m\) where \(P\) represents a NAND-RAM program and \(|P|< 0.1 \log\log m\) then return \(0\). (Recall that \(1^m\) denotes the string of \(m\) ones.) Compute \(b= P^*(P,z)\) (at a cost of at most \(T(|P|+|z|)\) steps, under our assumptions). If \(b=1\) then \(Q^*\) goes into an infinite loop, otherwise it halts. Let \(\ell\) be the length description of \(Q^*\) as a string, and let \(m\) be larger than \(2^{2^{1000 \ell}}\). We will reach a contradiction by splitting into cases according to whether or not \(\ensuremath{\mathit{HALT}}_T(Q^*,Q^*1^m)\) equals \(0\) or \(1\). On the one hand, if \(\ensuremath{\mathit{HALT}}_T(Q^*,Q^*1^m)=1\), then under our assumption that \(P^*\) computes \(\ensuremath{\mathit{HALT}}_T\), \(Q^*\) will go into an infinite loop on input \(z=Q^*1^m\), and hence in particular \(Q^*\) does not halt within \(100 T(|Q^*|+m)\) steps on the input \(z\). But this contradicts our assumption that \(\ensuremath{\mathit{HALT}}_T(Q^*,Q^*1^m)=1\). This means that it must hold that \(\ensuremath{\mathit{HALT}}_T(Q^*,Q^*1^m)=0\). But in this case, since we assume \(P^*\) computes \(\ensuremath{\mathit{HALT}}_T\), \(Q^*\) does not do anything in phase 3 of its computation, and so the only computation costs come in phases 1 and 2 of the computation. It is not hard to verify that Phase 1 can be done in linear and in fact less than \(5|z|\) steps. Phase 2 involves executing \(P^*\), which under our assumption requires \(T(|Q^*|+m)\) steps. In total we can perform both phases in less than \(10 T(|Q^*|+m)\) in steps, which by definition means that \(\ensuremath{\mathit{HALT}}_T(Q^*,Q^*1^m)=1\), but this is of course a contradiction. This completes the proof of Claim 2 and hence of Theorem 13.9. Prove that \(\mathbf{P} \subsetneq \mathbf{EXP}\). We show why this statement follows from the time hierarchy theorem, but it can be an instructive exercise to prove it directly, see Remark 13.10. We need to show that there exists \(F \in \mathbf{EXP} \setminus \mathbf{P}\). Let \(T(n) = n^{\log n}\) and \(T'(n) = n^{\log n / 2}\). Both are nice functions. Since \(T(n)/T'(n) = \omega(\log n)\), by Theorem 13.9 there exists some \(F\) in \(\ensuremath{\mathit{TIME}}(T'(n)) \subsetneq \ensuremath{\mathit{TIME}}(T(n))\). Since for sufficiently large \(n\), \(2^n > n^{\log n}\), \(F \in \ensuremath{\mathit{TIME}}(2^n) \subseteq \mathbf{EXP}\). On the other hand, \(F \not\in \mathbf{P}\). Indeed, suppose otherwise that there was a constant \(c>0\) and a Turing machine computing \(F\) on \(n\)-length input in at most \(n^c\) steps for all sufficiently large \(n\). Then since for \(n\) large enough \(n^c < n^{\log n/2}\), it would have followed that \(F \in \ensuremath{\mathit{TIME}}(n^{\log n /2})\) contradicting our choice of \(F\). The time hierarchy theorem tells us that there are functions we can compute in \(O(n^2)\) time but not \(O(n)\), in \(2^n\) time, but not \(2^{\sqrt{n}}\), etc.. In particular there are most definitely functions that we can compute in time \(2^n\) but not \(O(n)\). We have seen that we have no shortage of natural functions for which the best known algorithm requires roughly \(2^n\) time, and that many people have invested significant effort in trying to improve that. However, unlike in the finite vs. infinite case, for all of the examples above at the moment we do not know how to rule out even an \(O(n)\) time algorithm. We will however see that there is a single unproven conjecture that would imply such a result for most of these problems. 13.8: Some complexity classes and some of the functions we know (or conjecture) to be contained in them. The time hierarchy theorem relies on the existence of an efficient universal NAND-RAM program, as proven in Theorem 13.7. For other models such as Turing machines we have similar time hierarchy results showing that there are functions computable in time \(T(n)\) and not in time \(T(n)/f(n)\) where \(f(n)\) corresponds to the overhead in the corresponding universal machine. Non-uniform computation We have now seen two measures of "computation cost" for functions. In Section 4.6 we defined the complexity of computing finite functions using circuits / straightline programs. Specifically, for a finite function \(g:\{0,1\}^n \rightarrow \{0,1\}\) and number \(s\in \N\), \(g\in \ensuremath{\mathit{SIZE}}_n(s)\) if there is a circuit of at most \(s\) NAND gates (or equivalently an \(s\)-line NAND-CIRC program) that computes \(g\). To relate this to the classes \(\ensuremath{\mathit{TIME}}(T(n))\) defined in this chapter we first need to extend the class \(\ensuremath{\mathit{SIZE}}_n(s)\) from finite functions to functions with unbounded input length. Let \(F:\{0,1\}^* \rightarrow \{0,1\}\) and \(T:\N \rightarrow \N\) be a nice time bound. For every \(n\in \N\), define \(F_{\upharpoonright n} : \{0,1\}^n \rightarrow \{0,1\}\) to be the restriction of \(F\) to inputs of size \(n\). That is, \(F_{\upharpoonright n}\) is the function mapping \(\{0,1\}^n\) to \(\{0,1\}\) such that for every \(x\in \{0,1\}^n\), \(F_{\upharpoonright n}(x)=F(x)\). We say that \(F\) is non-uniformly computable in at most \(T(n)\) size, denoted by \(F \in \ensuremath{\mathit{SIZE}}(T)\) if there exists a sequence \((C_0,C_1,C_2,\ldots)\) of NAND circuits such that: For every \(n\in \N\), \(C_n\) computes the function \(F_{\upharpoonright n}\) For every sufficiently large \(n\), \(C_n\) has at most \(T(n)\) gates. In other words, \(F \in \ensuremath{\mathit{SIZE}}(T)\) iff for every \(n \in \N\), it holds that \(F_{\upharpoonright n} \in \ensuremath{\mathit{SIZE}}_n(T(n))\). The non-uniform analog to the class \(\mathbf{P}\) is the class \(\mathbf{P_{/poly}}\) defined as \[ \mathbf{P_{/poly}} = \cup_{c\in \N} \ensuremath{\mathit{SIZE}}(n^c) \; . \;\;(13.2) \] There is a big difference between non-uniform computation and uniform complexity classes such as \(\ensuremath{\mathit{TIME}}(T(n))\) or \(\mathbf{P}\). The condition \(F\in \mathbf{P}\) means that there is a single Turing machine \(M\) that computes \(F\) on all inputs in polynomial time. The condition \(F\in \mathbf{P_{/poly}}\) only means that for every input length \(n\) there can be a different circuit \(C_n\) that computes \(F\) using polynomially many gates on inputs of these lengths. As we will see, \(F\in \mathbf{P_{/poly}}\) does not necessarily imply that \(F\in \mathbf{P}\). However, the other direction is true: 13.9: We can think of an infinite function \(F:\{0,1\}^* \rightarrow \{0,1\}\) as a collection of finite functions \(F_0,F_1,F_2,\ldots\) where \(F_{\upharpoonright n}:\{0,1\}^n \rightarrow \{0,1\}\) is the restriction of \(F\) to inputs of length \(n\). We say \(F\) is in \(\mathbf{P_{/poly}}\) if for every \(n\), the function \(F_{\upharpoonright n}\) is computable by a polynomial-size NAND-CIRC program, or equivalently, a polynomial-sized Boolean circuit. There is some \(a\in \N\) s.t. for every nice \(T:\N \rightarrow \N\) and \(F:\{0,1\}^* \rightarrow \{0,1\}\), \[\ensuremath{\mathit{TIME}}(T(n)) \subseteq \ensuremath{\mathit{SIZE}}(T(n)^a)\;.\] In particular, Theorem 13.12 shows that for every \(c\), \(\ensuremath{\mathit{TIME}}(n^c) \subseteq \ensuremath{\mathit{SIZE}}(n^{ca})\) and hence \(\mathbf{P} \subseteq \mathbf{P_{/poly}}\). The idea behind the proof is to "unroll the loop". Specifically, we will use the programming language variants of non-uniform and uniform computation: namely NAND-CIRC and NAND-TM. The main difference between the two is that NAND-TM has loops. However, for every fixed \(n\), if we know that a NAND-TM program runs in at most \(T(n)\) steps, then we can replace its loop by simply "copying and pasting" its code \(T(n)\) times, similar to how in Python we can replace code such as print(i) with the "loop free" code To make this idea into an actual proof we need to tackle one technical difficulty, and this is to ensure that the NAND-TM program is oblivious in the sense that the value of the index variable i in the \(j\)-th iteration of the loop will depend only on \(j\) and not on the contents of the input. We make a digression to do just that in Section 13.6.1 and then complete the proof of Theorem 13.12. Oblivious NAND-TM programs Our approach for proving Theorem 13.12 involves "unrolling the loop". For example, consider the following NAND-TM to compute the \(\ensuremath{\mathit{XOR}}\) function on inputs of arbitrary length: temp_0 = NAND(X[0],X[0]) Y_nonblank[0] = NAND(X[0],temp_0) temp_2 = NAND(X[i],Y[0]) temp_3 = NAND(X[i],temp_2) temp_4 = NAND(Y[0],temp_2) Y[0] = NAND(temp_3,temp_4) MODANDJUMP(X_nonblank[i],X_nonblank[i]) Setting (as an example) \(n=3\), we can attempt to translate this NAND-TM program into a NAND-CIRC program for computing \(\ensuremath{\mathit{XOR}}_3:\{0,1\}^3 \rightarrow \{0,1\}\) by simply "copying and pasting" the loop three times (dropping the MODANDJMP line): However, the above is still not a valid NAND-CIRC program since it contains references to the special variable i. To make it into a valid NAND-CIRC program, we replace references to i in the first iteration with \(0\), references in the second iteration with \(1\), and references in the third iteration with \(2\). (We also create a variable zero and use it for the first time any variable is instantiated, as well as remove assignments to non-output variables that are never used later on.) The resulting program is a standard "loop free and index free" NAND-CIRC program that computes \(\ensuremath{\mathit{XOR}}_3\) (see also Figure 13.10): one = NAND(X[0],temp_0) zero = NAND(one,one) temp_2 = NAND(X[0],zero) temp_3 = NAND(X[0],temp_2) temp_4 = NAND(zero,temp_2) temp_2 = NAND(X[1],Y[0]) 13.10: A NAND circuit for \(\ensuremath{\mathit{XOR}}_3\) obtained by "unrolling the loop" of the NAND-TM program for computing \(\ensuremath{\mathit{XOR}}\) three times. Key to this transformation was the fact that in our original NAND-TM program for \(\ensuremath{\mathit{XOR}}\), regardless of whether the input is \(011\), \(100\), or any other string, the index variable i is guaranteed to equal \(0\) in the first iteration, \(1\) in the second iteration, \(2\) in the third iteration, and so on and so forth. The particular sequence \(0,1,2,\ldots\) is immaterial: the crucial property is that the NAND-TM program for \(\ensuremath{\mathit{XOR}}\) is oblivious in the sense that the value of the index i in the \(j\)-th iteration depends only on \(j\) and does not depend on the particular choice of the input. Luckily, it is possible to transform every NAND-TM program into a functionally equivalent oblivious program with at most quadratic overhead. (Similarly we can transform any Turing machine into a functionally equivalent oblivious Turing machine, see Exercise 13.6.) Let \(T:\N \rightarrow \N\) be a nice function and let \(F\in \ensuremath{\mathit{TIME}}_{\mathsf{TM}}(T(n))\). Then there is a NAND-TM program \(P\) that computes \(F\) in \(O(T(n)^2)\) steps and satisfying the following. For every \(n\in \N\) there is a sequence \(i_0,i_1,\ldots, i_{m-1}\) such that for every \(x\in \{0,1\}^n\), if \(P\) is executed on input \(x\) then in the \(j\)-th iteration the variable i is equal to \(i_j\). In other words, Theorem 13.13 implies that if we can compute \(F\) in \(T(n)\) steps, then we can compute it in \(O(T(n)^2)\) steps with a program \(P\) in which the position of i in the \(j\)-th iteration depends only on \(j\) and the length of the input, and not on the contents of the input. Such a program can be easily translated into a NAND-CIRC program of \(O(T(n)^2)\) lines by "unrolling the loop". We can translate any NAND-TM program \(P'\) into an oblivious program \(P\) by making \(P\) "sweep" its arrays. That is, the index i in \(P\) will always move all the way from position \(0\) to position \(T(n)-1\) and back again. We can then simulate the program \(P'\) with at most \(T(n)\) overhead: if \(P'\) wants to move i left when we are in a rightward sweep then we simply wait the at most \(2T(n)\) steps until the next time we are back in the same position while sweeping to the left. 13.11: We simulate a \(T(n)\)-time NAND-TM program \(P'\) with an oblivious NAND-TM program \(P\) by adding special arrays Atstart and Atend to mark positions \(0\) and \(T-1\) respectively. The program \(P\) will simply "sweep" its arrays from right to left and back again. If the original program \(P'\) would have moved i in a different direction then we wait \(O(T)\) steps until we reach the same point back again, and so \(P\) runs in \(O(T(n)^2)\) time. Let \(P'\) be a NAND-TM program computing \(F\) in \(T(n)\) steps. We construct an oblivious NAND-TM program \(P\) for computing \(F\) as follows (see also Figure 13.11). On input \(x\), \(P\) will compute \(T=T(|x|)\) and set up arrays Atstart and Atend satisfying Atstart[\(0\)]\(=1\) and Atstart[\(i\)]\(=0\) for \(i>0\) and Atend[\(T-1\)]\(=1\) and Atend[i]\(=0\) for all \(i \neq T-1\). We can do this because \(T\) is a nice function. Note that since this computation does not depend on \(x\) but only on its length, it is oblivious. \(P\) will also have a special array Marker initialized to all zeroes. The index variable of \(P\) will change direction of movement to the right whenever Atstart[i]\(=1\) and to the left whenever Atend[i]\(=1\). The program \(P\) simulates the execution of \(P'\). However, if the MODANDJMP instruction in \(P'\) attempts to move to the right when \(P\) is moving left (or vice versa) then \(P\) will set Marker[i] to \(1\) and enter into a special "waiting mode". In this mode \(P\) will wait until the next time in which Marker[i]\(=1\) (at the next sweep) at which points \(P\) zeroes Marker[i] and continues with the simulation. In the worst case this will take \(2T(n)\) steps (if \(P\) has to go all the way from one end to the other and back again.) We also modify \(P\) to ensure it ends the computation after simulating exactly \(T(n)\) steps of \(P'\), adding "dummy steps" if \(P'\) ends early. We see that \(P\) simulates the execution of \(P'\) with an overhead of \(O(T(n))\) steps of \(P\) per one step of \(P'\), hence completing the proof. Theorem 13.13 implies Theorem 13.12. Indeed, if \(P\) is a \(k\)-line oblivious NAND-TM program computing \(F\) in time \(T(n)\) then for every \(n\) we can obtain a NAND-CIRC program of \((k-1)\cdot T(n)\) lines by simply making \(T(n)\) copies of \(P\) (dropping the final MODANDJMP line). In the \(j\)-th copy we replace all references of the form Foo[i] to foo_\(i_j\) where \(i_j\) is the value of i in the \(j\)-th iteration. "Unrolling the loop": algorithmic transformation of Turing Machines to circuits The proof of Theorem 13.12 is algorithmic, in the sense that the proof yields a polynomial-time algorithm that given a Turing machine \(M\) and parameters \(T\) and \(n\), produces a circuit of \(O(T^2)\) gates that agrees with \(M\) on all inputs \(x\in \{0,1\}^n\) (as long as \(M\) runs for less than \(T\) steps these inputs.) We record this fact in the following theorem, since it will be useful for us later on: 13.12: The function \(\ensuremath{\mathit{UNROLL}}\) takes as input a Turing machine \(M\), an input length parameter \(n\), a step budget parameter \(T\), and outputs a circuit \(C\) of size \(poly(T)\) that takes \(n\) bits of inputs and outputs \(M(x)\) if \(M\) halts on \(x\) within at most \(T\) steps. There is algorithm \(\ensuremath{\mathit{UNROLL}}\) such that for every Turing machine \(M\) and numbers \(n,T\), \(\ensuremath{\mathit{UNROLL}}(M,1^T,1^n)\) runs for \(poly(|M|,T,n)\) steps and outputs a NAND circuit \(C\) with \(n\) inputs, \(O(T^2)\) gates, and one output, such that \[ C(x) = \begin{cases}y & M \text{ halts in $\leq T$ steps and outputs $y$} \\ 0 & \text{otherwise} \end{cases}\;. \] We only sketch the proof since it follows by directly translating the proof of Theorem 13.12 into an algorithm together with the simulation of Turing machines by NAND-TM programs (see also Figure 13.13). Specifically, \(\ensuremath{\mathit{UNROLL}}\) does the following: Transform the Turing machine \(M\) into an equivalent NAND-TM program \(P\). Transform the NAND-TM program \(P\) into an equivalent oblivious program \(P'\) following the proof of Theorem 13.13. The program \(P'\) takes \(T' = O(T^2)\) steps to simulate \(T\) steps of \(P\). "Unroll the loop" of \(P'\) by obtaining a NAND-CIRC program of \(O(T')\) lines (or equivalently a NAND circuit with \(O(T^2)\) gates) corresponding to the execution of \(T'\) iterations of \(P'\). 13.13: We can transform a Turing machine \(M\), input length parameter \(n\), and time bound \(T\) into an \(O(T^2)\)-sized NAND circuit that agrees with \(M\) on all inputs \(x\in \{0,1\}^n\) on which \(M\) halts in at most \(T\) steps. The transformation is obtained by first using the equivalence of Turing machines and NAND-TM programs \(P\), then turning \(P\) into an equivalent oblivious NAND-TM program \(P'\) via Theorem 13.13, then "unrolling" \(O(T^2)\) iterations of the loop of \(P'\) to obtain an \(O(T^2)\) line NAND-CIRC program that agrees with \(P'\) on length \(n\) inputs, and finally translating this program into an equivalent circuit. By "unrolling the loop" we can transform an algorithm that takes \(T(n)\) steps to compute \(F\) into a circuit that uses \(poly(T(n))\) gates to compute the restriction of \(F\) to \(\{0,1\}^n\). Reviewing the transformations described in Figure 13.13, as well as solving the following two exercises is a great way to get more comfort with non-uniform complexity and in particular with \(\mathbf{P_{/poly}}\) and its relation to \(\mathbf{P}\). Prove that for every \(F:\{0,1\}^* \rightarrow \{0,1\}\), \(F\in \mathbf{P}\) if and only if there is a polynomial-time Turing machine \(M\) such that for every \(n\in \N\), \(M(1^n)\) outputs a description of an \(n\) input circuit \(C_n\) that computes the restriction \(F_{\upharpoonright n}\) of \(F\) to inputs in \(\{0,1\}^n\). We start with the "if" direction. Suppose that there is a polynomial-time Turing machine \(M\) that on input \(1^n\) outputs a circuit \(C_n\) that computes \(F_{\upharpoonright n}\). Then the following is a polynomial-time Turing machine \(M'\) to compute \(F\). On input \(x\in \{0,1\}^*\), \(M'\) will: Let \(n=|x|\) and compute \(C_n = M(1^n)\). Return the evaluation of \(C_n\) on \(x\). Since we can evaluate a Boolean circuit on an input in polynomial time, \(M'\) runs in polynomial time and computes \(F(x)\) on every input \(x\). For the "only if" direction, if \(M'\) is a Turing machine that computes \(F\) in polynomial-time, then (applying the equivalence of Turing machines and NAND-TM as well as Theorem 13.13) there is also an oblivious NAND-TM program \(P\) that computes \(F\) in time \(p(n)\) for some polynomial \(p\). We can now define \(M\) to be the Turing machine that on input \(1^n\) outputs the NAND circuit obtained by "unrolling the loop" of \(P\) for \(p(n)\) iterations. The resulting NAND circuit computes \(F_{\upharpoonright n}\) and has \(O(p(n))\) gates. It can also be transformed to a Boolean circuit with \(O(p(n))\) AND/OR/NOT gates. Let \(F:\{0,1\}^* \rightarrow \{0,1\}\). Then \(F\in\mathbf{P_{/poly}}\) if and only if there exists a polynomial \(p:\N \rightarrow \N\), a polynomial-time Turing machine \(M\) and a sequence \(\{ a_n \}_{n\in \N}\) of strings, such that for every \(n\in \N\): \(|a_n| \leq p(n)\) For every \(x\in \{0,1\}^n\), \(M(a_n,x)=F(x)\). We only sketch the proof. For the "only if" direction, if \(F\in \mathbf{P_{/poly}}\) then we can use for \(a_n\) simply the description of the corresponding circuit \(C_n\) and for \(M\) the program that computes in polynomial time the evaluation of a circuit on its input. For the "if" direction, we can use the same "unrolling the loop" technique of Theorem 13.12 to show that if \(P\) is a polynomial-time NAND-TM program, then for every \(n\in \N\), the map \(x \mapsto P(a_n,x)\) can be computed by a polynomial-size NAND-CIRC program \(Q_n\). Can uniform algorithms simulate non-uniform ones? Theorem 13.12 shows that every function in \(\ensuremath{\mathit{TIME}}(T(n))\) is in \(\ensuremath{\mathit{SIZE}}(poly(T(n)))\). One can ask if there is an inverse relation. Suppose that \(F\) is such that \(F_{\upharpoonright n}\) has a "short" NAND-CIRC program for every \(n\). Can we say that it must be in \(\ensuremath{\mathit{TIME}}(T(n))\) for some "small" \(T\)? The answer is an emphatic no. Not only is \(\mathbf{P_{/poly}}\) not contained in \(\mathbf{P}\), in fact \(\mathbf{P_{/poly}}\) contains functions that are uncomputable! There exists an uncomputable function \(F:\{0,1\}^* \rightarrow \{0,1\}\) such that \(F \in \mathbf{P_{/poly}}\). Since \(\mathbf{P_{/poly}}\) corresponds to non-uniform computation, a function \(F\) is in \(\mathbf{P_{/poly}}\) if for every \(n\in \N\), the restriction \(F_{\upharpoonright n}\) to inputs of length \(n\) has a small circuit/program, even if the circuits for different values of \(n\) are completely different from one another. In particular, if \(F\) has the property that for every equal-length inputs \(x\) and \(x'\), \(F(x)=F(x')\) then this means that \(F_{\upharpoonright n}\) is either the constant function zero or the constant function one for every \(n\in \N\). Since the constant function has a (very!) small circuit, such a function \(F\) will always be in \(\mathbf{P_{/poly}}\) (indeed even in smaller classes). Yet by a reduction from the Halting problem, we can obtain a function with this property that is uncomputable. Consider the following "unary halting function" \(\ensuremath{\mathit{UH}}:\{0,1\}^* \rightarrow \{0,1\}\) defined as follows. We let \(S:\N \rightarrow \{0,1\}^*\) be the function that on input \(n\in \N\), outputs the string that corresponds to the binary representation of the number \(n\) without the most significant \(1\) digit. Note that \(S\) is onto. For every \(x\in \{0,1\}^*\), we define \(\ensuremath{\mathit{UH}}(x)=\ensuremath{\mathit{HALTONZERO}}(S(|x|))\). That is, if \(n\) is the length of \(x\), then \(\ensuremath{\mathit{UH}}(x)=1\) if and only if the string \(S(n)\) encodes a NAND-TM program that halts on the input \(0\). \(\ensuremath{\mathit{UH}}\) is uncomputable, since otherwise we could compute \(\ensuremath{\mathit{HALTONZERO}}\) by transforming the input program \(P\) into the integer \(n\) such that \(P=S(n)\) and then running \(\ensuremath{\mathit{UH}}(1^n)\) (i.e., \(\ensuremath{\mathit{UH}}\) on the string of \(n\) ones). On the other hand, for every \(n\), \(\ensuremath{\mathit{UH}}_n(x)\) is either equal to \(0\) for all inputs \(x\) or equal to \(1\) on all inputs \(x\), and hence can be computed by a NAND-CIRC program of a constant number of lines. The issue here is of course uniformity. For a function \(F:\{0,1\}^* \rightarrow \{0,1\}\), if \(F\) is in \(\ensuremath{\mathit{TIME}}(T(n))\) then we have a single algorithm that can compute \(F_{\upharpoonright n}\) for every \(n\). On the other hand, \(F_{\upharpoonright n}\) might be in \(\ensuremath{\mathit{SIZE}}(T(n))\) for every \(n\) using a completely different algorithm for every input length. For this reason we typically use \(\mathbf{P_{/poly}}\) not as a model of efficient computation but rather as a way to model inefficient computation. For example, in cryptography people often define an encryption scheme to be secure if breaking it for a key of length \(n\) requires more than a polynomial number of NAND lines. Since \(\mathbf{P} \subseteq \mathbf{P_{/poly}}\), this in particular precludes a polynomial time algorithm for doing so, but there are technical reasons why working in a non-uniform model makes more sense in cryptography. It also allows to talk about security in non-asymptotic terms such as a scheme having "\(128\) bits of security". While it can sometimes be a real issue, in many natural settings the difference between uniform and non-uniform computation does not seem so important. In particular, in all the examples of problems not known to be in \(\mathbf{P}\) we discussed before: longest path, 3SAT, factoring, etc., these problems are also not known to be in \(\mathbf{P_{/poly}}\) either. Thus, for "natural" functions, if you pretend that \(\ensuremath{\mathit{TIME}}(T(n))\) is roughly the same as \(\ensuremath{\mathit{SIZE}}(T(n))\), you will be right more often than wrong. 13.14: Relations between \(\mathbf{P}\), \(\mathbf{EXP}\), and \(\mathbf{P_{/poly}}\). It is known that \(\mathbf{P} \subseteq \mathbf{EXP}\), \(\mathbf{P} \subseteq \mathbf{P_{/poly}}\) and that \(\mathbf{P_{/poly}}\) contains uncomputable functions (which in particular are outside of \(\mathbf{EXP}\)). It is not known whether or not \(\mathbf{EXP} \subseteq \mathbf{P_{/poly}}\) though it is believed that \(\mathbf{EXP} \not\subseteq \mathbf{P_{/poly}}\). Uniform vs. Non-uniform computation: A recap To summarize, the two models of computation we have described so far are: Uniform models: Turing machines, NAND-TM programs, RAM machines, NAND-RAM programs, C/JavaScript/Python, etc. These models include loops and unbounded memory hence a single program can compute a function with unbounded input length. Non-uniform models: Boolean Circuits or straightline programs have no loops and can only compute finite functions. The time to execute them is exactly the number of lines or gates they contain. For a function \(F:\{0,1\}^* \rightarrow \{0,1\}\) and some nice time bound \(T:\N \rightarrow \N\), we know that: If \(F\) is uniformly computable in time \(T(n)\) then there is a sequence of circuits \(C_1,C_2,\ldots\) where \(C_n\) has \(poly(T(n))\) gates and computes \(F_{\upharpoonright n}\) (i.e., restriction of \(F\) to \(\{0,1\}^n\)) for every \(n\). The reverse direction is not necessarily true - there are examples of functions \(F:\{0,1\}^n \rightarrow \{0,1\}\) such that \(F_{\upharpoonright n}\) can be computed by even a constant size circuit but \(F\) is uncomputable. This means that non-uniform complexity is more useful to establish hardness of a function than its easiness. We can define the time complexity of a function using NAND-TM programs, and similarly to the notion of computability, this appears to capture the inherent complexity of the function. There are many natural problems that have polynomial-time algorithms, and other natural problems that we'd love to solve, but for which the best known algorithms are exponential. The definition of polynomial time, and hence the class \(\mathbf{P}\), is robust to the choice of model, whether it is Turing machines, NAND-TM, NAND-RAM, modern programming languages, and many other models. The time hierarchy theorem shows that there are some problems that can be solved in exponential, but not in polynomial time. However, we do not know if that is the case for the natural examples that we described in this lecture. By "unrolling the loop" we can show that every function computable in time \(T(n)\) can be computed by a sequence of NAND-CIRC programs (one for every input length) each of size at most \(poly(T(n))\) Prove that the classes \(\mathbf{P}\) and \(\mathbf{EXP}\) defined in Definition 13.2 are equal to \(\cup_{c\in \{1,2,3,\ldots \}} \ensuremath{\mathit{TIME}}(n^c)\) and \(\cup_{c\in \{1,2,3,\ldots \}} \ensuremath{\mathit{TIME}}(2^{n^c})\) respectively. (If \(S_1,S_2,S_3,\ldots\) is a collection of sets then the set \(S = \cup_{c\in \{1,2,3,\ldots \}} S_c\) is the set of all elements \(e\) such that there exists some \(c\in \{ 1,2,3,\ldots \}\) where \(e\in S_c\).) Theorem 13.5 shows that the classes \(\mathbf{P}\) and \(\mathbf{EXP}\) are robust with respect to variations in the choice of the computational model. This exercise shows that these classes are also robust with respect to our choice of the representation of the input. Specifically, let \(F\) be a function mapping graphs to \(\{0,1\}\), and let \(F', F'':\{0,1\}^* \rightarrow \{0,1\}\) be the functions defined as follows. For every \(x\in \{0,1\}^*\): \(F'(x)=1\) iff \(x\) represents a graph \(G\) via the adjacency matrix representation such that \(F(G)=1\). \(F''(x)=1\) iff \(x\) represents a graph \(G\) via the adjacency list representation such that \(F(G)=1\). Prove that \(F' \in \mathbf{P}\) iff \(F'' \in \mathbf{P}\). More generally, for every function \(F:\{0,1\}^* \rightarrow \{0,1\}\), the answer to the question of whether \(F\in \mathbf{P}\) (or whether \(F\in \mathbf{EXP}\)) is unchanged by switching representations, as long as transforming one representation to the other can be done in polynomial time (which essentially holds for all reasonable representations). For every function \(F:\{0,1\}^* \rightarrow \{0,1\}^*\), define \(Bool(F)\) to be the function mapping \(\{0,1\}^*\) to \(\{0,1\}\) such that on input a (string representation of a) triple \((x,i,\sigma)\) with \(x\in \{0,1\}^*\), \(i \in \N\) and \(\sigma \in \{0,1\}\), \[ Bool(F)(x,i,\sigma) = \begin{cases} F(x)_i & \sigma =0, i<|F(x)| \\ 1 & \sigma = 1,i<|F(x)| \\ 0 & \text{otherwise} \end{cases} \] where \(F(x)_i\) is the \(i\)-th bit of the string \(F(x)\). Prove that for every \(F:\{0,1\}^* \rightarrow \{0,1\}^*\), \(Bool(F) \in \mathbf{P}\) if and only if there is a Turing Machine \(M\) and a polynomial \(p:\N \rightarrow \N\) such that for every \(x\in \{0,1\}^*\), on input \(x\), \(M\) halts within \(\leq p(|x|)\) steps and outputs \(F(x)\). Say that a (possibly non-Boolean) function \(F:\{0,1\}^* \rightarrow \{0,1\}^*\) is computable in polynomial time, if there is a Turing Machine \(M\) and a polynomial \(p:\N \rightarrow \N\) such that for every \(x\in \{0,1\}^*\), on input \(x\), \(M\) halts within \(\leq p(|x|)\) steps and outputs \(F(x)\). Prove that for every pair of functions \(F,G:\{0,1\}^* \rightarrow \{0,1\}^*\) computable in polynomial time, their composition \(F\circ G\), which is the function \(H\) s.t. \(H(x)=F(G(x))\), is also computable in polynomial time. Say that a (possibly non-Boolean) function \(F:\{0,1\}^* \rightarrow \{0,1\}^*\) is computable in exponential time, if there is a Turing Machine \(M\) and a polynomial \(p:\N \rightarrow \N\) such that for every \(x\in \{0,1\}^*\), on input \(x\), \(M\) halts within \(\leq 2^{p(|x|)}\) steps and outputs \(F(x)\). Prove that there is some \(F,G:\{0,1\}^* \rightarrow \{0,1\}^*\) s.t. both \(F\) and \(G\) are computable in exponential time, but \(F\circ G\) is not computable in exponential time. We say that a Turing machine \(M\) is oblivious if there is some function \(T:\N\times \N \rightarrow \Z\) such that for every input \(x\) of length \(n\), and \(t\in \N\) it holds that: If \(M\) takes more than \(t\) steps to halt on the input \(x\), then in the \(t\)-th step \(M\)'s head will be in the position \(T(n,t)\). (Note that this position depends only on the length of \(x\) and not its contents.) If \(M\) halts before the \(t\)-th step then \(T(n,t) = -1\). Prove that if \(F\in \mathbf{P}\) then there exists an oblivious Turing machine \(M\) that computes \(F\) in polynomial time. See footnote for hint.1 Let \(\ensuremath{\mathit{EDGE}}:\{0,1\}^* \rightarrow \{0,1\}\) be the function such that on input a string representing a triple \((L,i,j)\), where \(L\) is the adjacency list representation of an \(n\) vertex graph \(G\), and \(i\) and \(j\) are numbers in \([n]\), \(\ensuremath{\mathit{EDGE}}(L,i,j)=1\) if the edge \(\{i,j \}\) is present in the graph. \(\ensuremath{\mathit{EDGE}}\) outputs \(0\) on all other inputs. Prove that \(\ensuremath{\mathit{EDGE}} \in \mathbf{P}\). Let \(\ensuremath{\mathit{PLANARMATRIX}}:\{0,1\}^* \rightarrow \{0,1\}\) be the function that on input an adjacency matrix \(A\) outputs \(1\) if and only if the graph represented by \(A\) is planar (that is, can be drawn on the plane without edges crossing one another). For this question, you can use without proof the fact that \(\ensuremath{\mathit{PLANARMATRIX}} \in \mathbf{P}\). Prove that \(\ensuremath{\mathit{PLANARLIST}} \in \mathbf{P}\) where \(\ensuremath{\mathit{PLANARLIST}}:\{0,1\}^* \rightarrow \{0,1\}\) is the function that on input an adjacency list \(L\) outputs \(1\) if and only if \(L\) represents a planar graph. Let \(\ensuremath{\mathit{NANDEVAL}}:\{0,1\}^* \rightarrow \{0,1\}\) be the function such that for every string representing a pair \((Q,x)\) where \(Q\) is an \(n\)-input \(1\)-output NAND-CIRC (not NAND-TM!) program and \(x\in \{0,1\}^n\), \(\ensuremath{\mathit{NANDEVAL}}(Q,x)=Q(x)\). On all other inputs \(\ensuremath{\mathit{NANDEVAL}}\) outputs \(0\). Prove that \(\ensuremath{\mathit{NANDEVAL}} \in \mathbf{P}\). Let \(\ensuremath{\mathit{NANDHARD}}:\{0,1\}^* \rightarrow \{0,1\}\) be the function such that on input a string representing a pair \((f,s)\) where \(f \in \{0,1\}^{2^n}\) for some \(n\in \mathbb{N}\) \(s\in \mathbb{N}\) \(\ensuremath{\mathit{NANDHARD}}(f,s)=1\) if there is no NAND-CIRC program \(Q\) of at most \(s\) lines that computes the function \(F:\{0,1\}^n \rightarrow \{0,1\}\) whose truth table is the string \(f\). That is, \(\ensuremath{\mathit{NANDHARD}}(f,s)=1\) if for every NAND-CIRC program \(Q\) of at most \(s\) lines, there exists some \(x\in \{0,1\}^{n}\) such that \(Q(x) \neq f_x\) where \(f_x\) denote the \(x\)-the coordinate of \(f\), using the binary representation to identify \(\{0,1\}^n\) with the numbers \(\{0,\ldots,2^n -1 \}\). Prove that \(\ensuremath{\mathit{NANDHARD}} \in \mathbf{EXP}\). (Challenge) Prove that there is an algorithm \(\ensuremath{\mathit{FINDHARD}}\) such that if \(n\) is sufficiently large, then \(\ensuremath{\mathit{FINDHARD}}(1^n)\) runs in time \(2^{2^{O(n)}}\) and outputs a string \(f \in \{0,1\}^{2^n}\) that is the truth table of a function that is not contained in \(\ensuremath{\mathit{SIZE}}(2^n/(1000n))\). (In other words, if \(f\) is the string output by \(\ensuremath{\mathit{FINDHARD}}(1^n)\) then if we let \(F:\{0,1\}^n \rightarrow \{0,1\}\) be the function such that \(F(x)\) outputs the \(x\)-th coordinate of \(f\), then \(F\not\in \ensuremath{\mathit{SIZE}}(2^n/(1000n))\).2 Suppose that you are in charge of scheduling courses in computer science in University X. In University X, computer science students wake up late, and have to work on their startups in the afternoon, and take long weekends with their investors. So you only have two possible slots: you can schedule a course either Monday-Wednesday 11am-1pm or Tuesday-Thursday 11am-1pm. Let \(\ensuremath{\mathit{SCHEDULE}}:\{0,1\}^* \rightarrow \{0,1\}\) be the function that takes as input a list of courses \(L\) and a list of conflicts \(C\) (i.e., list of pairs of courses that cannot share the same time slot) and outputs \(1\) if and only if there is a "conflict free" scheduling of the courses in \(L\), where no pair in \(C\) is scheduled in the same time slot. More precisely, the list \(L\) is a list of strings \((c_0,\ldots,c_{n-1})\) and the list \(C\) is a list of pairs of the form \((c_i,c_j)\). \(\ensuremath{\mathit{SCHEDULE}}(L,C)=1\) if and only if there exists a partition of \(c_0,\ldots,c_{n-1}\) into two parts so that there is no pair \((c_i,c_j) \in C\) such that both \(c_i\) and \(c_j\) are in the same part. Prove that \(\ensuremath{\mathit{SCHEDULE}} \in \mathbf{P}\). As usual, you do not have to provide the full code to show that this is the case, and can describe operations as a high level, as well as appeal to any data structures or other results mentioned in the book or in lecture. Note that to show that a function \(F\) is in \(\mathbf{P}\) you need to both (1) present an algorithm \(A\) that computes \(F\) in polynomial time, (2) prove that \(A\) does indeed run in polynomial time, and does indeed compute the correct answer. Try to think whether or not your algorithm extends to the case where there are three possible time slots. Bibliographical notes Because we are interested in the maximum number of steps for inputs of a given length, running-time as we defined it is often known as worst case complexity. The minimum number of steps (or "best case" complexity) to compute a function on length \(n\) inputs is typically not a meaningful quantity since essentially every natural problem will have some trivially easy instances. However, the average case complexity (i.e., complexity on a "typical" or "random" input) is an interesting concept which we'll return to when we discuss cryptography. That said, worst-case complexity is the most standard and basic of the complexity measures, and will be our focus in most of this book. Some lower bounds for single-tape Turing machines are given in (Maass, 1985) . For defining efficiency in the \(\lambda\) calculus, one needs to be careful about the order of application of the reduction steps, which can matter for computational efficiency, see for example this paper. The notation \(\mathbf{P_{/poly}}\) is used for historical reasons. It was introduced by Karp and Lipton, who considered this class as corresponding to functions that can be computed by polynomial-time Turing machines that are given for any input length \(n\) an advice string of length polynomial in \(n\). Hint: This is the Turing machine analog of Theorem 13.13. We replace one step of the original TM \(M'\) computing \(F\) with a "sweep" of the obliviouss TM \(M\) in which it goes \(T\) steps to the right and then \(T\) steps to the left. Hint: Use Item 1, the existence of functions requiring exponentially hard NAND programs, and the fact that there are only finitely many functions mapping \(\{0,1\}^n\) to \(\{0,1\}\). Comments are posted on the GitHub repository using the utteranc.es app. A GitHub login is required to comment. If you don't want to authorize the app to post on your behalf, you can also comment directly on the GitHub issue for this page. Compiled on 09/02/2021 18:25:10 Copyright 2019, Boaz Barak. Produced using pandoc and panflute with templates derived from gitbook and bookdown.
CommonCrawl
Calculus I -- Spring 2017 MTH 229 - Calculus with Analytic Geometry I (CT). An introduction to analytic geometry. Limits, derivatives, and integrals of the elementary functions of one variable, including the transcendental functions. (PR: MTH ACT of 27 or above, or MTH 130 and 122, or MTH 127 and 122, or MTH 132) This course meets a Core I/Critical Thinking requirement. Time and Place: 5:00 pm - 5:50 pm MW and 5:00 pm - 6:15 pm TR at 518 Smith Hall. Instructor: Peter Saveliev (call me Peter) Office: Smith Hall 713 Office Hours: MW 2:30 - 5:00, or by appointment Office Phone: x4639 E-mail: [email protected] Class Web-Page: math01.com Prerequisites: fluency with algebra, good understanding of functions Text: Calculus by Rogawski and Adams, Chapters 2 - 5 Goals: good understanding of limits, the derivative and the integral, fluent differentiation Computer Restrictions: graphic calculator TI-83 or TI-83+ Activities: the student will practice each outcome via the homework given in the textbook and online. Evaluation: the student achievement of each outcome will be assessed via in-class quizzes: taken from the textbook's exercise sets homework: accessed and graded online, provided by Webwork, http://webwork.marshall.edu/webwork2/S17-Math-229-Saveliev/ in-class tests: based on the textbook's exercises project(s): written applications of calculus in real life Grade Breakdown: TOTAL $= .05 \times P + .25\times (Q + H) + .20\times FP + .20\times M + .30\times F$ participation: $5\%$ quizzes and online homework: $25\%$ final project: $20\%$ midterm: $20\%$ final exam: $30\% $ For other details, see Course policy. 4 Tutoring 5 Exams They will appear here as the course progresses. The derivative Differentiable calculus Applications of differential calculus The integral Calculus in one picture... The student will be given a short, one or two sentences, description of a problem that uses no calculus terminology. Stage 1: The student's first task is to ask the right questions in order to recast the problem in the language of the appropriate mathematics. It also includes critique of the old submission. Stage 2: Then he is to explore the problem numerically and graphically with a spreadsheet and make an approximation of what the answer might be and ask further questions. Stage 3: Finally, the problem is solved analytically with the calculus tools and the answer is confirmed. All the steps, the explanations of the methods, the data, the illustrations, and the computations are presented in writing. The methods should be based entirely on the material of the course. The actual project statements are written as if by a person with no knowledge of calculus. That is why the problems may be poorly formulated, incomplete, and have many possible answers or none at all. As a result, the structure of the project might be much more complex than the three stages above. Project statements: 1. How do I throw a ball down a staircase so that it bounces off each step? -- (fall 2016) 2. How should you throw a ball from the top of a $100$ story building so that it hits the ground at $100$ feet per second? -- (spring 2017, fall 2016) 3. I would like to use a cannon with a muzzle velocity of $100$ feet per second to bombard the inside of a fortification $300$ feet away with walls $20$ feet high. -- (spring 2017, fall 2016) 4. I have a toy cannon and I want to shoot it from a table and hit a spot on the floor $10$ feet away from the table. -- (spring 2017, fall 2016) 5. How hard do I have to push a toy truck from the floor up a $30$ degree incline to make it reach the top of the table at zero speed? -- (spring 2017, fall 2016) 6. How much does moving in or out of the passing lane affect the reading of a radar gun? -- (spring 2017, fall 2016) 7. How fast does the shadow of a falling ball on a sliding ladder move? -- (spring 2017) 8. How fast do I have to move my hand while spinning a sling in order to throw the rock $100$ feet away? -- (spring 2017, fall 2016) 9. Where should I stand in order to be best illuminated by two light bulbs on the ceiling if the amount of light is inversely proportional to the square of the distance? -- (spring 2017) A group should contain one or two persons. Your submission should be both on paper and in a digital format such as PDF (it will be linked here). It is a good idea to go digital from the beginning; for example, write with Latex or MS Word, draw with MS Paint, analyze data with MS Excel or Google spreadsheets. Include: the name, the title, introduction, conclusions, labeled pictures and tables, references, etc. The Excel file with the "free fall" equations is here for download, some explanations are here. Due date: April 23 List of topics Brief review of basic concepts of algebra Number systems. Distance formula. Slope of a line. Standard equations of lines. A library of functions The basic equations and qualitative behavior of linear functions, power functions, polynomial functions, rational functions, exponential and logarithmic functions, and trigonometric functions. Limits and applications The limit of a function at a point. One-sided limits. Continuity and the intermediate value theorem. Infinite limits. Limits at infinity. Applications of limits to engineering and science. Differentiation and applications Definition of the derivative at a point and on an interval. Slope of a tangent line. Derivatives of polynomials. Derivatives of trigonometric functions. Derivatives of exponential and logarithmic functions. Rules for differentiation. Mean value theorem. Implicit differentiation. Maxima and minima. Critical points and intervals of increase and decrease. Concavity and inflection points. Newton's Method. Differentials and linear approximation. Applications of derivatives to engineering and science. Integration and applications Area as an integral. Antiderivatives. Riemann sums. Definite integrals as limits of Riemann sums. The Fundamental Theorem of Calculus. The substitution method for integrals. Applications of integrals to engineering and science. 1. Students will be able to evaluate limits, derivatives, and integrals symbolically. 2. Students will be able to approximate limits, derivatives, and definite integrals from tabular and graphical data. 3. Students will be familiar with the definitions of limits, derivatives, and integrals; be able to apply these definitions to test properties of these concepts; and be able to produce verbal arguments and examples showing that basic properties hold or do not hold. 4. Students will be able to apply the techniques of calculus to answer questions about the analytic geometry of functions, including vertical and horizontal asymptotes, tangent lines, local extrema, and global extrema. 5. Students will be able to verbally explain the meaning of limits, derivatives, and integrals in their own words, both in general terms and in the context of specific problems. 6. Students will be able to select or construct an appropriate function to model an applied situation for which calculus is applicable, based on a verbal description of the situation. 7. Students will be able to apply techniques of calculus to solve applied problems from fields such as engineering and the sciences. 8. Students will be able to interpret symbolic and numerical results in real-world terms, and analyze the validity of their results in a real-world setting. To give students an understanding of the fundamental concepts of calculus and an appreciation of its many applications. To develop critical thinking skills by asking students to convert real-world problems into forms suitable for calculus, and interpret the results of calculus in real-world terms. To provide students with a deeper understanding of the mathematics that is used in their science and engineering courses. To develop facility in using graphing calculators and computers to solve mathematics problems. To satisfy program requirements. The math tutoring lab will be open this semester during the following hours: Smith Music 115: Monday-Thursday 10am-4pm, Friday 10am-12noon; Smith Hall 620: Monday-Thursday 5:00pm-6:30pm. The lab will open the second week of classes. More information: http://www.marshall.edu/math/tutoringlab.asp. Also, the university tutoring center is in Smith Communications on the second floor. Calculus I -- Spring 2017 -- midterm, Monday March 6. Calculus I -- Spring 2017 -- final: Monday May 1, 5 pm, same place. Each old exam below is a small (<5%) sample of the types of problems that may appear: 2016: Calculus I -- Fall 2016 -- midterm, Calculus I -- Fall 2016 -- final 2012: Calculus I -- Fall 2012 -- midterm, Calculus I -- Fall 2012 -- final exam 2010: Calculus 1: midterm 1, Calculus 1: midterm 2, Calculus 1: final 2008: Calculus 1: test 1, Calculus 1: test 2, Calculus 1: test 3, Calculus 1: final exam Test 1: 1, 2, 3, 4. Final: 1, 2, 3, 4. Calculus 1: exercises Retrieved from "https://calculus123.com/index.php?title=Calculus_I_--_Spring_2017&oldid=1924"
CommonCrawl
1-5 of 5 records found matching your query: Goles, E., & Gomez, L. (2018). Combinatorial game associated to the one dimensional Schelling's model of social segregation. Nat. Comput., 17(2), 427–436. Abstract: In this paper we consider a finite one-dimensional lattice with sites such that one of them is empty and the others have a black or white token. There are two players (one for each color), such that step by step alternately they move one of their tokens to the empty site trying to obtain a connected configuration. This game is related with the Schelling's social segregation model, where colors represent two different populations such that each one tries to take up a position with more neighbors as itself (same color). In this work we study strategies to play the game as well as their relation with the associated Schelling's one-dimensional case (line and cycle graphs). Keywords: Combinatorial game; Schelling's social segregation model; Draw strategy; Energy Goles, E., Lobos, F., Ruz, G. A., & Sene, S. (2020). Attractor landscapes in Boolean networks with firing memory: a theoretical study applied to genetic networks. Nat. Comput., 19(2), 295–319. Abstract: In this paper we study the dynamical behavior of Boolean networks with firing memory, namely Boolean networks whose vertices are updated synchronously depending on their proper Boolean local transition functions so that each vertex remains at its firing state a finite number of steps. We prove in particular that these networks have the same computational power than the classical ones, i.e. any Boolean network with firing memory composed of m vertices can be simulated by a Boolean network by adding vertices. We also prove general results on specific classes of networks. For instance, we show that the existence of at least one delay greater than 1 in disjunctive networks makes such networks have only fixed points as attractors. Moreover, for arbitrary networks composed of two vertices, we characterize the delay phase space, i.e. the delay values such that networks admits limit cycles or fixed points. Finally, we analyze two classical biological models by introducing delays: the model of the immune control of the lambda\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\lambda $$\end{document}-phage and that of the genetic control of the floral morphogenesis of the plant Arabidopsis thaliana. Keywords: Discrete dynamical systems; Boolean networks; Biological network modeling Montalva-Medel, M., de Oliveira, P. P. B., & Goles, E. (2018). A portfolio of classification problems by one-dimensional cellular automata, over cyclic binary configurations and parallel update. Nat. Comput., 17(3), 663–671. Abstract: Decision problems addressed by cellular automata have been historically expressed either as determining whether initial configurations would belong to a given language, or as classifying the initial configurations according to a property in them. Unlike traditional approaches in language recognition, classification problems have typically relied upon cyclic configurations and fully paralell (two-way) update of the cells, which render the action of the cellular automaton relatively less controllable and difficult to analyse. Although the notion of cyclic languages have been studied in the wider realm of formal languages, only recently a more systematic attempt has come into play in respect to cellular automata with fully parallel update. With the goal of contributing to this effort, we propose a unified definition of classification problem for one-dimensional, binary cellular automata, from which various known problems are couched in and novel ones are defined, and analyse the solvability of the new problems. Such a unified perspective aims at increasing existing knowledge about classification problems by cellular automata over cyclic configurations and parallel update. Keywords: One-dimensional cellular automata; Classification problem; Decision problem; Language recognition; Density; Parity; Emergent computation Perrot, K., Montalva-Medel, M., de Oliveira, P. P. B., & Ruivo, E. L. P. (2020). Maximum sensitivity to update schedules of elementary cellular automata over periodic configurations. Nat. Comput., 19(1), 51–90. Abstract: This work is a thoughtful extension of the ideas sketched in Montalva et al. (AUTOMATA 2017 exploratory papers proceedings, 2017), aiming at classifying elementary cellular automata (ECA) according to their maximal one-step sensitivity to changes in the schedule of cells update. It provides a complete classification of the ECA rule space for all period sizes n[ 9 and, together with the classification for all period sizes n <= 9 presented in Montalva et al. (Chaos Solitons Fractals 113:209-220, 2018), closes this problem and opens further questionings. Most of the 256 ECA rule's sensitivity is proved or disproved to be maximum thanks to an automatic application of basic methods. We formalize meticulous case disjunctions that lead to the results, and patch failing cases for some rules with simple arguments. This gives new insights on the dynamics of ECA rules depending on the proof method employed, as for the last rules 45 and 105 requiring o0011THORN induction patterns. Keywords: Synchronism sensitivity; Elementary cellular automata; Update digraph Travisany, D., Goles, E., Latorre, M., Cort?s, M. P., & Maass, A. (2020). Generation and robustness of Boolean networks to model Clostridium difficile infection. Nat. Comput., 19(1), 111–134. Abstract: One of the more common healthcare associated infection is Chronic diarrhea. This disease is caused by the bacterium Clostridium difficile which alters the normal composition of the human gut flora. The most successful therapy against this infection is the fecal microbial transplant (FMT). They displace C. difficile and contribute to gut microbiome resilience, stability and prevent further episodes of diarrhea. The microorganisms in the FMT their interactions and inner dynamics reshape the gut microbiome to a healthy state. Even though microbial interactions play a key role in the development of the disease, currently, little is known about their dynamics and properties. In this context, a Boolean network model for C. difficile infection (CDI) describing one set of possible interactions was recently presented. To further explore the space of possible microbial interactions, we propose the construction of a neutral space conformed by a set of models that differ in their interactions, but share the final community states of the gut microbiome under antibiotic perturbation and CDI. To begin with the analysis, we use the previously described Boolean network model and we demonstrate that this model is in fact a threshold Boolean network (TBN). Once the TBN model is set, we generate and use an evolutionary algorithm to explore to identify alternative TBNs. We organize the resulting TBNs into clusters that share similar dynamic behaviors. For each cluster, the associated neutral graph is constructed and the most relevant interactions are identified. Finally, we discuss how these interactions can either affect or prevent CDI. Keywords: Threshold network; Neutral space; Evolutionary computation; Microbiome; Clostridium difficile infection
CommonCrawl
Graphite calorimetry Radiotherapy: Graphite calorimetry, a basis of the traceability chain Introduction to ionizing radiation: sources, useful quantities and standards Principle of calorimetric measurement Controlling medical exposure Chapeau à créer Since medical exposure is by far the most common source of artificial irradiation, there is a clear need to provide specific metrological references in this field. Radiotherapy in particular requires precise control of the doses administered in "target" volumes (for tumours), which in turn depends on highly accurate characterization of the doses produced by radiation sources (accelerators, cobalt). To meet this need, metrologists must develop primary reference standards that represent as closely as possible the dose absorbed in biological environments, in irradiation conditions that are identical to those of human exposure. Reference standards are defined in environments close to biological tissue, for their properties of interaction with matter (similar atomic number): air, water, graphite, etc. Medical physicists thus characterize radiation beams in terms of absorbed dose to water. Thorough knowledge of the characteristics of irradiation enables us to precisely determine its repercussions on human health, and consequently better control its use. Radiation is said to be ionizing when its energy is sufficient to eject an electron from the electron cloud of an atom or a molecule. There are two categories of ionizing radiation: directly ionizing radiation, where charged particles such as electrons and protons interact directly with the electrons of the atomic layers, exciting or ionizing them indirectly ionizing radiation, which involves neutral particles, such as photons and neutrons. Ionization is produced only through interaction of the "primary" particle with the environment (Compton effect, photoelectric interaction, creation of pairs, etc.), which releases a "secondary" charged particle. In France, artificial exposure accounts for a third of all exposure to ionizing radiation. Around 60 million examinations and 180,000 radiotherapy treatments are performed each year, and involve many radiation sources. X-rays produced by generators are used for radiodiagnosis. Their energy varies widely, from around 30 kV in mammography or 50 kV in dental treatment to as much as 120 kV in standard diagnosis. For radiotherapy, particularly treatment of tumours, the most commonly used rays are provided by linear accelerators producing photons or electrons up to 30 MeV, sources of 60Co (cobalt therapy) or 192Ir (brachytherapy). Whatever its source, ionizing radiation produces ionization in biological environments that causes radiolysis of the water they contain. The radicals produced alter the DNA and proteins of cells. The effects on tissues or organs vary, depending on the volume irradiated, the type and energy of the radiation, and the energy absorbed per unit of mass (called the absorbed dose). The notion of absorbed dose is thus essential for medical applications, as this quantity corresponds to the amount of energy transmitted to tissues by the incident radiation. The "absorbed dose" metrological quantity D is expressed in grays. It equals the quantity of energy absorbed by the unit of mass: \[D=\genfrac{}{}{1pt}{}{\text{d}E_{abs}}{\text{d}m}\] The value of a given radiation depends on the environment in which it is measured. As we have seen, the metrological quantity characterizing a radiation field depends on several factors, including the environment in which it is measured, and the nature and energy of the radiation considered. Metrological standards are therefore provided by reference measuring instruments, enabling us to measure the metrological quantity in the given environment in a specific radiation field – and by this radiation field – once it has been characterized. The graphite calorimeter is one of these reference measuring instruments. It is one of the most accurate primary dosimeters and provides the most direct access to the absorbed dose quantity. The energy imparted to the irradiated matter by the ionizing radiation is converted into heat, and it is the resulting temperature rise that is measured. If the energy imparted is converted almost entirely into heat, it is possible to obtain the required quantity, i.e. the absorbed dose. Calorimetry involves measuring the rise in temperature of an element of volume of this instrument when it is exposed to a radiation source. To do this, the corresponding quantity of heat must be exactly equal to the energy imparted to the element of volume by the ionizing radiation. There must be no heat exchange with the surrounding elements, or the exchange balance must be nil. Nor must there be any heat defect, i.e. all the energy transmitted by the radiation must be converted into heat. In certain environments ionizing radiation may cause chemical or physicochemical reactions producing or absorbing heat. To meet these conditions a solid calorimeter is used. It consists of several concentric bodies – usually three: an inner core ("absorbeur"), a shield ("écran") and an outer jacket ("manteau") – which are inserted in a block of the same reference material. The calorimeter's design is shown below (figure 1, top), in a side-view diagram and in a top-view radiograph of the model (figure 1, bottom) undergoing validation at the Laboratoire National Henri Becquerel (LHNB). Figure 1 : Solid calorimeter The core is the sensitive part of the detector, where the temperature rise caused by irradiation is determined. Its exact mass must be known, so that the value of the mean absorbed dose in this component may be deduced from the energy imparted. The nested assembly is held in place by silk threads ("fils de soie") which keep each part in suspension and ensure they do not come into direct contact. A vacuum of 10-3 Pa is maintained in the gaps by means of a pumping system. Each part is covered in aluminized mylar film. These three precautions make it possible to minimize all phenomena of heat transfer (by conduction, convection and radiation) between the calorimeter's component parts and its outside environment, thus ensuring thermal insulation of the whole and of the core in particular. The rise in temperature is measured with thermistors, whose resistance varies with temperature according to a specific law. A set of thermistors is inserted in each of the calorimeter's components. The thermistors have two uses: some measure temperature, others deliver heat through the Joule effect. Certain thermistors permanently monitor the difference in temperature between the calorimeter's different components. The heating thermistors are servo-controlled to supply the quantity of heat needed to keep the shield at the same temperature as the core – making heat exchange by radiation between these two bodies insignificant – and to keep the jacket at a fixed temperature, a few degrees higher than the ambient temperature in the laboratory. The jacket acts as insulation against the environment. Graphite was chosen as the calorimeter material for three of its intrinsic properties. Firstly, it presents no heat defect for the radiation (photons, electrons) and energy (up to around 50 MeV) concerned. Secondly, its atomic number represents an environment that is close to biological tissues. Thirdly, its mechanical qualities are such that precision machining is possible and its geometrical characteristics are not affected by irradiation. Temperature measurements are made using a Wheatstone bridge with four arms. One arm consists of the measurement thermistor and the other three consist of resistances, two known and one variable. The variation in temperature, producing a variation in resistance, is quantified by measuring the imbalance voltage. This is the bridge reading L. The proportionality between the temperature variation DT and the reading L can thus be established: \[\Delta T =A\times L\] where A is the coefficient of proportionality. The dose absorbed in graphite (Dg) is given by the following relation: \[D_{g}=\genfrac{}{}{1pt}{}{Q}{m}\times \genfrac{}{}{1pt}{}{1}{r_{cal}}\times\prod_i k_{i}\] where : Q is the heat produced under irradiation; it is equal to $C\times \Delta T$, where C is the heat capacity of the core rcal is the heat efficiency (equal to 1 for graphite, which has no heat defect) ki are the dose correction factors. In practice, there is no need to establish the heat capacity of the core C and the temperature rise ∆T through electrical calibration of the device. It is calibrated by dissipating a quantity of heat Qét in the heat thermistors (by Joule effect) and measuring the corresponding bridge reading Lét. This gives: \[D_{g}=\genfrac{}{}{1pt}{}{F\times L}{m}\times\genfrac{}{}{1pt}{}{1}{r_{cal}}\times\prod_i k_{i}\] where F is the electrical calibration coefficient (Qét / Lét). By way of example, an absorbed dose of 1 Gy corresponds to a temperature rise in the region of 10-3 K, resulting in a resistance imbalance across the Wheatstone bridge of 0.3 Ω. We can thus see that the main limitation of this technique is its low sensitivity, around 1 mK/Gy. The correction factors ki take into account disturbance produced by: non-graphite components, i.e. the vacuum gaps and "impurities" present in the core (thermistors, glue, thread, etc.) dose gradient effects in the core differences in heat leakage between calibration and measurement geometrical deviation (alignment of the calorimeter with respect to the incident radiation axis), etc. In order to ensure accurate measurements, the heat insulation must be perfect and the system must operate adiabatically. It is difficult to control heat exchange between the measurement system and its environment, but the problem is overcome by working in two operating modes that approach ideal conditions: quasi-adiabatic mode ("mode quasi-adiabatique") and constant-temperature mode ("mode température-constante"). Figure 2 : Diagram representing the two operating modes of the solid calorimeter The quasi-adiabatic mode is based on thermal feedback between the core and the shield. When a temperature difference between these two elements is detected, the PID (proportional-integral-derivative) controller sends an electrical charge to the shield to reduce the difference. In this operating mode, the temperature of the calorimeter increases continuously throughout irradiation. When the measurements are completed, thermal feedback is halted and the elements return to their equilibrium temperature. The return to equilibrium is slow, however, on account of the calorimeter's heat insulation. The second operating mode – constant-temperature mode – has been developed over the last few years at LNHB. The principle consists in keeping the core, shield and jacket at fixed temperatures with highly accurate heat controllers. To maintain this equilibrium, the energy dissipated in each element must be constant. Without irradiation, the energy is dissipated only by the Joule effect. Under irradiation, the electrical energy is only the complement of the energy imparted by irradiation. The value of the dose rate to be measured in the core is therefore proportional to the difference in the electrical power supplied to it without irradiation (P0) and during irradiation (Pi). A comparative measurement campaign has been carried out at LNHB with a cobalt-60 gamma photon beam. The results are consistent and show a deviation of only 0.04%. The Type A reading uncertainty in constant-temperature mode (0.03%) is significantly lower than in quasi-adiabatic mode (0.06%). The traceability chains defined by French accreditation body COFRAC (formed in 1994) meet a need for rationalization and reliability in the testing and calibration field. They control the transfer of a reference standard from its creation in the national metrology laboratory to end users, via accredited calibration laboratories. The International Weights and Measures Bureau (BIPM) runs regular comparisons of national references to ensure their consistency at international level. The Laboratoire National Henri Becquerel (LNHB) is France's national metrology laboratory for ionizing radiation. It is the guarantor of three derived units of the international SI system: the gray (Gy), the sievert (Sv) and the becquerel (Bq). The first two, which concern dosimetry, correspond to energy divided by mass, or joules per kilogram (J.kg-1), and are therefore derived from the base units of mass, length and time. The traceability chain for ionizing radiation for medical applications is very short. This is partly due to the small number of accredited calibration laboratories, but also to the low variance between the accuracy of the metrological references (around 1%) and the accuracy required for radiotherapy applications (better than 5% for the dose delivered to a tumour). A direct link to the national laboratory minimizes the uncertainty attached to calibration. LNHB performs the twofold task of developing reference standards and transferring them to users. This is the case for radiotherapy in particular, where LNHB calibrates the reference dosimeters used by radiotherapy departments to characterize their radiation beams. At present, the French primary reference instrument for absorbed dose to graphite is the GR8 calorimeter, used with a cobalt-60 beam in precise geometrical conditions. A second calorimeter, called GR9, was developed in 2007 and is currently undergoing validation. It should provide a lower uncertainty level and will replace the GR8 in the primary instrumentation chain. It has already been tested in the two standard operating modes in a cobalt-60 beam. Following validation of these results, the GR9 should be usable in standard mode by the end of 2008. The GR8 and GR9 calorimeters are shown in figure 3. The core is a solid graphite cylinder, 16 mm in diameter and 3 mm thick. The shield and jacket, which are symmetrical in relation to the planar axis, are each 2 mm thick. The three elements are thermally insulated from each other. The vacuum gaps ("interstices de vide) between each element are 1 mm thick. The graphite element is vacuum pumped ("pompage") and insulated by Kapton and PMMA film. Figure 3 : Diagram of the GR8 and GR9 calorimeters To perform dosimeter calibration, the reference calorimeter is inserted in a "dummy" graphite parallelepiped. Cylinders of the same material may be inserted between the calorimeter and the incident beam in order to adjust the depth of measurement. The combined standard uncertainty on Dg is 0.24% for cobalt-60 photons. Its main components stem from the electrical calibration coefficient (0.05%), the calorimeter's mechanical design (heat insulation, thermistor, 0.19%), and the physical data rcal (0.1%). The dose absorbed to water is determined from the dose absorbed to graphite by performing comparative graphite-water transfer experiments with ionization chambers and/or Fricke chemical dosimeters. Characterization of exposure to ionizing radiation involves measurement and calculation of the dose absorbed in the biological environment. This in turn involves intermediate environments and numerous corrective factors. Maintaining national standards and providing users with access to them is one of LNHB's key roles. As such, and in the context of its COFRAC accreditation, the laboratory ensures traceability for accredited calibration laboratories and performs calibration in medical and radiological protection fields. The services provided include: calibration of dosimeters in a radiotherapy beam of 60Co gamma photons, in terms of dose absorbed to water or kerma in the air calibration of dosimeters in high-energy X-ray beams (Saturne 43F linear accelerator), in terms of dose absorbed to water combined calibration of dosimeters in high-energy X-ray beams (Saturne 43F linear accelerator) and 60Co gamma photon beams, in terms of dose absorbed to water calibration of dosimeters in high-energy electron beams (Saturne 43 linear accelerator), in terms of dose absorbed to water combined calibration of dosimeters in high-energy electron beams (Saturne 43 linear accelerator) and 60Co gamma photon beams, in terms of dose absorbed to water calibration of personal dosimeters and area dosimeters for radiological protection in 60Co and 137Cs gamma photon beams and beta radiation beams emitted by 90Sr-90Y, 85Kr and 147Pm sources, in terms of equivalent dose (ambient or directional, and personal) irradiation of passive dosimeters at given doses in all photon and beta radiation beams. J. BARTHE, B. CHAUVENET and J.-M. BORDY, "La métrologie de la dose au CEA: le Laboratoire National Henri Becquerel", Radioprotection, 41, n° 5, 2006, S9-S24. J.-M. BORDY, "References métrologiques en dosimétrie, possibilités d'étalonnages offertes aux utilisateurs, analyse des priorités et problématique de réduction des incertitudes", Enseignement post-universitaire - CHU Henri Mondor, 2006. J. DAURES and A. OSTROWSKY, "New constant-temperature operating mode for graphite calorimeter at LNE-LNHB", Phys. Med. Biol., 50, 2005, 4035-4052. J. DAURES and A. OSTROWSKY, "Test of the new GR9 graphite calorimeter. Comparison with GR8", Workshop on absorbed dose and kerma in air, 2007. J. CHAVAUDRA, B. CHAUVENET and A. WAMBERSIE, "Medicine and ionizing radiation: metrology requirements", C. R. Physique, 5, 2004, 921-931. J. M. BORDY, "Réalisation de prestations d'étalonnage", www-carmin.cea.fr/content/download/476/3362/version/6/file/lnhb_etalonnage.pdf J. DAURES, A. OSTROWSKY and B. CHAUVENET, "Mesure de la dose absorbée de reference dans le graphite, à l'aide du calorimètre graphite GR8 sous le faisceau de Cobalt 60 n°2B", Rapport technique, 3, 1995. D. BLANC, "Les rayonnements ionisants : détection, spectrométrie, dosimétrie", Editions Masson – Physique fondamentale et appliquée, 1990.
CommonCrawl
Coexistence and extinction in Time-Periodic Volterra-Lotka type systems with nonlocal dispersal Extinction and coexistence of species for a diffusive intraguild predation model with B-D functional response November 2018, 23(9): 3787-3797. doi: 10.3934/dcdsb.2018077 Global attractor of complex networks of reaction-diffusion systems of Fitzhugh-Nagumo type B. Ambrosio 1,, , M. A. Aziz-Alaoui 1, and V. L. E. Phan 2,3, Normandie Univ, UNIHAVRE, LMAH, FR-CNRS-3335, ISCN, 76600 Le Havre, France Normandie Univ, France An Giang University, Long Xuyen City, Vietnam Received May 2016 Revised November 2017 Published March 2018 Fund Project: This research was funded by Region Normandie France and the ERDF (European Regional Development Fund) project XTERM We focus on the long time behavior of complex networks of reaction-diffusion systems. We prove the existence of the global attractor and the $L^{∞}$-bound for networks of $n$ reaction-diffusion systems that belong to a class that generalizes the FitzHugh-Nagumo reaction-diffusion equations. Keywords: FitzHugh-Nagumo, networks, complex systems, reaction-diffusion systems, attractor. Mathematics Subject Classification: 35B40, 35B41, 35K57. Citation: B. Ambrosio, M. A. Aziz-Alaoui, V. L. E. Phan. Global attractor of complex networks of reaction-diffusion systems of Fitzhugh-Nagumo type. Discrete & Continuous Dynamical Systems - B, 2018, 23 (9) : 3787-3797. doi: 10.3934/dcdsb.2018077 B. Ambrosio and J.-P. Françcoise, Propagation of Bursting Oscillations, Phil. Trans. R. Soc. A., 367 (2009), 4863-4875. doi: 10.1098/rsta.2009.0143. Google Scholar B. Ambrosio and M. A. Aziz-Alaoui, Synchronization and control of coupled reaction-diffusion systems of the FitzHugh-Nagumo-type, Comput. Math. Appl., 64 (2012), 934-943. doi: 10.1016/j.camwa.2012.01.056. Google Scholar B. Ambrosio and M. A. Aziz-Alaoui, Basin of Attraction of Solutions with Pattern Formation in Slow-Fast Reaction-Diffusion Systems, Acta Biotheoretica, 64 (2016), 311-325. doi: 10.1007/s10441-016-9294-z. Google Scholar B. Ambrosio, M. A. Aziz-Alaoui and V. L. E. Phan, Large time behavior and synchronization for a complex network system of reaction-diffusion systems, preprint, arXiv: 1504.07763. Google Scholar A. V. Babin and M. I. Vishik, Regular attractors of semigroups and evolution equations, J. Math. Pures Appl., 62 (1983), 441-491. Google Scholar E. Conway, D. Hoff and J. Smoller, Large time behavior of solutions of systems of nonlinear reaction-diffusion equations, SIAM J. Appl. Math., 35 (1978), 1-16. doi: 10.1137/0135001. Google Scholar G. B. Ermentrout and D. H. Terman, Mathematical Foundations of Neurosciences Springer, 2010. Google Scholar R. A. FitzHugh, Impulses and physiological states in theoretical models of nerve membrane, Biophysical Journal, 1 (1961), 445-466. doi: 10.1016/S0006-3495(61)86902-6. Google Scholar C. Foias and G. Prodi, Sur le comportement global des solutions non stationnaires des équations de Navier-Stokes en dimension 2, Rend. Sem. Mat. Univ. Padova, 39 (1967), 1-34. Google Scholar D. Henry, Geometric Theory of Semilinear Parabolic Equations Springer, 1981. Google Scholar A. L. Hodgkin and A. F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, J. Physiol., 117 (1952), 500-544. Google Scholar E. M. Izhikevich Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting MIT Press, Cambridge, MA, 2007. Google Scholar C. K. R. T. Jones, Stability of the travelling wave solution of the FitzHugh-Nagumo system, Transactions of the AMS, 286 (1984), 431-469. doi: 10.1090/S0002-9947-1984-0760971-6. Google Scholar N. Kopell and D. Ruelle, Bounds on complexity in reaction-diffusion systems, SIAM J. Appl. Math, 46 (1986), 68-80. doi: 10.1137/0146007. Google Scholar O. A. Ladyzenskaja, V. A. Solonnikov and N. N. Ural'ceva, Linear and Quasilinear Equations of Parabolic Type Providence, Rhode Island, Transl. of Math. Monographs 23,1968. Google Scholar J. L. Lions, Quelques Méthodes de Résolution des Problémes aux Limites non Linéaires Dunod, Paris, 1969. Google Scholar M. Marion, Finite Dimensionnal Attractors associated with Partly Dissipative Reaction-Diffusion Systems, SIAM, J. Math. Anal., 20 (1989), 816-844. doi: 10.1137/0520057. Google Scholar J. Nagumo, S. Arimoto and S. Yoshizawa, An active pulse transmission line simulating nerve axon, Proc. IRE., 50 (1962), 2061-2070. doi: 10.1109/JRPROC.1962.288235. Google Scholar J. Rauch and J. Smoller, Qualitative theory of the fitzhugh nagumo equations, Advances in Mathematics, 27 (1978), 12-44. doi: 10.1016/0001-8708(78)90075-0. Google Scholar J. Robinson, Infinite-Dimensional Dynamical Systems Cambridge University Press, 2001. Google Scholar F. Rothe, Global Solutions of Reaction-Diffusion Systems Springer-Verlag, 1984. Google Scholar J. Smoller, Shock Waves and Reaction-Diffusion Equations Springer, 1994. Google Scholar R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics Springer, 1988. Google Scholar Takashi Kajiwara. The sub-supersolution method for the FitzHugh-Nagumo type reaction-diffusion system with heterogeneity. Discrete & Continuous Dynamical Systems - A, 2018, 38 (5) : 2441-2465. doi: 10.3934/dcds.2018101 Takashi Kajiwara. A Heteroclinic Solution to a Variational Problem Corresponding to FitzHugh-Nagumo type Reaction-Diffusion System with Heterogeneity. Communications on Pure & Applied Analysis, 2017, 16 (6) : 2133-2156. doi: 10.3934/cpaa.2017106 Vyacheslav Maksimov. Some problems of guaranteed control of the Schlögl and FitzHugh-Nagumo systems. Evolution Equations & Control Theory, 2017, 6 (4) : 559-586. doi: 10.3934/eect.2017028 Anhui Gu, Bixiang Wang. Asymptotic behavior of random fitzhugh-nagumo systems driven by colored noise. Discrete & Continuous Dynamical Systems - B, 2018, 23 (4) : 1689-1720. doi: 10.3934/dcdsb.2018072 Boris Andreianov, Halima Labani. Preconditioning operators and $L^\infty$ attractor for a class of reaction-diffusion systems. Communications on Pure & Applied Analysis, 2012, 11 (6) : 2179-2199. doi: 10.3934/cpaa.2012.11.2179 Abiti Adili, Bixiang Wang. Random attractors for stochastic FitzHugh-Nagumo systems driven by deterministic non-autonomous forcing. Discrete & Continuous Dynamical Systems - B, 2013, 18 (3) : 643-666. doi: 10.3934/dcdsb.2013.18.643 Abiti Adili, Bixiang Wang. Random attractors for non-autonomous stochastic FitzHugh-Nagumo systems with multiplicative noise. Conference Publications, 2013, 2013 (special) : 1-10. doi: 10.3934/proc.2013.2013.1 Klemens Fellner, Wolfang Prager, Bao Q. Tang. The entropy method for reaction-diffusion systems without detailed balance: First order chemical reaction networks. Kinetic & Related Models, 2017, 10 (4) : 1055-1087. doi: 10.3934/krm.2017042 Arnold Dikansky. Fitzhugh-Nagumo equations in a nonhomogeneous medium. Conference Publications, 2005, 2005 (Special) : 216-224. doi: 10.3934/proc.2005.2005.216 Anna Cattani. FitzHugh-Nagumo equations with generalized diffusive coupling. Mathematical Biosciences & Engineering, 2014, 11 (2) : 203-215. doi: 10.3934/mbe.2014.11.203 Ching-Shan Chou, Yong-Tao Zhang, Rui Zhao, Qing Nie. Numerical methods for stiff reaction-diffusion systems. Discrete & Continuous Dynamical Systems - B, 2007, 7 (3) : 515-525. doi: 10.3934/dcdsb.2007.7.515 Laurent Desvillettes, Klemens Fellner. Entropy methods for reaction-diffusion systems. Conference Publications, 2007, 2007 (Special) : 304-312. doi: 10.3934/proc.2007.2007.304 A. Dall'Acqua. Positive solutions for a class of reaction-diffusion systems. Communications on Pure & Applied Analysis, 2003, 2 (1) : 65-76. doi: 10.3934/cpaa.2003.2.65 Dieter Bothe, Michel Pierre. The instantaneous limit for reaction-diffusion systems with a fast irreversible reaction. Discrete & Continuous Dynamical Systems - S, 2012, 5 (1) : 49-59. doi: 10.3934/dcdss.2012.5.49 John Guckenheimer, Christian Kuehn. Homoclinic orbits of the FitzHugh-Nagumo equation: The singular-limit. Discrete & Continuous Dynamical Systems - S, 2009, 2 (4) : 851-872. doi: 10.3934/dcdss.2009.2.851 Zhen Zhang, Jianhua Huang, Xueke Pu. Pullback attractors of FitzHugh-Nagumo system on the time-varying domains. Discrete & Continuous Dynamical Systems - B, 2017, 22 (10) : 3691-3706. doi: 10.3934/dcdsb.2017150 Yiqiu Mao. Dynamic transitions of the Fitzhugh-Nagumo equations on a finite domain. Discrete & Continuous Dynamical Systems - B, 2018, 23 (9) : 3935-3947. doi: 10.3934/dcdsb.2018118 Francesco Cordoni, Luca Di Persio. Optimal control for the stochastic FitzHugh-Nagumo model with recovery variable. Evolution Equations & Control Theory, 2018, 7 (4) : 571-585. doi: 10.3934/eect.2018027 Wei-Jian Bo, Guo Lin, Shigui Ruan. Traveling wave solutions for time periodic reaction-diffusion systems. Discrete & Continuous Dynamical Systems - A, 2018, 38 (9) : 4329-4351. doi: 10.3934/dcds.2018189 C. van der Mee, Stella Vernier Piro. Travelling waves for solid-gas reaction-diffusion systems. Conference Publications, 2003, 2003 (Special) : 872-879. doi: 10.3934/proc.2003.2003.872 B. Ambrosio M. A. Aziz-Alaoui V. L. E. Phan
CommonCrawl
Scalar field transformation behaviour of N=4 SYM after a topological twist Importance of studying mass deformed field theory Constructive field theory & N=4 super Yang-Mills D0-branes in two-dimensional ${\cal N}=(2,2)$ Landau-Ginzburg models at critical points of the superpotential Why is the symmetry group of the $A_{N-1}$ theory the $Osp(2,6|4)$? Anyone have a FeynArts model file for N=4 sYM? Non-extended and extended SUSY What would a potential absence of power-law singularities in of N=4 SYM four particle amplitudes mean from a physics point of view? N=4 SYM from Klebanov Witten field theory This is with reference to M. J. Strassler's lectures on "The Duality Cascade" pg. 46. I want to see how $\mathcal{N}=4$ SYM emerges when D3 branes, in the KW setup, are placed at smooth point of the conifold, purely from field theory point of view. That is: Given a $U(N) \times U(N)$ quiver with $A_i$ and $B_i$ in the bi-fundamental and the anti-bi-fundamental respectively and an $SU(2) \times SU(2)$ invariant quartic superpotential (which vanishes for $N=1$), in what manner do we higgs the theory so as to get $\mathcal{N}=4$ SYM? For the case of $N=1$ there is a comment in the lectures. It says, ... suppose we just have one D3 brane and we allow $A_1B_1$ to have an expectation value, so that the D-brane sits at some point away from the singular point of the conifold. Then the gauge group is broken to $U (1)$, and six scalars remain massless — the six possible translations of the D3-brane away from its initial point — exactly the number needed to fill out an $\mathcal{N}= 4$ $U(1)$ vector multiplet. I want to understand this statement. Please help me understand how to work out for case of $N=1$. I can try for the generic value of $N$. This post imported from StackExchange Physics at 2014-08-31 18:06 (UCT), posted by SE-user Orbifold string-theory field-theory asked Aug 31, 2014 in Theoretical Physics by orbifold (195 points) [ no revision ] The conifold is the space of vacua of the Klebanov-Witten theory. It is obtained as the quotient of the complex four dimensional space of fields $A_1$,$A_2$,$B_1$,$B_2$ by the action of the gauge group $U(1) \times U(1)$. In this action, the diagonal $U(1)$ fixes everyone and only the point $(0,0,0,0)$ is fixed by the full $U(1) \times U(1)$. This implies that the gauge group of the theory is $U(1) \times U(1)$ at the singular point of the conifold and is Higgsed to the diagonal $U(1)$ for a point away from the singularity. At the singular point, the four fields $A_1$,$A_2$,$B_1$, $B_2$ can freely fluctuate despite the fact that the conifold is complex three dimensional: it is the definition of a singular point to have a tangent space of dimension, here complex four, greater than the dimension of the manifold. At a point away from the singularity, the four fields $A_1$,$A_2$,$B_1$, $B_2$ are in the adjoint of the diagonal $U(1)$ (of course, adjoint of $U(1)$ is trivial but I keep a terminology which works for the general $U(N)$ case) but one linear combination is frozen because the tangent space at such a point is complex three dimensional (for example, if $A_1 B_1$ is the only combination with a non-trivial expectation value, computation of the tangent space shows that the $A_2 B_2$ direction is frozen). This implies that the theory at this point is a $U(1)$ gauge theory with three massless chiral multiplets in the adjoint representation. This is exactly the content of a $U(1)$ $N=4$ vector multiplet (indeed, a $N=4$ vector multiplet is the same thing as a $N=2$ vector multiplet with a massless $N=2$ hypermultiplet in the adjoint, which is the same thing as a $N=1$ vector multiplet with three massless $N=1$ chiral multiplets in the adjoint). The $U(N)$ case is similar. The only new thing is to check that the superpotential of the Klebanov-Witten theory reduces to the superpotential of the $N=4$ super Yang-Mills at a Higgsed point of the moduli space. answered Sep 2, 2014 by 40227 (5,120 points) [ revision history ] p$\hbar$ysicsOver$\varnothing$low
CommonCrawl
Diophantine Equations and Harmonic Analysis Org: Scott Parsell (West Chester University of Pennsylvania) and Craig Spencer (Kansas State) BRIAN COOK, University of Wisconsin - Madison Diophantine equations in the primes [PDF] We are concerned with the set of solutions, say $V$, to a given Diophantine equation of the form $Q(x)=0$, $x=(x_1,...,x_n)$, which have each coordinate $x_i$ a prime number. If $Q$ is large in terms of a certain notion of rank and satisfies certain local conditions, then $V$ behaves as expected according to standard heuristics. Our aim in this talk is to overview some further results (and questions) for the set $V$. In particular, we shall discuss convergence results for certain ergodic averages on $V$. BRANDON HANSON, Penn State Character Sums with Convolutions [PDF] We discuss applications of sum-product theory and additive combinatorics to the estimation of multiplicative character sums over a finite field. JING-JING HUANG, University of Nevada, Reno Rational points near hypersurfaces [PDF] The distribution of rational points on algebraic varieties is a central problem in number theory. An even more general problem is to investigate rational points lying very close to manifolds, where the algebraic condition is replaced with the non-vanishing curvature condition. In this talk, we will establish a sharp upper bound for the number of rational points of a given height and within a given distance to a hypersurface. This has surprising applications to counting rational points lying on the manifold; indeed setting the distance to zero, we are able to prove an analogue of Serre's Dimension Growth Conjecture (originally stated for projective varieties) in this general setup. Our main innovation in the proof is a bootstrap method that relies on the synthesis of Poisson summation, projective duality and the method of stationary phase. ALEX IOSEVICH, University of Rochester Exponential bases and frames: the return of the Fuglede Conjecture [PDF] We shall explore exponential bases and frames both in Euclidean space and vector spaces over finite fields. In the process we shall prove a recent result due to Iosevich, Mayeli and Pakianathan establishing the Fuglede Conjecture in ${\Bbb Z}_p^2$, namely that the a subset of ${\Bbb Z}_p^2$ has an orthogonal basis of characters if and only if it tiles ${\Bbb Z}_p^2$ by translation. Different types of interaction between analytic, combinatorial and algebraic methods shall be emphasized throughout. MIKE KNAPP, Loyola University Maryland Sextic forms over extensions of $\mathbb{Q}_2$ [PDF] In this talk, we determine the minimum number of variables needed to guarantee that a homogeneous polynomial of the form $a_1 x_1^6 + a_2x_2^6 + \cdots + a_s x_s^6$ has a nontrivial zero in certain quadratic extensions of $\mathbb{Q}_2$. ANGEL KUMCHEV, Towson University Recent progress in the Waring-Goldbach problem [PDF] Recent progress on Vinogradov's mean-value theorem has resulted in improved estimates for exponential sums of Weyl type. In recent joint work with T.D. Wooley, we apply these new estimates to obtain sharper bounds for the function $H(k)$ in the Waring-Goldbach problem. We obtain new results for all exponents $k \ge 8$; in particular, we establish that $H(k) \le (4k-2)\log k+k-7$ when $k$ is large. The latter bound represents the first improvement on a classical result of Hua from the 1940s. WENTANG KUO, University of Waterloo Multi-dimensional Waring's problem in function fields [PDF] The classical Waring's problem is to consider the representations of positive integers as sum of \(k\)th powers. S. T. Parsell considers the multi-dimensional generalization of Waring's problem. For a positive integer \(d \geq 2\), let \( \mathcal{M}=\big\{(i_1,\ldots,i_d)\in \mathbb{N}^d\,\big |\, i_1+\cdots+i_d =k\; \big\}\). For positive integers $P$ and $n_{\bf i}$ $({\bf i}\in\mathcal{M})$, denote by $R_{s,k,d}({\bf n},P)$ the number of solutions of the system of equations \[x_{11}^{i_1}\cdots x_{1d}^{i_d} +\cdots+x_{s1}^{i_1}\cdots x_{sd}^{i_d}=n_{\bf i} \quad({\bf i}\in\mathcal{M})\] with $x_{ij}\in \{1,2,\ldots,P\}$. S. T. Parsell first proves that when $d=2$ and \(s\geq (14/3)k^2\log k+(10/3)k^2\log\log k+O(k^2)\), under certain solubility hypothesis, one has \(R_{s,k,d}({\bf n},P)\gg P^{2s-k(k+1)}\). In this talk, we will discuss the function field analogue of the multi-dimensional generalization of Waring's problem and apply the recent improvement of Vinogradov-type estimates to get the best known result in the function field setting. It is a joint work with Yu-Ru Liu and Xiaomei Zhao. NEIL LYALL, University of Georgia On some problems in Geometric Ramsey Theory [PDF] We plan to discuss progress in some ongoing projects in geometric Ramsey theory. AKOS MAGYAR, University of Georgia On the distribution of integer and prime solutions to diophantine equations [PDF] Let Q be a positive homogeneous integral polynomial. We study the equi-distribution of the solutions to the diophantine equation Q(x)=l, both with integer and prime variables. We prove some quantitative results in terms of the discrepancy with respect to caps, defined as intersections of the level surface {Q=l} with half-spaces, assuming that the form Q has sufficiently high rank. In the simplest settings one obtains the equidistribution of lattice points on spherical caps in dimensions at least 4. MISTY OSTERGAARD, Kansas State University Solutions of Diagonal Congruences with Variables Restricted to Small Intervals [PDF] Our interest is in finding solutions to the diagonal congruence \begin{equation}\label{cong} \sum_{i=1}^n a_ix_i^k \equiv c \pmod q \end{equation} in a cube $\mathcal{B}$ of side length $b$. As an example, for a sufficiently large prime modulus and $n \geq 2k^3$, we obtain a solution to \eqref{cong} in any cube $\mathcal{B}$ of side length $b \geq p^{\frac{1}{k} + \frac{2(k-2)}{n} + \varepsilon}$. Similar results are found for fewer variables. Refinements are obtained for the case of small solutions, and for the case where the number of variables is very large. Results are also given for a general modulus $q$. ANDY POLLINGTON, National Science Foundation Diophantine approximation, a conjecture of Wolfgang Schmidt, Schmidt games and irregularities of distribution [PDF] We describe Littlewood's conjecture in Diophantine approximation and some related questions and also present some connections to work on irregularities of distribution. Some of the work I will describe is Joint with Dmitry Bhadzian, Sanju Velani and some with William Moran. PAUL POTGIETER, University of South Africa Salem sets, equidistribution and arithmetic progressions [PDF] The decay of the Fourier-Stieltjes transform of a measure on a set of positive Fourier dimension is an indication of the uniformity of distribution of the support of the measure. Such uniformity could also indicate that the set has additive properies, such as containing arithmetic progressions, if certain density conditions are satisfied. The same phenomenon can be observed in initial segments of the integers whose discrete Fourier transform decays sufficiently quickly. By adapting a construction of Salem, we make this correspondence precise by showing that infinite subsets of the integers can be used to construct Salem sets in the continuum. Through a characterisation of Salem sets in terms of equidistribution, we further show that one can construct subsets of the integers with a given uniformity requirement from Salem sets, in such a way that arithmetic progressions are preserved. ALEX RICE, University of Rochester Difference Sets and Polynomials [PDF] In a series of papers in the 1970s, Sárközy proved that any set of integers of positive upper density necessarily contains two distinct elements which differ by a perfect square, as well as two elements which differ by one less than a prime number, confirming conjectures of Lovász and Erdös, respectively. In this talk, we provide a brief survey of the extensive literature that has developed on improvements and extensions of these results, culminating in a brand new "super theorem" which expands to sums of polynomials, improves certain quantitative bounds, and includes most previous results as special cases. This is joint work with Neil Lyall. ROBERT VAUGHAN, Penn State University Zeros of Dirichlet series [PDF] We are concerned here with Dirichlet series \begin{equation*} f(s) = 1 +\sum_{n=2}^{\infty} \frac{c(n)}{n^s} \end{equation*} which satisfy a function equation similar to that of the Riemann zeta function, typically of the form \begin{equation*} f(s) = 2^s q^{1/2-s} \pi^{s-1} \Gamma(1-s) \big(\sin\textstyle\frac{\pi}{2}(s+\kappa)\big) f(1-s), \end{equation*} but for which the Riemann hypothesis is false. Indeed we show that the zeros of such functions are ubiquitous in the complex plane. TREVOR WOOLEY, University of Bristol Efficient congruencing, Fourier restriction theory, and Waring's problem [PDF] We report on recent progress concerning Vinogradov's mean value theorem, and related systems of equations of only approximately translation-dilation invariant nature. Based on the application of ``efficient congruencing'', these ideas may also be applied to establish mean value estimates of the shape $$\int_{[0,1)^t} \Biggl| \sum_{1\le n\le N} a_ne(\alpha_1n^{k_1}+\ldots +\alpha_tn^{k_t})\Biggr|^{2s} \, {\rm d}\alpha_1\ldots \, {\rm d}\alpha_t\ll N^\epsilon \Biggl( \sum_{1\le n\le N}|a_n|^2\Biggr)^s\quad (a_n\in \Bbb C),$$ of significance in Fourier restriction theory. We will also announce progress on the problem of establishing the asymptotic formula in Waring's problem for fourth and higher powers. SHUNTARO YAMAGISHI, Queen's University Zeroes of polynomials in many variables with prime inputs [PDF] (This is a joint work with S.Y.Xiao) Given a non-zero form $f(\mathbf{x})\in \mathbb{Q}[x_1, ..., x_n]$, the $h$-invariant of $f$ is defined to be the smallest positive number $h = h(f)$ such that there exists a representation $ f = u_1 v_1 + ... + u_h v_h, $ where $u_i, v_i$ are rational forms of positive degree $(1 \leq i \leq h)$. Let $b(\mathbf{x}) \in \mathbb{Z}[x_1, ..., x_n]$ be a degree $d$ polynomial, and $f_b(\mathbf{x})$ be the degree $d$ portion of $b(\mathbf{x})$. We build on the work of Cook and Magyar to prove that the equation $b(\mathbf{x}) = 0$ is soluble in primes provided that $b(\mathbf{x})$ satisfies suitable local conditions and $f_b$ has a representation $f_b = u_1 v_1 + ... + u_h v_h$, where $h = h(f_b)$, $u_i, v_i$ are rational forms of positive degree $(1 \leq i \leq h)$, and sufficiently large number of $u_i$ are linear.
CommonCrawl
Yi-heng Zhai, Tao Wang, Zhi-kai Qi, Xiang-hua Kong, Hang-xun Xu, Heng-xing Ji. Highly Sensitive Flexible Pressure Sensors based on Graphene/Graphene Scrolls Multilayer Hybrid Films[J]. Chinese Journal of Chemical Physics , 2020, 33(3): 365-370. doi: 10.1063/1674-0068/cjcp1907146 Citation: Yi-heng Zhai, Tao Wang, Zhi-kai Qi, Xiang-hua Kong, Hang-xun Xu, Heng-xing Ji. Highly Sensitive Flexible Pressure Sensors based on Graphene/Graphene Scrolls Multilayer Hybrid Films[J]. Chinese Journal of Chemical Physics , 2020, 33(3): 365-370. doi: 10.1063/1674-0068/cjcp1907146 Highly Sensitive Flexible Pressure Sensors based on Graphene/Graphene Scrolls Multilayer Hybrid Films Yi-heng Zhaia , Tao Wangb , Zhi-kai Qic , Xiang-hua Konga , , , Hang-xun Xub , , , Heng-xing Jic , , School of Chemistry and Chemical Engineering, Hefei University of Technology, Hefei 230009, China CAS Key Laboratory of Soft Matter Chemistry, Department of Polymer Science and Engineering, University of Science and Technology of China, Hefei 230026, China Department of Applied Chemistry, CAS Key Laboratory of Materials for Energy Conversion, University of Science and Technology of China, Hefei 230026, China Corresponding author: Xiang-hua Kong, E-mail:[email protected]; Hang-xun Xu, E-mail:[email protected]; Heng-xing Ji, E-mail:[email protected] In recent years, flexible pressure sensors have attracted much attention owing to their potential applications in motion detection and wearable electronics. As a result, important innovations have been reported in both conductive materials and the underlying substrates, which are the two crucial components of a pressure sensor. 1D materials like nanowires are being widely used as the conductive materials in flexible pressure sensors, but such sensors usually exhibit low performances mainly due to the lack of strong interfacial interactions between the substrates and 1D materials. In this paper, we report the use of graphene/graphene scrolls hybrid multilayers films as the conductive material and a micro-structured polydimethylsiloxane substrate using Epipremnum aureum leaf as the template to fabricate highly sensitive pressure sensors. The 2D structure of graphene allows to strongly anchor the scrolls to ensure the improved adhesion between the highly conductive hybrid films and the patterned substrate. We attribute the increased sensitivity (3.5 kPa$ ^{-1} $), fast response time ($ < $50 ms), and the good reproducibility during 1000 loading-unloading cycles of the pressure sensor to the synergistic effect between the 1D scrolls and 2D graphene films. Test results demonstrate that these sensors are promising for electronic skins and motion detection applications. Pressure sensor, Graphene scrolls, Hybrid films, Electronic skins [1] J. Park, Y. Lee, J. Hong, M. Ha, Y. D. Jung, H. Lim, S. Y. Kim, and H. Ko, ACS Nano 8, 4689 (2014). doi: 10.1021/nn500441k [2] S. H. Shin, S. Ji, S. Choi, K. H. Pyo, B. Wan An, J. Park, J. Kim, J. Y. Kim, and J. U. Park, Nat. Commun. 8, 14950 (2017). doi: 10.1038/ncomms14950 [3] K. Takei, T. Takahashi, J. C. Ho, H. Ko, A. G. Gillies, P. W. Leu, R. S. Fearing, and A. Javey, Nat. Mater. 9, 821 (2010). doi: 10.1038/nmat2835 [4] Y. Zhu, J. Li, H. Cai, Y. Wu, H. Ding, N. Pan, and X. Wang, Sensor. Actuat. B 255, 1262 (2018). doi: 10.1016/j.snb.2017.08.116 [5] C. Wang, X. Li, E. Gao, M. Jian, K. Xia, Q. Wang, Z. Xu, T. Ren, and Y. Zhang, Adv. Mater. 28, 6640 (2016). doi: 10.1002/adma.201601572 [6] Y. Pang, K. Zhang, Z. Yang, S. Jiang, Z. Ju, Y. Li, X. Wang, D. Wang, M. Jian, Y. Zhang, R. Liang, H. Tian, Y. Yang, and T. L. Ren, ACS Nano 12, 2346 (2018). doi: 10.1021/acsnano.7b07613 [7] H. Chen, Z. Su, Y. Song, X. Cheng, X. Chen, B. Meng, Z. Song, D. Chen, and H. Zhang, Adv. Funct. Mater. 27, (2017). [8] L. Q. Tao, K. N. Zhang, H. Tian, Y. Liu, D. Y. Wang, Y. Q. Chen, Y. Yang, and T. L. Ren, ACS Nano 11, 8790 (2017). doi: 10.1021/acsnano.7b02826 [9] K. Xia, C. Wang, M. Jian, Q. Wang, and Y. Zhang, Nano Res. 11, 1124 (2017). [10] B. Zhu, Z. Niu, H. Wang, W. R. Leow, H. Wang, Y. Li, L. Zheng, J. Wei, F. Huo, and X. Chen, Small 10, 3625 (2014). doi: 10.1002/smll.201401207 [11] L. Zhang, G. Hou, Z. Wu, and V. Shanov, Nano LIFE 6, 1642005 (2016). doi: 10.1142/S1793984416420058 [12] Y. Hou, D. Wang, X. M. Zhang, H. Zhao, J. W. Zha, and Z. M. Dang, J. Mater. Chem. C 1, 515 (2013). [13] A. Rinaldi, A. Tamburrano, M. Fortunato, and M. S. Sarto, Sensors 16, (2016). [14] T. Q. Trung, S. Ramasundaram, B. U. H Wang, and N. E. Lee, Adv. Mater. 28, 502 (2016). doi: 10.1002/adma.201504441 [15] A. S. Shaplov, D. O. Ponkratov, P. H. Aubert, E. I. Lozinskaya, C. Plesse, F. Vidal, and Y. S. Vygodskii, Chem. Commun. 50, 3191 (2014). doi: 10.1039/C3CC49876J [16] J. Lee, P. Lee, H. B. Lee, S. Hong, I. Lee, J. Yeo, S. S. Lee, T. S. Kim, D. Lee, and S. H. Ko, Adv. Funct. Mater. 23, 4171 (2013). doi: 10.1002/adfm.201203802 [17] A. V. Shirinov and W. K. Schomburg, Sensor. Actuat. A 142, 48 (2008). doi: 10.1016/j.sna.2007.04.002 [18] C. Dagdeviren, Y. Su, P. Joe, R. Yona, Y. Liu, Y. S. Kim, Y. Huang, A. R. Damadoran, J. Xia, L. W. Martin, Y. Huang, and J. A. Rogers, Nat. Commun. 5, 4496 (2014). doi: 10.1038/ncomms5496 [19] D. Mandal, S. Yoon, and K. J. Kim, Macromol. Rapid. Commun. 32, 831 (2011). doi: 10.1002/marc.201100040 [20] L. Pan, A. Chortos, G. Yu, Y. Wang, S. Isaacson, R. Allen, Y. Shi, R. Dauskardt, and Z. Bao, Nat. Commun. 5, 3002 (2014). doi: 10.1038/ncomms4002 [21] S. Gong, W. Schwalb, Y. Wang, Y. Chen, Y. Tang, J. Si, B. Shirinzadeh, and W. Cheng, Nat. Commun. 5, 3132 (2014). doi: 10.1038/ncomms4132 [22] C. Pang, G. Y. Lee, T. i. Kim, S. M. Kim, H. N. Kim, S. H. Ahn, and K. Y. Suh, Nat. Mater. 11, 795 (2012). doi: 10.1038/nmat3380 [23] C. H. Mastrangelo, Z. Xia, and W. C. Tang, J. Microelectromech. S 5, 98 (1996). doi: 10.1109/84.506197 [24] S. Jung, J. Lee, T. Hyeon, M. Lee, and D. H. Kim, Adv. Mater. 26, 6329 (2014). doi: 10.1002/adma.201402439 [25] D. J. Cohen, D. Mitra, K. Peterson, and M. M. Maharbiz, Nano Lett. 12, 1821 (2012). doi: 10.1021/nl204052z [26] H. B. Yao, J. Ge, C. F. Wang, X. Wang, W. Hu, Z. J. Zheng, Y. Ni, and S. H. Yu, Adv. Mater. 25, 6692 (2013). doi: 10.1002/adma.201303041 [27] M. K. Shin, J. Oh, M. Lima, M. E. Kozlov, S. J. Kim, and R. H. Baughman, Adv. Mater. 22, 2663 (2010). doi: 10.1002/adma.200904270 [28] J. Kuang, Z. Dai, L. Liu, Z. Yang, M. Jin, and Z. Zhang, Nanoscale 7, 9252 (2015). doi: 10.1039/C5NR00841G [29] J. Park, M. Kim, Y. Lee, H. S. Lee, and H. Ko, Sci. Adv. 1, e1500661 (2015). doi: 10.1126/sciadv.1500661 [30] K. Y. Lee, H. J. Yoon, T. Jiang, X. Wen, W. Seung, S. W. Kim, and Z. L. Wang, Adv. Energy Mater. 6, 1502566 (2016). doi: 10.1002/aenm.201502566 [31] X. Wang, Y. Gu, Z. Xiong, Z. Cui, and T. Zhang, Adv. Mater. 26, 1336 (2014). doi: 10.1002/adma.201304248 [32] S. C. Mannsfeld, B. C. Tee, R. M. Stoltenberg, C. V. Chen, S. Barman, B. V. Muir, A. N. Sokolov, C. Reese, and Z. Bao, Nat. Mater. 9, 859 (2010). doi: 10.1038/nmat2834 [33] C. Pang, J. H. Koo, A. Nguyen, J. M. Caves, M. G. Kim, A. Chortos, K. Kim, P. J. Wang, J. B. Tok, and Z. Bao, Adv. Mater. 27, 634 (2015). doi: 10.1002/adma.201403807 [34] B. Su, S. Gong, Z. Ma, L. W. Yap, and W. Cheng, Small 11, 1886 (2015). doi: 10.1002/smll.201403036 [35] Y. Lin, X. Dong, S. Liu, S. Chen, Y. Wei, and L. Liu, ACS Appl. Mater. Inter. 8, 24143 (2016). doi: 10.1021/acsami.6b08587 [36] Y. Qin, Q. Peng, Y. Ding, Z. Lin, C. Wang, Y. Li, F. Xu, J. Li, Y. Yuan, X. He, and Y. Li, ACS Nano 9, 8933 (2015). doi: 10.1021/acsnano.5b02781 [37] Y. Si, X. Wang, C. Yan, L. Yang, J. Yu, and B. Ding, Adv. Mater. 28, 9512 (2016). doi: 10.1002/adma.201603143 [38] S. Chun, H. Jung, Y. Choi, G. Bae, J. P. Kil, and W. Park, Carbon 94, 982 (2015). doi: 10.1016/j.carbon.2015.07.088 [39] G. Y. Bae, S. W. Pak, D. Kim, G. Lee, D. H. Kim, Y. Chung, and K. Cho, Adv. Mater. 28, 5300 (2016). doi: 10.1002/adma.201600408 [40] L. Lin, Y. Xie, S. Wang, W. Wu, S. Niu, X. Wen, and Z. L. Wang, ACS Nano 7, 8266 (2013). doi: 10.1021/nn4037514 [41] L. Z. Sheng, Y. Liang, L. L. Jiang, Q. Wang, T. Wei, L. T. Qu, and Z. J. Fan, Adv. Funct. Mater. 25, 6545 (2015). doi: 10.1002/adfm.201502960 [42] M. Jian, K. Xia, Q. Wang, Z. Yin, H. Wang, C. Wang, H. Xie, M. Zhang, and Y. Zhang, Adv. Funct. Mater. 27, 1606066 (2017). doi: 10.1002/adfm.201606066 [43] W. He, G. Li, and S. Zhang, ACS Nano 9, 4244 (2015). doi: 10.1021/acsnano.5b00626 [44] S. Chun, Y. Kim, H. S. Oh, G. Bae, and W. Park, Nanoscale 7, 11652 (2015). doi: 10.1039/C5NR00076A [45] N. Liu, A. Chortos, T. Lei, L. Jin, T. R. Kim, W. G. Bae, C. Zhu, S. Wang, R. Pfattner, X. Chen, R. Sinclair, and Z. Bao, Sci. Adv. 3, (2017). [46] N. Lu, C. Lu, S. Yang, and J. Rogers, Adv. Funct. Mater. 22, 4044 (2012). doi: 10.1002/adfm.201200498 [47] Y. Wei, S. Chen, X. Yuan, P. Wang, and L. Liu, Adv. Funct. Mater. 26, 5078 (2016). doi: 10.1002/adfm.201600580 [48] Y. Wang, R. Yang, Z. Shi, L. Zhang, D. Shi, E. Wang, and G. Zhang, ACS Nano 5, 3645 (2011). doi: 10.1021/nn103523t Article views(111) PDF downloads(15) Cited by() Yi-heng Zhaia, Tao Wangb, Zhi-kai Qic, Xiang-hua Konga, , , Hang-xun Xub, , , Heng-xing Jic, , a. School of Chemistry and Chemical Engineering, Hefei University of Technology, Hefei 230009, China b. CAS Key Laboratory of Soft Matter Chemistry, Department of Polymer Science and Engineering, University of Science and Technology of China, Hefei 230026, China c. Department of Applied Chemistry, CAS Key Laboratory of Materials for Energy Conversion, University of Science and Technology of China, Hefei 230026, China Pressure sensor / Graphene scrolls / Hybrid films / Abstract: In recent years, flexible pressure sensors have attracted much attention owing to their potential applications in motion detection and wearable electronics. As a result, important innovations have been reported in both conductive materials and the underlying substrates, which are the two crucial components of a pressure sensor. 1D materials like nanowires are being widely used as the conductive materials in flexible pressure sensors, but such sensors usually exhibit low performances mainly due to the lack of strong interfacial interactions between the substrates and 1D materials. In this paper, we report the use of graphene/graphene scrolls hybrid multilayers films as the conductive material and a micro-structured polydimethylsiloxane substrate using Epipremnum aureum leaf as the template to fabricate highly sensitive pressure sensors. The 2D structure of graphene allows to strongly anchor the scrolls to ensure the improved adhesion between the highly conductive hybrid films and the patterned substrate. We attribute the increased sensitivity (3.5 kPa$ ^{-1} $), fast response time ($ < $50 ms), and the good reproducibility during 1000 loading-unloading cycles of the pressure sensor to the synergistic effect between the 1D scrolls and 2D graphene films. Test results demonstrate that these sensors are promising for electronic skins and motion detection applications. Flexible and wearable electronic devices are being developed for potential applications in electronic skins [1-3], biomedical monitors [4, 5], and motion detection [6-8]. In this context, materials with high flexibility that can be produced through low-cost processes are highly desired [9-16]. Currently available pressure sensors can be mainly classified into three different types: piezoelectric sensors [17-19], piezoresistive sensors [20-22], and capacitive sensors [23-25]. Piezoresistive sensors that show a change in resistance under applied pressure have been widely used owing to their many advantages, including ease of preparation, simple device structure, and convenient signal acquisition [26-28]. In general, a micro structured flexible substrate and a material with excellent conductivity are used in the fabrication of piezoresistive pressure sensors to optimize performance. With respect to the substrates, flexible substrates containing hemispheres arrays [29, 30], prism arrays [31], pyramid arrays [32, 33], and microgrooves [34] have been used in high-performance sensors. These microstructures are obtained by traditional lithography and often show good performance, but these processes are time-consuming in addition to being costly. Therefore, flexible substrates that can be made using simple low-cost preparation methods have been explored for the fabrication of high-sensitivity pressure sensors. For instance, Pang et al. [6] designed a pressure sensor with a sensitivity of 25.1 kPa$ ^{-1} $ based on reduced graphene oxide (rGO) coated on a micro structured polydimethylsiloxane (PDMS) film with microroughness copied using an abrasive paper as a mold. Apart from designing micro-structured substrates, it is also important to select a conductive material with high sensitivity to obtain pressure sensors with optimal performance. To date, conductive materials including carbon nanotubes (CNTs) [35, 36], carbonized nanofibers [37], graphene [38, 39], and metal nanoparticles [40] have been used in pressure sensors. Pan et al. [20] reported a hollow polypyrrole hydrogel-based pressure sensor with high sensitivity of 133.1 kPa$ ^{-1} $ to detect pressure under 30 Pa. Sheng et al. [41] developed honeycomb-like graphene film-based pressure sensor decorated with bubbles on the surface, which showed ultrahigh sensitivity of 161.6 kPa$ ^{-1} $. Metal nanowires have been commonly used as the conductive material in flexible pressure sensors. However, the interface between the substrate and the 1D material, such as a nanowire, is not strong enough, resulting in low sensitivity of the sensor [42]. For example, He et al. [43] reported polypyrrole/silver coaxial nanowire aero-sponge with a sensitivity of only 0.33 kPa$ ^{-1} $. In this context, a 2D material, such as a graphene film, can serve as both a conductive contact and as a support for the 1D conductor, thereby increasing contacting area with the substrate [44]. Therefore, we believe that the synergistic effects between the two constituent components in a 1D-2D hybrid film can be potentially attractive for pressure sensors. An optimal conductive material should be interconnected to form a continuous electron pathway, deformable in shape and variable in conductivity with the pressure, and robust under repeated pressure loading/unloading. In this paper, we report a high-performance flexible pressure sensor based on graphene/graphene scrolls (G/GS) hybrid films as the conductive material and using micro-structured PDMS (m-PDMS) replicated from Epipremnum aureum leaf as the flexible substrate. The unique composite structure combining G with GS, both of which are prepared by chemical vapor deposition (CVD) followed by simple etching, allowing to significantly improve performance of the pressure sensor. The substrate m-PDMS with its hierarchical microstructure copied from leaves also plays an important role in achieving high performance. The obtained pressure sensor shows high sensitivity (3.5 kPa$ ^{-1} $), fast response time ($ < $50 ms), and good reproducibility during $ > $1000 loading-unloading cycles. We have also tested the applications of the G/GS pressure sensor for human motion detection based on its response to periodic stress and bending force to demonstrate its potential application in smart wearable devices. A. Structure characterization Optical microscopy (Leica DM2500M) and scanning electron microscopy (SEM, FEI Apreo) were used to study the morphology and structure of the samples. Raman spectra were recorded using Renishaw inVia micro Raman spectrometer with a 50$ \times $ objective lens and a 532 nm laser as the excitation source. B. Mechanical properties measurements A compression performance testing system (UTM 2000) was used to apply pressure and a precision LCR meter (TH2827A) was used to characterize the electrical properties and to record the electrical response of the pressure sensor. A. Fabrication of the G/GS/m-PDMS pressure sensor Single-layer graphene was prepared by CVD (see details in supplementary materials). To transfer graphene, we spin-coated a thin layer of polymethylmethacrylate (PMMA) on one side of the graphene layer (which we called top graphene) forming a PMMA/top graphene/Cu foil/bottom graphene structure. Next, the PMMA/top graphene/Cu foil/bottom graphene was floated on (NH$ _4 $)$ _2 $S$ _2 $O$ _8 $ solution to etch Cu. During Cu foil etching, the unprotected bottom-side graphene invariably developed cracks and defects that allow the etchant to penetrate through. Due to surface tension, the released graphene domains rolled up into scrolls and attached onto the top-G/PMMA films [45]. After the Cu was removed, the PMMA layer was dissolved with acetone to form graphene/graphene scrolls (G/GS) hybrid films, which we denote here as 1-layer G/GS hybrid films. The G/GS films can be transferred to essentially any other substrates. By repeating the transfer process, 2-, 3-, or 4-layer G/GS films can be obtained as shown in FIG. 1(a-c). For micro-structuring the substrate, we used a leaf of the common indoor plant, Epipremnum aureum. After cutting and cleaning, the leaf was placed on a 2.2 cm$ \times $2.2 cm glass slide. The PDMS (Sylgard 184, Dow Corning) was prepared by mixing the base and curing agent in the weight ratio 10:1 [46]. The uncured PDMS was spread on the leaf-covered glass sheet by adding it dropwise onto the leaf to completely cover the sheet. After heating at 60 ℃ for 8 h, the excess PDMS was scraped off with a blade, leaving m-PDMS substrate to replicate the microstructure of the leaf fragment. The presence of a large number of sharp ridges significantly increased surface roughness and provided a large number of contact points which contributed to improving the performance of the pressure sensor [42]. The G/GS film of different layer numbers were transferred onto the m-PDMS substrate to serve as the conductive material for the pressure sensor (FIG. 1(d)). Silver paste was coated on one side of each G/GS film to which, copper electrodes were attached (FIG. 1(e)). The schematic diagram of the working mechanism is shown in FIG. 1(f). Figure 1. Fabrication of G/GS pressure sensor. (a$ - $c) Fabrication of G/GS films of different layers. (d) PDMS substrate with microstructures on surface that replicates the surface structure of Epipremnum aureum leaf coated with G/GS films. (e) Photograph of a G/GS pressure sensor assembled using m-PDMS and G/GS films. (f) Schematic of the structure deformation when pressure is loading/unloading. The distribution of graphene scrolls in the hybrid film was studied from optical microscopy images. The scrolls are randomly distributed over the surface, and their in-plane density increases with the number of stacked G/GS layers. FIG. 2(a-e) show that the percentage area covered by 1, 2, 3, and 4-layer graphene scrolls are (7.87$ \pm $1.84)%, (16.22$ \pm $1.65)%, (25.02$ \pm $1.52)%, and (30.37$ \pm $1.13)%, respectively (see statistical methods in supplementary materials). Individual G/GS films can be transferred to m-PDMS substrates to conformally coat the micro-structured surface. The hybrid films were further characterized by Raman spectroscopy. FIG. S1 (supplementary materials) shows Raman spectra taken from two areas on a hybrid film. Both spectra show no D band ($ \sim $1350 cm$ ^{-1} $) confirming the high quality of the graphene films. Besides, $ I_{\rm{2D}}/I_{\rm{G}} $ ratio and the full width at half-maximum of the 2D band confirm the multilayer structure of the hybrid films. The SEM image of m-PDMS coated with G/GS films shows clearly that the presence graphene enables the G/GS films to make better contact with the m-PDMS substrate and the composite films closely adhere to the micro-structured surface (FIG. S2 in supplementary materials). In addition, the optical micrographs of 4-layer PG films are shown in FIG. 2(f). The conformal coating and good adhesion of G/GS films on the m-PDMS substrate provide excellent conditions for the testing of pressure sensors. Figure 2. Structure of the G/GS films. (a$ - $d) Optical micrographs of 1-, 2-, 3-, and 4-layer G/GS films. (e) The fraction of the area covered by graphene scrolls in hybrid films with different number of stacked layers. (f) Optical micrographs of 4-layer PG films. B. Performance of the G/GS pressure sensor Sensitivity is one of the most important parameters of a pressure sensor. Mathematically, sensitivity can be expressed as $ S $ = ($ \Delta R/R_0)/\Delta P $, where $ \Delta R/R_0 $ is the relative change in resistance and $ \Delta P $ is the change in pressure [47]. Even though all the sensors tested in this study use graphene films as the conductive material, 4-, 3-, and 2-layer GS/G pressure sensors show different $ S $ values of 3.5, 3.0, and 2.1 kPa$ ^{-1} $ (FIG. 3(a)), in the pressure range of 0$ - $0.45, 0$ - $0.75, and 0$ - $1.30 kPa, respectively. Figure 3. Electromechanical properties of the G/GS pressure sensor. (a) Sensitivity of G/GS pressure sensors with different numbers of stacked G/GS layers. (b) Sensitivity of the pressure sensor in the presence of graphene scrolls compared to that of pure graphene film. (c) Response of the pressure sensor under different stress conditions. (d) Linear relationship between $ \Delta R/R_0 $ and stress. (e) Performance of the pressure sensor during 1000 cycles under 459.1 Pa pressure. (f) Magnified image of (e). Response time of (g) 50 ms and (h) 30 ms for the 4-layer G/GS pressure sensor. We attribute the different performances of the G/GS pressure sensors with different number of G/GS layers to the presence of graphene scrolls. In the initial state, G/GS films are contacted to each other, forming conductive pathways. When the pressure is applied, a greater number of scrolls bridge the bilevel monolayer graphene to form a percolating conducting network (FIG. 1(f)), resulting in gradual decrease in resistances. Meanwhile, the scrolls also prevent contact between layers, leading to a highly resistive initial configuration when no pressure is applied on the sensor. Under applied pressure, the contact area of the scrolls increases dramatically and the resistance of the pressure sensor rapidly decreases, leading to the high sensitivity. This also accounts for the highest sensitivity obtained for the 4-layer G/GS pressure sensor. To further compare the performance of the sensors in the presence and absence of scrolls and its impact on the sensitivity, we also tested a pressure sensor with 4-layer PG (pure graphene) films; as shown in FIG. 3(b), the sensitivity $ S $ is measured to be 0.6 kPa$ ^{-1} $. This result highlights the significant difference in sensitivity in the presence of scrolls. Furthermore, we list the recently reported piezoresistive with general 1D materials sensors from previous work (Table S1 in supplementary materials). The results described above have confirmed the key role of scrolls in maintaining high sensitivity by forming a conductive bridge between the two layers of monolayer graphene. To test the stability of the sensor under repeated application of pressure, we have tested its response and repeatability. We applied a dynamic pressure cycle of 51.1, 153.1, 357.1, and 510.2 Pa on a 4-layer G/GS pressure sensor. As seen in FIG. 3(c), the response of the pressure sensor is both stable and reproducible. Moreover, the observed change in $ \Delta R/R_0 $ under dynamic cyclic pressure of different magnitudes is consistent with the change in sensitivity described in FIG. 3(a). Furthermore, the slope of the linearity range determines the sensitivity, while the length of the linearity range determines the detection limit of the pressure sensor. In our work, we tested the 4-layer G/GS pressure sensor at 51.1, 153.1, 357.1, and 510.2 Pa for 50 cycles, and the average of the maximum $ \Delta R/R_0 $ value at each pressure was calculated. This average was then plotted against stress. A linear relationship ($ R^2 $ = 0.992) is obtained as shown in FIG. 3(d). FIG. 3(e) describes results of the repeatability testing for 4-layer G/GS pressure sensor during 1000 cycles at 459.1 Pa. As can be seen from FIG. 3(e), the pressure sensor maintains a high signal-to-noise ratio [48] and the change in $ \Delta R/R_0 $ is within 2.5% after 1000 loading-unloading cycles; the magnified plot covering cycle numbers from 657 to 660 shown in FIG. 3(f) confirms that the pressure sensor has good reproducibility under repeated application of pressures and the graphene scrolls remain stable under these conditions. We also measured the response time of the sensors to 2.0 kPa pressure; the response time is within 50 ms for the 4-layer G/GS pressure sensor (FIG. 3(g, h)). The response time of the 3-, and 2-layer G/GS pressure is within 95 and 175 ms, respectively (FIG. S3 in supplementary materials). C. Artificial stress and bending monitoring The G/GS pressure sensor for artificial stress and bending monitoring are shown in FIG. 4. Our pressure sensor has a wide range of applications in the field of wearable electronics. When applying a periodic stress and bending force to the G/GS pressure sensor, the pressure sensor shows fast and repeatable responses. High signal-to-noise ratios are observed in both cases, demonstrating the high sensitivity of our G/GS pressure sensor. Moreover, uniform performance under different types of stress will enable the application of this sensor in wearable electronic devices. Figure 4. Artificial stress and bending monitoring. Scale bar of the insets is 10 cm. The G/GS pressure sensor fabricated from graphene/graphene scrolls hybrid films on micro structured PMDS shows high sensitivity (3.5 kPa$ ^{-1} $), fast response time (within 50 ms) and excellent durability over 1000 cycles. Furthermore, tests on the application of the sensor demonstrate that it can be used to detect movement in human subjects including stress and bending force monitoring. Besides, the sensor has the advantages of easy preparation, low-cost and much flexibility, all of which are critically important for its future application in wearable electronic devices. Supplementary materials: Preparation of graphene, statistical method of ratio about graphene scrolls, Raman spectroscopy of hybrid films, SEM image of the G/GS films coated on m-PDMS, response times of 3-, 2-layer G/GS pressure sensor and summary of recently reported piezoresistive sensors from previous work are available. This work was supported by the National Natural Science Foundation of China (No.21503064), Anhui Provincial Natural Science Foundation (No.1508085QE103), and the 100 Talents Program of the Chinese Academy of Sciences. Preparation of Graphene A copper foil (25µm thick, 99.8%, Alfa-Aesar batch #13382) was used as the substrate for the growth of single-layer graphene. The foil was cut into a rectangular piece (10 cm × 6 cm) and folded into a cambered shape. The foil was then placed in a tube furnace (Hefei Kejing Company OTF-1200x) and heated to 1040 ℃, after which, 50 sccm of H2 was introduced into the quartz tube for 30 min. The CVD reaction was then carried out in three stages of 10 min each; the hydrogen flow rate was maintained constant during all the three stages. In the first stage, 0.1 sccm of CH4 was injected, which was increased to 1 sccm and 2 sccm in the next two stages. Most importantly, the hybrid gas was pressurized for 2 min to enable the growing graphene film to fully cover the copper surface. At the end of the reaction, both the sides of the copper foil were uniformly covered by a monolayer of graphene. We used the lasso function of the Photo Shop software to count the number of graphene scrolls in each picture. We selected four representative pictures and the number of scrolls was counted ten times in each picture; the average value was taken and the relative error between the different values obtained was estimated. Figure S1. Raman spectra taken on two different areas on the hybrid films. Figure S2. SEM image of the G/GS films coated on m-PDMS. Figure S3. Response times of (a and b) 3-layer G/GS pressure sensor and (c and d) 2-layer G/GS pressure sensor. Table S1. Summary of recently reported piezoresistive sensors from previous work. [1] W. He, G. Li, S. Zhang, Y. Wei, J. Wang, Q. Li, and X. Zhang, ACS Nano 9, 4244 (2015). [2] X. Wang, Y. Gu, Z. Xiong, Z. Cui, and T. Zhang, Adv. Mater. 26, 1336 (2014). [3] Y. Tai, and G. Lubineau, Adv. Funct. Mater. 26, 4078 (2016). [4] S. Gong, W. Schwalb, Y. Wang, Y. Chen, Y. Tang, J. Si, B. Shirinzadeh, and W. Cheng, Nat. Commun. 5, 3132 (2014). [5] H. Zhang, N. Liu, Y. Shi, W. Liu, Y. Yue, and S. Wang, ACS Appl. Mater. Inter. 8, 2237 (2016). [6] M. Jian, K. Xia, Q. Wang, Z. Yin, H. Wang, C. Wang, H. Xie, M. Zhang, and Y. Zhang, Adv. Funct. Mater. 27, 1606 (2017). [1] J. Park, Y. Lee, J. Hong, M. Ha, Y. D. Jung, H. Lim, S. Y. Kim, and H. Ko, ACS Nano 8, 4689 (2014). [2] S. H. Shin, S. Ji, S. Choi, K. H. Pyo, B. Wan An, J. Park, J. Kim, J. Y. Kim, and J. U. Park, Nat. Commun. 8, 14950 (2017). [3] K. Takei, T. Takahashi, J. C. Ho, H. Ko, A. G. Gillies, P. W. Leu, R. S. Fearing, and A. Javey, Nat. Mater. 9, 821 (2010). [4] Y. Zhu, J. Li, H. Cai, Y. Wu, H. Ding, N. Pan, and X. Wang, Sensor. Actuat. B 255, 1262 (2018). [5] C. Wang, X. Li, E. Gao, M. Jian, K. Xia, Q. Wang, Z. Xu, T. Ren, and Y. Zhang, Adv. Mater. 28, 6640 (2016). [6] Y. Pang, K. Zhang, Z. Yang, S. Jiang, Z. Ju, Y. Li, X. Wang, D. Wang, M. Jian, Y. Zhang, R. Liang, H. Tian, Y. Yang, and T. L. Ren, ACS Nano 12, 2346 (2018). [8] L. Q. Tao, K. N. Zhang, H. Tian, Y. Liu, D. Y. Wang, Y. Q. Chen, Y. Yang, and T. L. Ren, ACS Nano 11, 8790 (2017). [10] B. Zhu, Z. Niu, H. Wang, W. R. Leow, H. Wang, Y. Li, L. Zheng, J. Wei, F. Huo, and X. Chen, Small 10, 3625 (2014). [11] L. Zhang, G. Hou, Z. Wu, and V. Shanov, Nano LIFE 6, 1642005 (2016). [14] T. Q. Trung, S. Ramasundaram, B. U. H Wang, and N. E. Lee, Adv. Mater. 28, 502 (2016). [15] A. S. Shaplov, D. O. Ponkratov, P. H. Aubert, E. I. Lozinskaya, C. Plesse, F. Vidal, and Y. S. Vygodskii, Chem. Commun. 50, 3191 (2014). [16] J. Lee, P. Lee, H. B. Lee, S. Hong, I. Lee, J. Yeo, S. S. Lee, T. S. Kim, D. Lee, and S. H. Ko, Adv. Funct. Mater. 23, 4171 (2013). [17] A. V. Shirinov and W. K. Schomburg, Sensor. Actuat. A 142, 48 (2008). [18] C. Dagdeviren, Y. Su, P. Joe, R. Yona, Y. Liu, Y. S. Kim, Y. Huang, A. R. Damadoran, J. Xia, L. W. Martin, Y. Huang, and J. A. Rogers, Nat. Commun. 5, 4496 (2014). [19] D. Mandal, S. Yoon, and K. J. Kim, Macromol. Rapid. Commun. 32, 831 (2011). [20] L. Pan, A. Chortos, G. Yu, Y. Wang, S. Isaacson, R. Allen, Y. Shi, R. Dauskardt, and Z. Bao, Nat. Commun. 5, 3002 (2014). [21] S. Gong, W. Schwalb, Y. Wang, Y. Chen, Y. Tang, J. Si, B. Shirinzadeh, and W. Cheng, Nat. Commun. 5, 3132 (2014). [22] C. Pang, G. Y. Lee, T. i. Kim, S. M. Kim, H. N. Kim, S. H. Ahn, and K. Y. Suh, Nat. Mater. 11, 795 (2012). [23] C. H. Mastrangelo, Z. Xia, and W. C. Tang, J. Microelectromech. S 5, 98 (1996). [24] S. Jung, J. Lee, T. Hyeon, M. Lee, and D. H. Kim, Adv. Mater. 26, 6329 (2014). [25] D. J. Cohen, D. Mitra, K. Peterson, and M. M. Maharbiz, Nano Lett. 12, 1821 (2012). [26] H. B. Yao, J. Ge, C. F. Wang, X. Wang, W. Hu, Z. J. Zheng, Y. Ni, and S. H. Yu, Adv. Mater. 25, 6692 (2013). [27] M. K. Shin, J. Oh, M. Lima, M. E. Kozlov, S. J. Kim, and R. H. Baughman, Adv. Mater. 22, 2663 (2010). [28] J. Kuang, Z. Dai, L. Liu, Z. Yang, M. Jin, and Z. Zhang, Nanoscale 7, 9252 (2015). [29] J. Park, M. Kim, Y. Lee, H. S. Lee, and H. Ko, Sci. Adv. 1, e1500661 (2015). [30] K. Y. Lee, H. J. Yoon, T. Jiang, X. Wen, W. Seung, S. W. Kim, and Z. L. Wang, Adv. Energy Mater. 6, 1502566 (2016). [31] X. Wang, Y. Gu, Z. Xiong, Z. Cui, and T. Zhang, Adv. Mater. 26, 1336 (2014). [32] S. C. Mannsfeld, B. C. Tee, R. M. Stoltenberg, C. V. Chen, S. Barman, B. V. Muir, A. N. Sokolov, C. Reese, and Z. Bao, Nat. Mater. 9, 859 (2010). [33] C. Pang, J. H. Koo, A. Nguyen, J. M. Caves, M. G. Kim, A. Chortos, K. Kim, P. J. Wang, J. B. Tok, and Z. Bao, Adv. Mater. 27, 634 (2015). [34] B. Su, S. Gong, Z. Ma, L. W. Yap, and W. Cheng, Small 11, 1886 (2015). [35] Y. Lin, X. Dong, S. Liu, S. Chen, Y. Wei, and L. Liu, ACS Appl. Mater. Inter. 8, 24143 (2016). [36] Y. Qin, Q. Peng, Y. Ding, Z. Lin, C. Wang, Y. Li, F. Xu, J. Li, Y. Yuan, X. He, and Y. Li, ACS Nano 9, 8933 (2015). [37] Y. Si, X. Wang, C. Yan, L. Yang, J. Yu, and B. Ding, Adv. Mater. 28, 9512 (2016). [38] S. Chun, H. Jung, Y. Choi, G. Bae, J. P. Kil, and W. Park, Carbon 94, 982 (2015). [39] G. Y. Bae, S. W. Pak, D. Kim, G. Lee, D. H. Kim, Y. Chung, and K. Cho, Adv. Mater. 28, 5300 (2016). [40] L. Lin, Y. Xie, S. Wang, W. Wu, S. Niu, X. Wen, and Z. L. Wang, ACS Nano 7, 8266 (2013). [41] L. Z. Sheng, Y. Liang, L. L. Jiang, Q. Wang, T. Wei, L. T. Qu, and Z. J. Fan, Adv. Funct. Mater. 25, 6545 (2015). [42] M. Jian, K. Xia, Q. Wang, Z. Yin, H. Wang, C. Wang, H. Xie, M. Zhang, and Y. Zhang, Adv. Funct. Mater. 27, 1606066 (2017). [43] W. He, G. Li, and S. Zhang, ACS Nano 9, 4244 (2015). [44] S. Chun, Y. Kim, H. S. Oh, G. Bae, and W. Park, Nanoscale 7, 11652 (2015). [46] N. Lu, C. Lu, S. Yang, and J. Rogers, Adv. Funct. Mater. 22, 4044 (2012). [47] Y. Wei, S. Chen, X. Yuan, P. Wang, and L. Liu, Adv. Funct. Mater. 26, 5078 (2016). [48] Y. Wang, R. Yang, Z. Shi, L. Zhang, D. Shi, E. Wang, and G. Zhang, ACS Nano 5, 3645 (2011).
CommonCrawl
Quadratic Equation and Inequalities Mathematical Induction and Binomial Theorem Sequences and Series Matrices and Determinants Vector Algebra and 3D Geometry Mathematical Reasoning Sets and Relations Trigonometric Functions & Equations Properties of Triangle Inverse Trigonometric Functions Straight Lines and Pair of Straight Lines Limits, Continuity and Differentiability Application of Derivatives Indefinite Integrals Definite Integrals and Applications of Integrals Let $$\left\{ {{a_n}} \right\}_{n = 1}^\infty $$ be a sequence such that a1 = 1, a2 = 1 and $${a_{n + 2}} = 2{a_{n + 1}} + {a_n}$$ for all n $$\ge$$ 1. Then the value of $$47\sum\limits_{n = 1}^\infty {{{{a_n}} \over {{2^{3n}}}}} $$ is equal to ______________. Sn(x) = loga1/2x + loga1/3x + loga1/6x + loga1/11x + loga1/18x + loga1/27x + ...... up to n-terms, where a > 1. If S24(x) = 1093 and S12(2x) = 265, then value of a is equal to ____________. Let $${1 \over {16}}$$, a and b be in G.P. and $${1 \over a}$$, $${1 \over b}$$, 6 be in A.P., where a, b > 0. Then 72(a + b) is equal to ___________. Consider an arithmetic series and a geometric series having four initial terms from the set {11, 8, 21, 16, 26, 32, 4}. If the last terms of these series are the maximum possible four digit numbers, then the number of common terms in these two series is equal to ___________. Questions Asked from Sequences and Series (Numerical) JEE Main 2022 (Online) 29th July Morning Shift (2) JEE Main 2022 (Online) 28th July Evening Shift (1) JEE Main 2022 (Online) 27th July Evening Shift (1) JEE Main 2022 (Online) 26th July Evening Shift (2) JEE Main 2022 (Online) 26th July Morning Shift (1) JEE Main 2022 (Online) 25th July Morning Shift (2) JEE Main 2022 (Online) 29th June Evening Shift (1) JEE Main 2022 (Online) 28th June Evening Shift (1) JEE Main 2022 (Online) 27th June Morning Shift (1) JEE Main 2022 (Online) 26th June Evening Shift (1) JEE Main 2022 (Online) 25th June Morning Shift (2) JEE Main 2021 (Online) 31st August Evening Shift (1) JEE Main 2021 (Online) 31st August Morning Shift (1) JEE Main 2021 (Online) 26th August Evening Shift (2) JEE Main 2021 (Online) 27th July Morning Shift (1) JEE Main 2021 (Online) 25th July Morning Shift (1) JEE Main 2021 (Online) 22th July Evening Shift (1) JEE Main 2021 (Online) 20th July Evening Shift (2) JEE Main 2021 (Online) 16th March Evening Shift (2) JEE Main 2021 (Online) 16th March Morning Shift (1) JEE Main 2021 (Online) 26th February Evening Shift (2) JEE Main 2021 (Online) 25th February Morning Shift (1) JEE Main 2020 (Online) 3rd September Evening Slot (1) JEE Main 2020 (Online) 3rd September Morning Slot (1) JEE Main 2020 (Online) 9th January Evening Slot (1) JEE Main 2020 (Online) 8th January Evening Slot (1) JEE Main 2020 (Online) 8th January Morning Slot (1)
CommonCrawl