Physics as Analog Finite Precision Computation vs Physics as Statistics

I am exploring an approach to physics as "analog finite precision computation", to be compared with classical physics as "analog infinite precision physics" and modern physics as "physics of dice games" or "statistical physics".

The step from classical to modern physics was forced upon physicists starting in the mid 19th century, when it became clear that the 2nd law of thermodynamics could not be found in the classical infinite precision physics of reversible systems. The way to achieve irreversibility was to assume that atoms play dice games, with the outcome of a throw of a die inherently irreversible: to "unthrow" a die was (correctly) understood to be impossible, and thus irreversibility was introduced and the paralysis of reversible classical physics was broken. So far so good. But the fix came with severe side effects, as real physics independent of human observation was replaced by statistical physics representing "human understanding", as if the world goes around just because some physicist is making observations and claiming them to be understandable. Einstein and Schrödinger could never be convinced that atoms play dice, despite major pressure from the physics community. The unfortunate result of this collapse of the rationality of deterministic physics has led modern physics into wildly speculative physics of strings and multiverse, which nobody can understand.

But there is a milder way of introducing irreversibility into classical reversible physics, and that is to view physics as analog computation with finite precision instead of infinite precision. This connects directly to a computer operating with finite decimal expansions of real numbers, as a necessary restriction of infinite decimal expansions, in order to allow computations to be performed in finite time: in order to make the world go around, and it does go around and thus does not come to a halt, physical processes cannot be realised with infinite precision, and thus finite precision computation is a must in a world that goes around. It is thus necessary, but it is also sufficient, to introduce irreversibility into classical reversible physics. Finite precision computation thus solves the main problem which motivated the introduction of statistical physics, but in a much more gentle way and without the severe side effects of full-blown statistics based on dice games.

Finite precision computational physics is represented by the modern computer, while statistical physics would correspond to a "dice computer" throwing a die at every step of decision, just like the "dice man" created by the pseudonym Luke Rhinehart. The life of the "dice man" turned into misery, which can be compared with (reasonably) successful ordinary (reasonably controlled) life under finite precision, without a die but with constant pressure to go on to the next day.

So if you want to compare finite precision analog physics to modern statistical physics, make the thought experiment of comparing your usual finite precision computer, which you use to your advantage, to a "dice computer" which would be completely unpredictable. This is the comparison between an experienced computer wiz often getting reliable results, and a totally inexperienced user pushing the keys randomly and getting garbage. Or make the comparison of getting married to a person who follows a principle of "finite precision" with a person like the "dice man" who is completely unpredictable. What would you prefer?
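The claim that finite precision alone suffices to turn formally reversible dynamics irreversible can be illustrated in a few lines of code. Below is a minimal sketch (my own, not from the post; the parameter values are illustrative assumptions): the Chirikov standard map is exactly invertible in exact arithmetic, so stepping forward and then backward should return the initial state, but in double precision the rounding errors are amplified by the chaotic dynamics and reversibility is lost.

    import math

    def forward(theta, p, K):
        # Chirikov standard map: exactly invertible in exact arithmetic
        p = (p + K * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
        return theta, p

    def backward(theta, p, K):
        # the exact algebraic inverse of one forward step
        theta = (theta - p) % (2 * math.pi)
        p = (p - K * math.sin(theta)) % (2 * math.pi)
        return theta, p

    K = 5.0                      # strongly chaotic regime (illustrative choice)
    theta0, p0 = 1.0, 0.5
    theta, p = theta0, p0
    N = 100
    for _ in range(N):
        theta, p = forward(theta, p, K)
    for _ in range(N):
        theta, p = backward(theta, p, K)
    # In exact arithmetic both errors would be 0; in finite precision they are O(1).
    print(abs(theta - theta0), abs(p - p0))

After a few dozen steps the initial rounding error of order 1e-16 has been amplified to order one, so the backward run no longer recovers the initial state: the formally reversible map has become effectively irreversible under finite precision.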
Few ideas can change your view in the same way as "physics as analog finite precision computation". Try it!

Labels: finite precision computation

Physical Quantum Mechanics: Time Dependent Schrödinger Equation

We consider a Schrödinger equation for an atom with $N$ electrons of the normalized form: Find a wave function $\psi (x,t) = \sum_{j=1}^N\psi_j(x,t)$ as a sum of $N$ electronic complex-valued wave functions $\psi_j(x,t)$, depending on a common 3d space coordinate $x$ and time coordinate $t$, with non-overlapping spatial supports $\Omega_1(t),\dots,\Omega_N(t)$ filling 3d space, satisfying

$i\dot\psi (x,t) + H\psi (x,t) = 0$ for all $(x,t)$, (1)

where the (normalised) Hamiltonian $H$ is given by

$H(x) = -\frac{1}{2}\Delta - \frac{N}{\vert x\vert}+\sum_{k\neq j}\int\frac{\vert\psi_k(y,t)\vert^2}{2\vert x-y\vert}dy$ for $x\in\Omega_j(t)$,

and the electronic wave functions are normalised to unit charge:

$\int_{\Omega_j}\vert\psi_j(x,t)\vert^2\,dx =1$ for all $t$, for $j=1,\dots,N$.

The total wave function $\psi (x,t)$ is thus assumed to be continuously differentiable, and the electronic potential of the Hamiltonian acting in $\Omega_j(t)$ is given as the attractive kernel potential together with the repulsive kernel potential resulting from the combined electronic charge distributions $\vert\psi_k\vert^2$ for $k\neq j$. The Schrödinger equation in the form (1) is a free-boundary problem where the supports $\Omega_j(t)$ of the electronic wave functions may change over time.

We solve (1) by time-stepping the system

$\dot u + Hv = 0$, $\dot v - Hu = 0$ (2)

obtained by splitting the complex-valued wave function $\psi = u+iv$ into real-valued real and imaginary parts $u$ and $v$ (with $\vert\psi\vert^2 =u^2+v^2$). This is a free-boundary electron (or charge) density formulation keeping the individuality of the electrons, which can be viewed as a "smoothed $N$-particle problem" of interacting non-overlapping "electron clouds" under Laplacian smoothing.

The model (1) connects to the study in Quantum Contradictions, showing a surprisingly good agreement with observations. In particular, the time-dependent form (2) is now readily computable as a system of wave functions depending on a common 3d space variable and time, to be compared with the standard wave equation in $3N$ space dimensions, which is uncomputable. I am now testing this model for the atoms from Helium ($N=2$) to Neon ($N=10$), and the results are encouraging: it seems that time-dependent $N$-electron quantum mechanics indeed is computable in this formulation, and the model appears to be in reasonable agreement with observations. This gives promise to exploration of atoms interacting with external fields, which has been hindered by the uncomputability of standard multi-d wave functions.

PS The formulation readily extends to electrodynamics, with the Laplacian term of the Hamiltonian replaced by $\frac{1}{2}(i\nabla + A)^2$ and the potential augmented by $\phi$, where $A=A(x,t)$ is a vector potential and $\phi =\phi (x,t)$ is a scalar potential, with $E = -\nabla\phi -\dot A$ and $B=\nabla\times A$ the given electric and magnetic fields depending on space and time.
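To make the time-stepping of the split system (2) concrete, here is a minimal 1D sketch (my own illustration, not the author's code): $\dot u = -Hv$, $\dot v = Hu$ is advanced with a staggered explicit update on a periodic grid, with a harmonic potential standing in for the atomic potential. Grid size, potential, and time step are illustrative assumptions, and the free-boundary, multi-electron structure of (1) is not included.

    import numpy as np

    n, L = 400, 20.0
    x = np.linspace(-L/2, L/2, n, endpoint=False)
    dx = x[1] - x[0]
    V = 0.5 * x**2                          # stand-in potential (assumption)

    def H(w):
        # H = -(1/2)*Laplacian + V, periodic boundary via np.roll
        lap = (np.roll(w, 1) - 2*w + np.roll(w, -1)) / dx**2
        return -0.5 * lap + V * w

    psi0 = np.exp(-(x - 1.0)**2)            # initial Gaussian wave packet
    psi0 /= np.sqrt((psi0**2).sum() * dx)   # normalise to unit charge
    u, v = psi0.copy(), np.zeros_like(psi0)
    dt = 0.2 * dx**2                        # explicit scheme needs dt ~ dx^2

    for _ in range(2000):
        u -= dt * H(v)                      # du/dt = -Hv
        v += dt * H(u)                      # dv/dt = +Hu, with updated u (staggered)

    print((u**2 + v**2).sum() * dx)         # total charge, approximately conserved

The staggered ordering (update u, then update v with the new u) keeps the explicit scheme stable for small enough dt and makes the charge $u^2+v^2$ approximately conserved, mirroring the unit-charge normalisation above.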
Labels: physical quantum mechanics, Schrödinger's equation

"Back Radiation" as Violation of the 2nd Law of Thermodynamics

A new article by Martin Herzberg in Energy & Environment reviews and summarizes the NIPCC report Climate Change Reconsidered: Physical Science. In particular, Herzberg gives the following devastating review of the phenomenon of "back radiation", supposedly the "heating mechanism" of the "greenhouse effect":

The most prevalent definition or heating mechanism involves what is referred to as "back radiation". Greenhouse gases absorb some of the IR radiation that the Earth's surface radiates toward free space after it is heated by solar radiation. According to the Environmental Protection Agency, "reradiated energy in the IR portion of the spectrum is trapped within the atmosphere keeping the surface temperature warm." This mechanism has the colder atmosphere blithely and spontaneously emitting radiant energy toward the warmer surface. That energy is supposed to be absorbed by the Earth's surface and heat it further. Thus the warmer surface should get even warmer by absorbing energy from a colder source: in direct violation of the Second Law of Thermodynamics.

Advocates of CO2 alarmism work hard to meet this argument, claiming that the violation of the 2nd law is only apparent: "back radiation" always comes along with "forward radiation", and the net radiation is always from warm to cold, so the 2nd law is not violated. The trouble with this way of handling the objection expressed by Herzberg (and myself) is that "back radiation" and "forward radiation" are supposed to be independent physical processes, as a "two-way flow of infrared photons", and at the same time dependent, coupled processes guaranteeing that the 2nd law is not violated. But independent processes which are dependent is a contradiction, and so the effort to save "back radiation" from joining phlogiston in the wardrobe of unphysical processes comes to nil, and the "greenhouse effect" is left "hanging in the air" without scientific basis.

Labels: 2nd law of thermodynamics, myth of backradiation

Does an Undetectable "Greenhouse Effect" Exist?

Vincent Gray seeks to clarify the physics of the "greenhouse effect" in a new blog post at The Australian Climate Sceptics - Exposing the flaws in the greatest hoax inflicted on the human race:

Greenhouse gases, predominantly water vapour, do absorb infra red radiation from the earth, radiate the additional energy in all directions, including downwards, and so warm the earth. So the greenhouse effect does exist. This effect must be very small as it has not been detected, despite the enormous effort that has been applied to try and find it.

We read that Vincent here puts forward the idea that the atmosphere, by radiating heat energy downwards, causes warming of the Earth surface in a process of two-way radiative heat transfer between the atmosphere and surface, including "back radiation" from a cold atmosphere to a warm surface. Vincent thus accepts the picture painted by CO2 alarmism based on a "greenhouse effect" and thereby gives it a free ride. Vincent shares this view with many "skeptics". At the same time as Vincent claims that "the greenhouse effect exists", he informs us that it has not been detected, presumably then because "it must be very small".
All this is unfortunate, because the two-way heat transfer including back radiation which Vincent describes is not true physics but fake physics, as I have argued in extended writing. Vincent defends his position with a direct attack on my position with the following argument (in bold in the original):

Radiation energy is converted to heat if it is absorbed by any suitable object. The temperature of that object is quite irrelevant. The speculation by some that radiation cannot be absorbed by an object whose temperature is bigger than that of the radiant emitter requires the absurd assumption that radiation is capable of detecting the temperature of distant objects before deciding whether they are fit to receive absorption. Such an assumption restores the need for a belief in the existence of an ether.

But is it absurd that an absorber can detect if the temperature of an emitter is bigger than its own temperature? Not at all! That information is encoded in the spectrum of the emission, as the high-frequency cut-off described in Wien's displacement law, with the cut-off increasing linearly with temperature. The result is that emission from a certain temperature cannot be re-emitted by an absorber at lower temperature, and thus must be absorbed and turned into heat, causing warming. A stone put in the sunlight can thus very well detect that the sunlight falling upon it was emitted at a temperature higher than its own, because sunlight contains frequencies above the cut-off frequency of the stone. The stone detects this by finding itself unable to re-emit these frequencies, and thus cannot prevent getting heated by the Sun. This is nothing the absorber "decides" to do, in Vincent's vocabulary, because atoms have no free will "to decide", but simply something the absorber is unable to do, which involves no "decision" and thus can be physics.

So in conclusion I pose the following question to Vincent (and other "skeptics"): Since the "greenhouse effect" cannot be detected experimentally, and your theoretical argument in support of its existence has been shown to be incorrect, wouldn't it be more rational to give up arguing that the "greenhouse effect" exists because there is "back radiation", thus giving support to CO2 alarmism? I have asked Vincent to respond to this post, but it may well be that Vincent, like some other "skeptics", simply hides (and warms up) after making an attack on a skeptic position of a frequency above his own cut-off.

PS Vincent's claim of the existence of a phenomenon that is not detectable connects to an (unfortunate) aspect of modern physics, as opposed to classical physics, rooted in the Bohr Copenhagen interpretation of quantum mechanics, where the wave function is not viewed to represent real physics independent of human observation, but instead represents human understanding in statistical terms, limited to what can be observed by humans. Modern physicists following Bohr are thus allowed to speak only about physics which is observable, and then in statistical terms. But this is too narrow, and has opened the possibility of a vast physical landscape beyond observation, which physicists are now eagerly exploring in the extreme forms of string theory and the multiverse. The (unfortunate) result is that speaking about phenomena of physics which cannot be detected, which in the view of classical physics is nonsense, has now become mainstream modern physics.
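The linear scaling of the cut-off with temperature invoked above is Wien's displacement law, in frequency form $\nu_{peak} \approx 2.821\, k_B T/h$. A small sketch (my own, using standard physical constants) comparing the Sun and a stone at room temperature:

    # Wien's displacement law in frequency form: nu_peak = 2.821 * kB * T / h
    kB = 1.380649e-23    # Boltzmann constant, J/K
    h  = 6.62607015e-34  # Planck constant, J*s

    def nu_peak(T):
        return 2.821 * kB * T / h   # peak frequency in Hz, linear in T

    print(nu_peak(5778))   # Sun, ~3.4e14 Hz
    print(nu_peak(300))    # stone, ~1.8e13 Hz

The Sun's spectrum thus peaks at a frequency roughly twenty times above the stone's, which is the information the post claims an absorber can read off the incoming spectrum.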
Labels: myth of backradiation, radiative heat transfer

Two-way Heat Transfer and 2nd Law: Contradiction!

The discussion with edX in the previous post exhibits a "greenhouse effect" connected to "back radiation", or "Downwelling Longwave Radiation (DLR)", from a cold atmosphere to a warmer Earth surface, as part of two-way radiative heat transfer between two bodies, each supposed to emit independently of the other according to a Stefan-Boltzmann law of the form $Q=\sigma T^4$, with $T$ the body temperature and $\sigma$ a positive constant. As the discussion shows, advocating two-way heat transfer requires an argument showing that what appears to be a violation of the 2nd law of thermodynamics, with the colder body transferring heat to the warmer, is only apparent. The argument is then that the heat transfer is always bigger from the warmer body, so the net transfer is always from warm to cold.

However, this argument is contradictory: each body is supposed to emit independently of the other, yet at the same time the two transfer processes must somehow be linked to guarantee that the net transfer always comes out right, even if they are nearly equal. The transfer processes are thus assumed to be both independent and dependent, which is a contradiction. And contradictory physics can only be non-physical illusion.

Unfortunately, in modern physics contradictions such as the wave-particle contradiction have come to be accepted, by Bohr sophistry, as "complementarity" or "duality". A "round square" is thus in modern physics not a contradiction, but just expresses "complementary" or "dual" properties of some higher physical existence, incomprehensible to human understanding but still physics. But sophistry is not science, and contradictory physics is non-physics.

In this context, recall that the idea of two-way heat transfer was used by Schwarzschild in 1906 to set up a simple model for radiative heat transfer allowing a simple analytical solution as a linear function. The unphysical aspect of Schwarzschild's model is exposed in the recent post Unphysical Schwarzschild vs Physical Model for Radiative Heat Transfer. What was unphysical in 1906 is still unphysical today.

edX: "Back Radiation" as the Physics of the "Greenhouse Effect"

I started my journey as a climate skeptic in 2009 in an attempt to understand the physics of the so-called "greenhouse effect", threatening human civilisation by global warming from human emission of CO2 as a powerful "greenhouse gas". I then discovered that the "greenhouse effect" was (and still is) very vaguely identified in the scientific literature, which poses a severe difficulty to skepticism of CO2 alarmism.

The ongoing edX course Denial101x Making Sense of Climate Science Denial is an attack on skepticism of CO2 alarmism, referred to as "denialism", introduced by:

In this first week you will be introduced to some of the terminology we will use in the course in order to begin building your understanding of scientific consensus, the psychology of denial and the spread of denial.

The course shows which skeptic arguments are considered the most effective and thus require special effort to kill. In this sense the course offers valuable insight. In a video lecture in week 3 on the "Greenhouse Effect", we are informed that:

The glow from the Earth surface goes upwards, greenhouse gases absorb some of this heat and they then glow in every direction, including down towards us. This is how the greenhouse effect works.
We measure it every day here at Reading Atmospheric Observatory by a pyrgeometer... it has a special window only allowing infrared light through to be measured. Even during a cloudless night it measures the constant greenhouse glow. Even though the greenhouse effect is an observed fact, there is a myth that it does not exist. This myth misinterprets a law of physics called the second law of thermodynamics. The 2nd law says that even though heat moves in all directions, overall heat moves from hot to cold, and not from cold to hot. The myth says that the greenhouse effect does not exist because it means heat moving from a cooler sky to a warmer surface. But this is a misrepresentation: the greenhouse effect obeys the law. A square meter of Earth surface sends about 500 Watts upwards, so it works like a 500 W heater. The greenhouse effect sends down about 330 Watts of heat, so in total about 170 Watts goes from the warmer surface to the cooler sky. Heat overall goes from hot to cold, but the greenhouse effect sends some back to warm us up. The myth misrepresents the 2nd law. Meanwhile observatories measure the greenhouse effect every day all over the globe.

We understand that the "greenhouse effect" is based on "back radiation" from the atmosphere, which is measured by pyrgeometers. From the beginning of my skeptic's journey I understood, from a new proof of Planck's radiation law I had constructed as part of a larger effort to describe physics as analog computation, that "back radiation" is an illusion without physical reality, and so that a pyrgeometer is constructed to sell this illusion to a market in need of "instrumental evidence". This insight has made me into a "denier" in the view of not only alarmists but, strangely enough, also in the view of leading skeptics such as Singer and Spencer and many others. As a "denier of back radiation", based on a view of physics as computation, I have met many strong reactions, often including direct censorship of this view of mine. The edX course gives me more courage not to give up this view, including a new proof of Planck's law leading to the conclusion that "back radiation" is non-physical illusion. The edX course shows that if "back radiation" is illusion, then so is the "greenhouse effect". Unfortunately, leading skeptics have fallen into the trap of "How to fool yourself with a pyrgeometer". You find more material under the categories "myth of back radiation" and "pyrgeometer". Getting rid of illusions may go very quickly, once you meet the right argument. Notice in particular the recent post on the unphysical aspect of Schwarzschild's radiation model, introducing the unfortunate unphysical idea of "back radiation" or "downwelling longwave radiation (DLR)" as the warming element of the "greenhouse effect".

Take a special look at the argument presented: The 2nd law says that even though heat moves in all directions, overall heat moves from hot to cold. If anything, this is a false version of the 2nd law: it is not true that "heat moves in all directions". I asked edX for the scientific justification of this statement, and I report the answer below. If you go through the response from edX below, you will find that after an exchange of nearly 100 comments back and forth, we are still far from getting an answer from edX. The tactic used by edX is to meet any question from me, as a student following the course, with a battery of counter-questions, with the objective of keeping me busy and so avoiding answering my question. Clever, but tiresome both for edX and me.
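For orientation in the exchange that follows, the two formulas at issue can be put side by side in a few lines (my own sketch; the temperatures are illustrative, not the video's): the two-way picture computes two gross flows $\sigma T^4$ and subtracts, while the engineering form of Stefan-Boltzmann gives the net flow $\sigma(T_A^4 - T_B^4)$ directly.

    sigma = 5.67e-8           # Stefan-Boltzmann constant, W/(m^2 K^4)
    Ta, Tb = 288.0, 255.0     # illustrative surface and atmosphere temperatures, K

    up   = sigma * Ta**4      # gross flow surface -> atmosphere (~390 W/m^2)
    down = sigma * Tb**4      # gross "back radiation" atmosphere -> surface (~240 W/m^2)
    net_two_way = up - down

    net_engineering = sigma * (Ta**4 - Tb**4)   # one-directional engineering form

    print(net_two_way, net_engineering)         # identical, ~150 W/m^2

Numerically the two descriptions coincide by construction; the dispute below is about whether the two gross flows are physically real processes or only bookkeeping terms whose difference is the physical heat transfer.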
Here is a copy of the edX discussion with teacher Gavin Cawley:

Gavin: The physical mechanism is very straightforward. I would happily go through the physics with you, step-by-step, starting with black body radiation, to see where we agree and where we disagree. Do you agree that a spherical black body object, in a vacuum, will radiate energy in all directions in the form of photons, according to the fourth power of its temperature (i.e. the Stefan-Boltzmann law), with the spectrum of radiation governed by Planck's law? If you answer my questions directly, I am sure we will soon reach agreement.

Claes: Yes, if you by a vacuum mean a surrounding environment at 0 K. Is that so? And if you by "photons" mean electromagnetic waves. Is that so?

Gavin: By vacuum, I did indeed mean that the surrounding environment is at 0 K. By "photon" I was referring to "an elementary particle, the quantum of light and all other forms of electromagnetic radiation". Do you still agree, given those clarifications?

Claes: Yes, go on.

Gavin: Thank you. A second black-body object (let's call it "B" and the first "A") is then introduced. For convenience make B spherical, with the same radius as A, and placed a short distance from, but not touching, A. Would you agree that B also radiates photons in all directions according to the fourth power of its temperature (Stefan-Boltzmann) with a spectrum given by Planck's law?

Claes: No, since with A present, the environment of B is not a vacuum at 0 K. And then?

Gavin: The intensity and spectrum of black body radiation depend only on its temperature, as given by the Stefan-Boltzmann and Planck laws, which is why black-body objects are a useful idealization. Can you provide a reference to a derivation for black-body radiation that explicitly states the dependence on environmental temperature? The thought experiment we are conducting doesn't depend on this point, but we appear to have identified a point of divergence, so it would be useful to understand the source of the disagreement.

Claes: It is certainly very natural to expect that the state of the environment is of importance. We just agreed that a body emits according to Planck's law into a vacuum at 0 K. Don't you remember that? If you claim that the environment is of no importance, then you have to back that with strong evidence, since it is such a strange, utterly surprising statement. So what is your evidence? Planck's proof of his law only counts degrees of freedom in a cavity and says nothing about independence of the surrounding environment, and thus cannot be used as positive evidence of your claim about independence. Right?

Gavin: Regarding the equivalence between cavity radiation and black body radiation, there is a nice explanation here by Prof. Alan Guth of MIT (it is from an excellent course on the early universe that is well worth watching). Essentially the radiation within a cavity is described by Planck's law, but if you were to put a black body into that cavity and wait for it to reach thermal equilibrium, then it must radiate according to Planck's law as well in order for the incoming energy from the cavity radiation to match the outbound black-body radiation, and like cavity radiation, it only depends on temperature. Note in this case, the derivation definitely doesn't depend on the environment being at 0 K. Prof. Guth also specifically states that its radiation wouldn't change if you took it out of the cavity (at least until it cooled, but then its radiation would be according to the SB and P laws at the lower temperature).
Prof. Guth is a leading expert in cosmology, where black-body radiation is an important concept (e.g. cosmic background radiation), so I suspect his understanding of this topic is reliable. However, if you have a reference to a derivation of the Planck and Stefan-Boltzmann laws that details the sensitivity to the environment, then I would happily read it. Can you supply me with such a reference? Now the environment is certainly important in determining the warming or cooling of the black body objects, which is why I made the simplifying assumption of a vacuum at 0 K. However, whether the bodies warm or cool does not depend solely on their intrinsic radiation, as we shall see later in the thought experiment, so I see nothing unnatural about the intensity and spectrum of radiation depending only on temperature. Let us assume that object B is a little cooler than object A. Would you agree that it (B) radiates photons equally in all directions (being a spherical black body object)?

Claes: The version of SB you find in engineering texts, which is relevant to atmospheric radiation, states that the heat transfer between a body A at temperature T_A and a body B at temperature T_B is given by Q = sigma (T_A^4 - T_B^4), with A warmer than B and transfer of Q from A to B. The dependence on the environment is here obvious, right? Do you accept this version of Stefan-Boltzmann?

Gavin: That equation is for heat transfer, not for the radiation of photons from a black-body object; we will get on to transfer later. For the moment, I am trying to establish your position on the intensity and spectrum of photons radiated from B, as that is important in explaining the transfer.

Claes: No, as I have said, the presence of A will influence the radiation from B, since A is part of the environment of B. And yes, we are speaking about heat transfer by radiation, and nothing else, right? And it is better to leave out cosmology, inflation, Big Bang and multiverse, since it is of little importance concerning atmospheric radiation, right?

Gavin: As I have asked ClaesJohnson twice for a reference giving derivations of the Planck and Stefan-Boltzmann laws that detail the sensitivity to the environment, and none has been provided, I will have to leave that issue to one side for the moment. Claes, my previous question referred only to the direction of photons emitted by object B. Do you agree that, being a spherical black body object, B will emit photons equally in all directions? BTW, I should have said more explicitly that I do agree with the formula "Q = sigma (T_A^4 - T_B^4)" for radiative transfer; I certainly do.

Claes: If we do agree about SB as stated, then we are on speaking terms. The presence and dependence of the environment is clear in this formula, right? In particular, we have our previous agreement in the special case with T_B = 0 K, right? What I have asked you about is a reference to a statement of independence of the environment. What is your evidence? We have agreed on dependence and now it is up to you to deliver contradictory evidence of independence. What is it?

Gavin: "The presence and dependence of the environment is clear in this formula, right?" No, as I said, that equation is about radiative transfer between objects, not the radiation from the objects themselves. "What I have asked you about is a reference to a statement of independence of the environment." I have already provided two, the second being the lecture by Prof. Guth, whom most would regard as being well qualified on the subject.
"We have agreed on dependence and now it is up to you to deliver contradictory evidence of independence" I have already clearly stated that the environment is relevant to transfer, but not to the intrinsic radiation of the object (indeed the transfer equation can be derived from intrinsic radiation being independent of environment). Now this is the fourth time I have asked this question without a direct answer, I repeat: Claes: Again, are we discussing heat transfer by radiation, or something else? What, if so? Gavin: ClaesJohnson we are discussing the radiation of photons from a black body object, with the intention of explaining the nature of radiative transfer from one black body to another in due course. Claes: No, the environment of B will influence the heat energy radiated by B. Gavin: ClaesJohnson, O.K. so does B emit any photons that strike (and are absorbed) by A? Claes: What does that have to do with the heat transfer by radiation between A which we are discussing? Or are you discussing something else? If so, what? Gavin: In order to explain my argument to you, I need to properly understand your objection. The best way to do this is to ask questions that allow you to unambiguously state your position in a way that I will understand. You may not understand the relevance of these questions, but I suspect it will be clear to most readers with a background in physics, but the fastest way to reach agreement is simply to give a concise and direct answer to the question. So, please give a direct answer to the question: does B emit any photons that strike (and are absorbed) by A? Claes: I cannot answer because I do not understand the physics of "photons that strike and are absorbed by A". Again, are we discussing heat transfer by electromagnetic waves? If not, what is it you are discussing? Gavin: A black body is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence (source). Radiation and absorption of photons is the basic mechanism by which radiative transfer occurs. Absorption simply means that the photon no longer exists and the energy that it carried has been transferred to the body that absorbed it. So does B emit any photons that strike (and are absorbed) by A? Claes: Gavin, if we agree about SB as stated in engineering literature, why is this not enough to describe atmospheric radiative heat transfer? What more do you want? My analysis of blackbody radiation is exposed at https://computationalblackbody.wordpress.com/ Gavin: Claes, as I have pointed out to you before, that equation is for radiative transfer. In order to answer your question about the second law of thermodynamics and the back-radiation, you need to understand how that transfer arises from an exchange of energy in either direction. It is a shame that you have been so unwilling to give direct answers to straightforward questions, so I will explain it to you. So, the conventional interpretation of thermodynamics would indicate that the warmer of the two bodies would radiate energy (in the form of photons) at a rate given by the Stefan-Boltzmann law, i.e. Ja = sigma*Ta^4, where Ja is the power radiated from A, Ta is the temperature of A in Kelvin and sigma is the Stefan-Boltzmann constant. Now, A being a spherical black-body will radiate photons evenly in every direction, however a proportion of these, which we will call c, will be traveling in the right direction to intersect with B, which being a black body will absorb them. 
The rate at which energy is received at B due to this flow of photons from A is c*Ja = c*sigma*Ta^4. Similarly, B will radiate energy at a rate given by the Stefan-Boltzmann law, such that Jb = sigma*Tb^4, where Jb is the power of the radiation from B and Tb is the temperature of B. Now by symmetry (as I have made the two objects spheres of identical radius), the proportion of the radiated photons from B that intersect with A is the same as the proportion of photons emitted by A that intersect with B, i.e. c. Thus the rate at which energy is received at A due to this flow of photons from B is c*Jb = c*sigma*Tb^4.

Now let's consider the gain of energy by the cooler body, B. It has gained energy from A at a rate c*sigma*Ta^4, but has lost energy to A at a rate c*sigma*Tb^4. The transfer of heat between the two is just the difference of these quantities, i.e. Q = c*sigma*Ta^4 - c*sigma*Tb^4, or in other words Q = c*sigma*(Ta^4 - Tb^4), which is the usual "engineering" representation of the Stefan-Boltzmann law of radiative transfer. The important thing to note is that this arises perfectly naturally from the intensity of black-body radiation depending solely on its absolute temperature. There is no need for some unspecified physical mechanism that influences the direction in which a black body radiates; it just radiates in all directions, and the net flow of energy conforms precisely to the second law of thermodynamics. This is what the footnote in Clausius' book describes, and the basic idea has been well understood for a long time. So, if a spherical black body does not radiate photons equally in every direction, but is affected by its environment, please explain the physical mechanism by which this is achieved. Footnote: I am assuming that the constant c has been aggregated with the Stefan-Boltzmann constant in ClaesJohnson's equation. The constant c depends on the size of the objects, their shape and how much of the radiation from one object is able to fall on another. I have made the scenario in the thought experiment symmetrical so it is easy to see that c is the same for both bodies, although this is true without the exact symmetry.

Claes: I have asked you if an engineering version of SB (or Planck) is enough to describe atmospheric radiation, and if not, what is missing. If you agree that it is basically enough, then we have a common standpoint and we can go on to specific questions concerning the physics of the so-called "greenhouse effect". If you insist that it is not enough, then I want to see your arguments supporting this view. There are endless questions that can take a lifetime or more to answer, such as: What is an "infrared photon"? What physical laws does it follow? How does it travel through space? In straight lines? What is the process of absorption/emission of an "infrared photon"? In which direction is it emitted? Is an "infrared photon" particle or wave? What is its lifetime? How does it interact with other "infrared photons"? Are "infrared photons" like bullets traveling through space? If so, what happens when two "infrared photons" meet? Is heat transfer between two bodies carried by two opposite streams of "infrared photons" back and forth between the bodies? If so, what is the mechanism that guarantees that the net heat transfer is from warm to cold? What is the wave function of the multiverse? Et cetera, et cetera...
But all these questions are irrelevant as concerns the physics of the "greenhouse effect", if the engineering version of SB/Planck (which we have agreed is valid) is enough to describe atmospheric radiative heat transfer. So I ask you again if we can take this version as common ground and then proceed to the real questions of importance concerning the physics of the "greenhouse effect", with a basic question being climate sensitivity as the amount of global warming from doubled CO2. Is this OK with you? Or do you insist that questions of the type I have listed above have to be answered (by me) before we can come to the point? And in particular before you will give an answer to my original question about the statement in the video of week 3 that "although heat moves in all directions..." (Is this statement connected to an idea of opposite streams of photons between bodies?) I expect to get a clear answer to my clearly stated questions, and not just more questions from you to me, which I cannot answer (and probably nobody else can). To meet a question with a battery of counter-questions is a way to avoid answering the original question. I am sure you would not like to resort to this form of discussion trickery, right? I also ask you if you have looked at the web site on Computational Blackbody Radiation I referred to, and if you have read and understood the arguments there presented, and if you have some comments or questions concerning the material?

In short: If net heat transfer from warm to cold is what matters, why insist on net heat transfer as the difference of two opposite heat transfers, warm-to-cold and cold-to-warm, where the latter appears to violate the 2nd law? I have given my argument for net transfer as a property of stability (connected to the 2nd law). Transfer as the difference of two opposite gross transfers is an unstable process, since small differences in gross transfers can shift the sign of the net transfer, and thus violate the 2nd law. The only way the 2nd law can be upheld with transfer as the difference of opposite gross transfers is that the opposite transfers somehow are linked, but that contradicts your idea that the opposite transfers are independent of each other. Do you see this?

Gavin: ClaesJohnson wrote: "I have asked you if an engineering version of SB (or Planck) is enough to describe atmospheric radiation, and if not, what is missing. If you agree that it is basically enough, then we have a common standpoint and we can go on to specific questions concerning the physics of the so called "greenhouse effect"." I have already explained why more is required, when I wrote in the previous message: "Claes, as I have pointed out to you before, that equation is for radiative transfer. In order to answer your question about the second law of thermodynamics and the back-radiation, you need to understand how that transfer arises from an exchange of energy in either direction." When we reach agreement on that point, then we will have a common standpoint to discuss the greenhouse effect (and specifically why there is no violation of the second law of thermodynamics). So, do you accept that the engineering version of SB can be derived (as shown in my previous comment) as the net result of a bi-directional transfer of energy from A to B and from B to A, where the radiation from A is determined solely by its absolute temperature Ta (according to SB), and the radiation from B is determined solely by its absolute temperature Tb (according to SB)?
Please let's leave rhetoric out of this discussion. I have asked one question in this comment, and one only; please give a direct answer.

Claes: What is your question to me again? I will certainly try to answer if I only understand what you ask. Your answer to my question is that the engineering version of SB/Planck is not enough to (mathematically) model atmospheric radiative heat transfer. But you did not answer my follow-up question to your answer, namely, what more do you then need to (mathematically) model atmospheric radiative heat transfer? What additional physical law do you need for this purpose? You did not answer either if you have looked at the web site I gave. Have you? If so, any reaction? And what about my original question? You say that there are certain things I need to understand in order for you to answer my questions. I don't see that my understanding, whatever it means, is necessary in order for you to answer my questions. Would it not be possible for you to simply answer my clearly stated questions, regardless of my state of mind? Is it necessary for me to give a complete account of my inner status and thoughts in order for you to answer my questions in my role as student in a course that you are giving on edX? Isn't the role of a teacher to answer questions from students concerning the material presented by the teacher, rather than subjecting the students to interrogation to see if they carry ideas which the teacher does not like? And again, what more than the engineering version of SB/Planck do you need to model atmospheric radiative heat transfer??? My view on this question is presented as Unphysical Schwarzschild vs Physical Model for Radiative Transfer at http://claesjohnson.blogspot.se/2015/04/unphysical-schwarzschild-vs-physical.html

Gavin: ClaesJohnson, you seem to have asked multiple questions in your comment; I am happy to answer them one at a time. Please select the technical/scientific question you would like me to answer. In an earlier comment, I derived the engineering form of the SB equation for heat transfer as being the net result of a bi-directional transfer of energy due to the radiation from each body. This is not a new idea; in his book "The Theory of Heat Radiation", Max Planck (cf. Planck's law) states:

A body A at 100C emits toward a body B at 0C exactly the same amount of radiation as toward an equally large and similarly situated body B' at 1000C. The fact that the body A is cooled by B and heated by B' is due entirely to the fact that B is a weaker, B' a stronger emitter than A.

This makes it very clear that Planck's conception of heat transfer is of a bidirectional transfer of radiation, both from warmer to cooler and from cooler to warmer, with the transfer of heat depending on the net difference in the two flows. Note Planck also specifically states that the radiation from A is not dependent on the temperature of the body on which the radiation will fall. So to repeat my question: ...do you accept that the engineering version of SB can be derived (as shown in my previous comment) as the net result of a bi-directional transfer of energy from A to B and from B to A, where the radiation from A is determined solely by its absolute temperature Ta (according to SB), and the radiation from B is determined solely by its absolute temperature Tb (according to SB)?

Claes: Of course the one-directional SB can be derived from a two-directional version by trivially taking the difference. But that does not say that the two-directional is correct, right?
The one-directional version may be the correct physical law, while the two-directional may still be non-physical. Confirming an assumption by observing a consequence is one of the logical fallacies, right? So I have answered your question, and now to my question: What more than the engineering version of SB/Planck do you need to mathematically model atmospheric radiative heat transfer? What additional physical law do you need for that purpose? The question is clearly stated and I expect a clear answer.

Gavin: Claes, again you have asked multiple questions (note there are three question marks in your comment). However I will address them in turn, on this occasion (I assume the second was rhetorical): "Of course the one-directional SB can be derived from a two-directional version by trivially taking the difference. But that does not say that the two-directional is correct, right?" No, not in itself. However the quote I gave from Planck clearly shows that Planck's conception of heat transfer was the net result of a bi-directional flow of energy. Similarly Clausius' book clearly indicates that a bi-directional flow of heat is completely consistent with the second law of thermodynamics, provided the net flow is from hot to cold. More importantly, I would argue that there is no plausible physical mechanism that can explain how a body can modify its radiation to avoid its radiation being absorbed by a warmer body. My derivation requires no such assumption, as the radiation of a body depends only on its local state (specifically its absolute temperature). So my question for this message is: "What physical mechanism allows a body to alter its radiation to avoid emitting photons that reach a (possibly moving) warmer body?" "What more than the engineering version of SB/Planck do you need to mathematically model atmospheric radiative heat transfer?" I have answered this question several times already. The net transfer of heat depends on the difference in the energy exchanged between two bodies (in this case the atmosphere and the surface). Thus it is necessary to consider the amounts of energy radiated by each component and absorbed by each component separately. The greenhouse effect does not violate the second law of thermodynamics because the energy transferred by back radiation from the atmosphere to the surface is "compensated" (as Clausius, in translation, would say) by a larger transfer of energy in the other direction, in the form of IR radiation from the surface. The key point is that if you accept that a bi-directional exchange of radiation doesn't violate the second law of thermodynamics, provided the net flow is from hot to cold, then the greenhouse effect doesn't violate it either. If you do not accept this, then you need to show that the engineering form of the SB law is inconsistent with the interpretation as the net result of a bi-directional exchange of radiation. However you appear already to have conceded this point: "Of course the one-directional SB can be derived from a two-directional version by trivially taking the difference."

Claes: I asked you: What additional physical law, in addition to the one-directional engineering SB/Planck law we have agreed on, do you need to mathematically model atmospheric radiative heat transfer? What is your answer?

Gavin: Claes, I have already answered that question. We must use the physical laws governing the radiation of black-body objects (at least to begin with), which is the Stefan-Boltzmann law for radiation, i.e. j* = sigma*T^4.
From this we can straightforwardly derive the "one-directional engineering SB/Planck law" as the net result of an exchange of radiation between two bodies. Under this interpretation of the "engineering SB law", the greenhouse effect does not violate the second law of thermodynamics, as the radiation from the warmer surface to the cooler atmosphere is greater than from the atmosphere to the surface. Therefore the "one-directional" net heat transfer of the "engineering" SB law is from warmer to cooler, as required by the second law of thermodynamics. This question ought to have a yes or no answer, and an unequivocal answer will help me to understand your position. Recalling the passage from Planck quoted above, my one question for this comment is: Is any of the radiation emitted by B (at 0C) absorbed by A (at 100C), "yes" or "no"?

Claes: This statement of Planck lacks physical reality. Nature does not play with opposite equally large quantities, which are independent, yet always keeping one bigger than the other so as not to violate the 2nd law. You cannot accept anything that Planck says without yourself judging whether it is correct or not. Science is not parrot science, where you simply repeat what is written in a book or stated by some long-dead scientist. Again: what additional law is required to mathematically model atmospheric radiative heat transfer, beyond the one-directional SB/Planck law we have agreed on? Is it two-way heat transfer? If so, what equation does that effectively bring into the mathematical model? Have you read my post about Schwarzschild's (unphysical) equations based on two-way transfer? If not, do that and give your view on the necessity of Schwarzschild's model. OK?

Gavin: Claes, I have already answered your question repeatedly. The additional physical law that is required is the Stefan-Boltzmann law of radiation: j* = sigma*T^4. For the reasons, see my previous answers. Now please give a direct answer to my previous question. I repeat, Planck writes:

A body A at 100C emits toward a body B at 0C exactly the same amount of radiation as toward an equally large and similarly situated body B' at 1000C. The fact that the body A is cooled by B and heated by B' is due entirely to the fact that B is a weaker, B' a stronger emitter than A.

Note that in my previous comment I asked precisely one question (only one question mark), but in your reply you did not give an answer; instead you asked multiple questions (I count six question marks!).

Claes: My answer is no.

Gavin: Thank you, that is interesting. Consider a third body B'' at 50C, which is of a similar size to B and again similarly situated. I am assuming that since B'' is also cooler than A, you would say that no radiation from B'' is absorbed by A either. Feel free to correct me if this assumption is incorrect. My question is: does A emit a different amount of radiation towards B than it emits towards B''?

Claes: Radiative heat transfer between bodies is described by the one-directional SB law we have agreed is valid.

Gavin: That is not a direct answer to the question. A either does emit a different amount of energy towards B than it emits towards B'', or it does not. Which is it?

Claes: SB gives the answer to your question. This is an exercise you can do yourself. After all, you are the teacher and should know.

Gavin: The reason that I am asking for an unequivocal answer is that I intend to demonstrate a contradiction and do not want to leave room for equivocation after it has been established.
If you are confident of your position, you ought to be eager to state it in completely unequivocal terms. So I ask again: "Does A emit a different amount of radiation towards B than it emits towards B''?" "Yes" or "no".

Claes: A, as warmer, transfers heat to B and B'' according to SB. If B'' is warmer than B, then less heat energy transfers from A to B'' than to B.

Claes: You claim that in addition to SB in the form Q = sigma (T_A^4 - T_B^4) with T_A > T_B, you need an SB of the form Q = sigma T_A^4. But the latter is included in the former if you set T_B = 0. So why is the extra SB needed?

Claes: Gavin, while you are thinking, I hope you also remember to answer my original question about the meaning of the statement "although heat moves in all directions..." in a video of week 3.

Gavin: I have now asked ClaesJohnson a straightforward "yes"/"no" question three times, and each time have received an evasive response. As I have already explained why an indirect answer would be indicative of evasion, I think it is reasonable to conclude that the evasion was deliberate. ClaesJohnson subsequently attempted to divert the discussion away from a line of inquiry that will demonstrate a contradiction in his position, by repeating a question from earlier in the discussion that has already been answered (repeatedly). Again this is evasion. The Socratic method (a form of enquiry based on asking and answering questions) is an excellent means of resolving scientific disagreements, provided that both parties engage in the exercise in good faith. This cross-examination allows misunderstandings to be resolved and exposes the weaknesses in either argument. Evading direct questions is a clear indication that someone is unwilling to change their views, regardless of the evidence or opposing arguments presented. In this case, there is little point in continuing the discussion, and the observers can draw their own conclusions from the evasion. Ultimately, if ClaesJohnson refuses to look at anything other than the "one-directional" SB law for radiative heat transfer, and is unable to understand that this arises as the net result of a bi-directional transfer of energy (as illustrated by Planck's example), he will be unable to understand why the greenhouse effect doesn't violate the second law of thermodynamics. However, he can't say that this has not been explained to him.

Claes: Gavin, I think we have come to the end of our discussion. Yes, it is true that only the uni-directional SB makes sense to me and to physics. You have not been able to give any scientific support for a "greenhouse effect" based on two-directional heat transfer including "back radiation" with heat transfer from cold to warm, and neither have you been able to explain the obvious violation of the 2nd law in such a process. This means that a "greenhouse effect" based on "back radiation" is nonphysical illusion. The result is that the course lacks sufficient scientific basis and should be closed.

Gavin: ClaesJohnson, if only the uni-directional SB makes sense to you, then perhaps you should not obstruct attempts to explain the bi-directional energy transfer required for an understanding of back radiation by the sort of evasive behaviour you have demonstrated during this discussion.

Claes: I have declared my standpoint very clearly. The evasiveness is yours. I am surprised that edX offers a platform for the kind of propagandistic disinformation the course presents. In any case the "denial" will not be affected by the course.
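Gavin's two-body derivation and Claes' stability objection can both be made concrete in a few lines (my own sketch; the geometric factor c and the temperatures are illustrative assumptions). The net transfer of the two-flow picture reproduces the engineering formula exactly, but computing it as a difference of two nearly equal gross flows is numerically sensitive: a small relative perturbation of one gross flow produces a much larger relative change in the net, which is the instability Claes points to.

    sigma, c = 5.67e-8, 1.0      # SB constant; c = 1 as illustrative view factor
    Ta, Tb = 300.0, 299.0        # two nearly equal temperatures, K

    gross_ab = c * sigma * Ta**4         # gross flow A -> B
    gross_ba = c * sigma * Tb**4         # gross flow B -> A ("back radiation")
    net = gross_ab - gross_ba            # equals c*sigma*(Ta^4 - Tb^4), ~6 W/m^2

    # perturb the gross flow from B by 2%: the computed net changes sign
    net_perturbed = gross_ab - 1.02 * gross_ba
    print(net, net_perturbed)

Here a 2 percent error in one gross flow of about 450 W/m^2 reverses the direction of a net transfer of about 6 W/m^2; whether that sensitivity is a physical instability or mere arithmetic is exactly where the two sides of the exchange disagree.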
Labels: myth of backradiation, pyrgeometer

Tragedy of Modern Physics: Schrödinger and Einstein, or Quantum Mechanics as Dice Game?

The story of modern physics is commonly told as a tragedy in which the fathers of the new physics of atomistic quantum mechanics, Einstein and Schrödinger, were brutally killed by their descendants Bohr, Heisenberg and Born. This story is told again in a new book by Paul Halpern with the descriptive title: Einstein's Dice and Schrödinger's Cat: How Two Great Minds Battled Quantum Randomness to Create a Unified Theory of Physics. In this story, it is Einstein and Schrödinger who represent the tragedy, in their stubborn opposition to the probabilistic Copenhagen interpretation of the wave function of Schrödinger's equation and their fruitless search for a unified physical field theory free from dice games, which ended in tragic defeat under ridicule from a physics community controlled from Copenhagen by Bohr. But it is possible that Einstein's and Schrödinger's dream of a unified physical field theory will come true one day, and then the tragedy will instead be modern physics based on dice games. In all modesty, this is the working hypothesis I have adopted in my search for a version of Schrödinger's equation allowing a realistic physical interpretation without "quantum randomness". Stay tuned for an update of recent advances in this direction...

In short, one may say that Einstein and Schrödinger seek a mathematical model of physical reality, as a question of ontological realism or existence or what "is", while Bohr is only interested in what we can "say" (based on what we can "see"), as a question of epistemological idealism. In the contest between realism and idealism in physics, one may argue that idealism is failed realism.

The book describes Einstein's and Schrödinger's positions as follows:

As originally construed, the Schrödinger equation was designed to model the continuous behavior of tangible matter waves, representing electrons in and out of atoms. Much as Maxwell constructed deterministic equations describing light as electromagnetic waves traveling through space, Schrödinger wanted to create an equation that would detail the steady flow of matter waves. He thereby hoped to offer a comprehensive accounting of all of the physical properties of electrons. Born shattered the exactitude of Schrödinger's description, replacing matter waves with probability waves. Instead of physical properties being assessed directly, they needed to be calculated through mathematical manipulations of the probability waves' values. In doing so, he brought the Schrödinger equation in line with Heisenberg's ideas about indeterminacy. In Heisenberg's view, certain pairs of physical quantities, such as position and momentum (mass times velocity), could not be measured simultaneously with high precision. Aspiring to model the actual substance of electrons and other particles, not just their likelihoods, Schrödinger criticized the intangible elements of the Heisenberg-Born approach. He similarly eschewed Bohr's quantum philosophy, called "complementarity", in which either wavelike or particlelike properties reared their heads, depending on the experimenter's choice of measuring apparatus. Nature should be visualizable. Starting in the late 1920s, one of his primary goals was a deterministic alternative to probabilistic quantum theory, as developed by Niels Bohr, Werner Heisenberg, Max Born, and others.
Although he (Einstein) realized that quantum theory was experimentally successful, he judged it incomplete. In his heart he felt that "God did not play dice," as he put it, couching the issue in terms of what an ideal mechanistic creation would be like. Agreeing with Spinoza, Einstein sought the invariant rules governing nature's mechanisms. He was absolutely determined to prove that the world was absolutely determined. Einstein, who had been a colleague and dear friend in Berlin, stuck by Schrödinger all along and was delighted to correspond with him about their mutual interests in physics and philosophy. Together they battled a common villain: sheer randomness, the opposite of natural order. Schooled in the writings of Spinoza, Schopenhauer (for whom the unifying principle was the force of will, connecting all things in nature) and other philosophers, Einstein and Schrödinger shared a dislike for including ambiguities and subjectivity in any fundamental description of the universe. While each played a seminal role in the development of quantum mechanics, both were convinced that the theory was incomplete. Though recognizing the theory's experimental successes, they believed that further theoretical work would reveal a timeless, objective reality. As Born's, Heisenberg's, and Bohr's ideas became widely accepted among the physics community, melded into what became known as the "Copenhagen interpretation" or orthodox quantum view, Einstein and Schrödinger became natural allies. In their later years, each hoped to find a unified field theory that would fill in the gaps of quantum physics and unite the forces of nature. By extending general relativity to include all of the natural forces, such a theory would replace matter with pure geometry, fulfilling the dream of the Pythagoreans, who believed that "all is number."

The crux of Schrödinger's rebuttal was to declare that random quantum jumps simply weren't physical. He argued for a continuous, deterministic explanation instead, having his own continuous, deterministic equation to defend. ... By late 1926 mutual opposition to the notion of random quantum jumps forced the two of them into the same anti-Copenhagen camp. The alliance would be forged once they realized that they were among the few vocal critics of Born's reinterpretation of the wave equation. After returning to Zurich from Copenhagen, Schrödinger continued to defend his disdain for quantum jumps on the basis that atomic physics should be visualizable and logically consistent. By the end of 1926, Einstein had drawn a stark line of demarcation between himself and quantum theory. Einstein appealed to Born, trying to convince him that quantum physics required deterministic equations, not probabilistic rules. "Quantum mechanics yields much that is very worthy of regard," Einstein wrote to Born. "But an inner voice tells me that it is not yet the right track. The theory . . . hardly brings us closer to the Old One's secrets. I, in any case, am convinced that He does not play dice." That was not the last time Einstein would make that point. For the rest of his life, in his explanations of why he didn't believe in quantum uncertainty, he would reiterate again and again, like a mantra, that God does not roll dice. In 1927, Einstein delivered a talk at the Prussian Academy purporting to prove that Schrödinger's wave equation implied definitive particle behavior, not just dice-rolling. Despite his prominence, Einstein's entreaties had little impact on the quantum faithful.
Einstein returned to Berlin a far more isolated figure in the scientific community. While his world fame continued to grow, his reputation among the younger generation of physicists began to sour, as they derided his objections to quantum mechanics. With experimental findings continuing to support the unified quantum picture advocated by Bohr, Heisenberg, Born, Dirac, and others, Einstein's dismissal of their views seemed petty and illogical. Schrödinger was one of the few who sympathized with Einstein's doubts. They kept up a conversation about ways to extend quantum mechanics to make it more complete. Einstein complained to him about the dogmatism of the mainstream quantum community. For example, he wrote to Schrödinger in May 1928, "The Heisenberg-Born tranquilizing philosophy— or religion?— is so deliberately contrived that, for the time being, it provides a gentle pillow for the true believer from which he cannot very easily be aroused. So let him lie there. But this religion has . . . damned little effect on me."

Although the physics community relocated to the realm of probabilistic quantum reality, leaving Einstein the lonely occupant of an isolated castle of determinism, the press still bathed him in glory. He was the wild-haired genius, the celebrity scientist, the miracle worker who had predicted the bending of starlight. He was something like a ceremonial king who had long lost his influence over the course of events; the media were more interested in him than in the lesser-known workers actually changing science. His every proclamation continued to be reported by the press, if largely ignored by his peers. Increasingly viewed as a relic by the mainstream physics community, he remained the darling of the international media...

Etiketter: physical quantum mechanics, Quantum Contradictions, quantum mechanics
Abstract Lie Algebras, by David J. Winter. Published 1972 by MIT Press, Cambridge, Mass. Subject: Lie algebras. Bibliography: p. [145]-147. Statement: [by] David J. Winter. LC Classification: QA251 .W68. Pagination: viii, 150 p. Seven editions of Abstract Lie Algebras were found in the catalog.

Introduction to Abstract Algebra (PDF), by D. S. Malik, John N. Mordeson and M. K. Sen. This book covers the following topics: sets, relations, and integers; introduction to groups; permutation groups; subgroups and normal subgroups; homomorphisms and isomorphisms of groups; direct product of groups; introduction to rings.

One purpose of Winter's book is to give a solid but compact account of the theory of Lie algebras over fields of characteristic 0. Lie algebras comprise a significant part of Lie group theory and are being actively studied today. Preliminary material covers modules and nonassociative algebras, followed by a compact, self-contained development of the theory of Lie algebras of characteristic 0. Topics include solvable and nilpotent Lie algebras, Cartan subalgebras, and Levi's radical splitting theorem and the complete reducibility of representations of semisimple Lie algebras.

Lie Algebras, by Professor Nathan Jacobson of Yale, is the definitive treatment of the subject and can be used as a text for graduate courses. Jacobson is a well-known authority in the field of abstract algebra, and his book is a classic.
Winter, a Professor of Mathematics at the University of Michigan, also presents a general, extensive treatment of Cartan and related Lie subalgebras over arbitrary fields. One purpose of his book is to give a solid but compact account of the theory of Lie algebras over fields of characteristic 0, with emphasis on the basic simplicity of the theory and on new approaches to the major theorems; another is to give a general and extensive treatment of Cartan and related subalgebras of Lie algebras over arbitrary fields. The first two chapters present preliminary material.

Books shelved as abstract-algebra: Abstract Algebra by David S. Dummit, A Book of Abstract Algebra by Charles C. Pinter, Algebra by Michael Artin, ...

From "Abstract Derivation and Lie Algebras": ... is a restricted Lie algebra if $y^{[p]}$ is defined as the $p$-th power of $y$ in $\mathfrak{R}$. We shall call this Lie algebra the restricted Lie algebra determined by the associative algebra $\mathfrak{R}$. If $\mathfrak{R}$ is any algebra, the mapping $a_R : x \mapsto xa$ is a linear transformation.

Abstract. We now introduce the "abstract" notion of a Lie algebra. In Sect. we will associate to each matrix Lie group a Lie algebra. It is customary to use lowercase Gothic (Fraktur) characters such as $\mathfrak{g}$ and $\mathfrak{h}$ to refer to Lie algebras.

Nathan Jacobson, presently Henry Ford II Professor of Mathematics at Yale University, is a well-known authority in the field of abstract algebra. His book, Lie Algebras, is a classic handbook both for researchers and students. Though it presupposes knowledge of linear algebra, it is not overly theoretical and can be readily used for self-study.

A text for a graduate course in abstract algebra: it covers fundamental algebraic structures (groups, rings, fields, modules) and maps between them. The text is written in conventional style; the book can be used as a classroom text or as a reference.

Abstract. In this crucial lecture we introduce the definition of the Lie algebra associated to a Lie group and its relation to that group. All three sections are logically necessary for what follows. (William Fulton, Joe Harris)

Lie algebras are receiving increasing attention in the field of systems theory, because they can be used to represent many classes of physically motivated nonlinear systems and also switched systems. It turns out that some of the concepts studied in this book, such as the Darboux polynomials and the Poincaré–Dulac normal form, are...
Abstract. The history of the development of finite-dimensional Lie algebras is described in the preface itself. Lie theory has its name from the work of Sophus Lie, who studied certain transformation groups, that is, the groups of symmetries of algebraic or geometric objects that are now called Lie groups.

This book is designed to introduce the reader to the theory of semisimple Lie algebras over an algebraically closed field of characteristic 0, with emphasis on representations. A good knowledge of linear algebra (including eigenvalues, bilinear forms, and euclidean spaces) is presupposed.

For Galois theory, there is a nice book by Douady and Douady, which looks at it by comparing Galois theory with covering space theory, etc. Another which has stood the test of time is Ian Stewart's book. For Lie groups and Lie algebras, it can help to see their applications early on, so some of the textbooks for physicists can be fun to read.

This note covers the following topics: ideals and homomorphisms, nilpotent and solvable Lie algebras, Jordan decomposition and Cartan's criterion, semisimple Lie algebras and the Killing form, abstract root systems, Weyl group and Weyl chambers, classification of semisimple Lie algebras, exceptional Lie algebras and automorphisms, isomorphism.

So any Lie algebra acts on itself by derivations. This gives a homomorphism $\mathrm{ad}: L \to \mathrm{Der}(L)$ called the adjoint representation. Abstract Lie algebras: we could simply start with the definition and try to construct all possible Lie algebras. Take $L = F^n$. For $n = 1$: show that all one-dimensional Lie algebras are abelian. (A short numerical sketch of the adjoint representation follows below.)

A Book of Abstract Algebra, by Charles C. Pinter. Dover ed. Originally published: 2nd ed., New York: McGraw-Hill. Includes bibliographical references and index. Subject: Algebra, Abstract.
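The adjoint representation mentioned above is easy to experiment with numerically. The following Python snippet is a minimal sketch (it is not taken from any of the books listed; all names are illustrative): it uses the standard basis $e, h, f$ of the matrix Lie algebra $\mathfrak{sl}(2)$, checks the Jacobi identity on all basis triples, and assembles the matrix of $\mathrm{ad}_x$ in that basis.

```python
import numpy as np

# Basis of sl(2): e, h, f as 2x2 trace-zero matrices.
e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]

def bracket(x, y):
    """Lie bracket [x, y] = xy - yx."""
    return x @ y - y @ x

# Check the Jacobi identity [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0
# on all triples of basis elements.
for x in basis:
    for y in basis:
        for z in basis:
            jac = (bracket(x, bracket(y, z))
                   + bracket(y, bracket(z, x))
                   + bracket(z, bracket(x, y)))
            assert np.allclose(jac, 0)

def coords(m):
    """Coordinates of a trace-zero 2x2 matrix in the basis (e, h, f)."""
    return np.array([m[0, 1], m[0, 0], m[1, 0]])

def ad(x):
    """Matrix of ad_x(y) = [x, y] in the basis (e, h, f)."""
    return np.column_stack([coords(bracket(x, b)) for b in basis])

print(ad(h))  # diagonal (2, 0, -2): [h,e] = 2e, [h,h] = 0, [h,f] = -2f
```

Running the sketch prints the diagonal matrix diag(2, 0, -2) for $\mathrm{ad}_h$, recovering the familiar root structure of $\mathfrak{sl}(2)$.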
Signal Processing Stack Exchange is a question and answer site for practitioners of the art and science of signal, image and video processing.

Why is the Fourier transform so important?

Everyone discusses the Fourier transform when discussing signal processing. Why is it so important to signal processing and what does it tell us about the signal? Does it only apply to digital signal processing or does it apply to analog signals as well? fourier-transform – asked by jcolebrand

Comment – Dilip Sarwate, Oct 14 '11: Recently a discussion about Fourier transforms was revived on math.SE, and I thought that people on this site might find some of it worthwhile, and might even want to participate.

Comment – Geremia, Jan 11 '16: cf. this answer for some excellent historical background. Fourier series date at least as far back as Ptolemy's epicyclic astronomy. Adding more eccentrics and epicycles, akin to adding more terms to a Fourier series, one can account for any continuous motion of an object in the sky.

This is quite a broad question and it indeed is quite hard to pinpoint why exactly Fourier transforms are important in signal processing. The simplest, hand-waving answer one can provide is that it is an extremely powerful mathematical tool that allows you to view your signals in a different domain, inside which several difficult problems become very simple to analyze. Its ubiquity in nearly every field of engineering and the physical sciences, all for different reasons, makes it all the harder to narrow down a reason. I hope that looking at some of its properties which led to its widespread adoption, along with some practical examples and a dash of history, might help one to understand its importance.

To understand the importance of the Fourier transform, it is important to step back a little and appreciate the power of the Fourier series put forth by Joseph Fourier. In a nutshell, any periodic function $g(x)$ integrable on the domain $\mathcal{D}=[-\pi,\pi]$ can be written as an infinite sum of sines and cosines as

$$g(x)=\sum_{k=-\infty}^{\infty}\tau_k e^{\jmath k x}$$
$$\tau_k=\frac{1}{2\pi}\int_{\mathcal{D}}g(x)e^{-\jmath k x}\ dx$$

where $e^{\jmath\theta}=\cos(\theta)+\jmath\sin(\theta)$. This idea that a function could be broken down into its constituent frequencies (i.e., into sines and cosines of all frequencies) was a powerful one and forms the backbone of the Fourier transform.

The Fourier transform: The Fourier transform can be viewed as an extension of the above Fourier series to non-periodic functions. For completeness and for clarity, I'll define the Fourier transform here. If $x(t)$ is a continuous, integrable signal, then its Fourier transform, $X(f)$, is given by

$$X(f)=\int_{\mathbb{R}}x(t)e^{-\jmath 2\pi f t}\ dt,\quad \forall f\in\mathbb{R}$$

and the inverse transform is given by

$$x(t)=\int_{\mathbb{R}}X(f)e^{\jmath 2\pi f t}\ df,\quad \forall t\in\mathbb{R}$$

Importance in signal processing: First and foremost, a Fourier transform of a signal tells you what frequencies are present in your signal and in what proportions.
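As a concrete, minimal illustration of "what frequencies are present and in what proportions", here is a short numpy sketch (the 8 kHz sampling rate and the 770/1336 Hz tone pair are assumed values, chosen in the spirit of the phone-button example that follows):

```python
import numpy as np

fs = 8000                      # assumed sampling rate in Hz
t = np.arange(0, 0.5, 1 / fs)  # half a second of samples
# A synthetic "button press": two sinusoids, like a DTMF tone pair.
x = np.sin(2 * np.pi * 770 * t) + 0.8 * np.sin(2 * np.pi * 1336 * t)

X = np.fft.rfft(x)                     # transform of the real signal
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# The two largest spectral peaks recover the constituent frequencies.
peaks = freqs[np.argsort(np.abs(X))[-2:]]
print(sorted(peaks))                   # -> [770.0, 1336.0]
```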
Example: Have you ever noticed that each of your phone's number buttons sounds different when you press it during a call, and that it sounds the same for every phone model? That's because each is composed of two different sinusoids which can be used to uniquely identify the button. When you use your phone to punch in combinations to navigate a menu, the way that the other party knows what keys you pressed is by doing a Fourier transform of the input and looking at the frequencies present.

Apart from some very useful elementary properties which make the mathematics involved simple, some of the other reasons why it has such widespread importance in signal processing are:

1. The magnitude squared of the Fourier transform, $\vert X(f)\vert^2$, instantly tells us how much power the signal $x(t)$ has at a particular frequency $f$.

2. From Parseval's theorem (more generally Plancherel's theorem), we have
$$\int_\mathbb{R}\vert x(t)\vert^2\ dt = \int_\mathbb{R}\vert X(f)\vert^2\ df$$
which means that the total energy in a signal across all time is equal to the total energy in the transform across all frequencies. Thus, the transform is energy preserving.

3. Convolutions in the time domain are equivalent to multiplications in the frequency domain, i.e., given two signals $x(t)$ and $y(t)$, if
$$z(t)=x(t)\star y(t)$$
where $\star$ denotes convolution, then the Fourier transform of $z(t)$ is merely
$$Z(f)=X(f)\cdot Y(f)$$
For discrete signals, with the development of efficient FFT algorithms, it is almost always faster to implement a convolution operation in the frequency domain than in the time domain.

4. Similar to the convolution operation, cross-correlations are also easily implemented in the frequency domain as $Z(f)=X(f)^*Y(f)$, where $^*$ denotes the complex conjugate.

5. By being able to split signals into their constituent frequencies, one can easily block out certain frequencies selectively by nullifying their contributions. Example: If you're a football (soccer) fan, you might have been annoyed at the constant drone of the vuvuzelas that pretty much drowned out all the commentary during the 2010 World Cup in South Africa. However, the vuvuzela has a constant pitch of ~235 Hz, which made it easy for broadcasters to implement a notch filter to cut off the offending noise. [1]

6. A shifted (delayed) signal in the time domain manifests as a phase change in the frequency domain. While this falls under the elementary property category, it is widely used in practice, especially in imaging and tomography applications. Example: When a wave travels through a heterogeneous medium, it slows down and speeds up according to changes in the speed of wave propagation in the medium. So by observing the change in phase between what's expected and what's measured, one can infer the excess time delay, which in turn tells you how much the wave speed has changed in the medium. This is of course a very simplified layman's explanation, but it forms the basis for tomography.

7. Derivatives of signals (nth derivatives too) can be easily calculated using Fourier transforms.

A quick numerical check of two of these properties is shown below.
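The following sketch (an added illustration, using random test vectors) checks Parseval's relation in its discrete form and confirms that FFT-based convolution matches direct convolution once the signals are zero-padded (properties 2 and 3 above):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(128)
y = rng.standard_normal(128)

# Parseval: energy in time equals energy in frequency. Note the 1/N
# factor that goes with the unnormalized DFT convention of np.fft.
X = np.fft.fft(x)
assert np.allclose(np.sum(np.abs(x)**2), np.sum(np.abs(X)**2) / len(x))

# Convolution theorem: zero-pad to the full linear-convolution length,
# multiply the spectra pointwise, and invert.
n = len(x) + len(y) - 1
z_fft = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(y, n)).real
z_direct = np.convolve(x, y)
assert np.allclose(z_fft, z_direct)
```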
Digital signal processing (DSP) vs. analog signal processing (ASP): The theory of Fourier transforms is applicable irrespective of whether the signal is continuous or discrete, as long as it is "nice" and absolutely integrable. So yes, ASP uses Fourier transforms, as long as the signals satisfy this criterion. However, it is perhaps more common to talk about Laplace transforms, which is a generalized Fourier transform, in ASP. The Laplace transform is defined as

$$X(s)=\int_{0}^{\infty}x(t)e^{-st}\ dt,\quad \forall s\in\mathbb{C}$$

The advantage is that one is not necessarily confined to "nice signals" as in the Fourier transform, but the transform is valid only within a certain region of convergence. It is widely used in studying/analyzing/designing LC/RC/LCR circuits, which in turn are used in radios, electric guitars, wah-wah pedals, etc.

This is pretty much all I could think of right now, but do note that no amount of writing/explanation can fully capture the true importance of Fourier transforms in signal processing and in science/engineering.

Comment – goldenmean, Aug 19 '11: Nice answer in giving some real-world applications using the FT and its properties. +1.

Comment – Lorem Ipsum, Aug 19 '11: @endolith I didn't say the Fourier transform was first, just that it is powerful. Note that a Taylor series is not an expansion in terms of the constituent frequencies. For e.g., the Taylor series of $\sin(\alpha x)$ about $0$ is $\alpha x-\alpha^3x^3/3!+\alpha^5x^5/5!\ldots$, whereas the Fourier transform of $\sin(\alpha x)$ is $\left[\delta(\omega-\alpha)-\delta(\omega+\alpha)\right]/(2\jmath)$ (give or take some normalization factors). The latter is the correct frequency representation, so I'm not sure if any comparison with Taylor series is apt here.

Comment – Phonon, Aug 19 '11: When I started reading this response, somehow I knew @yoda wrote it before I scrolled down to see who it actually was = )

Comment – Jonas, Aug 20 '11: To elaborate on #3: Convolution is what you do when you apply a filter to an image, such as an average filter, or a Gaussian filter (though you can't Fourier-transform non-linear filters).

Comment – nibot, Mar 24 '12: Peter K's point is really critical. Signals can be represented with respect to many different bases. Sines and cosines are special because they are the eigenfunctions of LTI systems.

Lorem Ipsum's great answer misses one thing: the Fourier transform decomposes signals into constituent complex exponentials:

$$ e^{\jmath \omega t}$$

and complex exponentials are the eigenfunctions of linear, time-invariant systems. Put simply, if a system $H$ is linear and time-invariant, then its response to a complex exponential will be a complex exponential of the same frequency but (possibly) different phase, $\phi$, and amplitude, $A$ (and the amplitude may be zero):

$$y = H[e^{\jmath \omega t}] = A e^{\jmath \phi} e^{\jmath \omega t} $$

So the Fourier transform is a useful tool for analyzing linear, time-invariant systems. – Peter K.♦
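To see this eigenfunction property numerically, here is a small sketch (the FIR filter taps are arbitrary, chosen only for the demonstration): a complex exponential goes in, and the same exponential comes out, scaled by the filter's frequency response.

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])   # arbitrary FIR impulse response (an LTI system)
omega = 2 * np.pi * 0.1         # normalized frequency of the input

n = np.arange(200)
x = np.exp(1j * omega * n)      # complex exponential input

# Output of the convolution y[n] = sum_k h[k] x[n-k]
y = np.convolve(x, h)[:len(n)]

# Eigenvalue = frequency response H(e^{j omega}) = sum_k h[k] e^{-j omega k}
H = np.sum(h * np.exp(-1j * omega * np.arange(len(h))))

# Away from the start-up transient, y[n] = H * x[n]: same frequency,
# new amplitude |H| and new phase angle(H).
assert np.allclose(y[10:], H * x[10:])
```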
Comment – Fat32, Feb 15 '16: @Peter K. I think that, following the philosophy of choosing (academic) correctness over "popularity" of an answer, your answer should be integrated into the above answer provided by Lorem Ipsum, which, despite being selected as the answer with 96 points by the users, lacks this very important point of view.

Comment – Jyrki Lahtonen, Nov 1 '16: @Peter Sorry to disturb you with this request, but you are 1) a moderator, 2) your name appeared on the list of users "active" with your beamforming tag. Can you give a quick opinion whether this post in Math.SE would be well received here? I am not sure whether DSP.SE, Math.SE or EE.SE has the best chance of helping that asker. I am considering migration (which I can do as a Math.SE moderator).

Comment – Royi, Feb 11 '17: @Peter K., could you please reopen the question at dsp.stackexchange.com/questions/37468? I fixed it. Thank you.

Comment – Peter K.♦, Feb 11 '17: @Royi it's already open?

Comment – Royi, Feb 11 '17: Peter (how come some people can be approached using @ and some can't? Where is the option for that?), it seems someone opened it. Thank you.

Another reason: it's fast (e.g., useful for convolution), due to its linearithmic time complexity (specifically, that of the FFT). I would argue that, if this were not the case, we would probably be doing a lot more in the time domain, and a lot less in the Fourier domain.

Edit: Since people asked me to write why the FFT is fast...

It's because it cleverly avoids doing extra work.

To give a concrete example of how it works, suppose you were multiplying two polynomials, $a_0 x^0 + a_1 x^1 + \ldots + a_nx^n$ and $b_0 x^0 + b_1 x^1 + \ldots + b_nx^n$. If you were to do this naively (using the FOIL method), you would need approximately $n^2$ arithmetic operations (give or take a constant factor).

However, we can make a seemingly mundane observation: in order to multiply two polynomials, we don't need to FOIL the coefficients. Instead, we can simply evaluate the polynomials at a (sufficient) number of points, do a pointwise multiplication of the evaluated values, and then interpolate to get back the result.

Why is that useful? After all, each polynomial has $n$ terms, and if we were to evaluate each one at $2n$ points, that would still result in $\approx n^2$ operations, so it doesn't seem to help.

But it does, if we do it correctly! Evaluating a single polynomial at many points at once is faster than evaluating it at those points individually, if we evaluate at the "right" points. What are the "right" points?

It turns out those are the roots of unity (i.e., all complex numbers $z$ such that $z^n = 1$). If we choose to evaluate the polynomial at the roots of unity, then a lot of expressions will turn out the same (because a lot of monomials will turn out to be the same). This means we can do their arithmetic once, and re-use it thereafter for evaluating the polynomial at all the other points.

We can do a very similar process for interpolating through the points to get back the polynomial coefficients of the result, just by using the inverse roots of unity.

There's obviously a lot of math I'm skipping here, but effectively, the FFT is basically the algorithm I just described, to evaluate and interpolate the polynomials. One of its uses, as I showed, was to multiply polynomials in a lot less time than normal. It turns out that this saves a tremendous amount of work, bringing down the running time to being proportional to $n \log n$ (i.e., linearithmic) instead of $n^2$ (quadratic).

Thus the ability to use the FFT to perform a typical operation (such as polynomial multiplication) much faster is what makes it useful, and that is also why people are now excited by MIT's new discovery of the Sparse FFT algorithm. – Mehrdad
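The evaluate-multiply-interpolate recipe above fits in a few lines of numpy; in this sketch (the coefficients and the function name are arbitrary illustrations) the forward FFT evaluates each coefficient vector at the roots of unity and the inverse FFT interpolates the product back to coefficients:

```python
import numpy as np

def polymul_fft(a, b):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    n = len(a) + len(b) - 1           # length bound of the product
    A = np.fft.fft(a, n)              # evaluate a at the n-th roots of unity
    B = np.fft.fft(b, n)              # evaluate b at the same points
    return np.fft.ifft(A * B).real    # pointwise multiply, then interpolate

# (1 + 2x + 3x^2) * (4 + 5x) = 4 + 13x + 22x^2 + 15x^3
print(np.round(polymul_fft([1, 2, 3], [4, 5])))   # -> [ 4. 13. 22. 15.]
```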
Comment – Dilip Sarwate, May 8 '12: What is linearithmic time complexity? I won't downvote this answer but I don't think it adds anything of value to this discussion on Fourier transforms.

Comment – Jim Clay, May 8 '12: @DilipSarwate I suspect he's using it as shorthand for O(n*log(n)).

Comment – Mehrdad, May 8 '12: @DilipSarwate: Jim is right. It has everything to do with (discrete) Fourier transforms. Without the FFT, your Fourier transforms would take time proportional to the square of the input size, which would make them a lot less useful. But with the FFT, they take time proportional to the size of the input (times its logarithm), which makes them much more useful, and which speeds up a lot of calculations. Also, this might be an interesting read.

Comment – CyberMen, May 8 '12: You should mention WHY it's fast. Where is it fast and why do we care that it's fast?

Comment – Andrey Rubshtein, Nov 15 '12: I think that this answer is legitimate. It should be paraphrased: "Besides all the nice characteristics explained in other people's answers, the FFT allows it to become a feasible approach in real-time applications".

In addition to Peter's answer, there is another reason, which is also related to the eigenfunction. Namely, $e^{kx}$ is the eigenfunction of the differential operator $\frac{d^n}{dx^n}$. That's why Fourier transforms (corresponding to purely imaginary $k$) and Laplace transforms (corresponding to complex $k$) can be used to solve differential equations. Since $e^{kx}$ is the eigenfunction of both the convolution and the differential operator, maybe that's one of the reasons why LSIV systems can be represented by differential equations.

EDIT: As a matter of fact, differential (and integral) operators are LSIV operators, see here. – chaohuang

Some of the other answers in this thread have excellent mathematical discussions of the definition and properties of the Fourier transform; as an audio programmer, I merely want to provide my own personal intuition as to why it's important to me. The Fourier transform permits me to answer questions about a sound that are difficult or impossible to answer with other methods. It makes hard problems easy.

A recording contains a set of three musical notes. What are the notes? If you leave the recording as a set of amplitudes over time, this is not an easy problem. If you convert the recording to a set of frequencies over time, it's really easy. (A toy version of this appears in the sketch after this answer.)

I want to change the pitch of a recording without changing its duration. How do I do this? It's possible, but not easy to do, by just manipulating the amplitude of an input signal. But it's easy if you know the frequencies that comprise the signal.

Does this recording contain speech or does it contain music? Super hard to do using only amplitude-based methods. But there are good solutions that guess the right answer nearly all of the time based on the Fourier transform and its family.

Almost every question you'd like to ask about a digital audio recording is made easier by transforming the recording using a discrete version of the Fourier transform. In practice, every modern digital audio device relies heavily on functions very similar to the Fourier transform.

Again, forgive the highly informal description; this is merely my personal intuition as to why the Fourier transform is important. – johnwbyrd

Comment – Hossein Sarshar, Jan 12 '17: Hey John, I have a silly question. I want to calculate the TWA (osha.gov/pls/oshaweb/…) from the sound we recorded in a workplace. I wonder if I could measure this value more precisely if I employed a Fourier transformation in analyzing my audio file.

Comment – johnwbyrd, Jan 16 '17: Not unless the microphone and the recording environment were calibrated, no.
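To ground the "what are the notes?" question, here is a toy sketch (a synthetic chord, not a real recording; the chord and all parameters are made-up examples): it picks the three strongest spectral peaks and converts them to MIDI note numbers with the standard formula $69 + 12\log_2(f/440)$.

```python
import numpy as np

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
# Synthetic A-major triad: A4 (440 Hz), C#5 (~554.37 Hz), E5 (~659.26 Hz).
chord = sum(np.sin(2 * np.pi * f0 * t) for f0 in (440.0, 554.37, 659.26))

spectrum = np.abs(np.fft.rfft(chord))
freqs = np.fft.rfftfreq(len(chord), 1 / fs)

# The three tallest bins approximate the note frequencies.
top3 = np.sort(freqs[np.argsort(spectrum)[-3:]])
midi = 69 + 12 * np.log2(top3 / 440.0)      # MIDI note numbers
print(top3, np.round(midi))                 # ~[440, 554, 659] -> [69, 73, 76]
```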
The other people have given great, useful answers. Just think about some signal: you only care what frequencies are in it (and their phase), not about the time domain. I do not know that this is a final or complete answer, but just another reason why the Fourier transform is useful.

When you have some signal, it could be composed of an infinite (or close to infinite) number of frequencies, depending on your sampling rate. But that's often not the case: we know that most signals have the fewest number of frequencies possible, or that we're sampling at a high enough rate. If we know that, why can't we use it? That's what the field of compressed sensing does. They know that the most likely signal is the one that has the least error and has the fewest frequencies. So, they minimize the overall error relative to our measurements as well as the magnitude of the Fourier transform. A signal of a few frequencies often has a Fourier transform that is mostly zeros (aka "sparse," as they say in compressed sensing). A signal of one frequency just has a delta function as its transform, for example.

We can use the formal mathematical definition too:

$$\bar{x} = \arg\min \Vert y - Ax \Vert + \lambda \Vert F(x) \Vert$$

Here, all we're doing is minimizing the error (the first set of $\Vert\cdot\Vert$) and minimizing the magnitude of the Fourier transform (the second set of $\Vert\cdot\Vert$), where

- $\bar{x}$ is our reconstructed signal (most likely close to the original),
- $y$ is our measurements,
- $A$ is a selection matrix,
- $x$ is our signal,
- $\lambda$ is some constant, and
- $F(x)$ is the Fourier transform of $x$.

You may recall that Nyquist said that you have to measure at twice the highest frequency to get a good representation. Well, that was assuming you had infinite frequencies in your signal. We can get past that! The field of compressed sensing can reconstruct any signal that's mostly zeros (or sparse) in some domain. Well, that's the case for the Fourier transform. – Scott

The main importance of the Fourier transform lies with system analysis. The main constituent of our universe is vacuum, and vacuum is a fundamentally linear and time-invariant carrier of fields: different fields superimpose by adding their respective vectors, and regardless of when you repeat the application of certain fields, the outcome will be the same. As a consequence, a lot of systems also involving physical matter are, to a good approximation, behaving as linear, time-invariant systems.

Such LTI systems can be described by their "impulse response", and the response to any time-distributed signal is described by convolving the signal with the impulse response. Convolution is a commutative and associative operation, but it is also quite computationally and conceptually expensive. However, the convolution of functions is mapped by the Fourier transform into pointwise multiplication. That means that the properties of linear time-invariant systems and their combinations are much better described and manipulated after Fourier transformation. As a result, things like "frequency response" are quite characteristic for describing the behavior of a lot of systems and become useful for characterizing them.

Fast Fourier transforms are in the "almost, but not quite, entirely unlike Fourier transforms" class, as their results are not really sensibly interpretable as Fourier transforms, though firmly rooted in their theory. They correspond to Fourier transforms completely only when talking about a sampled signal with the periodicity of the transform interval. In particular, the "periodicity" criterion is almost always not met. There are several techniques for working around that, like the use of overlapping windowing functions.
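As a small illustration of the periodicity caveat and the windowing workaround just mentioned, the sketch below (all parameters arbitrary; the leakage metric is an ad-hoc illustration) compares how much spectral energy a non-bin-centered tone leaks far from its peak with a rectangular window versus a Hann window:

```python
import numpy as np

fs, n = 1000, 1000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 123.4 * t)   # 123.4 Hz does not fit the 1 s window exactly

rect = np.abs(np.fft.rfft(x))                 # implicit rectangular window
hann = np.abs(np.fft.rfft(x * np.hanning(n))) # Hann-windowed

def leakage(mag):
    """Fraction of spectral energy more than 5 bins away from the peak."""
    k = np.argmax(mag)
    far = np.ones(len(mag), dtype=bool)
    far[max(k - 5, 0):k + 6] = False
    return np.sum(mag[far] ** 2) / np.sum(mag ** 2)

print(leakage(rect), leakage(hann))  # the Hann window leaks far less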
However, the FFT can be employed for doing discrete-time convolution when doing things right, and since it is an efficient algorithm, that makes it useful for a lot of things.

One can employ the basic FFT algorithm also for number-theoretic transforms (which work in discrete number fields rather than the complex "reals") in order to do fast convolution, like when multiplying humongous numbers or polynomials (a toy example appears at the end of this page). In this case, the "frequency domain" is indistinguishable from white noise for basically any input and has no useful interpretation before you do the inverse transform again.

The physics relevance of the Fourier transform is that it tells the relative amplitude of the frequencies present in the signal. It can be defined for both discrete-time and continuous-time signals. Any signal can be represented as a mixture of many harmonic frequencies. The Fourier transform helps in filter applications, where we need only a certain range of frequencies: we first need to know what amplitudes the frequencies contained in the signal have. – vatsyayan
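Returning to the number-theoretic transform mentioned above, here is a tiny self-contained sketch (toy modulus 17 and length 4, not production parameters; the helper names are illustrative): the same evaluate-multiply-interpolate idea over a finite field, written as a naive O(n^2) DFT for clarity, to which the FFT butterflies would apply unchanged.

```python
# DFT over Z/17 instead of the complex numbers, using w = 4 as a
# primitive 4th root of unity mod 17 (4^4 = 256 = 1 mod 17).
p, n, w = 17, 4, 4

def ntt(a, root):
    """Naive O(n^2) transform: evaluate a at powers of root, mod p."""
    return [sum(a[j] * pow(root, i * j, p) for j in range(n)) % p
            for i in range(n)]

def intt(A):
    """Inverse transform: use the inverse root, then divide by n mod p."""
    a = ntt(A, pow(w, p - 2, p))   # pow(w, p-2, p) is 1/w mod p (Fermat)
    inv_n = pow(n, p - 2, p)       # 1/n mod p
    return [x * inv_n % p for x in a]

# Convolution mod 17: (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
A, B = ntt([1, 2, 0, 0], w), ntt([3, 4, 0, 0], w)
print(intt([x * y % p for x, y in zip(A, B)]))   # -> [3, 10, 8, 0]
```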
Impact of uterine contractility on quality of life of women undergoing uterine fibroid embolization

Vinicius Adami Vayego Fornazari, Gloria Maria Martinez Salazar, Stela Adami Vayego, Thiago Franchi Nunes, Belarmino Goncalves, Jacob Szejnfeld, Claudio Emilio Bonduki, Suzan Menasce Goldman & Denis Szejnfeld

Although changes in the uterine contractility pattern after uterine fibroid embolization (UFE) have already been assessed by cine magnetic resonance imaging (MRI), their impact on quality of life outcomes has not been evaluated. The purpose of this study was to evaluate the impact of uterine contractility on the quality of life of women undergoing UFE, measured by the Uterine Fibroid Symptom and Quality of Life questionnaire (UFS-QOL). A total of 26 patients were included. MRI scans were acquired 30–7 days before and 6 months after UFE for all patients. The UFS-QOL was applied in person on the day of the first MRI exam and 1 year after UFE, and the outcomes were analyzed according to the groups of evolution pattern of uterine contractility: group A, unchanged uterine contractility pattern, 38%; group B, favorably modified uterine contractility pattern, 50%; and group C, loss of uterine contractility, 11%. All patients in this cohort presented a reduction in the mean symptom score and an increase in the mean scores of the quality of life subscales. Group A had more relevant complaints regarding their sense of self-confidence; group B presented worse sexual function scores before UFE, which improved after UFE compared to group A. Significant improvement in symptoms, quality of life, and uterine contractility was observed after UFE in women of reproductive age with symptomatic fibroids. Functional uterine contractility seems to have a positive impact on quality of life and sexual function in this population. Level 3, non-randomized controlled cohort/follow-up study.

Symptomatic uterine leiomyoma is highly prevalent in women of reproductive age (Vollenhoven et al. 1990; Peregrino et al. 2017; Mas et al. 2017), with a significant impact on quality of life, affecting physical and psychological wellbeing (Spies et al. 2002). A validated quality of life questionnaire was developed specifically to assess patients with symptomatic fibroids (the Uterine Fibroid Symptom and Quality of Life questionnaire, UFS-QOL), and it has been widely used to evaluate changes after fibroid treatments in several trials (Spies et al. 2002; Williams et al. 2006; Harding et al. 2008; Oliveira Brito et al. 2017; Silva et al. 2016; Beaton et al. 2000).

Uterine fibroid embolization (UFE) is recognized as a Level A treatment option for the management of leiomyomas in carefully selected patients, but its use for reproductive-age women with fibroids is still controversial (Mas et al. 2017). Moreover, the impact of fibroids on infertility is not yet clear. Most patients who are willing to become pregnant choose to undergo myomectomy, yet studies have demonstrated successful pregnancies post-UFE (Pisco et al. 2017; Mohan et al. 2013). One of the hypotheses for the association between fibroids and infertility is alteration of uterine contractility, according to Kunz's classification (Kunz et al. 1996). Uterine contractility is known to be associated with the female hormone cycle and uterine functionality (Kunz et al. 1998a).
It acts to eliminate peeling endometrium during the menstrual period; influences sperm transport, nesting, embryo implantation, and pregnancy maintenance during the periovulatory phase; and is implicated in dysmenorrhea (Kunz et al. 1998a; Kunz et al. 1998b; Kunz et al. 2000a; Kunz et al. 2000b; Kunz and Leyendecker 2002; Togashi 2007; Kataoka et al. 2005). It is hypothesized that some uterine disorders, such as fibroids, could alter uterine contractility and functionality (Fornazari et al. 2019; Kido et al. 2014; Kido et al. 2011; Kido et al. 2007; Koyama and Togashi 2007; Leonhardt et al. 2012; Orisaka et al. 2007). The advent of cine magnetic resonance imaging (MRI) has made it possible to quantify and evaluate uterine contractility (Kunz and Leyendecker 2002; Togashi 2007; Kataoka et al. 2005; Fornazari et al. 2019; Kido et al. 2014; Kido et al. 2011; Kido et al. 2007; Koyama and Togashi 2007; Leonhardt et al. 2012; Orisaka et al. 2007; Kido et al. 2005a; Kido et al. 2005b; Kido et al. 2006; Kido et al. 2008; Nakai et al. 2001; Nakai et al. 2003; Nakai et al. 2004; Nishino et al. 2005; Yoshino et al. 2010; Yoshino et al. 2012). Our previous study measured changes in the contractility pattern post-UFE by cine MRI; however, their impact on quality of life outcomes was not evaluated (Fornazari et al. 2019). Therefore, the objective of this study was to evaluate the impact of uterine contractility on quality of life, measured by the UFS-QOL, in women undergoing uterine artery embolization for the treatment of symptomatic fibroids.

In this Institutional Review Board (IRB)-approved, Health Insurance Portability and Accountability Act (HIPAA)-compliant study, a prospective cohort of patients undergoing UFE for symptomatic fibroids, all of whom had undergone prior evaluation of uterine contractility by cine MRI, was included. Detailed methods are reported elsewhere (Kido et al. 2005b); the inclusion, non-inclusion, and exclusion criteria are given in Table 1 below.

Table 1. Inclusion, non-inclusion, and exclusion criteria

MRI scans were acquired 30–7 days before and 6 months after UFE, both ideally during the periovulatory cycle phase, which was estimated by adding 14 days to the first day of the last menstrual period. The Uterine Fibroid Symptom and Quality of Life questionnaire (UFS-QOL) was applied in person on the first MRI day and 1 year after UFE, during outpatient follow-up.

UFS-QOL questionnaire

The UFS-QOL questionnaire specifically assesses severity of symptoms (8 questions) and Health-Related Quality of Life (HRQL, 29 questions) among women undergoing UFE. The HRQL scale comprises the following subscales: concern, activities, energy/mood, control, self-consciousness, and sexual function. All items are scored on a five-point Likert scale. The higher the score on the severity subscale of the questionnaire, the greater the severity of symptoms; the lower the scores on the HRQL subscales, the poorer the quality of life (Oliveira Brito et al. 2017) (Online Resource 1).

The original UFS-QOL questionnaire was written in English (Spies et al. 2002). It has since been translated to Portuguese and this translation validated, with good internal consistency, discriminant validity, construct validity, structural validity and responsiveness, along with adequate test-retest results. It is accepted by the Society for Interventional Radiology (Beaton et al. 2000), as described in the publications of Oliveira et al. (Oliveira Brito et al. 2017) and Silva et al. (Silva et al. 2016).
All participants were able to communicate in Portuguese.

Study measures

The image acquisition parameters consisted of obtaining scout images, followed by an SSFP (true FISP) cine MRI sequence for evaluation of contractility in the sagittal plane of the uterine cavity. This sequence was programmed to acquire a 10-mm-thick slice every 2.5 s for 4 continuous minutes, obtaining about 120 images from a single region of interest. The acquired images were viewed repetitively and consecutively at a rate of approximately 17 frames per second. Uterine contractility was classified as absent, ordered, or disordered, based on the classification of Nakai et al. (Nakai et al. 2003).

The UFS-QOL Scoring Manual was used for calculation of symptom severity. A sum score was created from the symptom items, and the formula below was then used to transform the value. Higher scores are indicative of greater symptom severity, while lower scores are indicative of minimal symptom severity (Table 2).

Table 2. UFS-QOL formula to calculate the symptom score, where a higher score value indicates greater symptom severity:

$$ \mathbf{Transformed}\ \mathbf{Score}=\frac{\left(\mathrm{Actual}\ \mathrm{raw}\ \mathrm{score}-\mathrm{Lowest}\ \mathrm{possible}\ \mathrm{raw}\ \mathrm{score}\right)}{\mathrm{Possible}\ \mathrm{raw}\ \mathrm{score}\ \mathrm{range}}\times 100 $$

For the HRQL subscales (concern, activities, energy/mood, self-conscious, and sexual function), summed scores of the items were created for each individual subscale. To calculate the HRQL total score, the values of the individual subscales (not individual items) were added. Higher scores are indicative of better HRQL (high = good) (Table 3).

Table 3. HRQL subscales formula to calculate the total score; higher scores indicate better HRQL (high = good):

$$ \mathbf{Transformed}\ \mathbf{Score}=\frac{\left(\mathrm{Highest}\ \mathrm{possible}\ \mathrm{raw}\ \mathrm{score}-\mathrm{Actual}\ \mathrm{raw}\ \mathrm{score}\right)}{\mathrm{Possible}\ \mathrm{raw}\ \mathrm{score}\ \mathrm{range}}\times 100 $$

Statistical analyses were performed using Excel and BioEstat software. The Mann-Whitney and Wilcoxon tests were used for comparisons between and within groups, respectively. A significance level of 5% was used for all tests.
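The two score transformations above are straightforward to express in code. The following Python sketch is an illustration only, assuming each item is coded 1-5 as on the questionnaire's five-point Likert scale (it is not the official UFS-QOL scoring implementation, and the function names are ours):

```python
def transformed_symptom_score(items):
    """Symptom severity: 8 items on a 1-5 Likert scale; higher = more severe."""
    raw = sum(items)                   # actual raw score
    lowest, highest = 8 * 1, 8 * 5     # lowest/highest possible raw scores
    return (raw - lowest) / (highest - lowest) * 100

def transformed_hrql_subscale(items):
    """HRQL subscale: 1-5 Likert items; higher transformed score = better HRQL."""
    raw = sum(items)
    lowest, highest = len(items) * 1, len(items) * 5
    return (highest - raw) / (highest - lowest) * 100

# Example: answering "3" to every symptom item gives a transformed score of 50.
print(transformed_symptom_score([3] * 8))    # -> 50.0
print(transformed_hrql_subscale([2] * 7))    # -> 75.0
```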
Results

Twenty-six patients were included, with a mean age of 36 years (range 30–41 years; SD 4 years). Of these, 14 presented with bleeding and pelvic pain, 10 with bleeding, and 3 with pain. Three uterine contractility patterns were defined according to the change in contractility from baseline after UFE: group A, unchanged uterine contractility; group B, favorably modified uterine contractility; and group C, loss of uterine contractility (Fig. 1).

Figure 1. Stratification of participants by uterine contractility before and after UFE. Blue, group A; green, group B; red, group C.

Of the 26 patients included in our cohort, 10 (38%) had no change in contractility after UFE (group A), 13 (50%) had a positive change (group B), and 3 (11%) lost contractility (group C). Potential interference factors (uterine volume, necrosis pattern, fibroid localization, and fibroid/myometrium index) had no statistically significant effect (Fornazari et al. 2019).

All patients in this study presented a statistically significant reduction in the mean symptom score and a statistically significant increase in the mean quality of life scores (worry, activity, energy, self-control, self-confidence and sexual function) (Fig. 2).

Figure 2. Average scores obtained on the UFS-QOL questionnaire, before and after UFE.

Group A patients (unchanged uterine contractility pattern) presented a statistically significant reduction in the mean symptom score and an increase in the mean scores of the quality of life subscales, except for the sexual function subscale (p = 0.3232) (Fig. 3).

Figure 3. Average scores obtained on the UFS-QOL questionnaire, before and after UFE, in group A.

Group B (favorably modified uterine contractility pattern) showed a significant reduction in the mean symptom score and an increase in the mean quality of life subscale scores (Fig. 4).

Figure 4. Average scores obtained on the UFS-QOL questionnaire, before and after UFE, in group B.

As group C (loss of uterine contractility) comprised only 3 patients, no statistical analysis could be performed.

A comparative analysis between groups A and B before UFE demonstrated that the average scores of the activity and self-confidence subscales were significantly higher in group A (Fig. 5). After UFE, a comparative analysis between groups A and B demonstrated significantly higher scores in group A as compared to group B (Fig. 6).

Figure 5. Average scores obtained on the UFS-QOL questionnaire, before UFE, for patients in group A (blue) and group B (green).

Figure 6. Average scores obtained on the UFS-QOL questionnaire, after UFE, for patients in group A (blue) and group B (green).

Discussion

Most women with uterine fibroids report a negative impact of symptoms, such as abnormal uterine bleeding and pelvic pain, on their quality of life (Spies et al. 2002; Williams et al. 2006; Harding et al. 2008; Brito et al. 2014). UFE has been reported as a less morbid alternative in women who wish to preserve their uterus when hysterectomy and myomectomy are contraindicated, when fibroids are refractory to myomectomy, or when there is a high risk of conversion to hysterectomy (Spies et al. 2002; Pisco et al. 2017; Mohan et al. 2013; Fornazari et al. 2019). In this setting, the UFS-QOL is a validated tool for measuring patient-reported symptoms and documenting clinical outcomes from surgical and interventional procedures (Spies et al. 2002; Williams et al. 2006; Harding et al. 2008), with good internal consistency, discriminant validity, construct validity, structural validity, test-retest similarity, and responsiveness, including in its Portuguese version (Oliveira Brito et al. 2017; Silva et al. 2016).

Our previous study began with the evaluation of uterine contractility before and after UFE in women with symptomatic fibroids using cine MRI. Continuing this line of research, the present study analyzes the UFS-QOL scores across the three groups of change in uterine contractility pattern (group A, unchanged uterine contractility pattern; group B, favorably modified uterine contractility pattern; group C, loss of uterine contractility) (Fornazari et al. 2019).

Interpretation of the average UFS-QOL questionnaire scores revealed a significant improvement in symptoms and quality of life in all patients after UFE (Fig. 4). These data before and after UFE are similar to the scores usually presented in other published series of patients undergoing UFE for symptomatic uterine fibroids (Maiara et al. 2017; Spies et al. 2005; Coyne et al. 2012). The significant improvement of uterine contractility (Fornazari et al. 2019) and the simultaneous improvement of quality of life these patients experienced after UFE suggest that uterine contractility may have a positive impact on quality of life.
The three uterine contractility groups (A, B, C) did not present statistically significant demographic differences (age, pre-UFE uterine volume, post-UFE uterine volume, percentage of uterine volume reduction, total necrosis of embolized myomas, and myoma-myometrium index), according to the work previously published by our group (Fornazari et al. 2019). Therefore, these factors were not associated with the UFS-QOL scores.

The finding of higher average scores on the activity and self-confidence subscales in group A (Fig. 5) demonstrates that patients in this group had more relevant complaints regarding their sense of self-confidence (conscious sensation of weight gain, size and appearance of the abdomen, change of clothes when menstruating) and regarding activities such as fear of traveling, interference with physical activities, reduction of physical exercise, difficulty in carrying out usual activities, interference in social activities, and the need for careful planning of routine activities.

Before UFE, group A presented a relatively high sexual function score in relation to the other subscales; this means that sexual function was less affected in this group, and it was not significantly affected after UFE (Fig. 3). In our previously published work, we did not identify statistically significant variables that could be correlated to this finding (Fornazari et al. 2019). In this study, the only potentially relevant variable that could be associated with the low interference in sexual function, both before and after UFE, is the fact that this group experienced no change in uterine peristalsis.

Group B (favorably modified uterine contractility pattern) showed a significant improvement in symptoms and quality of life after UFE (Fig. 4). Sexual function scores were worse before UFE in group B and improved after UFE compared to group A. When we analyzed groups A and B simultaneously after UFE, we found that 73% of patients (n = 19) presented a pattern of ordered uterine contractility and 3.8% (n = 1) had a disordered uterine contractility pattern. Considering that all these patients experienced improvement of symptoms and quality of life after UFE, we can hypothesize that the resumption of functional uterine contractility may have a positive impact on quality of life and sexual function. Nevertheless, our sample was small, and additional studies are required to detect the real impact of uterine contractility on fertility and quality of life.

Significant improvement in symptoms, quality of life, and uterine contractility was observed after UFE in women of reproductive age with symptomatic fibroids. Functional uterine contractility seems to have a positive impact on quality of life and sexual function in this population.

The demographic data on the groups of this study are available at https://doi.org/10.1007/s00270-018-2053-6. Other datasets of the Uterine Fibroid Symptom and Quality of Life questionnaire (UFS-QOL) are not publicly available (due to personal information) but are available from the corresponding author (VAVF) on reasonable request.

Abbreviations
HIPAA: Health Insurance Portability and Accountability Act
HRQL: Health-Related Quality of Life
IRB: Institutional Review Board
UFE: Uterine fibroid embolization

References
Beaton DE, Bombardier C, Guillemin F, Ferraz MB (2000) Guidelines for the process of cross-cultural adaptation of self-report measures. Spine (Phila Pa 1976) 25:3186–3191
Brito LG, Panobianco MS, Sabino-de-Freitas MM, Barbosa Hde F, de Azevedo GD, Brito LM et al (2014) Uterine leiomyoma: understanding the impact of symptoms on womens' lives. Reprod Health 11:10. https://doi.org/10.1186/1742-4755-11-10
Coyne KS, Margolis MK, Bradley LD, Guido R, Maxwell GL, Spies JB (2012) Further validation of the uterine fibroid symptom and quality-of-life questionnaire. Value Health 15:135–142. https://doi.org/10.1016/j.jval.2011.07.007
Fornazari VAV, Szejnfeld D, Szejnfeld J, Bonduki CE, Vayego SA, Goldman SM (2019) Evaluation of uterine contractility by magnetic resonance in women undergoing embolization of uterine fibroids. Cardiovasc Intervent Radiol 42:186–194. https://doi.org/10.1007/s00270-018-2053-6
Harding G, Coyne KS, Thompson CL, Spies JB (2008) The responsiveness of the uterine fibroid symptom and health-related quality of life questionnaire (UFS-QOL). Health Qual Life Outcomes 6:99. https://doi.org/10.1186/1477-7525-6-99
Kataoka M, Togashi K, Kido A, Nakai A, Fujiwara T, Koyama T et al (2005) Dysmenorrhea: evaluation with cine-mode-display MR imaging – initial experience. Radiology 235:124–131. https://doi.org/10.1148/radiol.2351031283
Kido A, Ascher SM, Hahn W, Kishimoto K, Kashitani N, Jha RC et al (2014) 3 T MRI uterine peristalsis: comparison of symptomatic fibroid patients versus controls. Clin Radiol 69:468–472. https://doi.org/10.1016/j.crad.2013.12.002
Kido A, Ascher SM, Kishimoto K, Hahn W, Jha RC, Togashi K et al (2011) Comparison of uterine peristalsis before and after uterine artery embolization at 3-T MRI. AJR Am J Roentgenol 196:1431–1435. https://doi.org/10.2214/AJR.10.5349
Kido A, Nishiura M, Togashi K, Nakai A, Fujiwara T, Kataoka ML et al (2005a) A semiautomated technique for evaluation of uterine peristalsis. J Magn Reson Imaging 21:249–257. https://doi.org/10.1002/jmri.20258
Kido A, Togashi K, Kataoka ML, Nakai A, Koyama T, Fujii S (2008) Intrauterine devices and uterine peristalsis: evaluation with MRI. Magn Reson Imaging 26:54–58. https://doi.org/10.1016/j.mri.2007.06.001
Kido A, Togashi K, Nakai A, Kataoka M, Fujiwara T, Kataoka ML et al (2006) Investigation of uterine peristalsis diurnal variation. Magn Reson Imaging 24:1149–1155. https://doi.org/10.1016/j.mri.2006.06.002
Kido A, Togashi K, Nakai A, Kataoka ML, Koyama T, Fujii S (2005b) Oral contraceptives and uterine peristalsis: evaluation with MRI. J Magn Reson Imaging 22:265–270. https://doi.org/10.1002/jmri.20384
Kido A, Togashi K, Nishino M, Miyake K, Koyama T, Fujimoto R et al (2007) Cine MR imaging of uterine peristalsis in patients with endometriosis. Eur Radiol 17:1813–1819. https://doi.org/10.1007/s00330-006-0494-9
Koyama T, Togashi K (2007) Functional MR imaging of the female pelvis. J Magn Reson Imaging 25:1101–1112. https://doi.org/10.1002/jmri.20913
Kunz G, Beil D, Deininger H, Wildt L, Leyendecker G (1996) The dynamics of rapid sperm transport through the female genital tract: evidence from vaginal sonography of uterine peristalsis and hysterosalpingoscintigraphy. Hum Reprod 11:627–632. https://doi.org/10.1093/HUMREP/11.3.627
Kunz G, Beil D, Huppert P, Leyendecker G (2000a) Structural abnormalities of the uterine wall in women with endometriosis and infertility visualized by vaginal sonography and magnetic resonance imaging. Hum Reprod 15:76–82. https://doi.org/10.1093/humrep/15.1.76
Kunz G, Herbertz M, Noe M, Leyendecker G (1998b) Sonographic evidence of a direct impact of the ovarian dominant structure on uterine function during the menstrual cycle. Hum Reprod 4:667–672
Kunz G, Kissler S, Wildt L, Leyendecker G (2000b) Uterine peristalsis: directed sperm transport and fundal implantation of the blastocyst. In: Filicori M (ed) Endocrine basis of reproductive function. Monduzzi Editore, Bologna, pp 409–422
Kunz G, Leyendecker G (2002) Uterine peristaltic activity during the menstrual cycle: characterization, regulation, function and dysfunction. Reprod BioMed Online 4(Suppl 3):5–9. https://doi.org/10.1016/S1472-6483(12)60108-4
Kunz G, Noe M, Herbertz M, Leyendecker G (1998a) Uterine peristalsis during the follicular phase of the menstrual cycle: effects of oestrogen, antioestrogen and oxytocin. Hum Reprod Update 4:647–654. https://doi.org/10.1093/humupd/4.5.647
Leonhardt H, Gull B, Kishimoto K, Kataoka M, Nilsson L, Janson PO et al (2012) Uterine morphology and peristalsis in women with polycystic ovary syndrome. Acta Radiol 53:1195–1201. https://doi.org/10.1258/ar.2012.120384
Maiara C, Obura T, Hacking N, Stones W (2017) One year symptom severity and health-related quality of life changes among black African patients undergoing uterine fibroid embolisation. BMC Res Notes 10:240. https://doi.org/10.1186/s13104-017-2558-0
Mas A, Tarazona M, Dasí Carrasco J, Estaca G, Cristóbal I, Monleón J (2017) Updated approaches for management of uterine fibroids. Int J Women's Health 9:607–617. https://doi.org/10.2147/IJWH.S138982 (eCollection 2017)
Mohan PP, Hamblin MH, Vogelzang RL (2013) Uterine artery embolization and its effect on fertility. J Vasc Interv Radiol 24:925–930. https://doi.org/10.1016/j.jvir.2013.03.014
Nakai A, Togashi K, Kosaka K, Kido A, Hiraga A, Fujiwara T et al (2004) Uterine peristalsis: comparison of transvaginal ultrasound and two different sequences of cine MR imaging. J Magn Reson Imaging 20:463–469. https://doi.org/10.1002/jmri.20140
Nakai A, Togashi K, Ueda H, Yamaoka T, Fujii S, Konishi J (2001) Junctional zone on magnetic resonance imaging: continuous changes on ultrafast images. J Women Imaging 3:89–93
Nakai A, Togashi K, Yamaoka T, Fujiwara T, Ueda H, Koyama T et al (2003) Uterine peristalsis shown on cine MR imaging using ultrafast sequence. JMRI 18:726–733. https://doi.org/10.1002/jmri.10415
Nishino M, Togashi K, Nakai A, Hayakawa K, Kanao S, Iwasaku K et al (2005) Uterine contractions evaluated on cine MR imaging in patients with uterine leiomyomas. Eur J Radiol 53:142–146. https://doi.org/10.1016/j.ejrad.2004.01.009
Oliveira Brito LG, Malzone-Lott DA, Sandoval Fagundes MF, Magnani PS, Fernandes Arouca MA, Poli-Neto OB et al (2017) Translation and validation of the Uterine Fibroid Symptom and Quality of Life (UFS-QOL) questionnaire for the Brazilian Portuguese language. Sao Paulo Med J 135:107–115. https://doi.org/10.1590/1516-3180.2016.0223281016
Orisaka M, Kurokawa T, Shukunami K, Orisaka S, Fukuda MT, Shinagawa A et al (2007) A comparison of uterine peristalsis in women with normal uteri and uterine leiomyoma by cine magnetic resonance imaging. Eur J Obstet Gynecol Reprod Biol 135:111–115. https://doi.org/10.1016/j.ejogrb.2006.07.040
Peregrino PFM, de Lorenzo MM, dos Santos SR, Soares-Júnior JM, Baracat EC (2017) Review of magnetic resonance-guided focused ultrasound in the treatment of uterine fibroids. Clinics (Sao Paulo) 72:637–641. https://doi.org/10.6061/clinics/2017(10)08
Pisco JM, Duarte M, Bilhim T, Branco J, Cirurgião F, Forjaz M et al (2017) Spontaneous pregnancy with a live birth after conventional and partial uterine fibroid embolization. Radiology 285:302–310. https://doi.org/10.1148/radiol.2017161495
https://doi.org/10.1148/radiol.2017161495 Silva RO, Gomes MT, Castro RA, Bonduki CE, Girão MJ (2016) Uterine fibroid symptom - quality of life questionnaire translation and validation into Brazilian Portuguese. Rev Bras Ginecol Obstet 38:518–523. https://doi.org/10.1055/s-0036-1593833 Spies JB, Coyne K, Guaou NG, Boyle D, Skyrnarz-Murphy K, Gonzales SM (2002) The UFS-QOL, a new disease-specific symptom and health-related quality of life questionnaire for leiomyomata. Obstet Gynecol 99:290–300. https://doi.org/10.1016/S0029-7844(01)01702-1 Spies JB, Myers ER, Worthington-Kirsch R, Mulgund J, Goodwin S, Mauro M (2005) The FIBROID registry: symptom and quality-of-life status 1 year after therapy. Obstet Gynecol 106:1309–1318. https://doi.org/10.1097/01.AOG.0000188386.53878.49 Togashi K (2007) Uterine contractility evaluated on cine magnetic resonance imaging. Ann N Y Acad Sci 1101:62–71. https://doi.org/10.1196/annals.1389.030 Vollenhoven BJ, Lawrence AS, Healy DL (1990) Uterine fibroids: a clinical review. Br J Obstet Gynaecol 97:285–298. https://doi.org/10.1111/j.1471-0528.1990.tb01804.x Williams VSL, Jones G, Mauskopf J, Spalding J, Duchane J (2006) Uterine fibroids: a review of health-related quality of life assessment. J Women's Health (Larchmt) 15:818–829. https://doi.org/10.1089/jwh.2006.15.818 Yoshino O, Hayashi T, Osuga Y, Orisaka M, Asada H, Okuda S et al (2010) Decreased pregnancy rate is linked to abnormal uterine peristalsis caused by intramural fibroids. Hum Reprod 25:2475–2479. https://doi.org/10.1093/humrep/deq222 Yoshino O, Nishii O, Osuga Y, Asada H, Okuda S, Orisaka M et al (2012) Myomectomy decreases abnormal uterine peristalsis and increases pregnancy rate. J Minim Invasive Gynecol 19:63–67. https://doi.org/10.1016/j.jmig.2011.09.010 Informed consent was obtained from all individual participants included in the study. The authors have no financial relationships relevant to this article to disclose. Interventional Radiology and Endovascular Surgery, Universidade Federal de São Paulo (UNIFESP), Rua Napoleão de Barros, 800, Vila Clementino, São Paulo, SP, 04024-002, Brazil Vinicius Adami Vayego Fornazari Department of Simulation and Patient Experience, Massachusetts General Hospital, Harvard Medical School, 55 Fruit St #290, Boston, MA, 02114, USA Gloria Maria Martinez Salazar Department of Statistics, Universidade Federal do Paraná (UFPR), Rua General Carneiro, 370, Centro, Curitiba, PR, 81531-990, Brazil Stela Adami Vayego Hospital Universitário Maria Aparecida Pedrossian, Universidade Federal do Mato Grosso do Sul (UFMS), Av. Sen. Filinto Müler, 355, Vila Ipiranga, Campo Grade, MS, 79080-190, Brazil Thiago Franchi Nunes Angiography Section Clinical Interventional Radiology Department, Instituto Portugues de Oncologia (IPO-Porto), R. Dr. 
António Bernardino de Almeida 865, 4200-072, Porto, Portugal Belarmino Goncalves Department of Diagnostic Imaging, Escola Paulista de Medicina (EPM), UNIFESP, Rua Napoleão de Barros, 800, Vila Clementino, São Paulo, SP, 04024-002, Brazil Jacob Szejnfeld Outpatient Clinics of Arterial Embolization of Uterine Myoma and Cardiovascular Diseases and Thromboembolism, Gynecological Endocrinology Course, Department of Gynecology, EPM, UNIFESP, Rua Napoleão de Barros, 800, Vila Clementino, São Paulo, SP, 04024-002, Brazil Claudio Emilio Bonduki Department of Diagnostic Imaging, EPM, UNIFESP, Rua Napoleão de Barros, 800, Vila Clementino, São Paulo, SP, 04024-002, Brazil Suzan Menasce Goldman Interventional Radiology and Endovascular Surgery, UNIFESP, Rua Napoleão de Barros, 800, Vila Clementino, São Paulo, SP, 04024-002, Brazil Denis Szejnfeld VAVF performed procedures and data collection. All authors participated in the analysis and interpretation of data as well as in the revision and approval of the final manuscript. Correspondence to Vinicius Adami Vayego Fornazari. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This study was registered in the National Information System on Research Ethics Involving Human Beings (Sistema Nacional de Informação Sobre Ética em Pesquisa envolvendo Seres Humanos, SISNEP) and approved by the Research Ethics Committee/Plataforma Brasil (CAAE: 22039914.5.0000.5505). For this type of study consent for publication is not required. Fornazari, V.A.V., Salazar, G.M.M., Vayego, S.A. et al. Impact of uterine contractility on quality of life of women undergoing uterine fibroid embolization. CVIR Endovasc 2, 36 (2019). https://doi.org/10.1186/s42155-019-0080-2 Leiomyoma Dynamic MRI Uterine peristalsis
CommonCrawl
Revisiting the Rayleigh–Plateau instability for the nanoscale

Chengxi Zhao (School of Engineering, University of Warwick, Coventry CV4 7AL, UK), James E. Sprittles (Mathematics Institute, University of Warwick, Coventry CV4 7AL, UK) and Duncan A. Lockerby (School of Engineering, University of Warwick, Coventry CV4 7AL, UK). Email addresses for correspondence: [email protected], [email protected]

The theoretical framework developed by Rayleigh and Plateau in the 19th century has been remarkably accurate in describing macroscale experiments of liquid cylinder instability. Here we re-evaluate and revise the Rayleigh–Plateau instability for the nanoscale, where molecular dynamics experiments demonstrate its inadequacy. A new framework based on the stochastic lubrication equation is developed that captures nanoscale flow features and highlights the critical role of thermal fluctuations at small scales. Remarkably, the model indicates that classically stable (i.e. 'fat') liquid cylinders can be broken at the nanoscale, and this is confirmed by molecular dynamics.

JFM classification: Drops and Bubbles: Breakup/coalescence; Interfacial Flows (free surface): Capillary flows; Micro-/Nano-fluid dynamics: Non-continuum effects.

Journal of Fluid Mechanics, Volume 861 (JFM Rapids), 25 February 2019, R3. DOI: https://doi.org/10.1017/jfm.2018.950. This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited. © 2019 Cambridge University Press

1 Introduction

Understanding the surface-tension-driven break-up of liquid cylinders and their subsequent disintegration into droplets is crucial to a range of technologies such as ink-jet printing (Basaran, Gao & Bhat 2013), fibre manufacture (Shabahang et al. 2011) and drug delivery (Mitragotri 2006). The classical theoretical results are due to Plateau (1873), who showed that a cylinder of radius $r_0$ is unstable to sufficiently long wavelength disturbances $\lambda > \lambda_{crit} = 2\pi r_0$, and Rayleigh (1878), who conducted a linear stability analysis of the inviscid dynamics to find a fastest growing mode with wavelength $\lambda_{max} = 9.01 r_0$ (or wavenumber $k_{max} = 0.697/r_0$). The Rayleigh–Plateau (RP) theoretical framework has been subject to numerous generalisations (e.g. Rayleigh 1892; Tomotika 1935) and accurately describes macroscopic experiments (Eggers & Villermaux 2008).
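As a quick check of these classical numbers (an illustration added here, not part of the original paper), Rayleigh's inviscid result can be recovered numerically. The sketch below maximizes the standard inviscid dispersion relation $\omega^2 = \gamma/(\rho r_0^3)\,x(1-x^2)\,I_1(x)/I_0(x)$, with $x = k r_0$ and $I_0$, $I_1$ modified Bessel functions, as quoted in the classical literature (e.g. Eggers & Villermaux 2008):

```python
import numpy as np
from scipy.special import i0, i1

# Inviscid growth rate squared, in units of gamma/(rho*r0^3), as a function
# of the dimensionless wavenumber x = k*r0 (unstable only for x < 1).
x = np.linspace(1e-4, 0.9999, 100_000)
omega_sq = x * (1.0 - x**2) * i1(x) / i0(x)

x_max = x[np.argmax(omega_sq)]
print(f"k_max * r0 = {x_max:.3f}")             # ~0.697
print(f"lambda_max = {2*np.pi/x_max:.2f} r0")  # ~9.01 r0
```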
With an increasing interest in micro and nano fluid dynamics, driven by emerging technologies in these domains (Stone, Stroock & Ajdari 2004), there has been cause to reassess the validity of classical analyses at smaller scales and question the absolute truth of long-held intuition. For liquid flows, molecular dynamics (MD) allows one to perform nanoscale 'experiments' and these have been shown to qualitatively reproduce RP-like instability (Koplik & Banavar 1993). However, MD studies of long cylinders (Kawano 1998; Gopan & Sathian 2014) have been unable to conclusively (i.e. quantitatively) validate classical theory. It is well recognised that although $k_{max}$ can give a first approximation of the drop size formed, the actual break-up is a nonlinear asymmetric 'universal' solution (Eggers 1993) that resembles a drop connected to a thin cylinder. Interestingly, at the nanoscale, MD showed instead the emergence of a new double-cone break-up profile (Moseler & Landman 2000), generated by the presence of thermal fluctuations of molecules whose effects are negligible for standard liquids at the macroscale. This behaviour was described using a 'stochastic lubrication equation' (SLE) derived by applying the lubrication approximation to the equations of fluctuating hydrodynamics (Landau, Lifshits & Pitaevskii 1980). The importance of the fluctuations has been confirmed by experiments (Hennequin et al. 2006; Petit et al. 2012), analytical models (Eggers 2002) and subsequent MD (Choi, Kim & Kim 2006; Kang & Landman 2007). Remarkably, despite failures to match classical RP theory to MD and a long recognition that 'fluctuations substantially distort the shape of the cylinder' (Koplik & Banavar 1993), no one has considered how the RP instability mechanism is modified in the presence of thermal fluctuations. This is despite recognition of their importance to enhancing instabilities in thin film flows (Grün, Mecke & Rauscher 2006; Diez, González & Fernández 2016). Note, the focus of this work is on how thermal fluctuations affect stability, which is a related but distinct question to how thermal fluctuations affect break-up, as studied in Moseler & Landman (2000).

In the next section, we develop an SLE-RP framework for incorporating fluctuations into the stability analysis of liquid cylinders at the nanoscale. The MD simulation method is introduced in § 3 to validate the SLE-RP. Results in § 4 show that the SLE-RP allows us to identify deviations from the classical picture, most notably a loss of the clear-cut Plateau stability criterion and very significant modifications of $k_{max}$.

2 Mathematical modelling

2.1 Stochastic lubrication equation

Consider a cylindrical polar coordinate system (figure 1), with $z$-axis along the centre line of an axisymmetric initially cylindrical liquid volume of length $L$ and radius $r_0$.
In the lubrication approximation, the dynamics is described by the radius of the cylinder $r = h(z,t)$ and the axial velocity $u(z,t)$, so that the SLE (Moseler & Landman 2000) is given by

(2.1) $$\partial_t u = -u u' - \frac{p'}{\rho} + 3\nu\,\frac{(h^2 u')'}{h^2} + A\,\frac{(h\mathcal{N})'}{h^2},$$

(2.2) $$\partial_t h = -u h' - u' h/2,$$

with the full Laplace pressure retained,

(2.3) $$p = \gamma\left[h^{-1}\left(1+(h')^2\right)^{-1/2} - h''\left(1+(h')^2\right)^{-3/2}\right],$$

and primes denoting differentiation with respect to $z$. In these equations, $\nu$ is the liquid's kinematic viscosity, $\rho$ is its density, $\gamma$ is the surface-tension coefficient and $\mathcal{N}$ denotes a standard Gaussian (white) random variable, capturing the thermal fluctuation effects, where $A = \sqrt{6 k_B T \nu/(\rho\pi)}$ is a dimensional parameter, $k_B$ is the Boltzmann constant and $T$ is the liquid temperature. If there are no fluctuations (i.e. $A = 0$) the classical lubrication equations (LE) (Eggers & Dupont 1994) are recovered.

Figure 1. Schematic of the cylinder instability.

2.2 Stability analysis

For the linear stability analysis, we take $h(z,t) = r_0[1+\hat{r}(z,t)]$, with $\hat{r}(z,t) \ll 1$ a dimensionless perturbation. Substituting this into the SLE and linearising gives

(2.4) $$\frac{\partial^2 \hat{r}}{\partial t^2} + \frac{\gamma}{2\rho}\left(\frac{\hat{r}''}{r_0} + r_0\,\hat{r}''''\right) - 3\nu\,\frac{\partial \hat{r}''}{\partial t} = -\frac{A}{2 r_0}\,\mathcal{N}''.$$

Note, this linearisation also implies an extent to the strength of fluctuations, which we will discuss later. A finite Fourier transform is applied to (2.4) to obtain

(2.5) $$\frac{\mathrm{d}^2 R}{\mathrm{d}t^2} + \alpha\,\frac{\mathrm{d}R}{\mathrm{d}t} + \beta R = \frac{A}{2 r_0}\,k^2 N,$$

where $\alpha = 3\nu k^2$ and $\beta = (\gamma/2\rho)\,(r_0 k^4 - k^2/r_0)$, and the transformed variables are defined as follows:

(2.6a,b) $$R(k,t) = \int_0^L \hat{r}(z,t)\,\mathrm{e}^{-\mathrm{i}kz}\,\mathrm{d}z \quad\text{and}\quad N(k,t) = \int_0^L \mathcal{N}(z,t)\,\mathrm{e}^{-\mathrm{i}kz}\,\mathrm{d}z.$$

The solution of (2.5) is linearly decomposed into two parts:

(2.7) $$R = R_{LE} + R_{fluc}.$$

The first part is the solution to the homogeneous form of (2.5) (i.e. with $A = 0$) with some stationary initial disturbance (i.e. $R = R_i$ and $\mathrm{d}R/\mathrm{d}t = 0$ at $t = 0$).
The solution to the homogeneous ordinary differential equation is straightforward to obtain:

(2.8) $$R_{LE} = R_i\,\mathrm{e}^{-at/2\tau}\left[\cosh(ct/2\tau) + \frac{a\sinh(ct/2\tau)}{c}\right],$$

(2.9a,b) $$a = 3(k r_0)^2\left(\frac{\ell_\nu}{r_0}\right)^{1/2} \quad\text{and}\quad c = 2\left[\frac{9}{4}\frac{\ell_\nu}{r_0}(k r_0)^4 + \frac{(k r_0)^2}{2}\left(1-(k r_0)^2\right)\right]^{1/2},$$

and characteristic flow scales for time $\tau = (\rho r_0^3/\gamma)^{1/2}$ and length $\ell_\nu = \rho\nu^2/\gamma$ have been introduced. This is a solution to the standard lubrication equations (there is no fluctuating component), and is thus denoted $R_{LE}$ in (2.7). Note, for the rest of this paper, the initial modal disturbance $R_i$ is treated as a complex random variable with zero mean.

The second component of the solution arises from solving the full form of (2.5) with zero initial disturbance; this part of the solution is solely due to fluctuations, and is thus denoted $R_{fluc}$. This is obtained by the homogeneous equation's impulse response,

(2.10) $$H(k,t) = 2\tau\,\mathrm{e}^{-at/2\tau}\sinh(ct/2\tau)/c,$$

which, due to the linear, time-invariant nature of the system, allows us to write

(2.11) $$R_{fluc} = \frac{A k^2}{2 r_0}\int_0^t N(k,t-\mathcal{T})\,H(k,\mathcal{T})\,\mathrm{d}\mathcal{T}.$$

The modal amplitude $R\,(=R_{LE}+R_{fluc})$ is a complex random variable, with zero mean. Note, $R_{LE}$ is also random as it develops from a random initial condition, but is uncorrelated with $R_{fluc}$ (both have zero mean). So, in order to obtain information on how disturbances associated with each wavenumber develop in time, allowing the identification of unstable and fastest growing modes, the root mean square (r.m.s.)
of $|R|$ is sought (equivalent to the standard deviation of $|R|$):

(2.12) $$|R|_{rms} = \sqrt{\overline{|R_{LE}+R_{fluc}|^2}} = \sqrt{\overline{|R_{LE}|^2} + \overline{|R_{fluc}|^2}},$$

where, from (2.8),

(2.13) $$\overline{|R_{LE}|^2} = \overline{|R_i|^2}\,\mathrm{e}^{-at/\tau}\left[\cosh(ct/2\tau) + \frac{a\sinh(ct/2\tau)}{c}\right]^2,$$

and since $N$ is uncorrelated Gaussian white noise, and the variance of the norm of the white noise $\overline{|N|^2} = L$, equations (2.10) and (2.11) combine to give:

(2.14) $$\begin{aligned}\overline{|R_{fluc}|^2} &= \left(\frac{A k^2}{2 r_0}\right)^2\overline{\left|\int_0^t N(k,t-\mathcal{T})\,H(k,\mathcal{T})\,\mathrm{d}\mathcal{T}\right|^2}\\ &= \left(\frac{A k^2}{2 r_0}\right)^2\left(\int_0^t \overline{|N(k,t-\mathcal{T})|^2}\,H(k,\mathcal{T})^2\,\mathrm{d}\mathcal{T}\right)\\ &= \left(\frac{A k^2}{2 r_0}\right)^2 L\int_0^t H^2\,\mathrm{d}\mathcal{T}\\ &= \ell_{fluc}^2\,b\,\frac{(a^2-c^2) - a^2\cosh(ct/\tau) - a c\sinh(ct/\tau) + c^2\,\mathrm{e}^{at/\tau}}{a c^2 (a^2-c^2)\,\mathrm{e}^{at/\tau}},\end{aligned}$$

where the non-dimensional $b = (3/\pi)(L/r_0)(k r_0)^4(\ell_\nu/r_0)^{1/2}$, and the thermal capillary length $\ell_{fluc} = (k_B T/\gamma)^{1/2}$ gives the characteristic length scale of the fluctuations.

2.3 Convergence to the classical model

From (2.12), fluctuations can be seen to be negligible when the thermal capillary length is very much shorter than the initial modal amplitude; i.e. $R \rightarrow R_{LE}$ as $\ell_{fluc}^2/\overline{|R_i|^2} \rightarrow 0$. We refer to this classical limit as the LE-RP model (as distinct from the SLE-RP). In order to compare with standard stability analysis, we want to find the fastest growing mode at a long time after any initial disturbance. As such we take $t \rightarrow \infty$ to give:

(2.15) $$\overline{|R_{LE}|^2} \rightarrow \overline{|R_i|^2}\,\mathrm{e}^{(c-a)t/\tau}\left(\frac{c+a}{2c}\right)^2, \qquad \overline{|R_{fluc}|^2} \rightarrow \frac{\ell_{fluc}^2\,b}{2 a c^2 (a^2-c^2)}\left[2c^2 - \mathrm{e}^{(c-a)t/\tau}(a^2 + a c)\right],$$

with $c - a \geqslant 0$ for $k r_0 \leqslant 1$ (which is the case as $t \rightarrow \infty$).
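As an aside (an illustration added here, not from the paper), the full expression (2.12)-(2.14) can be evaluated numerically before taking this limit. The sketch below computes the predicted spectrum $|R|_{rms}(k,t)$ for parameters close to those of Cylinder 3 in § 3; the initial spectral variance $\overline{|R_i|^2}$ is an illustrative assumption, and complex arithmetic is used so that wavenumbers with $k r_0 > 1$ (where $c$ is imaginary) are handled uniformly:

```python
import numpy as np

# LJ-argon parameters quoted in section 3; Ri2 (initial variance) is assumed.
rho, nu, gamma = 1398.0, 1.76e-7, 1.42e-2      # SI units
kB, T = 1.380649e-23, 84.09
r0 = 1.44e-9                                   # Cylinder 3 radius (m)
L = 160 * r0                                   # cylinder length (m)
tau = np.sqrt(rho * r0**3 / gamma)             # capillary time scale
lv = rho * nu**2 / gamma                       # viscous length l_nu
lf2 = kB * T / gamma                           # l_fluc^2
Ri2 = (0.01 * r0)**2 * L                       # assumed |R_i|^2 (illustrative)

def R_rms(kr0, t):
    x, s = kr0, t / tau
    a = 3 * x**2 * np.sqrt(lv / r0)
    c = 2 * np.sqrt(2.25 * (lv / r0) * x**4 + 0.5 * x**2 * (1 - x**2) + 0j)
    b = (3 / np.pi) * (L / r0) * x**4 * np.sqrt(lv / r0)
    R_LE2 = Ri2 * np.exp(-a * s) * (np.cosh(c * s / 2)
                                    + (a / c) * np.sinh(c * s / 2))**2
    R_fl2 = lf2 * b * ((a**2 - c**2) - a**2 * np.cosh(c * s)
                       - a * c * np.sinh(c * s) + c**2 * np.exp(a * s)) \
            / (a * c**2 * (a**2 - c**2) * np.exp(a * s))
    return np.sqrt((R_LE2 + R_fl2).real)

# Grid chosen to avoid k*r0 = 1 exactly, where a^2 - c^2 vanishes.
ks = np.linspace(0.05, 1.5, 300)
spec = [R_rms(k, 5e-11) for k in ks]           # spectrum at t = 0.05 ns
print(f"peak at k*r0 = {ks[np.argmax(spec)]:.2f}")
```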
Therefore, a functional form of (2.12) is:

(2.16) $$\mathcal{R}(k,t) = \mathcal{F}_1(k)\,\mathrm{e}^{\mathcal{G}(k)t} + \mathcal{F}_2(k),$$

(2.17) $$\mathcal{G}(k) = (c-a)/\tau, \qquad \mathcal{F}_1(k) = \overline{|R_i|^2}\left(\frac{c+a}{2c}\right)^2 + \frac{\ell_{fluc}^2\,b}{2 c^2 (c-a)}, \qquad \mathcal{F}_2(k) = \frac{\ell_{fluc}^2\,b}{a(a^2-c^2)}.$$

In order to find the maximum of $\mathcal{R}(k,t)$, which defines the fastest growing mode $k = k_{max}$, we calculate the derivative of (2.16) with respect to $k$ to obtain

(2.18) $$\frac{1}{t\,\mathrm{e}^{\mathcal{G}t}}\frac{\partial\mathcal{R}}{\partial k} = \frac{1}{t}\frac{\mathrm{d}\mathcal{F}_1}{\mathrm{d}k} + \frac{\mathrm{d}\mathcal{G}}{\mathrm{d}k}\,\mathcal{F}_1 + \frac{1}{t\,\mathrm{e}^{\mathcal{G}t}}\frac{\mathrm{d}\mathcal{F}_2}{\mathrm{d}k},$$

and set $\partial\mathcal{R}/\partial k = 0$. As $1/t$ and $\mathrm{e}^{-\mathcal{G}t}$ vanish as $t \rightarrow \infty$, and $\mathcal{F}_1(k_{max}) \neq 0$, the equation for determining $k_{max}$ is simply

(2.19) $$\left.\frac{\mathrm{d}\mathcal{G}}{\mathrm{d}k}\right|_{k=k_{max}} = \frac{1}{\tau}\left.\frac{\mathrm{d}(c-a)}{\mathrm{d}k}\right|_{k=k_{max}} = 0,$$

which is the same as found by Eggers & Dupont (1994), who neglected fluctuations entirely. However, as break-up occurs in a finite time, both terms in (2.12) could play a role in determining $k_{max}$ at any instant, with the second term becoming more important as $\ell_{fluc}^2/\overline{|R_i|^2}$ increases (all else being constant).

3 Simulations

To test our hypothesis, MD simulations (in figure 2) are performed in LAMMPS (Plimpton 1995) on long cylinders $L/r_0 = 160$ of three different radii: Cylinder 1 ($r_0 = 5.76$ nm, $2.1\times10^6$ particles), Cylinder 2 ($r_0 = 2.88$ nm, $2.8\times10^5$ particles) and Cylinder 3 ($r_0 = 1.44$ nm, $4.6\times10^4$ particles). The simulation box ($57$ nm $\times$ $57$ nm $\times\,L$ in the $x$, $y$ and $z$ directions, respectively) has periodic boundary conditions imposed in all directions and is filled with Lennard-Jones (LJ) fluid whose atomic interactions are modelled using the standard 12-6 pair potential with a cutoff distance of $5.5\sigma$; interaction potentials for argon have been employed (Lee, Barker & Pound 1974). The initial configuration is created from equilibrium simulations of separate liquid-only and vapour-only systems (using a canonical ensemble (NVT) with a Nosé–Hoover thermostat) with respective densities of $1398~\mathrm{kg\,m^{-3}}$ and $3.22~\mathrm{kg\,m^{-3}}$, which correspond to the saturated liquid and vapour densities at a temperature of 84.09 K (Fernandez, Vrabec & Hasse 2004). The same ensemble and thermostat is used for the full simulations. The kinematic viscosity is found using the Green–Kubo method (Green 1954; Kubo 1957) to be $\nu = 1.76\times10^{-7}~\mathrm{m^2\,s^{-1}}$.
This technique calculates dynamic viscosity by integration of the time-autocorrelation function of the off-diagonal elements of the pressure tensor $P_{ij}$, so that

(3.1) $$\mu = \frac{V_{bulk}}{k_B T}\int_0^\infty\left\langle P_{ij}(t)\cdot P_{ij}(0)\right\rangle\mathrm{d}t \quad (i \neq j),$$

where the pressure tensor is obtained using the definition of Kirkwood & Buff (1949), the brackets indicate the expectation value, and $V_{bulk}$ represents the volume of the bulk fluid under consideration. The surface tension is calculated from the profiles of the components of the pressure tensor in a simple liquid–vapour system (Trokhymchuk & Alejandre 1999):

(3.2) $$\gamma = \frac{1}{2}\int_0^{L_z}\left[P_n(z) - P_t(z)\right]\mathrm{d}z,$$

where $L_z$ is the length of the MD domain, and subscripts '$n$' and '$t$' denote normal and tangential components, respectively. We have found that to obtain results that are close to experimental data (Trokhymchuk & Alejandre 1999), a cutoff distance significantly larger than the popular choice of $2.5\sigma$ is required; a choice of $5.5\sigma$ (used throughout) has proved to be sufficiently accurate. Finally, we have $\gamma = 1.42\times10^{-2}~\mathrm{N\,m^{-1}}$ and $\ell_{fluc} = 0.2859$ nm.

Figure 2. Molecular dynamics simulation of the Rayleigh–Plateau instability showing a liquid cylinder (Cylinder 1) breaking into droplets.

To gather statistics, multiple independent MD simulations (realisations) are performed (Cylinder 1: 30, Cylinder 2: 45 and Cylinder 3: 100). For each realisation, a discrete Fourier transform of the interface position (which is extracted from axially distributed annular bins based on a threshold density) is performed, and an ensemble average at each time then allows us to produce the results shown in figure 3 (dashed lines) and figure 4 (red dots). Using the initial conditions from MD to extract $\overline{|R_i|^2}$, remarkably good agreement with the SLE-RP is obtained, giving us confidence that our approach captures the essential physics.

Figure 3. The r.m.s. of dimensionless modal amplitude versus dimensionless wavenumber; a comparison of ensemble-averaged MD data (dashed lines) and (2.12) (solid lines) at various time instances. (a) Results of Cylinder 1; the inset shows selected MD realisations. (b) Results of Cylinder 2; the three timesteps are 0.23 (red), 0.47 (blue) and 0.70 (black) ns. (c) Results of Cylinder 3; the three timesteps are 0.062 (red), 0.125 (blue) and 0.100 (black) ns.

4 Results

The MD results in figure 3 illustrate that there exists a modal distribution which varies with time, becoming sharper as it evolves. Extracting $k_{max}$ from these data yields the plots in figure 4, which confirm that $k_{max}$ tends to the Eggers & Dupont result as $t \rightarrow \infty$. However, figure 4 also shows that $k_{max}$ at the average break-up time (which ultimately determines drop size) is consistently overpredicted by Rayleigh's inviscid result, as seen in previous MD, and underpredicted by the Eggers & Dupont model (valid across all values of viscosity) – particularly for the smallest radius (Cylinder 3), where $k_{max} = 0.52/r_0$ in the MD and $k_{max} = 0.35/r_0$ from Eggers & Dupont.
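For reference, the deterministic prediction quoted above can be reproduced directly from (2.19): maximising $\mathcal{G}(k) = (c-a)/\tau$ over $k$ gives the Eggers & Dupont wavenumber. A minimal sketch (an illustration, not from the paper), using the material parameters of § 3 for Cylinder 3:

```python
import numpy as np

# l_nu / r0 for Cylinder 3, from the quoted values of rho, nu, gamma and r0.
lv_over_r0 = (1398.0 * (1.76e-7)**2 / 1.42e-2) / 1.44e-9   # ~2.12

x = np.linspace(1e-3, 1.0, 200_000)        # x = k*r0; growth requires x < 1
a = 3 * x**2 * np.sqrt(lv_over_r0)
c = 2 * np.sqrt(2.25 * lv_over_r0 * x**4 + 0.5 * x**2 * (1 - x**2))
x_max = x[np.argmax(c - a)]                # maximise G(k) ~ c - a
print(f"k_max * r0 = {x_max:.2f}")         # ~0.35, as quoted in the text
```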
The modal analysis based purely on the LE-RP also underpredicts the MD data and fails to exhibit the dominant short wavelength modes observed at early times. In contrast, the SLE-RP curves in both figures 3 and 4 give excellent agreement with the MD and underline the critical role of thermal fluctuations in the instability mechanism at the nanoscale. Note, however, that the agreement between the SLE-RP and the MD is less good for the smallest cylinder. This can potentially be explained by the decreasing validity with decreasing scale of the weak-fluctuation assumption in our SLE-RP analysis. Crudely, $\ell_{fluc}/r_0 \ll 1$ for the linear analysis to be valid. Here, $\ell_{fluc}/r_0 = 0.05$, 0.1 and 0.2 for the three cylinders, listed largest to smallest.

Figure 4. Evolution in time $t$ of the wavenumber with greatest amplitude ($k_{max}$) for (a) Cylinder 1, (b) Cylinder 2 and (c) Cylinder 3. Red dots and solid lines are the maximum predicted by MD (interpolation) and the SLE-RP, respectively. Average break-up time, $\bar{t}_b$, is obtained from the MD data.

Intriguingly, the early time behaviour in figure 4 indicates that $k_{max} r_0$ can be greater than unity, violating the classical stability criterion of Plateau. Therefore, it seems possible that 'fat' cylinders, whose length is below the classical critical stability ($L < L_{crit} = 2\pi r_0$), may be unstable in the presence of fluctuations. To test the hypothesis, we consider (2.12) at the critical point, i.e. when $k r_0 = 1$, to obtain

(4.1) $$\overline{|R|^2} = \overline{|R_i|^2} + \frac{L}{4\pi r_0}\,\ell_{fluc}^2\left(\frac{\ell_\nu}{r_0}\right)^{1/2}\frac{6at/\tau + 3\,\mathrm{e}^{-2at/\tau}(4\,\mathrm{e}^{at/\tau}-1) - 9}{a^3}.$$

Notably, the contribution from the LE equations (the first term on the right-hand side of (4.1)) is a constant; so, according to the classical model, the initial disturbance neither decays nor grows. Hence, it is critically stable. However, the second term (due purely to fluctuations) grows in proportion to $t$ as $t \rightarrow \infty$, giving a potential mechanism for break-up. This suggests that cylinders of the critical length, and perhaps shorter, are likely to be unstable at the nanoscale.

To verify these conclusions, we perform a further series of MD experiments for cylinders of two radii ($r_0 = 1.44, 2.88$ nm) slightly shorter than the critical length $L_{crit}$, so that all classically unstable (long) wavelengths are suppressed by the domain size. This has been performed using two different flow configurations, one in which periodic boundary conditions are applied and the other in which the liquid cylinder is bounded at each end by a solid wall. The four cases in figure 5 (cylinders of two radii for each end condition) all have length $L = 6 r_0 < L_{crit}$, and thus satisfy Plateau's stability criterion. However, all break up in a finite time, supporting our conclusion from the SLE-RP that fat cylinders can indeed become unstable at the nanoscale. Notably, the break-up shapes resemble the double-cone profiles first observed by Moseler & Landman (2000). Having established the possibility of violating the Plateau criterion at the nanoscale, in figure 6 we show the average break-up time of such cylinders using 50 independent MD simulations for each data point (the standard deviation is indicated).
There are two intuitive observations: (i) for the smaller radius cylinder, the break-up (which is partly or wholly due to fluctuations) occurs significantly faster than for the larger radius cylinder; (ii) as the aspect ratio of the cylinder decreases (i.e. it becomes fatter) and crosses the classical stability limit ($L_{crit}/L > 1$), the average break-up time increases dramatically, and so does the variance in the measured times. This is because, at lower aspect ratios, the stabilising effect of surface tension becomes stronger and one has to wait longer (on average) for the 'perfect storm' of fluctuations to arrive that will overcome these and rupture the cylinder. This could explain why previous MD (Min & Wong 2006) appears to support the classical criterion: to violate Plateau stability one must either be close to $L_{crit}/L = 1$ or potentially wait a long time. Notably, while this is a 'long time' in MD simulations, from the macroscopic perspective the time scales on which classical stability is lost are minuscule.

Figure 5. Selected realisations for the break-up of classically stable cylinders (i.e. those satisfying the Plateau stability criterion). The two simulations on the left satisfy periodic boundary conditions; those on the right are bounded by a wall (in blue). Non-dimensional time $t^* = t/\tau$.

Figure 6. The non-dimensional break-up time ($t_b^* = t_b/\tau$) of short nanocylinders near the classical stability boundary, obtained from MD. $L_{crit}/L = 2\pi r_0/L = k r_0$. Error bars indicate one standard deviation of $t_b^*$ either side of the mean.

The SLE has been shown to be a remarkably robust tool for extending classical RP arguments into the nanoscale, predicting the loss of a clear-cut Plateau stability boundary and the modification of $k_{max}$ due to thermal fluctuations, at a fraction of the computational cost of the benchmark MD. More generally, the underlying formulation based on fluctuating hydrodynamics has been confirmed as a useful framework for investigating nanoscale flows; whilst the deterministic nature of classical hydrodynamics is lost, nanoflows can apparently often be well described with (stochastic) partial differential equations without resorting to particle methods. Promisingly, the effects of thermal fluctuations on capillary flows have previously been observed in experiments (Hennequin et al. 2006) using ultra-low surface-tension liquid–liquid systems that massively enhance $\ell_{fluc}$ to bring thermal fluctuations into the optically observable range. It is our hope that such systems can be used to experimentally verify our new predictions, most notably the violation of Plateau stability. Potential extensions of the framework are numerous; for example, to consider phase change phenomena, which are known to affect the break-up of nanocylinders (Kang & Landman 2007), and to explore related flow configurations, such as liquid jets (Moseler & Landman 2000), where an intriguing open problem is to understand the interplay between fluctuations within the jet and perturbations imposed at the orifice in an attempt to control break-up.

Useful discussions with Dr L. Gibelli and Dr J. C. Padrino are gratefully acknowledged.
This work was supported by the Leverhulme Trust (Research Project Grant) and the EPSRC (grants EP/N016602/1, EP/P020887/1 & EP/P031684/1).

References

Basaran, O. A., Gao, H. & Bhat, P. P. 2013 Nonstandard inkjets. Annu. Rev. Fluid Mech. 45, 85–113.
Choi, Y. S., Kim, S. J. & Kim, M. U. 2006 Molecular dynamics of unstable motions and capillary instability in liquid nanojets. Phys. Rev. E 73, 016309.
Diez, J. A., González, A. G. & Fernández, R. 2016 Metallic-thin-film instability with spatially correlated thermal noise. Phys. Rev. E 93 (1), 013120.
Eggers, J. 1993 Universal pinching of 3D axisymmetric free-surface flow. Phys. Rev. Lett. 71 (21), 3458–3460.
Eggers, J. 2002 Dynamics of liquid nanojets. Phys. Rev. Lett. 89, 084502.
Eggers, J. & Dupont, T. F. 1994 Drop formation in a one-dimensional approximation of the Navier–Stokes equation. J. Fluid Mech. 262, 205–221.
Eggers, J. & Villermaux, E. 2008 Physics of liquid jets. Rep. Prog. Phys. 71, 1–79.
Fernandez, G. A., Vrabec, J. & Hasse, H. 2004 A molecular simulation study of shear and bulk viscosity and thermal conductivity of simple real fluids. Fluid Phase Equilib. 221, 157–163.
Gopan, N. & Sathian, S. P. 2014 Rayleigh instability at small length scales. Phys. Rev. E 90, 033001.
Green, M. S. 1954 Markoff random processes and the statistical mechanics of time-dependent phenomena. II. Irreversible processes in fluids. J. Chem. Phys. 22, 398–413.
Grün, G., Mecke, K. & Rauscher, M. 2006 Thin-film flow influenced by thermal noise. J. Stat. Phys. 122 (6), 1261–1291.
Hennequin, Y., Aarts, D., van der Wiel, J. H., Wegdam, G., Eggers, J., Lekkerkerker, H. N. W. & Bonn, D. 2006 Drop formation by thermal fluctuations at an ultralow surface tension. Phys. Rev. Lett. 97, 244502.
Kang, W. & Landman, U. 2007 Universality crossover of the pinch-off shape profiles of collapsing liquid nanobridges in vacuum and gaseous environments. Phys. Rev. Lett. 98, 064504.
Kawano, S. 1998 Molecular dynamics of rupture phenomena in a liquid thread. Phys. Rev. E 58, 4468–4472.
Kirkwood, J. G. & Buff, F. P. 1949 The statistical mechanical theory of surface tension. J. Chem. Phys. 17, 338–343.
Koplik, J. & Banavar, J. R. 1993 Molecular dynamics of interface rupture. Phys. Fluids A 5, 521–536.
Kubo, R. 1957 Statistical-mechanical theory of irreversible processes. I. General theory and simple applications to magnetic and conduction problems. J. Phys. Soc. Japan 12, 570–586.
Landau, L. D., Lifshits, E. M. & Pitaevskii, L. P. 1980 Course of Theoretical Physics, 2nd edn, vol. 9, chap. 9, pp. 86–91. Pergamon Press.
Lee, J. K., Barker, J. A. & Pound, G. M. 1974 Surface structure and surface tension: perturbation theory and Monte Carlo calculation. J. Chem. Phys. 60, 1976–1980.
Min, D. & Wong, H. 2006 Rayleigh's instability of Lennard-Jones liquid nanothreads simulated by molecular dynamics. Phys. Fluids 18, 024103.
Mitragotri, S. 2006 Current status and future prospects of needle-free liquid jet injectors. Nat. Rev. Drug Discov. 5 (7), 543–548.
Moseler, M. & Landman, U. 2000 Formation, stability, and breakup of nanojets.
Science 289, 1165–1169.
Petit, J., Rivière, D., Kellay, H. & Delville, J.-P. 2012 Break-up dynamics of fluctuating liquid threads. Proc. Natl Acad. Sci. USA 109 (45), 18327–18331.
Plateau, J. A. F. 1873 Statique Expérimentale et Théorique des Liquides Soumis Aux Seules Forces Moléculaires, vol. 2. Gauthier-Villars.
Plimpton, S. 1995 Fast parallel algorithms for short-range molecular dynamics. J. Comput. Phys. 117, 1–19.
Rayleigh, Lord 1878 On the instability of jets. Proc. Lond. Math. Soc. 1, 4–13.
Rayleigh, Lord 1892 XVI. On the instability of a cylinder of viscous liquid under capillary force. Phil. Mag. J. Sci. 34, 145–154.
Shabahang, S., Kaufman, J. J., Deng, D. S. & Abouraddy, A. F. 2011 Observation of the Plateau–Rayleigh capillary instability in multi-material optical fibers. Appl. Phys. Lett. 99 (16), 161909.
Stone, H. A., Stroock, A. D. & Ajdari, A. 2004 Engineering flows in small devices: microfluidics toward a lab-on-a-chip. Annu. Rev. Fluid Mech. 36, 381–411.
Tomotika, S. 1935 On the instability of a cylindrical thread of a viscous liquid surrounded by another viscous fluid. Proc. R. Soc. Lond. A 150, 322–337.
Trokhymchuk, A. & Alejandre, J. 1999 Computer simulations of liquid/vapor interface in Lennard-Jones fluids: some questions and answers. J. Chem. Phys. 111, 8510–8523.
Return on Net Assets (RONA)

What Is Return on Net Assets (RONA)?

Return on net assets (RONA) is a measure of financial performance calculated as net profit divided by the sum of fixed assets and net working capital. Net profit is also called net income. The RONA ratio shows how well a company and its management are deploying assets in economically valuable ways; a high ratio result indicates that management is squeezing more earnings out of each dollar invested in assets. RONA is also used to assess how well a company is performing compared to others in its industry.

Return on net assets (RONA) compares a firm's net profits to its net assets to show how well it utilizes those assets to generate earnings. A high RONA ratio indicates that management is maximizing the use of the company's assets. Net income and fixed assets can be adjusted for unusual or non-recurring items to gain a normalized ratio result.

The Formula for Return on Net Assets Is

$$RONA = \frac{\text{Net profit}}{\text{Fixed assets} + NWC}, \qquad NWC = \text{Current assets} - \text{Current liabilities},$$

where $RONA$ is the return on net assets and $NWC$ is net working capital.

How to Calculate RONA

The three components of RONA are net income, fixed assets, and net working capital. Net income is found in the income statement and is calculated as revenue minus expenses associated with making or selling the company's products, operating expenses such as management salaries and utilities, interest expenses associated with debt, and all other expenses. Fixed assets are tangible property used in production, such as real estate and machinery, and do not include goodwill or other intangible assets carried on the balance sheet. Net working capital is calculated by subtracting the company's current liabilities from its current assets. It is important to note that long-term liabilities are not part of working capital and are not subtracted in the denominator when calculating working capital for the return on net assets ratio.

At times, analysts make a few adjustments to the ratio formula inputs to smooth or normalize the results, especially when comparing to other companies. For example, consider that the fixed assets balance could be affected by certain types of accelerated depreciation, where up to 40% of the value of an asset could be eliminated in its first full year of deployment.
Additionally, any significant events that resulted in either a large loss or unusual income should be adjusted out of net income, especially if these are one-time events. Intangible assets such as goodwill are another item that analysts sometimes remove from the calculation, since they are often simply derived from an acquisition rather than being assets purchased for use in producing goods, such as a new piece of equipment.

What Does RONA Tell You?

The return on net assets (RONA) ratio compares a firm's net income with its assets and helps investors to determine how well the company is generating profit from its assets. The higher a firm's earnings relative to its assets, the more effectively the company is deploying those assets. RONA is an especially important metric for capital-intensive companies, which have fixed assets as their major asset component. In the capital-intensive manufacturing sector, RONA can also be calculated as:

$$\text{Return on Net Assets} = \frac{\text{Plant revenue} - \text{Costs}}{\text{Net assets}}.$$

Interpreting Return on Net Assets

The higher the return on net assets, the better the profit performance of the company. A higher RONA means the company is using its assets and working capital efficiently and effectively, although no single calculation tells the whole story of a company's performance. Return on net assets is just one of many ratios used to evaluate a company's financial health. If the purpose of performing the calculation is to generate a longer-term perspective of the company's ability to create value, extraordinary expenses may be added back into the net income figure. For example, if a company had a net income of $10 million but incurred an extraordinary expense of $1 million, the net income figure could be adjusted upward to $11 million. This adjustment provides an indication of the return on net assets the company could expect in the following year if it does not have to incur any further extraordinary expenses.

Example of RONA

Assume a company has revenue of $1 billion and total expenses including taxes of $800 million, giving it a net income of $200 million. The company has current assets of $400 million and current liabilities of $200 million, giving it net working capital of $200 million. Further, the company's fixed assets amount to $800 million. Adding fixed assets to net working capital yields $1 billion in the denominator when calculating RONA. Dividing the net income of $200 million by $1 billion yields a return on net assets of 20% for the company.
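Expressed as a short calculation (a sketch added for illustration; the function name and layout are not from the article), the worked example above becomes:

```python
def rona(net_income, fixed_assets, current_assets, current_liabilities):
    """Return on net assets: net income / (fixed assets + net working capital)."""
    net_working_capital = current_assets - current_liabilities
    return net_income / (fixed_assets + net_working_capital)

# Figures from the example, in $ millions: $1,000M of revenue less $800M of
# total expenses leaves $200M of net income.
ratio = rona(net_income=200, fixed_assets=800,
             current_assets=400, current_liabilities=200)
print(f"RONA = {ratio:.0%}")   # 20%
```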
I can only talk from experience here, but I can remember being a teenager and just being a straight-up dick to any recruiters that came to my school. And I came from a military family. I'd ask douche-bag questions, I'd crack jokes like so... don't ask, don't tell only applies to everyone BUT the Navy, right? I never once considered enlisting because some 18 or 19 year old dickhead on hometown recruiting was hanging out in the cafeteria or hallways of my high school. Weirdly enough, however, what kinda put me over the line and made me enlist was the location of the recruiters' office. In the city I was living in at the time, the Armed Forces Recruitment Center was next door to an all-ages punk venue that I went to nearly every weekend. I spent many Saturday nights standing in a parking lot after a show, all bruised and bloody from a pit, smoking a joint, and staring at the windows of the closed recruiters' office. Propaganda posters of guys in full battle rattle, obscured by a freshly scrawled anarchy symbol or a collage of band stickers over the glass. I think trying to recruit kids from school has a child-molester vibe to it. At least it did for me. But the recruiters defiantly being right next to a bunch of drunk and high punks somehow made it seem more like a truly bad-ass option. Like, sure, I'll totally join. After all, these guys don't run from the horde of skins and pins that descends every weekend like everyone else; they must be bad-ass.

Many over-the-counter and prescription smart drugs fall under the category of stimulants. These substances contribute to an overall feeling of enhanced alertness and attention, which can improve concentration, focus, and learning. While these substances are often considered safe in moderation, taking too much can cause side effects such as decreased cognition, irregular heartbeat, and cardiovascular problems.

...Phenethylamine is intrinsically a stimulant, although it doesn't last long enough to express this property. In other words, it is rapidly and completely destroyed in the human body. It is only when a number of substituent groups are placed here or there on the molecule that this metabolic fate is avoided and pharmacological activity becomes apparent.

I am not alone in thinking of the potential benefits of smart drugs in the military. In their popular novel Ghost Fleet: A Novel of the Next World War, P.W. Singer and August Cole tell the story of a future war using drug-like nootropic implants and pills, such as modafinil. DARPA is also experimenting with neurological technology and enhancements such as the smart drugs discussed here, as demonstrated in the following brain initiatives: Targeted Neuroplasticity Training (TNT), Augmented Cognition, and high-quality interface systems such as its Next-Generation Nonsurgical Neurotechnology (N3).

Now, what is the expected value (EV) of simply taking iodine, without the additional work of the experiment? 4 cans of 0.15 mg x 200 is $20 for 2.1 years' worth, or ~$10 a year, or an NPV cost of $205 ($10/\ln 1.05$), versus a 20% chance of $2000, i.e. an expected benefit of $400. So the expected value is greater than the NPV cost of taking it, so I should start taking iodine (a short sketch of this arithmetic appears a little further below).

At dose #9, I've decided to give up on kratom. It is possible that it is helping me in some way that careful testing (e.g. dual n-back over weeks) would reveal, but I don't have a strong belief that kratom would help me (I seem to benefit more from stimulants, and I'm not clear on how an opiate-bearer like kratom could stimulate me).
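The iodine cost-benefit arithmetic above can be reproduced in a few lines (a sketch with the stated figures; the 5% discount rate is the one implied by the $\ln 1.05$ term):

```python
import math

# NPV of paying ~$10/year indefinitely, discounted at 5% a year.
annual_cost = 10.0
npv_cost = annual_cost / math.log(1.05)    # ~$205

# Expected benefit: a 20% chance the supplementation is worth $2000.
expected_benefit = 0.20 * 2000.0           # $400

print(f"NPV cost ~ ${npv_cost:.0f}; expected benefit ${expected_benefit:.0f}")
# The expected benefit exceeds the NPV cost, so supplementation looks worthwhile.
```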
So I have no reason to do careful testing. Oh well.

70 pairs is 140 blocks; we can drop to 36 pairs or 72 blocks if we accept a power of 0.5 (a 50% chance of reaching significance). (Or we could economize by hoping that the effect size is not 3.5 but maybe twice the pessimistic guess; a d=0.5 at 50% power requires only 12 pairs of 24 blocks.) 70 pairs of blocks of 2 weeks, with 2 pills a day, requires $(70 \times 2) \times (2 \times 7) \times 2 = 3920$ pills. I don't even have that many empty pills! I have <500; 500 would supply 250 days, which would yield 18 2-week blocks, which could give 9 pairs. 9 pairs would give me a power of:

The data from 2-back and 3-back tasks are more complex. Three studies examined performance in these more challenging tasks and found no effect of d-AMP on average performance (Mattay et al., 2000, 2003; Mintzer & Griffiths, 2007). However, in at least two of the studies, the overall null result reflected a mixture of reliably enhancing and impairing effects. Mattay et al. (2000) examined the performance of subjects with better and worse working memory capacity separately and found that subjects whose performance on placebo was low performed better on d-AMP, whereas subjects whose performance on placebo was high were unaffected by d-AMP on the 2-back and impaired on the 3-back tasks. Mattay et al. (2003) replicated this general pattern of data with subjects divided according to genotype. The specific gene of interest codes for the production of catechol-O-methyltransferase (COMT), an enzyme that breaks down dopamine and norepinephrine. A common polymorphism determines the activity of the enzyme, with a substitution of methionine for valine at codon 158 resulting in a less active form of COMT. The met allele is thus associated with less breakdown of dopamine and hence higher levels of synaptic dopamine than the val allele. Mattay et al. (2003) found that subjects who were homozygous for the val allele were able to perform the n-back faster with d-AMP; those homozygous for met were not helped by the drug and became significantly less accurate in the 3-back condition with d-AMP. In the case of the third study finding no overall effect, analyses of individual differences were not reported (Mintzer & Griffiths, 2007).

You may have come across this age-old adage: "Work smarter, not harder." So, why not extend the same philosophy to other aspects of your life? Are you in a situation wherein, no matter how much you exercise, eat healthy, and sleep well, you still struggle to focus and motivate yourself? If yes, you need a smart solution minus the adverse health effects. Try 'smart drugs,' which could help you out of your situation by enhancing your thought process, boosting your memory, and making you more creative and productive.

A large review published in 2011 found that the drug aids with the type of memory that allows us to explicitly remember past events (called long-term conscious memory), as opposed to the type that helps us remember how to do things like riding a bicycle without thinking about it (known as procedural or implicit memory).
The evidence is mixed on its effect on other types of executive function, such as planning or ability on fluency tests, which measure a person's ability to generate sets of data—for example, words that begin with the same letter.

To make things more interesting, I think I would like to try randomizing different dosages as well: 12mg, 24mg, and 36mg (1-3 pills); on 5 May 2014, because I wanted to finish up the experiment earlier, I decided to add 2 larger doses of 48 & 60mg (4-5 pills) as options. Then I can include the previous pilot study as 10mg doses, and regress over dose amount.

The evidence? In small studies, healthy people taking modafinil showed improved planning and working memory, and better reaction time, spatial planning, and visual pattern recognition. A 2015 meta-analysis claimed that "when more complex assessments are used, modafinil appears to consistently engender enhancement of attention, executive functions, and learning" without affecting a user's mood. In a study from earlier this year involving 39 male chess players, subjects taking modafinil were found to perform better in chess games played against a computer.

On the other hand, sometimes you'll feel a great cognitive boost as soon as you take a pill. That can be a good thing or a bad thing. I find, for example, that modafinil makes you more of what you already are. That means if you are already kind of a dick and you take modafinil, you might act like a really big dick and regret it. It certainly happened to me! I like to think that I've done enough hacking of my brain that I've gotten over that programming… and that when I use nootropics they help me help people.

All of the coefficients are positive, as one would hope, and one specific factor (MR7) squeaks in at d=0.34 (p=0.05). The graph is much less impressive than the graph for just MP, suggesting that the correlation may be spread out over a lot of factors, that the current dataset isn't doing a good job of capturing the effect compared to the MP self-rating, or that it really was a placebo effect.

It isn't unusual to hear someone from Silicon Valley say the following: "I've just cycled off a stack of Piracetam and CDP-Choline because I didn't get the mental acuity I was expecting. I will try a blend of Noopept and Huperzine A for the next two weeks and see if I can increase my output by 10%. We don't have immortality yet and I would really like to join the three comma club before it's all over."

This doesn't fit the U-curve so well: while 60mg is substantially negative, as one would extrapolate from 30mg being ~0, 48mg is actually better than 15mg. But we bought the estimates of 48mg/60mg at a steep price: we ignore the influence of magnesium, which we know influences the data a great deal, and the higher doses were added towards the end, so they may be influenced by the magnesium starting/stopping. Another fix for the missingness is to impute the missing data. In this case, we might argue that the placebo days of the magnesium experiment were identical to taking no magnesium at all, so we can classify each NA as a placebo day and rerun the desired analysis (a sketch follows below).

"My husband and I (Ryan Cedermark) are so impressed with the research Cavin did when writing this book. If you, a family member or friend has suffered a TBI, concussion or are just looking to be nicer to your brain, then we highly recommend this book! Your brain is only as good as the body's internal environment and Cavin has done an amazing job of providing the information needed to obtain such!"
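Returning to the imputation fix described above: a minimal pandas sketch with hypothetical column names and toy values (the real analysis would then be rerun on the imputed column):

```python
import pandas as pd

# Toy stand-in for the daily log (column names are hypothetical):
df = pd.DataFrame({
    "mp":        [3, 4, 2, 4, 3, 5],             # daily MP self-rating
    "magnesium": [136, 0, None, 136, None, 0],   # mg dose; NA = outside the experiment
})

# Impute: treat non-experiment days as equivalent to placebo (0 mg)
df["magnesium"] = df["magnesium"].fillna(0)
print(df)
```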
Dosage is apparently 5-10mg a day. (Prices can be better elsewhere; selegiline is popular for treating dogs with senile dementia, where those 60x5mg will cost $2 rather than $35. One needs a veterinarian's prescription to purchase from pet-oriented online pharmacies, though.) I ordered it & modafinil from Nubrain.com at $35 for 60x5mg; Nubrain delayed and eventually canceled my order - and my enthusiasm. Between that and realizing how much of a premium I was paying for Nubrain's deprenyl, I'm tabling deprenyl along with nicotine & modafinil for now. Which is too bad, because I had even ordered 20g of PEA from Smart Powders to try out with the deprenyl. (My later attempt to order some off the Silk Road also failed when the seller canceled the order.)

With so many different ones to choose from, choosing the best nootropics for you can be overwhelming at times. As usual, a decision this important will require research. Study up on the top nootropics which catch your eye the most. The nootropics you take will depend on what you want the enhancement for. The ingredients within each nootropic determine its specific function. For example, some nootropics contain ginkgo biloba, which can help memory, thinking speed, and attention span. Check the nootropic ingredients as you determine what end results you want to see. Some nootropics supplements can increase brain chemicals such as dopamine and serotonin. An increase in dopamine levels can be very useful for memory, alertness, reward, and more. Many healthy adults, as well as college students, take nootropics to support the central nervous system and the brain.

With this experiment, I broke from the previous methodology, taking the remaining and final half Nuvigil at midnight. I am behind on work and could use a full night to catch up. By 8 AM, I am as usual impressed by the Nuvigil - with Modalert or something similar, I generally start to feel down by mid-morning, but with Nuvigil, I feel pretty much as I did at 1 AM. Sleep: 9:51/9:15/8:27.

Several chemical influences can completely disconnect those circuits so they're no longer able to excite each other. "That's what happens when we're tired, when we're stressed." Drugs like caffeine and nicotine enhance the neurotransmitter acetylcholine, which helps restore function to the circuits. Hence people drink tea and coffee, or smoke cigarettes, "to try and put [the] prefrontal cortex into a more optimal state".

Legal issues aside, this wouldn't be very difficult to achieve. Many companies already have in-house doctors who give regular health check-ups — including drug tests — which could be employed to control and regulate usage. Organizations could integrate these drugs into already existing wellness programs, alongside healthy eating, exercise, and good sleep.

Took pill around 6 PM; I had a very long drive to and from an airport ahead of me, ideal for Adderall. In case it was Adderall, I chewed up the pill - by making it absorb faster, more of the effect would be there when I needed it, during driving, and not lingering in my system past midnight. Was it? I didn't notice any change in my pulse, I yawned several times on the way back, and my conversation was not more voluminous than usual. I did stay up later than usual, but that's fully explained by walking to get ice cream. All in all, my best guess was that the pill was placebo, and I feel fairly confident but not hugely confident that it was placebo. I'd give it ~70%. And checking the next morning… I was right! Finally.
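One way to keep score on these blinded placebo-vs-active guesses over time is a Brier score. A minimal sketch, seeded only with the single ~70% call from above (the rest of the series would come from the other trials, which the text does not enumerate):

```python
# Brier score for calibrated guesses from blinded trials
# (0 = perfect, 0.25 = chance-level guessing at 50%).
guesses  = [0.70]   # P(placebo) stated before unblinding
outcomes = [1]      # 1 = pill was placebo, 0 = pill was active

brier = sum((p - o) ** 2 for p, o in zip(guesses, outcomes)) / len(guesses)
print(f"Brier score: {brier:.3f}")   # 0.090 for the single 70% call
```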
Many of the positive effects of cognitive enhancers have been seen in experiments using rats. For example, scientists can train rats on a specific test, such as maze running, and then see if the "smart drug" can improve the rats' performance. It is difficult to see how many of these data can be applied to human learning and memory. For example, what if the "smart drug" made the rat hungry? Wouldn't a hungry rat run faster in the maze to receive a food reward than a non-hungry rat? Maybe the rat did not get any "smarter" and did not have any improved memory. Perhaps the rat ran faster simply because it was hungrier. In that case, it was the rat's motivation to run the maze, not its increased cognitive ability, that affected the performance. It is therefore important to be very careful when interpreting changes observed in these types of animal learning and memory experiments.

Each nootropic comes with a recommended amount to take. This is almost always based on a healthy adult male with an average weight and "normal" metabolism. Nootropics (and many other drugs) are almost exclusively tested on healthy men. If you are a woman, older, smaller, or in any other way not the "average" man, always take into account that the appropriate quantity could be different for you.

Remember: the strictest definition of nootropics today says that for a substance to be a true brain-boosting nootropic it must have low toxicity and few side effects. Therefore, by definition, a nootropic is safe to use. However, when people start stacking nootropics indiscriminately, taking megadoses, or importing them from unknown suppliers that may have poor quality control, it's easy for safety concerns to start creeping in.

Productivity is the most cited reason for using nootropics. With all else being equal, smart drugs are expected to give you that mental edge over others and advance your career. But nootropics can also be used for a host of other reasons, from studying to socialising, and from exercise and health to general well-being. Different nootropics cater to different audiences.
Poulin (2007): survey of 2002 Canadian secondary-school 7th-, 9th-, 10th-, and 12th-graders (N = 12,990). Past-year prevalence: 6.6% MPH, 8.7% d-AMP. Frequency: 84% of MPH users and 74% of d-AMP users took them 1–4 times per year. 26% of students with a prescription had given or sold some of their pills; students in a class with a student who had given or sold their pills were 1.5 times more likely to use nonmedically.

But there are some potential side effects, including headaches, anxiety, and insomnia. Part of the way modafinil works is by shifting the brain's levels of norepinephrine, dopamine, serotonin, and other neurotransmitters; it's not clear what effects these shifts may have on a person's health in the long run, and some research on young people who use modafinil has found changes in brain plasticity that are associated with poorer cognitive function.

Most epidemiological research on nonmedical stimulant use has been focused on issues relevant to traditional problems of drug abuse and addiction, and so stimulant use for cognitive enhancement is not generally distinguished from use for other purposes, such as staying awake or getting high. As Boyd and McCabe (2008) pointed out, the large national surveys of nonmedical prescription drug use have so far failed to distinguish the ways and reasons that people use the drugs, and this is certainly true where prescription stimulants are concerned. The largest survey to investigate prescription stimulant use in a nationally representative sample of Americans, the National Survey on Drug Use and Health (NSDUH), phrases the question about nonmedical use as follows: "Have you ever, even once, used any of these stimulants when they were not prescribed for you or that you took only for the experience or feeling they caused?" (Snodgrass & LeBaron, 2007). This phrasing does not strictly exclude use for cognitive enhancement, but it emphasizes the noncognitive effects of the drugs. In 2008, the NSDUH found a prevalence of 8.5% for lifetime nonmedical stimulant use by Americans over the age of 12 years and a prevalence of 12.3% for Americans between 21 and 25 (Substance Abuse and Mental Health Services Administration, 2009).

A poster or two on Longecity claimed that iodine supplementation had changed their eye color, suggesting a connection to the yellow-reddish element bromine - bromides being displaced by their chemical cousin, iodine. I was skeptical this was a real effect, since I don't know why visible amounts of either iodine or bromine would be in the eye, and the photographs produced were less than convincing. But it's an easy thing to test, so why not?

Regardless of your goal, there is a supplement that can help you along the way. Below, we've put together the definitive smart drugs list for peak mental performance. There are three major groups of smart pills and cognitive enhancers, and we will cover each one in detail in our list of smart drugs: natural and herbal nootropics, prescription ADHD medications, and racetams and synthetic nootropics.

Segmental analysis of the key components of the global smart pills market has been performed based on application, target area, disease indication, end-user, and region. Applications of smart pills are found in capsule endoscopy, drug delivery, patient monitoring, and others. Sub-division of the capsule endoscopy segment includes small-bowel capsule endoscopy, controllable capsule endoscopy, colon capsule endoscopy, and others. Meanwhile, the patient monitoring segment is further divided into capsule pH monitoring and others.
Dallas Michael Cyr, a 41-year-old life coach and business mentor in San Diego, California, also says he experienced a mental improvement when he regularly took another product called Qualia Mind, which its makers say enhances focus, energy, mental clarity, memory, and even creativity and mood. "One of the biggest things I noticed was it was much more difficult to be distracted," says Cyr, who took the supplements for about six months but felt their effects last longer. While he's naturally great at starting projects and tasks, the product allowed him to be a "great finisher" too, he says.

The evidence? Ritalin is FDA-approved to treat ADHD. It has also been shown to help patients with traumatic brain injury concentrate for longer periods, but does not improve memory in those patients, according to a 2016 meta-analysis of several trials. A study published in 2012 found that low doses of methylphenidate improved cognitive performance, including working memory, in healthy adult volunteers, but high doses impaired cognitive performance and a person's ability to focus. (Since the brains of teens have been found to be more sensitive to the drug's effect, it's possible that methylphenidate even in lower doses could have adverse effects on their working memory and cognitive functions.)

If the entire workforce were to start doping with prescription stimulants, it seems likely that they would have two major effects. Firstly, people would stop avoiding unpleasant tasks, and weary office workers who had perfected the art of not-working-at-work would start tackling the office filing system, keeping spreadsheets up to date, and enthusiastically attending dull meetings.

Two additional studies assessed the effects of d-AMP on visual–motor sequence learning, a form of nondeclarative, procedural learning, and found no effect (Kumari et al., 1997; Makris, Rush, Frederich, Taylor, & Kelly, 2007). In a related experimental paradigm, Ward, Kelly, Foltin, and Fischman (1997) assessed the effect of d-AMP on the learning of motor sequences from immediate feedback and also failed to find an effect.

On 15 March 2014, I disabled the light sensor: the complete absence of subjective effects since the first sessions made me wonder if the LED device was even turning on - a little bit of ambient light seems to disable it thanks to the light sensor. So I stuffed the sensor full of putty, verified the device was now always-on with the cellphone camera, and began again; this time it seemed to warm up much faster, making me wonder if all the previous sessions' sense of warmth was simply heat from my hand holding the LEDs.

Tyrosine (Examine.com) is an amino acid; people on the Imminst.org forums (as well as Wikipedia) suggest that it helps with energy and coping with stress. I ordered 4oz (bought from Smart Powders) to try it out, and I began taking 1g with my usual caffeine+piracetam+choline mix. It does not dissolve easily in hot water, and is very chalky and not especially tasty. I have not noticed any particular effects from it.

Ethical issues also arise with the use of drugs to boost brain power. Their use as cognitive enhancers isn't currently regulated. But should it be, just as the use of certain performance-enhancing drugs is regulated for professional athletes? Should universities consider dope testing to check that students aren't gaining an unfair advantage through drug use?
Some work has been done on estimating the value of IQ, both as net benefits to the possessor (including all zero-sum or negative-sum aspects) and as net positive externalities to the rest of society. The estimates are substantial: in the thousands of dollars per IQ point. But since increasing IQ post-childhood is almost impossible barring disease or similar deficits, and even increasing childhood IQs is very challenging, much of these estimates are merely correlations or regressions, and the experimental childhood estimates must be weakened considerably for any adult - since so much time and so many opportunities have been lost. A wild guess: $1000 net present value per IQ point. The range for severely deficient children was 10-15 points, so any normal (somewhat deficient) adult gain must be much smaller and consistent with Fitzgerald 2012's ceiling on possible effect sizes (small).

Nature magazine conducted a poll asking its readers about their cognitive-enhancement practices and their attitudes toward cognitive enhancement. Hundreds of college faculty and other professionals responded, and approximately one fifth reported using drugs for cognitive enhancement, with Ritalin being the most frequently named (Maher, 2008). However, the nature of the sample—readers choosing to answer a poll on cognitive enhancement—is not representative of the academic or general population, making the results of the poll difficult to interpret. By analogy, a poll on Vermont vacations, asking whether people vacation in Vermont, what they think about Vermont, and what they do if and when they visit, would undoubtedly not yield an accurate estimate of the fraction of the population that takes its vacations in Vermont.

After 7 days, I ordered a kg of choline bitartrate from Bulk Powders. Choline is standard among piracetam users because it is pretty universally supported by anecdotes about piracetam headaches, has support in rat/mice experiments, and also some human-related research. So I figured I couldn't fairly test piracetam without some regular choline - the eggs might not be enough, might be the wrong kind, etc. It has a quite distinctly fishy smell, but the actual taste is more citrus-y, and it seems to neutralize the piracetam taste in tea (which makes things much easier for me).

The search to find more effective drugs to increase mental ability and intelligence capacity, with neither toxicity nor serious side effects, continues. But there are limitations. Although the ingredients may be separately known to have cognition-enhancing effects, randomized controlled trials of the combined effects of cognitive enhancement compounds are sparse.

The chemicals he takes, dubbed nootropics from the Greek "noos" for "mind", are intended to safely improve cognitive functioning. They must not be harmful, have significant side-effects, or be addictive. That means well-known "smart drugs" such as the prescription-only stimulants Adderall and Ritalin, popular with swotting university students, are out. What's left under the nootropic umbrella is a dizzying array of over-the-counter supplements, prescription drugs, and unclassified research chemicals, some of which are being trialled in older people with fading cognition.

"Certain people might benefit from certain combinations of certain things," he told me. "But across populations, there is still no conclusive proof that substances of this class improve cognitive functions."
And with no way to reliably measure the impact of a given substance on one's mental acuity, people's sincere beliefs about "what works" probably have a lot to do with, say, how demanding their day was, or whether they ate breakfast, or how susceptible they are to the placebo effect.

The abuse liability of caffeine has been evaluated.147,148 Tolerance development to the subjective effects of caffeine was shown in a study in which caffeine was administered at 300 mg twice each day for 18 days.148 Tolerance to the daytime alerting effects of caffeine, as measured by the MSLT, was shown over 2 days on which 250 mg of caffeine was given twice each day,48 and to the sleep-disruptive effects (but not REM percentage) over 7 days of 400 mg of caffeine given 3 times each day.7 In humans, placebo-controlled caffeine-discontinuation studies have shown physical dependence on caffeine, as evidenced by a withdrawal syndrome.147 The most frequently observed withdrawal symptom is headache, but daytime sleepiness and fatigue are also often reported. The withdrawal-syndrome severity is a function of the dose and duration of prior caffeine use… At higher doses, negative effects such as dysphoria, anxiety, and nervousness are experienced. The subjective-effect profile of caffeine is similar to that of amphetamine,147 with the exception that dysphoria/anxiety is more likely to occur with higher caffeine doses than with higher amphetamine doses. Caffeine can be discriminated from placebo by the majority of participants, and correct caffeine identification increases with dose.147 Caffeine is self-administered by about 50% of normal subjects who report moderate to heavy caffeine use. In post-hoc analyses of the subjective effects reported by caffeine choosers versus nonchoosers, the choosers report positive effects and the nonchoosers report negative effects. Interestingly, choosers also report negative effects such as headache and fatigue with placebo, and this suggests that caffeine-withdrawal syndrome, secondary to placebo choice, contributes to the likelihood of caffeine self-administration. This implies that physical dependence potentiates behavioral dependence on caffeine.

This research is in contrast to the other substances I like, such as piracetam or fish oil. I knew about withdrawal of course, but it was not so bad when I was drinking only tea. And the side effects like jitteriness are worse on caffeine without tea; I chalk this up to the lack of theanine. (My later experiences with theanine seem to confirm this.) These negative effects mean that caffeine doesn't satisfy the strictest definition of nootropic (having no negative effects), but is merely a cognitive enhancer (with both benefits & costs). One might wonder why I use caffeine anyway, if I am so concerned with mental ability.

Amphetamines have a long track record as smart drugs, from the workaholic mathematician Paul Erdös, who relied on them to get through 19-hour maths binges, to the writer Graham Greene, who used them to write two books at once. More recently, there are plenty of anecdotal accounts in magazines about their widespread use in certain industries, such as journalism, the arts, and finance. Those who have taken them swear they do work – though not in the way you might think. Back in 2015, a review of the evidence found that their impact on intelligence is "modest". But most people don't take them to improve their mental abilities. Instead, they take them to improve their mental energy and motivation to work.
(Both drugs also come with serious risks and side effects – more on those later.)

Table 3 lists the results of 24 tasks from 22 articles on the effects of d-AMP or MPH on learning, assessed by a variety of declarative and nondeclarative memory tasks. Results for the 24 tasks are evenly split between enhanced learning and null results, but they yield a clearer pattern when the nature of the learning task and the retention interval are taken into account. In general, with single exposures of verbal material, no benefits are seen immediately following learning, but later recall and recognition are enhanced. Of the six articles reporting on memory performance (Camp-Bruno & Herting, 1994; Fleming, Bigelow, Weinberger, & Goldberg, 1995; Rapoport, Buchsbaum, & Weingartner, 1980; Soetens, D'Hooge, & Hueting, 1993; Unrug, Coenen, & van Luijtelaar, 1997; Zeeuws & Soetens, 2007), encompassing eight separate experiments, only one of the experiments yielded significant memory enhancement at short delays (Rapoport et al., 1980). In contrast, retention was reliably enhanced by d-AMP when subjects were tested after longer delays, with recall improved after 1 hr through 1 week (Soetens, Casaer, D'Hooge, & Hueting, 1995; Soetens et al., 1993; Zeeuws & Soetens, 2007). Recognition improved after 1 week in one study (Soetens et al., 1995), while another found recognition improved after 2 hr (Mintzer & Griffiths, 2007). The one long-term memory study to examine the effects of MPH found a borderline-significant reduction in errors when subjects answered questions about a story (accompanied by slides) presented 1 week before (Brignell, Rosenthal, & Curran, 2007).

It may also be necessary to ask not just whether a drug enhances cognition, but in whom. Researchers at the University of Sussex have found that nicotine improved performance on memory tests in young adults who carried one variant of a particular gene but not in those with a different version. In addition, there are already hints that the smarter you are, the less smart drugs will do for you. One study found that modafinil improved performance in a group of students whose mean IQ was 106, but not in a group with an average of 115.

It is often associated with Ritalin and Adderall because they are all CNS stimulants and are prescribed for the treatment of similar brain-related conditions. In the past, ADHD patients reported prolonged attention while studying upon Dexedrine consumption, which is why this smart pill is further studied for its concentration- and motivation-boosting properties.

The concept of neuroenhancement, and the use of substances to improve cognitive functioning in healthy individuals, is certainly not a new one. In fact, one of the first cognitive enhancement drugs, piracetam, was developed over fifty years ago by psychologist and chemist C.C. Giurgea. Although he did not know the exact mechanism, Giurgea believed the drug boosted brain power, and so began his exploration into "smart pills", or nootropics, a term he coined from the Greek nous, meaning "mind," and trepein, meaning "to bend."

The truth is, taking a smart pill will not allow you to access information that you have not already learned. If you speak English, a smart drug cannot embed the Spanish dictionary into your brain. In other words, they won't make you smarter or more intelligent. We need to throttle back our expectations and explore reality. What advantage can smart drugs provide?
Brain-enhancing substances may have health and cognitive benefits that are worth exploring.

The goal of this article has been to synthesize what is known about the use of prescription stimulants for cognitive enhancement and what is known about the cognitive effects of these drugs. We have eschewed discussion of ethical issues in favor of simply trying to get the facts straight. Although ethical issues cannot be decided on the basis of facts alone, neither can they be decided without relevant facts. Personal and societal values will dictate whether success through sheer effort is as good as success with pharmacologic help, whether the freedom to alter one's own brain chemistry is more important than the right to compete on a level playing field at school and work, and how much risk of dependence is too much risk. Yet these positions cannot be translated into ethical decisions in the real world without considerable empirical knowledge. Do the drugs actually improve cognition? Under what circumstances and for whom? Who will be using them and for what purposes? What are the mental and physical health risks for frequent cognitive-enhancement users? For occasional users?

Smart pills containing aniracetam may also improve communication between the brain's hemispheres. This benefit makes aniracetam supplements ideal for enhancing creativity and stabilizing mood. But the anxiolytic effects of aniracetam may be too potent for some: there are reports of users who find that it causes them to feel unmotivated or sedated. Though that may not be an issue if you only seek the anti-stress and anxiety-reducing effects.

A key ingredient of Noehr's chemical "stack" is a stronger racetam called phenylpiracetam. He adds a handful of other compounds considered to be mild cognitive enhancers. One supplement, L-theanine, a natural constituent of green tea, is claimed to neutralise the jittery side-effects of caffeine. Another supplement, choline, is said to be important for experiencing the full effects of racetams. Each nootropic is distinct and there can be a lot of variation in effect from person to person, says Lawler. Users semi-anonymously compare stacks and get advice on forums such as Reddit. Noehr, who buys his powder in bulk and makes his own capsules, has been tweaking chemicals and quantities for about five years, accumulating more than two dozen jars of substances along the way. He says he meticulously researches anything he tries, buys only from trusted suppliers, and even blind-tests the effects (he gets his fiancée to hand him either a real or an inactive capsule).

A LessWronger found that it worked well for him as far as motivation and getting things done went, as did another LessWronger who sells it online (terming it a reasonable productivity enhancer), as did one of his customers (a pickup artist, oddly enough). The former was curious whether it would work for me too and sent me Speciosa Pro's Starter Pack: Test Drive (a sampler of 14 packets of powder and a cute little wooden spoon). In SE Asia, kratom is apparently chewed, but the powders are brewed as a tea. That first night, I had severe trouble sleeping, falling asleep in 30 minutes rather than my usual 19.6±11.9, waking up 12 times (versus my usual 5.9±3.4), and spending ~90 minutes awake (versus 18.1±16.2), and naturally I felt unrested the next day; I initially assumed it was because I had left a fan on (moving air keeps me awake), but the new potassium is also a possible culprit.
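For what it's worth, the baseline mean±SD figures quoted above let one gauge how anomalous that night was with simple z-scores (a rough sketch; it assumes roughly normal night-to-night variation, which is generous for count data like awakenings):

```python
# baseline mean, baseline SD, and that night's observed value
baselines = {
    "sleep latency (min)": (19.6, 11.9, 30),
    "awakenings":          (5.9,  3.4,  12),
    "time awake (min)":    (18.1, 16.2, 90),
}
for name, (mean, sd, observed) in baselines.items():
    z = (observed - mean) / sd
    print(f"{name}: z = {z:+.1f}")
# Time awake is the real outlier (z ~ +4.4); latency is barely unusual.
```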
When I asked, Kevin said:

Like caffeine, nicotine builds tolerance rapidly and addiction can develop, after which the apparent performance boosts may only represent a return to baseline after withdrawal; so nicotine as a stimulant should be used judiciously, perhaps roughly as frequently as modafinil. Another problem is that nicotine has a half-life of merely 1-2 hours, making regular dosing a requirement. There is also some elevated heart rate/blood pressure often associated with nicotine, which may be a concern. (Possible alternatives to nicotine include cytisine, 2'-methylnicotine, GTS-21, galantamine, varenicline, WAY-317,538, EVP-6124, and Wellbutrin, but none have emerged as clearly superior.)

This tendency is exacerbated by general inefficiencies in the nootropics market - they are manufactured for vastly less than they sell for, although the margins aren't as high as they are in other supplement markets, and not nearly as comical as with illegal recreational drugs. (Global Price Fixing: Our Customers are the Enemy (Connor 2001) briefly covers the vitamin cartel that operated for most of the 20th century, forcing food-grade vitamin prices up to well over 100x the manufacturing cost.) For example, the notorious Timothy Ferriss (of The Four-Hour Work Week) advises imitators to find a niche market with very high margins which they can insert themselves into as middlemen and reap the profits; one of his first businesses specialized in… nootropics & bodybuilding. Or, when Smart Powders - usually one of the cheapest suppliers - was dumping its piracetam in a fire sale at half off after the FDA warning, its owner mentioned on forums that the piracetam was still profitable (and that he didn't really care, because selling to bodybuilders was so lucrative); this was because while SP was selling 2kg of piracetam for ~$90, Chinese suppliers were offering piracetam on Alibaba for $30 a kilogram, or a third of that in bulk. (Of course, you need to order in quantities like 30kg - this is more or less the only problem the middlemen retailers solve.) It goes without saying that premixed pills or products are even more expensive than the powders.

Let's start with the basics of what smart drugs are and what they aren't. The field of cosmetic psychopharmacology is still in its infancy, but the use of smart drugs is primed to explode during our lifetimes, as researchers gain increasing understanding of which substances affect the brain and how they do so. For many people, the movie Limitless was a first glimpse into the possibility of "a pill that can make you smarter," and while that fiction is a long way from reality, the possibilities - in fact, present-day certainties visible in the daily news - are nevertheless extremely exciting.

In sum, the evidence concerning stimulant effects on working memory is mixed, with some findings of enhancement and some null results, although no findings of overall performance impairment. A few studies showed greater enhancement for less able participants, including two studies reporting overall null results. When significant effects have been found, their sizes vary from small to large, as shown in Table 4. Taken together, these results suggest that stimulants probably do enhance working memory, at least for some individuals in some task contexts, although the effects are not so large or reliable as to be observable in all or even most working memory studies.

No. There are mission-essential jobs that require you to live on base sometimes.
Or a first-term person that is required to live on base. Or if you have proven not to be as responsible with rent off base as you should be, so your commander requires you to live on base. Or you're at an installation that requires you to live on base during your stay. Or the only affordable housing off base puts you an hour away from where you work. It isn't simple. The fact that you think it is tells me you are one of the "dumb@$$es" you are referring to above.

How much of the nonmedical use of prescription stimulants documented by these studies was for cognitive enhancement? Prescription stimulants could be used for purposes other than cognitive enhancement, including for feelings of euphoria or energy, to stay awake, or to curb appetite. Were they being used by students as smart pills or as "fun pills," "awake pills," or "diet pills"? Of course, some of these categories are not entirely distinct. For example, by increasing the wakefulness of a sleep-deprived person or by lifting the mood or boosting the motivation of an apathetic person, stimulants are likely to have the secondary effect of improving cognitive performance. Whether and when such effects should be classified as cognitive enhancement is a question to which different answers are possible, and none of the studies reviewed here presupposed an answer. Instead, they show how the respondents themselves classified their reasons for nonmedical stimulant use.

"As a neuro-optometrist who cares for many brain-injured patients experiencing visual challenges that negatively impact the progress of many of their other therapies, Cavin's book is a godsend! The very basic concept of good nutrition amid all the conflicting advertisements and various "new" food plans and diets can be enough to put anyone into a brain fog, much less a brain-injured survivor! Cavin's book is straightforward and written from not only personal experience but the validation of so many well-respected contemporary health care researchers and practitioners! I will certainly be recommending this book as a "Survival/Recovery 101" resource for all my patients, including those without brain injuries, because we all need optimum health and well-being, and it starts with proper nourishment! Kudos to Cavin Balaster!"

A rough translation for the word "nootropic" comes from the Greek for "to bend or shape the mind." And already, there are dozens of over-the-counter (OTC) products—many of which are sold widely online or in stores—that claim to boost creativity, memory, decision-making, or other high-level brain functions. Some of the most popular supplements are a mixture of food-derived vitamins, lipids, phytochemicals, and antioxidants that studies have linked to healthy brain function. One popular pick on Amazon, for example, is an encapsulated cocktail of omega-3s, B vitamins, and plant-derived compounds that its maker claims can improve memory, concentration, and focus.
Journal of Cheminformatics, volume 11, Article number: 43 (2019)

Cheminformatics approach to exploring and modeling trait-associated metabolite profiles

Jeremy R. Ash, Melaine A. Kuenemann, Daniel Rotroff, Alison Motsinger-Reif & Denis Fourches (ORCID: orcid.org/0000-0001-5642-8303)

Developing predictive and transparent approaches to the analysis of metabolite profiles across patient cohorts is of critical importance for understanding the events that trigger or modulate traits of interest (e.g., disease progression, drug metabolism, chemical risk assessment). However, metabolites' chemical structures are still rarely used in the statistical modeling workflows that establish these trait-metabolite relationships. Herein, we present a novel cheminformatics-based approach capable of identifying predictive, interpretable, and reproducible trait-metabolite relationships. As a proof-of-concept, we utilize a previously published case study consisting of metabolite profiles from non-small-cell lung cancer (NSCLC) adenocarcinoma patients and healthy controls. By characterizing each structurally annotated metabolite using both computed molecular descriptors and patient metabolite concentration profiles, we show that these complementary features enhance the identification and understanding of key metabolites associated with cancer. Ultimately, we built multi-metabolite classification models for assessing patients' cancer status using specific groups of metabolites identified based on high structural similarity through chemical clustering. We subsequently performed a metabolic pathway enrichment analysis to identify potential mechanistic relationships between metabolites and NSCLC adenocarcinoma. This cheminformatics-inspired approach relies on the metabolites' structural features and chemical properties to provide critical information about metabolite-trait associations. This method could ultimately facilitate biological understanding and advance research based on metabolomics data, especially with respect to the identification of novel biomarkers.

The metabolome is an individual's phenotype at the molecular level [1,2,3,4]. Profiling metabolites (i.e., small molecules with molecular weight < 1500 Da) present in a given sample (e.g., serum, plasma, urine) enables in-depth investigations into various biochemical perturbations with internal (e.g., disease, drug metabolites, microbiome) and external (e.g., exposome, drugs) origins. A major advantage of metabolomic profiling over other omics methodologies is its high sensitivity to modulations of biological pathways that play a mechanistic role in these biochemical events. The potential for certain metabolites to be discovered as disease biomarkers has resulted in a rapidly expanding body of metabolomics studies. For instance, metabolomics has been used to search for biomarkers for colon cancer [5, 6], multiple sclerosis [7], and Alzheimer's disease [8,9,10]. Drug discovery efforts routinely use metabolomics to study the efficacy, toxicity, and pharmacokinetic/pharmacodynamic properties of drug candidates and their metabolites [11]. Furthermore, the field of pharmacometabolomics has emerged as a useful discipline for investigating the role of metabolites in drug response [12,13,14]. Metabolomics is utilized by medicinal chemists to investigate the in vivo mechanism of action of lead compounds and to more efficiently screen chemicals for their ability to cause adverse side effects.
While chemical structure is the centerpiece of the metabolites' structure elucidation stage in any metabolomics study [15,16,17], it is very often underutilized in the downstream trait association analysis. As underscored by recent papers [4, 18], the ability to conduct in-depth analysis of metabolomics datasets could be significantly improved through careful consideration of metabolites' chemical structure. This is exactly what cheminformatics approaches have been developed for: to rapidly, quantitatively, and systematically characterize the structural features of chemicals via the standardized calculation of molecular descriptors [19]. Therefore, one can envision further representation of metabolites by computing quantitative molecular descriptors to characterize their chemical structures. In that regard, a recent analysis [20] comparing the similarity of drugs' chemical structures with endogenous human metabolites found that 90% of marketed drugs have a medium-to-high similarity (Tanimoto > 0.5) to their most structurally similar human metabolite. Also recently, new algorithms have been developed to efficiently search metabolic networks using chemical fingerprints, demonstrating that metabolites in shared metabolic pathways have similar chemical structures [21]. The MetaMapR network visualization tool [22, 23] has demonstrated that grouping metabolites by chemical class can be used to generate hypotheses regarding the cellular processes related to an observed phenotype. The same research group has recently deployed ChemRICH [24], a tool for grouping metabolites by chemical similarity instead of biological annotation for enrichment analysis. However, to our knowledge, these methods have not yet been incorporated into a predictive modeling workflow. Overall, the chemical structures of metabolites are information-rich but have not yet been the centerpiece of a method to analyze and reliably model metabolomics datasets in order to establish more interpretable trait-metabolite relationships.

While there are numerous ways of determining enzymatic relationships between metabolites (e.g., by pathway or reaction pair databases [25,26,27,28,29]), these approaches are considerably limited by the lack of annotation of metabolic pathways, particularly for understudied organisms [22]. Detecting modules within metabolite profile correlation networks may capture some biochemical relationships between metabolites; however, this is complicated by the fact that neighbors in metabolic pathways do not always have high correlation [30], and confounding variation can be caused by other factors such as the transcriptional regulation of enzymes [31].

Multi-metabolite models (i.e., models that take as input multiple metabolite concentrations and predict a trait of interest) can improve upon the prediction performance of single metabolite models. However, single metabolite models are still mostly used in biomarker discovery, because multi-metabolite models often suffer in interpretability [32, 33]. In biochemical reactions, enzymes catalyze the conversion between chemically similar compounds, so binning metabolites according to their structural similarity for multi-metabolite models is likely to group metabolites that are biochemically linked and that share the same trait-metabolite relationships [22, 23, 34]. This is intuitively appealing, since the biological effect of interest often operates at the level of biochemical pathways [35]; the fingerprint-based similarity underlying this idea is sketched in code below.
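The kind of fingerprint-based Tanimoto comparison referenced above is a few lines in RDKit. This is an illustrative sketch, not code from the cited studies; the SMILES are for two metabolites discussed later in this article:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import MACCSkeys

# Two structurally related metabolites (SMILES are illustrative)
glutamine = Chem.MolFromSmiles("NC(=O)CC[C@H](N)C(=O)O")
glutamate = Chem.MolFromSmiles("OC(=O)CC[C@H](N)C(=O)O")

fp1 = MACCSkeys.GenMACCSKeys(glutamine)
fp2 = MACCSkeys.GenMACCSKeys(glutamate)
print(DataStructs.TanimotoSimilarity(fp1, fp2))  # near 1 for close analogs
```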
This approach may improve upon the predictivity of single metabolite models, while maintaining their desired interpretability, as the resulting models can still suggest pathways mechanistically linked to the trait of interest. The metabolites discovered by this approach and their associated pathways could then be investigated with complementary methods (e.g., targeted metabolomics, isotope labeling) [35]. The biochemical relatedness of the metabolites within these models could provide new interpretations of metabolomics data and potentially lead to trait-metabolite associations that would have otherwise been missed using alternative approaches.

Herein, we present a cheminformatics method [36] that leverages a multi-metabolite modeling approach in conjunction with chemically-informed clustering. We applied this approach to an adenocarcinoma lung cancer case study. Our main goal was to identify groups of structurally related metabolites linked to pathways with mechanistic and/or influential roles in lung cancer. We hypothesized that structure-based clustering of metabolites could help establish more predictive, interpretable, and reproducible multi-metabolite classifiers for patient cancer status compared to alternative approaches.

Subject sample collection and metabolomics profiling

As a proof-of-concept for our method, we considered a data set that was originally collected and analyzed by Fahrmann et al. [37]. This study identified blood biomarkers for adenocarcinoma lung cancer. Multi-metabolite models were shown to be highly predictive of cancer status in an independently conducted case study. Therefore, we wanted to determine whether using metabolite chemical structure to form new multi-metabolite models could further improve the prediction performances, while enhancing the models' interpretability. The data set was accessed from the Metabolomics Workbench public repository (www.metabolomicsworkbench.org [38]) under study numbers ST000385 and ST000386.

Subjects diagnosed with NSCLC stage I–IV adenocarcinoma were recruited by the UC Davis Medical Center and Cancer Center Clinics. Additional file 1: Table S1 shows the distribution of the patients' characteristic variables (which were frequency matched by design). The patient characteristic distributions matched the data reported in Fahrmann et al. [37] except that one plasma cancer case was missing in the training set provided by the Metabolomics Workbench. The ADC1 (training) set contained 51 plasma and 49 serum samples collected from NSCLC stage I–IV adenocarcinoma patients. Plasma and serum samples were also collected from 31 healthy controls. The ADC2 (test) set consisted of plasma and serum samples from 43 NSCLC stage I–IV adenocarcinoma patients and 43 healthy controls. ADC2 was an independently conducted case study, meaning that the data was collected on different patients, and it was collected and analyzed at different times. This presented us with the opportunity to assess how well our predictive models would generalize to this independently collected, external test set. Importantly, the untargeted metabolomics analysis was performed by the same laboratory (WCMC Metabolomics Core at the University of California, Davis). Gas chromatography time-of-flight (GCTOF) mass spectrometry untargeted metabolomics analysis was performed using plasma and serum samples collected from each patient. Metabolites were structurally annotated by matching their mass spectra to the Fiehn library of 1200 authentic metabolite mass spectra [39].
Further details on the subject cohort, sample collection, and metabolomics profiling can be found in Fahrmann et al. [37].

Differences between training and test sets

In total, 130 metabolites were retained based on the actual detection and availability of structural annotations in plasma and serum samples from both the ADC1 and ADC2 sets (see Methods). Volcano plots identified metabolites with large fold changes in mean relative abundance between cancer and control patients that were statistically significant (FDR < 0.075, Additional file 1: Figure S1). Surprisingly, metabolites with large shifts in distribution were not consistent across data sets (ADC1/ADC2) or tissues (plasma/serum). The disagreement between ADC1 and ADC2 suggested some level of heterogeneity between the ADC1 and ADC2 samples. Figure 1 illustrates how some metabolites (e.g., xylose, maltose, maltotriose) in ADC2 showed large shifts in intensities relative to ADC1 for both cancer and control patients. To further explore these differences, we conducted a PCA of the entire metabolite profiles of both ADC1 and ADC2 patients (Fig. 2a, b). We found large differences between the ADC1 and ADC2 patients in the same cancer status group, with a separation much larger than that between cancer and control patients. Both plasma and serum had much more variation within health state groups than between them (plasma BSS/TSS: .025, serum BSS/TSS: .023). When metabolite profiles were filtered so that only the significant metabolites (i.e., those found with significant differences in mean intensities between cancer and control) for plasma or serum were utilized (Fig. 2c, d), the variance explained by the health state group was higher, for serum in particular (plasma BSS/TSS: .082, serum BSS/TSS: .16), though the percentage of total variation explained by cancer status was still low. This indicated that models constructed using the entire set of ADC1 metabolites would be vulnerable to overfitting and would likely demonstrate poor predictivity on the external ADC2 set.

Fig. 1: Distribution of intensities for metabolites significantly associated with cancer status in the training set. ADC1 (training) and ADC2 (test) set boxplots shown for healthy (blue) and adenocarcinoma (red) patients. Significant plasma and serum metabolites in ADC1 were determined by a paired t test. *(FDR < .075), **(FDR < .01), ***(FDR < .001). Many of the metabolites that are significant in ADC1 are also significant in ADC2; some show more significant differences in ADC2.

Fig. 2: PCA of all-metabolite and significantly-different-metabolite profiles. a All plasma, b significant plasma, c all serum, and d significant serum metabolite profiles.

Heatmaps for the 25 metabolites with the most significantly different mean log intensities between ADC1 and ADC2 prior to total quantity normalization further illustrate the differences between these data sets (Additional file 1: Figures S2 and S3). A few of these metabolites were significantly associated with cancer status and selected for our classification models (e.g., xylose and cystine in serum, maltose in plasma). However, we will show later that classifiers trained on these ADC1 metabolite profiles resulted in poor prediction performance on the ADC2 test set. As illustrated in Fig. 1, most of the metabolites with these large shifts in mean do not have their ADC1-determined statistical significance reproduced in ADC2.
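Two quantitative ingredients of this section, total quantity normalization and the BSS/TSS variance ratio, can be sketched as follows. Random numbers stand in for the real profiles, and this is not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(80, 130)))   # samples x metabolites, toy intensities
y = rng.integers(0, 2, size=80)          # 0 = control, 1 = cancer

# Total quantity normalization: each sample divided by its summed intensity
X = X / X.sum(axis=1, keepdims=True)

# Between-group over total sum of squares (BSS/TSS)
grand = X.mean(axis=0)
tss = ((X - grand) ** 2).sum()
bss = sum((y == g).sum() * ((X[y == g].mean(axis=0) - grand) ** 2).sum()
          for g in np.unique(y))
print(f"BSS/TSS = {bss / tss:.3f}")      # near 0 here, since the groups are random
```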
Many metabolites shown in Additional file 1: Figures S2 and S3 were not chosen for the classification models, but still had dramatic shifts in mean. These metabolites create substantial differences in bias for the ADC1 and ADC2 metabolites once total quantity normalization is performed. This could partially explain the lack of reproducibility of the statistical significance of metabolites shown in Fig. 1. For an untargeted metabolomics analysis, total quantity normalization has been shown to be an effective strategy for controlling for many sources of analytical variation (e.g., the extraction or derivatization process) in the absence of an accepted reference metabolite [40]. However, our results confirmed that this normalization strategy does not control for all variation between studies (which may be attributed to a number of causes, such as sample heterogeneity or methodological variation) [41]. Also, when there are large changes in metabolite intensities associated with a phenotype of interest, the normalization procedure can introduce bias that might lead to the detection of spurious associations in other metabolites [42]. To ensure the results are useful in clinical practice, researchers often conform to the clinical norm of total quantity normalization. Later, our classifier prediction performance results will suggest that the considerable noise remaining after total quantity normalization could be better controlled for by the careful construction of multi-metabolite classifiers.

High structural similarity for metabolites significantly associated with cancer

The cheminformatics approach described in this study is based on characterizing metabolites' structural properties by computing molecular fingerprints and then clustering those metabolites based on chemical similarity. We used the MACCS 166-bit fingerprint [43], a standard class of structural keys used in the cheminformatics community, to represent and characterize metabolites' chemical structures instead of the 881-bit PubChem fingerprints used by MetaMapR and ChemRICH [22,23,24]. Both types of fingerprints are interpretable, with bits simply indicating the presence/absence of chemical substructures. The MACCS fingerprint was selected because, in a number of studies on the use of molecular fingerprints for de novo reconstruction of metabolic networks [44, 45], more complex fingerprints (e.g., PubChem and the extended CDK fingerprint [46]) were shown to have only marginally improved prediction performances. In practice, we noticed little change in the clustering results when the ECFP6 [47] fingerprints were used to encode metabolite structures (results not shown). MACCS keys were thus selected due to their simplicity and interpretability.

To examine whether the metabolites significantly associated with health status also had similar chemical structures, we conducted a hierarchical clustering of all the metabolites based on this fingerprint (a minimal sketch of such a clustering is shown below). Interestingly, Fig. 3 highlights a cluster containing a high proportion of significant metabolites across each data set. In other words, we identified a set of structurally similar metabolites whose significant differences are reproducible across samples (plasma and serum) and data sets (ADC1 and ADC2). This provides multiple lines of evidence that these associations are biologically meaningful and not an artifact of the sample preparation or methodology.
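A minimal sketch of this fingerprint-based hierarchical clustering (not the authors' code; the SMILES are illustrative, with stereochemistry mostly omitted). On binary fingerprints, the Soergel distance coincides with the Jaccard distance (1 − Tanimoto), so scipy's 'jaccard' metric can be used directly:

```python
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import MACCSkeys
from scipy.cluster.hierarchy import linkage, fcluster

smiles = {
    "glutamic acid": "OC(=O)CC[C@H](N)C(=O)O",
    "glutamine":     "NC(=O)CC[C@H](N)C(=O)O",
    "aspartic acid": "OC(=O)C[C@H](N)C(=O)O",
    "asparagine":    "NC(=O)C[C@H](N)C(=O)O",
    "xylose":        "OC1COC(O)C(O)C1O",
}

fps = []
for smi in smiles.values():
    fp = MACCSkeys.GenMACCSKeys(Chem.MolFromSmiles(smi))
    arr = np.zeros((fp.GetNumBits(),), dtype=np.int32)
    DataStructs.ConvertToNumpyArray(fp, arr)
    fps.append(arr.astype(bool))

# Average linkage on Jaccard (= Soergel) distances between fingerprints
Z = linkage(np.array(fps), method="average", metric="jaccard")
labels = fcluster(Z, t=0.6, criterion="distance")  # arbitrary cut height
print(dict(zip(smiles, labels)))  # the amino acids should cluster apart from the sugar
```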
Fig. 3: Integrated circular dendrogram generated using the MACCS fingerprint with average linkage and Soergel distance. A cell next to a metabolite name is colored green if the metabolite has a significant difference in mean relative abundance between cancer and control patients in one of the data sets (ADC1/ADC2, serum/plasma) after correction for multiple testing. Metabolite names are colored green if they were significant in at least one data set. Fisher exact test for greater probability of significance for metabolites within the highlighted cluster (orange) than those without: *(FDR < .05), **(FDR < .01), ***(FDR < .001). The metabolites highlighted in blue were selected by our multi-metabolite procedure to form the best classifier without using information about the test set.

For all data sets except ADC1 plasma, there were more significant metabolites in the chemically similar cluster than would be expected by chance (FDR < .05). This cluster contained glutamic acid, aspartic acid, glutamine, asparagine, N-acetylglutamate, and cystine (Fig. 4). All of these metabolite profiles except cystine were found significant in the plasma ADC2 set (Fig. 1). Importantly, aspartic acid, glutamic acid, and cystine were significant in both the ADC1 and ADC2 serum sets. These metabolites were clustered together because of their high structural similarity: as amino acids or close derivatives, they share several key functional groups (e.g., carboxylic acid, amine, amide), and every metabolite except cystine has a primary, aliphatic carbon chain of 4 or 5 carbons.

Figure 5 shows the same metabolite dendrogram as Fig. 3, except that metabolites were labeled based on their membership in the biological pathways found to have significant differences between cancer and control patients according to our pathway enrichment analysis (FDR < .05). Again, we analyzed the same cluster containing a high proportion of significant metabolites across samples and data sets. For each pathway, we performed an enrichment analysis as was done for Fig. 3 (a sketch of this kind of test follows below).

Fig. 4: Metabolite structures from the cluster containing a large proportion of significant metabolites. Mean metabolite abundance fold change for cancer versus healthy patients in the serum and plasma ADC1 and ADC2 data sets.

Fig. 5: Integrated circular dendrogram generated using the MACCS fingerprint with average linkage and Soergel distance. Cells next to metabolite names are colored dark blue if they belong to a pathway significantly enriched (hypergeometric test; FDR < .05) for metabolites found to be significant in the differential analysis for ADC1 serum (top band) or ADC1 plasma (bottom band). Metabolite names are colored green if they were significant in at least one data set. Fisher exact test for greater probability of pathway membership for metabolites within the highlighted cluster (orange) than those without: *(FDR < .05), **(FDR < .01), ***(FDR < .001).

In serum and plasma, several significantly enriched metabolic pathways were found to have more metabolites in the cluster of interest than would be expected by chance. Two pathways in serum (alanine, aspartate, and glutamate metabolism; cyanoamino acid metabolism) and three pathways in plasma (histidine metabolism, purine metabolism, and nitrogen metabolism) showed enrichment for significant metabolites within that particular cluster of interest (FDR < .05). Interestingly, N-acetylglutamate was not a member of any significantly enriched pathway. In plasma, four metabolites (glutamic acid, aspartic acid, glutamine, and asparagine) belong to the significantly enriched nitrogen metabolism pathway (FDR = 0.0097).
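The Fisher exact tests referred to in the figure legends above take a 2x2 contingency table (in cluster vs. not, significant vs. not). A sketch with illustrative counts, not the paper's actual data:

```python
from scipy.stats import fisher_exact

#                significant   not significant
table = [[5,   1],    # metabolites inside the chemical cluster
         [10, 114]]   # metabolites outside the cluster
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio {odds_ratio:.1f}, one-sided p {p_value:.2g}")
```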
In this particular pathway, both asparagine and glutamine are metabolized to aspartic and glutamic acid, respectively, which is reflected in the concentrations of these metabolites for cancer and healthy patients (Fig. 4). Glutamine and asparagine showed a large decrease in mean concentration for patients with cancer, while aspartic acid and glutamic acid showed a large increase. In serum, the alanine, aspartate and glutamate metabolism pathway also contained these four metabolites (FDR = 0.022). Our "cluster enrichment analysis" found that several significantly affected pathways contained more metabolites in the chemically similar cluster than would be expected by chance. This suggested that compounds in pathways significantly affected by adenocarcinoma can be clustered by chemical similarity. In fact, multiple pairs of metabolites in our cluster of interest are interconverted by one chemical reaction (e.g., asparagine to aspartic acid).

The adenocarcinoma-associated decrease in glutamine and increase in glutamic acid is particularly interesting, as altered glutamine metabolism has been shown to play a critical role in the malignancy of lung cancer cells [37, 48]. This significant increase in glutamate levels has been observed in recent studies [49, 50], in which an increased reliance on glutamine instead of glucose for oxidative phosphorylation and NADPH generation has been suggested. The adenocarcinoma-associated reduction in cystine has also been reproduced [49]. The SLC7A11 amino acid antiporter, which may also be up-regulated in adenocarcinoma [49], imports cystine and exports glutamate. A reduction of circulating cystine could therefore indicate an increase in the intracellular concentrations of cystine and of its reduction product, cysteine, which could regulate glutathione biosynthesis [49, 51].

It was particularly interesting that 4 out of 6 metabolites (glutamic acid, aspartic acid, cystine and glutamine) in the cluster of interest were found to be significant in both blood matrices. It has been previously noted that, although some metabolite concentrations may be higher on average in serum than plasma (resulting in higher sensitivity to detect significant low abundance metabolites), many of the metabolites with significant differences in both matrices were highly correlated [52, 53]. However, differences in the blood matrix and sample preparation are likely to result in some discrepancies between metabolite profiles [52, 53]. Fahrmann et al. [37] recommended using the serum blood matrix for diagnosis of adenocarcinoma due to its higher sensitivity. Therefore, when we constructed the metabolite classifiers, we focused on optimizing the prediction performance of serum classifiers.

Additional cheminformatics clustering of significant plasma and serum metabolites according to their chemical structure

Our analysis in the previous section suggested that, when there is a cluster of chemically similar metabolites significantly associated with adenocarcinoma, the relationship is potentially highly reproducible, and a mechanistic interpretation can be assigned to that cluster. Below, we tested whether this principle could be incorporated into a predictive modeling workflow. In order to bin metabolites for the construction of multi-metabolite classifiers, the set of metabolites that were significantly associated with cancer status in either serum or plasma ADC1 were further clustered according to chemical structure similarity; a sketch of this clustering step follows.
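The core of this clustering step can be sketched in R. Here fps is a hypothetical 0/1 fingerprint matrix (rows = metabolites, columns = MACCS bits, correlated bits already filtered out); in the study itself the fingerprints were computed upstream with the RDKit node in KNIME. Note that R's "binary" distance is the Jaccard distance on bit vectors, which coincides with the Soergel/Tanimoto distance used in the paper. The candidate range for k (here 2 to 10) is an assumption of this sketch.

library(cluster)  # for silhouette()

d  <- dist(fps, method = "binary")   # Soergel (Tanimoto) distance on MACCS bits
hc <- hclust(d, method = "average")  # average linkage, as in the study

# Choose k by maximizing the average silhouette width (ASW)
ks  <- 2:10
asw <- sapply(ks, function(k) {
  cl <- cutree(hc, k = k)
  mean(silhouette(cl, d)[, "sil_width"])
})
clusters <- cutree(hc, k = ks[which.max(asw)])

# Correlation-based alternative (uses no structural information; compared below)
# prof is a hypothetical matrix of covariate-adjusted profiles (rows = metabolites)
d_cor  <- as.dist(1 - abs(cor(t(prof), method = "spearman")))
hc_cor <- hclust(d_cor, method = "average")

The next paragraphs describe how the optimal number of clusters was actually chosen and reported.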
The method for deciding the optimal number of clusters when using hierarchical clustering is illustrated in Additional file 1: Figure S4, along with the heat maps for the corresponding distance matrix and cluster assignments. For serum samples, only one cluster contained multiple significant metabolites (aspartic acid, glutamic acid and cystine) (Additional file 1: Figure S4). Clustering of plasma metabolites resulted in several clusters with multiple metabolites (Cluster 1: 3-phosphoglycerate, pyrophosphate; Cluster 2: citrulline, cystine, histidine, lysine; Cluster 3: maltose, maltotriose).

We also tested whether the same clusters of metabolites could be detected from the Spearman or partial correlations of patient metabolic profiles (i.e., using no information about chemical structures). Hierarchical clustering was performed using either Spearman or partial correlation distances (1 − |correlation|) and average linkage between clusters (Additional file 1: Figure S5). The cluster assignments differed from those obtained by chemical structure in serum: only glutamic acid and aspartic acid were clustered together, and cystine was in a separate cluster. The cluster assignments also differed substantially for plasma, where only two clusters were identified in both the Spearman and partial correlation networks (Cluster 1: histidine, cystine, tryptophan, lysine, citrulline; Cluster 2: maltotriose, maltose, adenosine-5-phosphate, pyrophosphate, 3-phosphoglycerate).

We also tested whether similar clusters of metabolites could be detected by the MetaMapp or ChemRICH approaches (see Methods section). Additional file 1: Figure S6 shows the MetaMapp networks inferred for metabolites significant in serum and plasma. ChemRICH did not find any significantly enriched clusters when adjusted p values were provided for serum or plasma metabolites. Instead, we provided unadjusted p values. Additional file 1: Figure S7 shows the significantly enriched clusters in serum and plasma. The cluster assignments found by both methods differed from our chemical structure clustering approach. See the Additional file 1: Results and Discussion for further discussion.

Serum classifiers trained on ADC1 and validated with ADC2

Serum logistic regression classification models were constructed utilizing only metabolites that were found to be significant in the serum ADC1 training set. Table 1 shows the LOOCV prediction performances afforded by the single- and multi-metabolite classifiers. Cystine and oxalic acid were the best performing single-metabolite models, with both affording 70% prediction accuracy in separating cancer patients from healthy controls. When significant serum metabolites were clustered to form multi-metabolite models, only one cluster contained multiple metabolites (aspartic acid, glutamic acid and cystine) (Additional file 1: Figure S4). Importantly, the multi-metabolite approach based on these three selected metabolites performed considerably better than any single metabolite approach, with an overall prediction accuracy of 76.3%. This model afforded good sensitivity (77.6%), while maintaining high specificity (74.2%).

Table 1 Performance measures for selected serum models predicting cancer status

Validation with an external test set is essential to determining the efficacy of a predictive model [54].
In order to assess the utility of these classifiers for an independently conducted case study, the models were trained using all ADC1 serum samples and their external prediction performances were assessed on the ADC2 set. This enabled us to assess how well our models generalized to an independent study in the presence of considerable batch effects (Figs. 1, 2). The optimal serum single metabolite models that would have been selected according to LOOCV accuracy on ADC1 were cystine and oxalic acid (70%). However, these single metabolite classifiers afforded very poor prediction accuracy on the ADC2 test set (55.8% and 57%, respectively). Importantly, if the best performing model was selected according to LOOCV accuracy on the ADC1 training set, the SVM multi-metabolite model using a cluster of metabolites (aspartic acid, glutamic acid and cystine) would have been selected (76.3%). This model consisted of all of the metabolites in the previously highlighted cluster of interest that were significant in serum ADC1. The classifier obtained better prediction accuracy on the ADC2 test set than any other serum classifier (84.9%). This model had very high specificity (97.7%), while maintaining high sensitivity (72.1%). This is desirable, as these biomarkers were intended to complement other diagnostic tools that have a high false positive rate, like low dose computed tomography [37]. This result is significant not only because this model demonstrated the best prediction performances for both ADC1 and ADC2, but also because the internal LOOCV procedure employed for the ADC1 set selected the classifier with the best prediction performance for ADC2, indicating that this multi-metabolite ensemble method has good prediction performance in independent case studies. The results obtained for all models (and not only the best performers) are given in Additional file 1: Tables S3 and S4.

The same procedures used to train the multi-metabolite classifiers on clustered metabolites were used to train multi-metabolite classifiers on all metabolite profiles and on significant metabolite profiles (Additional file 1: Table S3). On ADC1, the best performing approaches obtained higher LOOCV accuracies (81.3% and 83.8%, respectively), but these did not generalize well to ADC2 (50.0% and 59.3%). This was likely due to the fact that these models included noise variables: metabolites like xylose that showed significant separation between cancer and control in ADC1, but not in ADC2. The serum metabolites within our best multi-metabolite classifier were linked by multiple pathways (e.g., alanine, aspartate and glutamate metabolism; cysteine and methionine metabolism). The serum metabolites in our model provide several lines of evidence for sharing a common biological mechanism in NSCLC adenocarcinoma. Furthermore, these metabolites clustered together, while all others clustered into singletons, demonstrating that this method can group biologically meaningful metabolites, distinguishing them from the noise and leading to improvements in prediction performance. Taken together, these results support our conclusion that our cheminformatics-based approach to the construction of multi-metabolite classifiers provided us with a more predictive model than more traditional single metabolite approaches, while maintaining the interpretability of single metabolite models.
In their original study, Fahrmann et al. [37] built single metabolite classifiers on metabolites significantly associated with adenocarcinoma and then built multi-metabolite classifiers by iteratively including them into ensembles. The single metabolite classifier with the highest accuracy was added first, then the classifier with the next highest accuracy, and so on. The multi-metabolite ensembles classified samples by majority vote, and the prediction performance was evaluated at each iteration. By performing chemical structure clustering on the significant metabolites, our multi-metabolite classifier approach was able to better separate signal variables from the noise. This resulted in improved prediction performance compared to the multi-metabolite classifier that would have been selected based on LOOCV accuracy in the original study (ADC1 72.5%, ADC2 64.0%). Further advantages of our method are that the metabolites in our best multi-metabolite classifiers have known chemical structures and are biochemically linked because they are chemically similar.

Plasma classifiers trained on ADC1 and validated with ADC2

Similarly, plasma classification models were constructed utilizing the plasma metabolites found significant in the ADC1 training set. Table 2 shows the LOOCV prediction performances obtained on the ADC1 set for single- and multi-metabolite classifiers. Maltose was the best performing single metabolite model (74.4% accuracy). However, this model demonstrated poor predictive performance on the ADC2 data set (57% accuracy). Chemical structure clustering of the plasma metabolites resulted in three clusters with more than one metabolite (Additional file 1: Figure S4). The optimal multi-metabolite classifier was the SVM model trained on the 3-phosphoglycerate and pyrophosphate cluster (80.5% LOOCV accuracy), and this model also led to relatively high prediction accuracy on the ADC2 external set (70.9% accuracy). The results obtained for all models (and not only the best performers) are given in Additional file 1: Tables S4 and S5.

Table 2 Performance measures for selected plasma models predicting cancer status

This modeling method also improved on the multi-metabolite classifiers built on all metabolites (ADC1 79.3%, ADC2 69.8%) and on significant metabolites (ADC1 78.0%, ADC2 69.8%) (Additional file 1: Table S5). Again, this supports our conclusion that our multi-metabolite approach results in reproducible and robust prediction performances. The prediction accuracies were comparable to those obtained by the five metabolite ensemble reported in the Fahrmann et al. [37] study (79.5% ADC1, 73.3% ADC2), but again the ensemble metabolites in our model are biochemically linked. Pyrophosphate and 3-phosphoglycerate were not grouped by any annotated pathway, but are biochemically linked: phosphoglycerate kinase catalyzes the transfer of phosphate from 1,3-bisphosphoglycerate to ADP, forming 3-phosphoglycerate and ATP, which can then be hydrolyzed to ADP and pyrophosphate. Recent studies have reproduced the adenocarcinoma-associated reduction of 3-phosphoglycerate [49, 50].
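For comparison purposes, our reading of the iterative majority-vote scheme of Fahrmann et al. described above can be sketched in R as follows. Here acc, votes and y are hypothetical objects (single-model LOOCV accuracies, a 0/1 matrix of the per-sample predictions of those models, and the true labels), and ties are broken toward the control class, an assumption of this sketch.

ord <- order(acc, decreasing = TRUE)  # add the most accurate single model first
ens_acc <- sapply(seq_along(ord), function(m) {
  maj <- rowMeans(votes[, ord[1:m], drop = FALSE]) > 0.5  # majority vote
  mean(maj == (y == "cancer"))        # ensemble accuracy at this size
})
best_size <- which.max(ens_acc)       # iteration with the best performance

We now summarize what the classifier results imply overall.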
Altogether, the metabolite classifier results suggested that, when the difference between the cancer and control patient metabolite profiles was substantial enough, the signal in the data could be detected by single metabolite classifiers in both the ADC1 and ADC2 sets. This is true despite the noise introduced by the unwanted sources of variation observed in Figs. 1 and 2. However, building multi-metabolite classifiers based on clusters of metabolites with high similarity in chemical structure was determined to be an effective strategy for further removing noise in the data and improving prediction performances.

Methods

Unless otherwise noted, all data analysis was performed in R (v3.3.2, [55]). To ensure that models were being built with the same metabolite profiles in ADC1/ADC2 and plasma/serum, only the 130 metabolites that (1) had fully determined chemical structures and (2) were detected in at least one sample in each data set were retained for the analysis. Missing metabolite intensities were imputed using half of the minimum intensity observed for that metabolite. Intensities were total quantity normalized and log base 2 transformed to correct for non-homogeneity of variance. Importantly, the curated datasets as well as all scripts used for this study are provided in the Additional file 2 to ensure the reproducibility of this study.

The differences between the ADC1 and ADC2 samples were analyzed by means of principal components analysis (PCA) and metabolite profile heat maps using MetaboAnalyst [28]. Metabolite profiles were auto-scaled prior to these analyses. We quantified the metabolite profile variation between patients in the same health state group using the within group sum of squares (WSS) and the metabolite profile variation between health state groups using the between group sum of squares (BSS). The proportion of total metabolite variation explained by cancer state can then be quantified by the ratio of the between group sum of squares to the total sum of squares (BSS/(BSS + WSS) = BSS/TSS).

Pathway analysis

Pathway overrepresentation analysis was conducted using MetaboAnalyst [28, 56] v3.0. The KEGG [57] Homo sapiens pathway library was used for the pathway enrichment analysis. A hypergeometric test (FDR < .05) was used to determine the pathways that contained more metabolites significantly associated with cancer status than would be expected by chance.

Differential analysis

A differential analysis on the ADC1 training set determined the existence of significant differences between case and control mean metabolite intensities according to the procedure reported in Fahrmann et al. [37]. Metabolite profiles were "covariate adjusted", meaning they were regressed on the gender and smoking history of each subject and the residuals were used for differential analysis. Univariate Monte Carlo permutation t-tests [58] were performed for each metabolite separately (100,000 permutations) using the package Deducer [59]. A Benjamini–Hochberg correction for multiple testing [60] was employed with an FDR significance threshold of 0.075. This FDR threshold recovered as significant all of the structurally annotated metabolites reported by Fahrmann et al. [37] in both serum and plasma. To determine the reproducibility of the metabolites found to be significantly associated with adenocarcinoma in ADC1, differential analysis was performed in the same way on the ADC2 test set. For the ADC1 training set, significance was used as the criterion for the selection of metabolites for cancer status classifiers. This criterion was utilized because we expected significant metabolites to provide good classification of cancer status that is reproducible in independent studies. Volcano plots demonstrating the association analysis results were created by Metabolomics Workbench [38].
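The preprocessing and group-variation computations described above can be sketched in R. This is an illustrative outline under the assumption that raw holds an intensity matrix (rows = samples, columns = the 130 retained metabolites) with NA marking non-detects; variable names here are hypothetical.

# 1. Impute missing intensities with half the per-metabolite minimum
half_min <- apply(raw, 2, function(v) min(v, na.rm = TRUE) / 2)
imputed  <- raw
for (j in seq_len(ncol(imputed)))
  imputed[is.na(imputed[, j]), j] <- half_min[j]

# 2. Total quantity normalization, then log2 transform
normed <- sweep(imputed, 1, rowSums(imputed), "/")
logged <- log2(normed)

# 3. Share of a metabolite's variation explained by cancer state: BSS/TSS
bss_tss <- function(x, groups) {
  tss <- sum((x - mean(x))^2)
  wss <- sum(tapply(x, groups, function(v) sum((v - mean(v))^2)))
  (tss - wss) / tss   # = BSS/(BSS + WSS), since TSS = BSS + WSS
}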
Cheminformatics clustering based on metabolites' chemical structures

The names of structurally annotated metabolites were provided by the Metabolomics Workbench. We automatically retrieved the chemical structures for all 130 metabolites using the PubChem API [61]. Importantly, all structures were standardized according to our previously published chemical curation protocols [62,63,64]. Then, metabolite chemical structures were characterized using MACCS fingerprints [43] computed using the RDKit node [65] in KNIME [66]. A Pearson correlation coefficient cutoff of 0.9 was used to filter out the highly correlated bits in the fingerprints. Hierarchical clustering of metabolites based on their chemical structure encoded as MACCS fingerprints was performed according to Soergel distances (also known as Tanimoto distances) and average linkage. The ggtree package [67] was used to create circular dendrograms, and clustering of the significant metabolites for the construction of multi-metabolite models proceeded in the same way.

As part of the hierarchical clustering procedure, the number of clusters (k) was selected in order to achieve a reasonable partitioning of the metabolites. We selected the k value that resulted in the highest average silhouette width (ASW) [68] for the cluster assignments. By maximizing the ASW, we aimed to find the most "natural" number of clusters in the data, in which cluster members are most similar to each other, and distant from members belonging to other clusters. More formally, let a(i) be the average distance between metabolite i and all other members of its cluster, and b(i) be the smallest average distance between metabolite i and the members of any other cluster. Then the silhouette for metabolite i, s(i), is:

$$s(i) = \frac{b(i) - a(i)}{\max\{a(i),\, b(i)\}}$$

Overall, the ASW is the average of the silhouette values over all metabolites i.

Alternate metabolite clustering methods

Alternate methods of grouping metabolites for multi-metabolite classifiers were also considered. To determine whether the same cluster assignments could be obtained from the correlation of the covariate-adjusted metabolite profiles alone, hierarchical clustering was performed using a correlation-based distance (1 − |correlation|). Spearman correlations were calculated to avoid the parametric assumptions of Pearson correlations. Partial correlations were computed with the package parcor [69]. Since there were more metabolites than samples, we used partial least squares (PLS) regression to regularize the estimation of the regression coefficients before computing the partial correlations [69]. We used 10-fold cross-validation to select the number of PLS components (with a maximum of 30).

We also considered two alternate methodologies that utilize chemical structure to cluster metabolites: MetaMapp [22] and ChemRICH [24]. MetaMapp was used to infer a network between plasma and serum metabolites. One set of edges was drawn between metabolites that were similar in chemical structure, based on a 0.7 threshold of Tanimoto similarity of their PubChem fingerprints. Another set of edges was drawn between metabolites that can be interconverted by a single reaction. Cytoscape's "organic layout" was used to find a natural clustering in the network based on node degree and clustering coefficient. ChemRICH groups metabolites together based on their chemical ontologies in the Medical Subject Headings (MeSH) database.
If a metabolite is not annotated, it is labeled with the ontology of compounds with highly similar PubChem fingerprints. If metabolites can still not be annotated, ChemRICH has a mechanism for detecting novel groups of metabolites based on chemical structure similarity. ChemRICH then tests for significant enrichment of clusters using a Kolmogorov–Smirnov test.

Training and validation of classifiers

Single-metabolite classification models and multi-metabolite models were trained on ADC1 plasma and serum samples separately. The analysis was stratified this way in order to determine which blood matrix would provide a better diagnostic tool for NSCLC adenocarcinoma. These models took as input the processed metabolite concentration profiles and classified each patient as adenocarcinoma or healthy. For single metabolite classifiers, logistic regression was used to estimate the predicted probability of cancer status given a covariate-adjusted metabolite profile. A receiver operating characteristic (ROC) curve was then used to select the probability threshold affording the highest accuracy. Multi-metabolite classifiers were trained using four machine learning modeling methods: support vector machines [70, 71] (SVM), partial least squares linear discriminant analysis [72, 73] (PLS), random forests [74, 75] (RF), and extreme gradient boosted trees [76,77,78] (xgbTree). Models were trained using the R package caret [79]. To compare our clustering approach to more standard approaches, models were trained on all metabolites, on significant metabolites, and on each cluster of chemically similar significant metabolites. Models were internally validated using leave-one-out cross-validation (LOOCV). A receiver operating characteristic (ROC) curve was then used to select the probability threshold affording the highest LOOCV accuracy. A grid search was performed on the machine learning model tuning parameters, and the set of parameters resulting in the highest LOOCV accuracy was selected for each model. While LOOCV was useful in the selection of the best performing metabolite classifiers, it was also necessary to externally validate their predictive power on an independent case study (ADC2). The models and probability thresholds selected according to LOOCV were used for classification on the external ADC2 set. Finally, the best performing classifiers were determined according to both their internal and external classification accuracy.
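A minimal caret sketch of the training loop described above follows. The object names (x_tr, y_tr, x_te, y_te) are hypothetical placeholders for one metabolite cluster's ADC1 training profiles and the ADC2 external test set, and the ROC-based probability-threshold selection step is omitted for brevity.

library(caret)

ctrl <- trainControl(method = "LOOCV")   # internal leave-one-out validation
fit  <- train(x = x_tr, y = y_tr,
              method     = "svmRadial",  # also "pls", "rf", "xgbTree" as in the paper
              tuneLength = 5,            # grid search over tuning parameters
              trControl  = ctrl)

pred <- predict(fit, newdata = x_te)     # external validation on ADC2
confusionMatrix(pred, y_te)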
Conclusions

In this study, we demonstrated that clustering metabolites based on their structural similarity enabled us to identify modules of metabolites whose significant association with NSCLC adenocarcinoma status was detected in the presence of unwanted sources of variation. We also showed that these metabolites are linked by metabolic pathways potentially dysregulated by NSCLC adenocarcinoma. This chemocentric analysis of metabolite profiles could facilitate the discovery of novel biomarkers and inferences regarding the health state or medical treatment outcomes for patients. Ultimately, the best performing classifier of patient cancer status using this transparent strategy was a serum multi-metabolite classifier (84.9% overall accuracy), demonstrating an improvement over alternative approaches. Metabolomics has proved to have widespread applicability to the search for clinical biomarkers, and predictive performance is of critical importance for any diagnostic biomarker. This approach could aid in identifying improved biomarkers in a number of metabolomics applications. Importantly, because the grouped metabolites identified by these means were biochemically linked, the biological interpretability of the resulting model was maintained. As modern metabolomics platforms are increasingly able to detect and correctly assign chemical structures to a large number of metabolites, this standardized and automated procedure could have broad applicability for investigators using metabolite profiles to establish reliable trait-metabolite relationships and predict phenotypic data.

One limitation of our approach is that metabolites that were detected by the untargeted metabolomics analysis, but did not have reliable structural annotations, could not be included in the modeling workflow. In the future, we plan to explore new ways to include those metabolites, especially by assigning them to the cluster(s) with the most similar mass spectra. This approach can also be used to identify structural features of metabolites that determine their mechanistic relationships with the trait of interest. In the future, we plan to build models that identify additional relationships between the trait and metabolite chemical descriptors. These models could be used to predict whether metabolites that were not detected by an untargeted study could still have a trait-metabolite association. This could provide leads for targeted validation to confirm these changes in metabolite concentrations and expand the analysis to other metabolites in related pathways. This approach could address existing challenges in untargeted metabolomics, such as inferences about affected pathways that are often limited by the identification of only a few statistically significant metabolites.

Availability of data and materials

The datasets investigated in this study are freely available via the Metabolomics Workbench public repository (www.metabolomicsworkbench.org) under study numbers ST000385 and ST000386. All the scripts and additional data necessary to recreate our analyses are available in the Additional file 2 and at https://github.com/jrash/metabochem.

Abbreviations

ADC: adenocarcinoma; ADP: adenosine diphosphate; ASW: average silhouette width; ATP: adenosine triphosphate; BSS: between group sum of squares; FDR: false discovery rate; GCTOF: gas chromatography time-of-flight; LOOCV: leave-one-out cross-validation; MACCS: molecular access system structural keys; MeSH: medical subject headings; NSCLC: non-small-cell lung cancer; PLS: partial least squares; PK/PD: pharmacokinetic/pharmacodynamic; RF: random forests; ROC: receiver operating characteristic; SVM: support vector machines; WSS: within group sum of squares

References

Clish CB (2015) Metabolomics: an emerging but powerful tool for precision medicine. Mol Case Stud 1:a000588. https://doi.org/10.1101/mcs.a000588 Ramsden JJ (2009) Metabolomics and metabonomics. In: Ramsden JJ (ed) Bioinformatics: an introduction. Springer, London, pp 221–226 Eckhart AD, Beebe K, Milburn M (2012) Metabolomics as a key integrator for "omic" advancement of personalized medicine and future therapies. Clin Transl Sci 5:285–288. https://doi.org/10.1111/j.1752-8062.2011.00388.x Kell DB, Oliver SG (2016) The metabolome 18 years on: a concept comes of age. Metabolomics 12:148. https://doi.org/10.1007/s11306-016-1108-4 Simińska E, Koba M (2016) Amino acid profiling as a method of discovering biomarkers for early diagnosis of cancer. Amino Acids 48:1339–1345. https://doi.org/10.1007/s00726-016-2215-2 Halama A, Guerrouahen BS, Pasquier J et al (2015) Metabolic signatures differentiate ovarian from colon cancer cell lines. J Transl Med 13:223.
https://doi.org/10.1186/s12967-015-0576-z Bhargava P, Calabresi PA (2016) Metabolomics in multiple sclerosis. Mult Scler J 22:451–460. https://doi.org/10.1177/1352458515622827 Kang J, Lu J, Zhang X (2015) Metabolomics-based promising candidate biomarkers and pathways in Alzheimer's disease. Pharmazie 70:277–282. https://doi.org/10.1691/ph.2015.4859 Xu X-H, Huang Y, Wang G, Chen S-D (2012) Metabolomics: a novel approach to identify potential diagnostic biomarkers and pathogenesis in Alzheimer's disease. Neurosci Bull 28:641–648. https://doi.org/10.1007/s12264-012-1272-0 Toledo JB, Arnold M, Kastenmüller G et al (2017) Metabolic network failures in Alzheimer's disease: a biochemical road map. Alzheimer's Dement 13:965–984. https://doi.org/10.1016/j.jalz.2017.01.020 Lan K, Jia W (2010) An integrated metabolomics and pharmacokinetics strategy for multi-component drugs evaluation. Curr Drug Metab 11:105–114. https://doi.org/10.2174/138920010791110926 Kaddurah-Daouk R, Kristal BS, Weinshilboum RM (2008) Metabolomics: a global biochemical approach to drug response and disease. Annu Rev Pharmacol Toxicol 48:653–683. https://doi.org/10.1146/annurev.pharmtox.48.113006.094715 Rotroff D, Shahin M, Gurley S et al (2015) Pharmacometabolomic assessments of atenolol and hydrochlorothiazide treatment reveal novel drug response phenotypes. CPT Pharmacomet Syst Pharmacol 4:669–679. https://doi.org/10.1002/psp4.12017 Rotroff DM, Corum DG, Motsinger-Reif A et al (2016) Metabolomic signatures of drug response phenotypes for ketamine and esketamine in subjects with refractory major depressive disorder: new mechanistic insights for rapid acting antidepressants. Transl Psychiatry 6:e894. https://doi.org/10.1038/tp.2016.145 Dührkop K, Shen H, Meusel M et al (2015) Searching molecular structure databases with tandem mass spectra using CSI: FingerID. Proc Natl Acad Sci 112:12580–12585 Wang M, Carver JJ, Phelan VV et al (2016) Sharing and community curation of mass spectrometry data with Global Natural Products Social Molecular Networking. Nat Biotechnol 34:828 van Der Hooft JJJ, Wandy J, Barrett MP et al (2016) Topic modeling for untargeted substructure exploration in metabolomics. Proc Natl Acad Sci 113:13738–13743 Haug K, Salek RM, Steinbeck C (2017) Global open data management in metabolomics. Curr Opin Chem Biol 36:58–63. https://doi.org/10.1016/j.cbpa.2016.12.024 Cherkasov A, Muratov EN, Fourches D et al (2014) QSAR modeling: where have you been? Where are you going to? J Med Chem 57:4977–5010. https://doi.org/10.1021/jm4004285 O'Hagan S, Swainston N, Handl J, Kell DB (2015) A 'rule of 0.5' for the metabolite-likeness of approved pharmaceutical drugs. Metabolomics 11:323–339. https://doi.org/10.1007/s11306-014-0733-z Pertusi DA, Stine AE, Broadbelt LJ, Tyo KEJ (2015) Efficient searching and annotation of metabolic networks using chemical similarity. Bioinformatics 31:1016–1024. https://doi.org/10.1093/bioinformatics/btu760 Barupal DK, Haldiya PK, Wohlgemuth G et al (2012) MetaMapp: mapping and visualizing metabolomic data by integrating information from biochemical pathways and chemical and mass spectral similarity. BMC Bioinform 13:99. https://doi.org/10.1186/1471-2105-13-99 Grapov D, Wanichthanarak K, Fiehn O (2015) MetaMapR: pathway independent metabolomic network analysis incorporating unknowns. Bioinformatics 31:2757–2760. 
https://doi.org/10.1093/bioinformatics/btv194 Barupal DK, Fiehn O (2017) Chemical Similarity Enrichment Analysis (ChemRICH) as alternative to biochemical pathway mapping for metabolomic datasets. Sci Rep 7:14567. https://doi.org/10.1038/s41598-017-15231-w Faust K, Croes D, van Helden J (2009) Metabolic pathfinding using RPAIR annotation. J Mol Biol 388:390–414. https://doi.org/10.1016/j.jmb.2009.03.006 Moriya Y, Shigemizu D, Hattori M et al (2010) PathPred: an enzyme-catalyzed metabolic pathway prediction server. Nucleic Acids Res 38:W138–W143. https://doi.org/10.1093/nar/gkq318 Xia J, Wishart DS (2010) MetPA: a web-based metabolomics tool for pathway analysis and visualization. Bioinformatics 26:2342–2344. https://doi.org/10.1093/bioinformatics/btq418 Xia J, Sinelnikov IV, Han B, Wishart DS (2015) MetaboAnalyst 3.0—making metabolomics more meaningful. Nucleic Acids Res 43:W251–W257 Forsberg EM, Huan T, Rinehart D et al (2018) Data processing, multi-omic pathway mapping, and metabolite activity analysis using XCMS Online. Nat Protoc 13:633 Camacho D, de la Fuente A, Mendes P (2005) The origin of correlations in metabolomics data. Metabolomics 1:53–63. https://doi.org/10.1007/s11306-005-1107-3 Steuer R, Kurths J, Fiehn O, Weckwerth W (2003) Observing and interpreting correlations in metabolomic networks. Bioinformatics 19:1019–1026. https://doi.org/10.1093/bioinformatics/btg120 Korman A, Oh A, Raskind A, Banks D (2012) Statistical methods in metabolomics. In: Anisimova M (ed) Evolutionary genomics, vol 856. Humana Press, pp 381–413 Ren S, Hinzman AA, Kang EL et al (2015) Computational and statistical analysis of metabolomics data. Metabolomics 11:1492–1513. https://doi.org/10.1007/s11306-015-0823-6 Fahrmann J, Grapov D, Yang J et al (2015) Systemic alterations in the metabolome of diabetic NOD mice delineate increased oxidative stress accompanied by reduced inflammation and hypertriglyceremia. Am J Physiol Metab 308:E978–E989. https://doi.org/10.1152/ajpendo.00019.2015 Johnson CH, Ivanisevic J, Siuzdak G (2016) Metabolomics: beyond biomarkers and towards mechanisms. Nat Rev Mol Cell Biol 17:451–459. https://doi.org/10.1038/nrm.2016.25 Fourches D (2014) Cheminformatics: at the crossroad of eras. In: Gorb L, Kuz'min VE, Muratov EN (eds) Application of computational techniques in pharmacy and medicine. Springer, Netherlands, pp 539–546 Fahrmann JF, Kim K, DeFelice BC et al (2015) Investigation of metabolomic blood biomarkers for detection of adenocarcinoma lung cancer. Cancer Epidemiol Biomark Prev 24:1716–1723. https://doi.org/10.1158/1055-9965.EPI-15-0427 Sud M, Fahy E, Cotter D et al (2016) Metabolomics Workbench: an international repository for metabolomics data and metadata, metabolite standards, protocols, tutorials and training, and analysis tools. Nucleic Acids Res 44:D463–D470. https://doi.org/10.1093/nar/gkv1042 Fiehn O, Wohlgemuth G, Scholz M (2005) Setup and annotation of metabolomic experiments by integrating biological and mass spectrometric metadata. In: Ludäscher B, Raschid L (eds) International workshop on data integration in the life sciences. Springer, Berlin, Heidelberg, pp 224–239 Wu Y, Li L (2016) Sample normalization methods in quantitative metabolomics. J Chromatogr A 1430:80–95. https://doi.org/10.1016/j.chroma.2015.12.007 Veselkov KA, Vingara LK, Masson P et al (2011) Optimized preprocessing of ultra-performance liquid chromatography/mass spectrometry urinary metabolic profiles for improved information recovery. Anal Chem 83:5864–5872. 
https://doi.org/10.1021/ac201065j Sysi-Aho M, Katajamaa M, Yetukuri L, Orešič M (2007) Normalization method for metabolomics data using optimal selection of multiple internal standards. BMC Bioinform 8:93. https://doi.org/10.1186/1471-2105-8-93 Durant JL, Leland BA, Henry DR, Nourse JG (2002) Reoptimization of MDL keys for use in drug discovery. J Chem Inf Comput Sci 42:1273–1280. https://doi.org/10.1021/ci010132r Kotera M, Tabei Y, Yamanishi Y et al (2013) Supervised de novo reconstruction of metabolic pathways from metabolome-scale compound sets. Bioinformatics 29:i135–i144 Yamanishi Y, Tabei Y, Kotera M (2015) Metabolome-scale de novo pathway reconstruction using regioisomer-sensitive graph alignments. Bioinformatics 31:i161–i170 Willighagen EL, Mayfield JW, Alvarsson J et al (2017) The Chemistry Development Kit (CDK) v2.0: atom typing, depiction, molecular formulas, and substructure searching. J Cheminform 9:33. https://doi.org/10.1186/s13321-017-0220-4 Rogers D, Hahn M (2010) Extended-connectivity fingerprints. J Chem Inf Model 50:742–754 Mohamed A, Deng X, Khuri FR, Owonikoko TK (2014) Altered glutamine metabolism and therapeutic opportunities for lung cancer. Clin Lung Cancer 15:7–15. https://doi.org/10.1016/j.cllc.2013.09.001 Fahrmann JF, Grapov DD, Wanichthanarak K et al (2017) Integrated metabolomics and proteomics highlight altered nicotinamide- and polyamine pathways in lung adenocarcinoma. Carcinogenesis 38:205. https://doi.org/10.1093/carcin/bgw205 Wikoff WR, Grapov D, Fahrmann JF et al (2015) Metabolomic markers of altered nucleotide metabolism in early stage adenocarcinoma. Cancer Prev Res 8:410–418. https://doi.org/10.1158/1940-6207.CAPR-14-0329 Huang Y, Dai Z, Barbacioru C, Sadée W (2005) Cystine-glutamate transporter SLC7A11 in cancer chemosensitivity and chemoresistance. Cancer Res 65:7446–7454. https://doi.org/10.1158/0008-5472.CAN-04-4267 Yu Z, Kastenmüller G, He Y et al (2011) Differences between human plasma and serum metabolite profiles. PLoS ONE 6:e21230. https://doi.org/10.1371/journal.pone.0021230 Wedge DC, Allwood JW, Dunn W et al (2011) Is serum or plasma more appropriate for intersubject comparisons in metabolomic studies? An assessment in patients with small-cell lung cancer. Anal Chem 83:6689–6697. https://doi.org/10.1021/ac2012224 Tropsha A, Golbraikh A (2007) Predictive QSAR modeling workflow, model applicability domains, and virtual screening. Curr Pharm Des 13:3494–3504 R Core Team (2016) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/ Xia J, Wishart DS (2016) Using MetaboAnalyst 3.0 for comprehensive metabolomics data analysis. Curr Protoc Bioinforma 55:14.10.1–14.10.91. https://doi.org/10.1002/cpbi.11 Kanehisa M, Araki M, Goto S et al (2007) KEGG for linking genomes to life and the environment. Nucleic Acids Res 36:D480–D484. https://doi.org/10.1093/nar/gkm882 Dwass M (1957) Modified randomization tests for nonparametric hypotheses. Ann Math Stat 28:181–187. https://doi.org/10.1214/aoms/1177707045 Fellows I (2012) {Deducer}: a data analysis GUI for {R}. J Stat Softw 49:1–15 Benjamini Y, Hochberg Y (1995) Controlling the false discovery rate: a practical and powerful approach to multiple controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc 57:289–300. https://doi.org/10.2307/2346101 Kim S, Thiessen PA, Bolton EE et al (2016) PubChem substance and compound databases. Nucleic Acids Res 44:D1202–D1213. 
https://doi.org/10.1093/nar/gkv951 Fourches D, Muratov E, Tropsha A (2010) Trust, but verify: on the importance of chemical structure curation in cheminformatics and QSAR modeling research. J Chem Inf Model 50:1189–1204. https://doi.org/10.1021/ci100176x Fourches D, Muratov E, Tropsha A (2015) Curation of chemogenomics data. Nat Chem Biol 11:535. https://doi.org/10.1038/nchembio.1881 Fourches D, Muratov E, Tropsha A (2016) Trust, but verify II: a practical guide to chemogenomics data curation. J Chem Inf Model 56:1243–1252. https://doi.org/10.1021/acs.jcim.6b00129 RDKit: Open-source cheminformatics. http://www.rdkit.org Berthold MR, Cebron N, Dill F et al (2008) KNIME: the Konstanz Information Miner. In: Preisach C, Burkhard H, Schmidt-Thieme Tl, Decker R (eds) Studies in classification, data analysis, and knowledge organization (GfKL 2007). Springer, Berlin, Heidelberg, pp 319–326 Yu G, Smith DK, Zhu H et al (2017) ggtree : an R package for visualization and annotation of phylogenetic trees with their covariates and other associated data. Methods Ecol Evol 8:28–36. https://doi.org/10.1111/2041-210X.12628 Rousseeuw PJ (1987) Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Comput Appl Math 20:53–65. https://doi.org/10.1016/0377-0427(87)90125-7 Krämer N, Schäfer J, Boulesteix A-L (2009) Regularized estimation of large-scale gene association networks using graphical Gaussian models. BMC Bioinform 10:384. https://doi.org/10.1186/1471-2105-10-384 Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20:273–297. https://doi.org/10.1007/BF00994018 Karatzoglou A, Smola A, Hornik K, Zeileis A (2004) kernlab—an S4 package for kernel methods in R. J Stat Softw 11:1–20 Mevik B-H, Wehrens R (2007) The pls package: principal component and partial least squares regression in R. J Stat Softw. https://doi.org/10.18637/jss.v018.i02 Barker M, Rayens W (2003) Partial least squares for discrimination. J Chemom 17:166–173. https://doi.org/10.1002/cem.785 Breiman L (2001) Random forests. Mach Learn 45:5–32. https://doi.org/10.1023/A:1010933404324 Liaw A, Wiener M (2002) Classification and regression by randomForest. R News 2:18–22 Chen T, He T, Benesty M et al (2018) xgboost: Extreme Gradient Boosting. R package version 0.6.4.1. https://CRAN.R-project.org/package=xgboost Friedman JH (2001) Greedy function approximation: a gradient boosting machine. Ann Stat 29:1189–1232 Friedman J, Hastie T, Tibshirani R et al (2000) Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors). Ann Stat 28:337–407 Kuhn M, Wing J, Weston S, et al (2012) Caret: classification and regression training. https://Cran.R-Project.Org/Package=Caret JRA, MAK, and DF thank all the members of the Fourches laboratory for fruitful discussions and feedback regarding this project. The members and staff from the NC State Bioinformatics Center are also gratefully acknowledged. This work was supported by the National Institute of Health training grant T32ES007329. DF thanks the NC State Chancellor's Faculty Excellence Program for funding this project. Department of Chemistry, North Carolina State University, Raleigh, NC, USA Jeremy R. Ash, Melaine A. Kuenemann & Denis Fourches Department of Statistics, North Carolina State University, Raleigh, NC, USA Jeremy R. Ash, Daniel Rotroff & Alison Motsinger-Reif Bioinformatics Research Center, North Carolina State University, Raleigh, NC, USA Jeremy R. Ash, Melaine A. 
Kuenemann, Daniel Rotroff, Alison Motsinger-Reif & Denis Fourches

Author contributions: JRA and MAK conceived, developed and implemented the method, performed the calculations, and wrote the manuscript. DR and AMR conceived the study and wrote the manuscript. DF conceived the method, designed the study, and wrote the manuscript. All authors read and approved the final manuscript. Correspondence to Denis Fourches. The authors declare no competing financial interest.

Additional file 1: Supplementary results and figures. Additional file 2: The scripts and additional data necessary to recreate our analyses.

Ash, J.R., Kuenemann, M.A., Rotroff, D. et al. Cheminformatics approach to exploring and modeling trait-associated metabolite profiles. J Cheminform 11, 43 (2019). https://doi.org/10.1186/s13321-019-0366-3

Keywords: Cheminformatics; Molecular fragmentation
Converting a probability density function into its Probability Distribution

I have the following probability density function $$f(x) =\begin{cases} 4x & \mbox{for }0< x < 1/2 \\ 4-4x & \mbox{for }1/2 \leq x < 1 \\ 0 & \mbox{otherwise}\end{cases}$$ I am tasked to now find its probability distribution function in the same piecewise format. I have the solution to this problem, however I do not really understand it. The solution gives $\int_0^x 4t\,dt$ for the first interval ($0 < x < 1/2$), and $\int_0^{1/2} 4t\,dt + \int_{1/2}^x (4-4t)\,dt$ for the second interval ($1/2 \leq x < 1$). I understand the first interval, but I am stumped as to why we add $\int_0^{1/2} 4t\,dt$ to $\int_{1/2}^x (4-4t)\,dt$ for the second interval. Any explanation would be appreciated.

Tags: probability, probability-distributions, density-function – asked by JmanxC

Comment (Henry, Feb 17 '16): If you want the cumulative probability, there is a probability of $\int_0^{1/2} 4x \, dx =\frac12$ of being $\frac12$ or less, which you have to add to the probability of being from $\frac12$ up to the value you are interested in.

Answer (Em.): I guess it is asking for the CDF. Essentially, it should be $$F_X(x) = P(X\leq x)=\begin{cases} 0& x<0\\ \int_0^x 4t\,dt& 0\leq x< \frac{1}{2}\\ \int_0^{1/2} 4t\,dt+\int_{1/2}^x (4-4t)\,dt&\frac{1}{2}\leq x <1\\ 1& x\geq 1\end{cases}$$ Loosely speaking, this is the case because the CDF is the accumulation of probability (area) under the curve up to $x$.

– JmanxC: So $x$ is always defined as the point up to which we are accumulating, then? I am confused about when to use actual numbers as the end points versus the variable $x$, that is, about what $x$ is when talking about the area under the curves of these functions.

– Em.: This is an odd example to try to understand from. Take instead $X$ to follow a continuous unif(0,1). The CDF of $X$ is $$F_X(x)=P(X\leq x) = \int_0^xf_X(t)\,dt = \int_0^x 1\,dt = x$$ when $0\leq x < 1$. Does this help? Do you see that this is the area under the curve $f_X(x)$ up to $x$?

– JmanxC: So $x$ is really the end point of a specific interval? So if the pdf were piecewise, say $f_1(x)$ on 0 to 5 and $f_2(x)$ on 5 to 10, you would use the integral from 0 to $x$ on the first interval, and then add the integral from 0 to 5 to the integral from 5 to $x$ on the second interval?

– Em.: I believe that is correct. Naively speaking, the CDF $F_X(x) =P(X\leq x)$ is the "area to the left" under the pdf. To see this using a piecewise function, use your current exercise. Graph the pdf, and notice that if $0\leq x <\frac{1}{2}$, then the area to the left of $x$ is the first integral I provided. If $\frac{1}{2}\leq x<1$, then the area to the left of $x$ is the sum of integrals I provided.

– JmanxC: Perfect, thank you. I am starting to get it in one dimension, but I soon have to look at it in two dimensions (joint pdfs), so I may have more questions on this later.
Answer (Graham Kemp): You have $$f(x) =\begin{cases} 4x & \mbox{for }0< x < 1/2 \\ 4-4x & \mbox{for }1/2 \leq x < 1 \\ 0 & \mbox{otherwise}\end{cases}$$ Then you want to find $$F(x) =\begin{cases} 0 & \mbox{for } x\leq 0 \\[1ex] \int_0^x 4s\,ds & \mbox{for }0< x < 1/2 \\[1ex] \int_0^{1/2} 4s\,ds + \int_{1/2}^x (4-4s)\,ds & \mbox{for }1/2 \leq x < 1 \\[1ex] 1 & \mbox{for } 1\leq x\end{cases}$$

– probablyme: You read my mind :p

– JmanxC: Hi Graham, I have left a comment for @probablyme above (the last comment) which you may be able to answer as well, as I know you have helped me with a few questions on this site.
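For completeness, carrying out the integrals in these answers gives the closed form (a quick sanity check: both pieces agree at $x=\frac12$, where $F=\frac12$, and $F(1)=1$):

$$F(x) =\begin{cases} 0 & \mbox{for } x\leq 0 \\ 2x^2 & \mbox{for }0< x < 1/2 \\ 4x-2x^2-1 & \mbox{for }1/2 \leq x < 1 \\ 1 & \mbox{for } x\geq 1\end{cases}$$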
Death and Destitution: The Global Distribution of Welfare Losses from the COVID-19 Pandemic

Francisco H. G. Ferreira, London School of Economics and IZA, GB; Olivier Sterck, University of Oxford, GB; Daniel G. Mahler, World Bank, US; Benoît Decerf

The Covid-19 pandemic has brought about massive declines in wellbeing around the world. This paper seeks to quantify and compare two important components of those losses – increased mortality and higher poverty – using years of human life as a common metric. We estimate that almost 20 million life-years were lost to Covid-19 by December 2020. Over the same period and by the most conservative definition, over 120 million additional years were spent in poverty because of the pandemic. The mortality burden, whether estimated in lives or in years of life lost, increases sharply with GDP per capita. The poverty burden, on the contrary, declines with per capita national incomes when a constant absolute poverty line is used, or is uncorrelated with national incomes when a more relative approach is taken to poverty lines. In both cases the poverty burden of the pandemic, relative to the mortality burden, is much higher for poor countries. The distribution of aggregate welfare losses – combining mortality and poverty and expressed in terms of life-years – depends both on the choice of poverty line(s) and on the relative weights placed on mortality and poverty. With a constant absolute poverty line and a relatively low welfare weight on mortality, poorer countries are found to bear a greater welfare loss from the pandemic. When poverty lines are set differently for poor, middle and high-income countries and/or a greater welfare weight is placed on mortality, upper-middle and rich countries suffer the most.

Keywords: COVID-19, pandemic, welfare, poverty, mortality, global distribution

How to Cite: Ferreira, F.H.G., Sterck, O., Mahler, D.G. and Decerf, B., 2021. Death and Destitution: The Global Distribution of Welfare Losses from the COVID-19 Pandemic. LSE Public Policy Review, 1(4), p.2. DOI: http://doi.org/10.31389/lseppr.34

Submitted on 15 Feb 2021; accepted on 13 Apr 2021.

1. Introduction

Since its onset in December 2019, the Covid-19 pandemic has spread death and disease across the whole world. Around the time of its "first anniversary", on 15 December 2020, 1.64 million people were counted as having lost their lives to the virus globally and, because of the likelihood of under-reporting, that was almost certainly an undercount. Although it is primarily a health crisis – with substantial additional pain and suffering caused to the tens of millions who have survived severe cases of the disease and, in many cases, continue to suffer from its long-term ill-effects – the pandemic has also had major economic effects. The current estimate is that global GDP per capita declined by 5.3% in 2020. Economic contraction was widespread, with 172 out of the 182 countries for which data is available experiencing negative growth in real GDP per capita in 2020 (World Bank [1]).
This severe global economic shock has caused the first reversal in the declining trend in global extreme poverty (measured as the share of the world's population living under $1.90 per day) since the Asian Financial Crisis of 1997 – and only the second real increase in world poverty since measurement began in the early 1980s. This increase in extreme deprivation comes with its own suffering and anguish: jobs and homes were lost and people struggled to feed their children and themselves. Many asked whether they "would die of Coronavirus or hunger?"

This paper seeks to address two questions. First, what were the relative contributions of increased mortality and poverty to the welfare losses caused by the pandemic, and did these contributions vary systematically across countries? Second, how were the aggregate welfare losses distributed across countries? We look at the impact on the cross-country distribution of wellbeing with the recognition that both the health crisis and the economic debacle have caused huge welfare losses. We focus on extreme outcomes in both domains: mortality in the case of health and falling into poverty in the case of economics. This implies that our estimates of welfare losses are clearly a lower-bound: we ignore the burden of the disease on those who survive it and, furthermore, data limitations mean that we look only at deaths officially classified as due to Covid, rather than the possibly preferable metric of excess deaths. Similarly, we ignore welfare losses from income declines that do not entail entry into poverty and, furthermore, we look only at the short-term poverty consequences arising from the contemporaneous income losses in 2020. We ignore, therefore, the longer-term consequences of any harms to child development arising from additional undernutrition, or the likely substantial future consequences of the schooling crisis that resulted from the pandemic (see, e.g., Lustig et al. [2]).

These choices are not intended to minimize the importance of those negative consequences of the pandemic. On the contrary, the evidence to date suggests that they will be extremely important. Rather, they follow from a desire to focus on the most severe short-term consequences of the crisis along the two principal dimensions of health and incomes, using the best available data while avoiding an accumulation of assumptions and simulations. We are forced to make some assumptions to fill data gaps that inevitably arise when analysing an ongoing phenomenon, but they are few, and therefore are hopefully clearer and more transparent than if we had tried to incorporate expected future losses, and so on.

A welfare-based approach requires comparing health and income losses or, in our case, mortality and poverty costs. As in Decerf et al. [3], we eschew more traditional methods such as the valuation of statistical life (VSL, see e.g. Viscusi [4]). Our approach is theoretically closer to the modeling of social welfare as aggregate expected lifetime utility, as in Becker et al. [5] or Adler et al. [6]. But, unlike those authors, we model the effect of the pandemic on social welfare in a way that allows us to use years of human life – either lost to premature mortality or spent in poverty – as our unit of comparison. This has two advantages over the alternative of using a money metric to value human lives: first, we hope it overcomes the instinctive aversion of many participants in the public debate to the idea of placing a "price" on human life.
Second, the model yields a single, easily understandable normative parameter for the trade-off between mortality and poverty which has a direct, observable empirical counterpart. We can then simply present the empirical object for all countries in our sample, and let the reader compare her own valuation of the normative parameter to the data.

The simple model is presented in Decerf et al. [3], and we do not repeat it here. The basic ingredients are (1) a utilitarian welfare function that simply adds up lifetime individual utility across people and time periods; (2) an individual utility function that depends solely on whether one is dead, poor or non-poor; and (3) an assumption that the pandemic may have two effects on people: it may (or may not) cause them to die earlier than they otherwise would, and it may (or may not) cause them to spend some more time in poverty before they die, relative to the counterfactual. This simple framework yields the result that the overall welfare effect of the pandemic is proportional to a weighted sum of the number of years of life lost to premature (Covid-induced) mortality and the number of additional years spent in poverty. The relevant equation is:

$$\frac{\Delta W}{\Delta u_p} = \alpha\, LY + PY \qquad \qquad (1)$$

In Equation (1), ΔW denotes the expected impact of the pandemic on social welfare; Δu_p is the difference in yearly individual well-being between being poor and non-poor; and LY and PY are respectively the total number of years of life lost and the total number of additional years spent in poverty due to the pandemic. α is a normative parameter that represents the ratio between the individual utility loss from each year lost to premature mortality (Δu_d) and the loss from each additional year spent in poverty (Δu_p). α is therefore the (social) marginal rate of substitution between life- and poverty-years. It can be understood as the "shadow price" of a lost life-year, expressed in terms of poverty years. Concretely, one can think of it as the answer people might give to the following hypothetical question: "If you could make this bargain, how many years would you be willing to spend in poverty during the rest of your life in order to add one additional year at the end of your life?"

Clearly, there is plenty of room for individual disagreement about the answer to that question, and so about the value of α. Different people might answer that question very differently, depending on how far above the poverty line they are (or expect to be); on their expected residual life-expectancy and, of course, on their preferences. We thus choose to remain mostly agnostic about α. In what follows, we simply present values for LY and PY across as many countries as possible. When we discuss the relative contributions of mortality and poverty, we present ratios of PY to LY, which the reader can compare to her own preferred value for α in order to assess which source is responsible for the larger welfare loss. Next, when we seek to summarize the inter-country distribution of welfare losses, we suggest a plausible range for α: between five and twenty years. The rest of this short paper is organized as follows.
The rest of this short paper is organized as follows. The next section describes how we compute the numbers of years of life lost (LY) and additional poverty years (PY) for each country, given the available data, and presents the estimates for 145 countries. Building on those ingredients, Section 3 summarizes the evidence on the relative importance of poverty and mortality in lowering welfare around the world. It also investigates the global distribution of those losses under plausible values for the key normative parameter, both for a constant absolute poverty line and under a more relativist approach to poverty identification. Section 4 concludes.

2. Estimating additional mortality and poverty with imperfect data

To compute the total number of years of life lost to the pandemic in each country, we first estimate how long each person reported to have died of COVID-19 in that country by 15 December 2020 might have lived, had the pandemic not occurred. That quantity is obviously not observed: it is a counterfactual. We use each country's pre-pandemic residual life expectancy at the age of the person's death as our estimate of that counterfactual. A country's residual life expectancy at age a is given by how much longer the average person who reaches that age in that country tends to live (pre-pandemic). It can be calculated from data on population pyramids from the UN Population Division database. Once we have estimates of the residual life expectancy at each age for each country, the total number of years of life lost in each country is simply the sum across all ages of the number of COVID-19 deaths at each age times the residual life expectancy at that age. Data on the number of COVID-19 deaths at each age can be obtained directly for some countries but it is not publicly available for all of them. Where it is not available, we estimate it using data on aggregate COVID-19 fatalities and on age-specific infection-fatality rates for certain countries. Those data are fortunately available for many countries from the Global Burden of Disease Database (Dicker et al. [7]) and from other sources. Appendix 1 provides details of those sources and of the exact procedure we use to generate our LY estimates for 145 countries. Using this method, we estimate that Covid-induced mortality in the year to 15 December 2020 caused the loss of 19.3 million years of life across the 145 countries in our sample (which account for 96% of the world's population). Absolute numbers range from 14 years lost in Burundi to 3,148,000 in the United States.8 Those aggregate numbers obviously depend a great deal on the country's population. Figure 1 below plots life years lost adjusted by population (LY per 100,000 people) against each country's GDP per capita, with both axes in logarithmic scale.

Figure 1: Life-years lost to Covid, and GDP per capita.

Two features of the scatter plot are worth highlighting: First, there is enormous variation in the population-adjusted loss of life-years across countries. Even if one discounts Burundi and Tanzania as outliers where reporting is unlikely to have been reliable, LYs range from roughly one year lost per 100,000 people (in countries as diverse as Papua New Guinea, Thailand and Vietnam) to one or more years lost per 100 people, in a large set of countries including Brazil, Peru, Mexico, Belgium, the Czech Republic and the United States.
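The life-years calculation just described reduces to a weighted sum over ages. Below is a minimal sketch with made-up inputs; in the paper, the deaths come from official counts and the residual life expectancies from UN Population Division life tables.

```python
# LY = sum over ages a of: (Covid deaths at age a) x (residual life
# expectancy at age a). All numbers below are invented for illustration.
deaths_by_age = {60: 120, 70: 340, 80: 510}
residual_life_expectancy = {60: 21.0, 70: 13.5, 80: 7.8}  # years

life_years_lost = sum(
    deaths_by_age[a] * residual_life_expectancy[a] for a in deaths_by_age
)
print(life_years_lost)  # 120*21.0 + 340*13.5 + 510*7.8 = 11088.0
```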
Some of that variation is systematic: Figure 1 reveals a strongly positive (and concave)9 relationship between the mortality costs of Covid and the level of economic development.10 Second, however, there is also considerable variation around the regression line at each income level – particularly at and above per capita GDP levels of $5,000 or thereabouts: Brazil and Thailand have comparable per capita income levels, but Brazil lost roughly one thousand life-years for each life-year lost in Thailand, controlling for population. The disparity is even greater between Bolivia and Vietnam, and still striking between France and South Korea. What explains this massive variation in population-adjusted life-years lost to Covid – both across and within country-income categories? Mechanically, it must reflect cross-country differences in three variables: the age structure of the population; residual life-expectancies at each age; and age-specific mortality rates.11 The first two are slow-moving variables that reflect each country's historical development; the stage of the demographic transition they are in; the ease of access to and the quality of their health care systems, etc. The last variable – the country's age-specific mortality rates, which are themselves the product of infection rates and infection fatality rates (Eq. A3 in Appendix 1) – reflects each country's exposure and response to the pandemic. Infection rates (which vary substantially internationally) initially reflected how quickly the virus arrived in each country, and then the extent to which health systems were able to prevent spread within the country. Infection rates are also likely to depend on urbanization and climate. Infection-fatality rates reflect the quality of health care and the extent to which, for example, hospitals were overwhelmed by the pandemic at any stage. Our data suggest that all three of these variables contribute to the positive association between the population-adjusted mortality burden of the pandemic and national per capita income seen in Figure 1. It is well-known that Covid mortality varies substantially with age, and that it is much higher for the elderly. Figure A1 in Appendix 2 plots the ratio of Covid mortality among those aged 65 and over to the mortality among 20–39 year-olds in our data: the ratio ranges from around 100 in low and middle-income countries to between 200 and 300 among high-income countries.12 Figures A2 and A3, also in Appendix 2, plot indicators for the other two variables, namely the age structure of the population and age-specific residual life expectancies, both against per capita GDP. Specifically, Figure A2 plots the share of the population aged 65 or over (which ranges from 3–4% among the poorest countries to 20–25% among some of the richest); and Figure A3 looks at the residual life expectancy at age 65 across countries – which ranges from 13–15 years among most low-income countries (LICs) to 20–23 at the high end, mostly among high-income countries (HICs). These upward sloping curves in Figures A1–A3 suggest that all three variables play some role in contributing to the positive slope in Figure 1. Covid mortality is highly selective on age; richer countries have many more people in the vulnerable, elderly age ranges; and they tend to have higher life-expectancies at those ages, implying a larger number of years lost per death.
Turning to the estimates of poverty years added by the Covid pandemic, it is important to note, first of all, that the household surveys from which we generally obtain reasonably reliable estimates of poverty are not yet available for 2020 in any country. This means that actual data on household incomes or consumption levels are not available at this time, and one must rely on ex-ante estimates and approximations. In that context, our basic approach is to compare "expected" poverty rates in 2020 under two scenarios: one with Covid and one without. To do this, we use the remarkable collection of household survey microdata from 166 countries contained in the World Bank's PovcalNet database.13 The dates of the latest household surveys in that database vary across countries, but all are "aligned" to 2019, using historically documented growth rates in GDP per capita and a pass-through coefficient to adjust for the fact that growth in mean incomes in household surveys is typically less than GDP per capita growth measured in the National Accounts.14 This procedure, which is carried out internally at the World Bank for nowcasting poverty, assumes no change in inequality between the last available household survey and 2019. Our starting point is this set of distributions of household per capita income or consumption, expressed in US dollars at PPP exchange rates, aligned for the pre-Covid year of 2019. From these distributions, one can calculate the headcount measure of poverty in each country in 2019 (for any given poverty line z expressed in per capita terms) simply as the share of the population with incomes below that line. We then obtain our estimate of poverty in the counterfactual "no-Covid 2020" scenario by applying the (adjusted) growth rate forecast for 2020 in the January 2020 issue of the Global Economic Prospects (GEP) to each country's 2019 income distribution. Analogously, the poverty estimate for the 2020 Covid scenario is obtained by applying to the same 2019 income distributions the (adjusted) growth rate forecast for 2020 in the October 2020 Macro and Poverty Outlook report of the World Bank.15 The idea, of course, is that the January forecasts were produced at a time when Covid-19 was a little-heard-of virus confined to the city of Wuhan in China, and no macroeconomist had remotely imagined a pandemic on the scale we have since seen. The October forecasts, on the other hand, were the latest available for 2020 at the time of writing and reflect the World Bank's expectations of the pandemic's impact on growth around the world. Finally, we assume – conservatively – that the short-run poverty effect of the pandemic lasts for a single year, so that each additional person in poverty corresponds to precisely one additional poverty year. The total number of additional poverty years generated by the pandemic is then given simply by the difference between the poverty estimates for 2020 under the Covid and the non-Covid scenarios. Naturally, the number of poverty-years added by the pandemic depends on the poverty line that is used. Figure 2 below presents two of many possible poverty line options: Panel A uses a constant absolute line for all countries, namely the World Bank's international (extreme) poverty line of $1.90 per person per day (Ferreira et al. [12]).
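For one country, the comparison of the two 2020 scenarios can be sketched as follows. The 0.85 pass-through and the one-poverty-year-per-person convention are assumptions stated in the text and notes; the income vector, population weights and growth rates below are invented for illustration.

```python
import numpy as np

incomes_2019 = np.array([1.2, 1.7, 1.95, 4.0, 9.0])  # $/day per capita, PPP
population_weights = np.array([2e6, 3e6, 4e6, 3e6, 2e6])
pass_through = 0.85   # share of GDP per capita growth passed to survey incomes
z = 1.90              # extreme poverty line, $/day

def poor_population(growth_gdp_pc: float) -> float:
    """Headcount of people below z after distribution-neutral growth."""
    incomes_2020 = incomes_2019 * (1 + pass_through * growth_gdp_pc)
    return population_weights[incomes_2020 < z].sum()

poor_no_covid = poor_population(+0.04)  # pre-Covid (January) forecast
poor_covid = poor_population(-0.06)     # Covid (October) forecast
poverty_years = poor_covid - poor_no_covid  # one PY per extra poor person
print(poverty_years)                        # 4000000.0 in this toy example
```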
While there are good reasons for using the same poverty line in an international comparison of this kind, there are equally valid arguments for attempting to account for the fact that basic needs themselves may vary with national income, and that a different (and costlier) bundle of commodities may be needed to achieve the same welfare (or capabilities) threshold in Austria, say, than in Afghanistan. This latter view implies that "poverty" means different things (at least in income terms) in countries where average incomes are vastly different, and often leads to the adoption of relative or weakly-relative poverty lines (see, e.g. Atkinson and Bourguignon [13], and Ravallion and Chen [14]).16

Figure 2: Poverty-years added by Covid, and GDP per capita.

Panel B of Figure 2 adopts a (coarse) approximation to a relative poverty line: It uses the World Bank's income-class poverty lines proposed by Jolliffe and Prydz [15]. Using a database of contemporaneous national poverty lines in 126 economies, these authors selected the median values of per capita poverty lines among low-income countries ($1.90); lower middle-income countries ($3.20); upper middle-income countries ($5.50); and high-income countries ($21.70). In Panel B, poverty years are computed using these income-class specific poverty lines for countries in each income category. Using the $1.90 line for all countries, we estimate that a total of 121 million additional poverty years were induced by Covid-19 in the year to 15 December 2020 across the 145 countries in our sample. Absolute numbers range from –35,800 in Papua New Guinea17 to 74.2 million in India. Panel A in Figure 2 reveals – perhaps unsurprisingly – a strongly downward-sloping relationship with GDP per capita, with an average of 2,568 poverty years per 100,000 people added in low-income countries, as compared to 28.5 years per 100,000 people added in high-income countries. The negative slope disappears completely in Panel B, where median poverty lines from each country income category (from low- to high-income) are used for countries in the respective groups. The slope of the linear regression line across the entire scatterplot is not significantly different from zero.18 The mean number of PYs/100,000 people is 2,568 for low-income countries, 2,778 for lower middle-income countries, 3,418 for upper middle-income countries and 3,330 for high-income countries. It is perhaps worth noting that there is nothing mechanical about this particular result: there is no reason why one would necessarily expect that adopting median poverty lines among groups of progressively richer countries would completely eliminate the negative association between the poverty burden of the pandemic and GDP per capita. Using these more generous poverty lines, our estimate for the total number of additional poverty years induced by the pandemic rises to 300 million. The next section seeks to combine the country-level PY and LY estimates obtained above in order to assess their relative and absolute importance in determining overall welfare losses from the pandemic.

3. Total welfare losses and the relative contributions of death and destitution

Given the estimates of life-years lost to and of poverty-years added by the pandemic, we now ask: first, which of the two sources contributed the most to lowering welfare in each country; and second, what were those total welfare costs, using our metric of poverty years.
The answers to both questions depend critically on the value of α, the normative parameter that tells us how many PYs cause as great a welfare loss as a single LY. Given a value of α and the ratio of PYs to LYs observed in any particular country, we can immediately tell whether poverty or mortality contributed the most to the aggregate welfare losses from the pandemic in that country. Wherever the actual ratio of PYs to LYs exceeds the reader's chosen value for α, poverty is the greater source of welfare loss: the country added more poverty years for each lost life year than the reader thinks a life year is "worth" in terms of poverty years. Conversely, if the actual ratio is lower than α, then mortality was the greater contributor to falling well-being.19 Figure 3 plots those observed ratios against GDP per capita for all countries in our sample, using the income-class specific poverty lines used in Panel B of Figure 2. (The line would be even steeper if the constant $1.90 line were used instead.) Two things are immediately apparent: first, the variation in empirical PY/LY ratios is enormous: the median ratio is 15.7 (in the United Arab Emirates) and the range is from 0.33 (in Bosnia and Herzegovina) to 5537 (in Burundi).20 Second, the poverty to life-years ratio is strongly negatively correlated with GDP per capita: although the regression lines shown in Figure 3 are for each income class, a simple linear regression over the entire sample has a highly significant negative slope.21

Figure 3: PY/LY ratios and GDP per capita.

So far, we have been entirely agnostic about the value of α. Indeed, we argue that one advantage of our approach is that it can encapsulate the normatively challenging trade-off between lives and livelihoods in a single, easily interpretable parameter, while simultaneously remaining agnostic about its value. In order to make further progress in interpreting Figure 3, however, it will prove helpful to suggest a "plausible range" for α, which we set at 5 ≤ α ≤ 20. In terms of the question we proposed earlier as a means to elicit the normative judgement, this range means that most people in a country value an additional year of life-expectancy as being worth spending at least an additional five years, and at most an additional twenty years, in poverty. We do not yet have robust empirical evidence from surveys or experiments that try to elicit empirical values of α, so the reader may of course pick a value completely outside that range.22 But those who are comfortable with such a range could, on inspection of Figure 3, classify countries into three broad groups. Those below the grey shaded band (PY/LY ≤ 5) are countries where the social welfare cost of the pandemic until December 2020 arose primarily from additional mortality, rather than from increases in poverty. This group includes a wide variety of nations, such as Bolivia, Brazil, Russia, Belgium and the US. The most extreme cases are Moldova, Bosnia and Herzegovina and Belarus, where the ratio PY/LY is below even the theoretical lower bound of one for α. Belgium and the Czech Republic are close behind. This first group consists primarily of upper-middle and high-income countries and does not include a single low-income country. A second group consists of those above the grey shaded band in the Figure (PY/LY ≥ 20). In these countries, increased destitution contributed more to declining social welfare than deaths and the loss of life years they caused.
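Read literally, this three-way grouping of Figure 3 is a simple rule, sketched below. The cut-offs are the paper's plausible range for α; the example ratios are illustrative rather than the measured values.

```python
ALPHA_LOW, ALPHA_HIGH = 5.0, 20.0  # the paper's plausible range for alpha

def classify(py_ly_ratio: float) -> str:
    if py_ly_ratio <= ALPHA_LOW:
        return "Group 1: mortality-dominated welfare loss"
    if py_ly_ratio >= ALPHA_HIGH:
        return "Group 2: poverty-dominated welfare loss"
    return "Group 3: ambiguous within the plausible range"

# Illustrative ratios only, not the measured ones:
for country, ratio in {"Belgium": 1.2, "Nepal": 12.0, "Vietnam": 150.0}.items():
    print(country, "->", classify(ratio))
```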
There are 70 countries in this second group, including most low- and lower middle-income countries. This group also includes most countries frequently identified in the popular media as successful in combating the pandemic, through strictly enforced early lockdowns and/or well-functioning testing and tracing systems, such as Australia, China, Japan, Korea and Vietnam. Uruguay is the only continental Latin American country in this group, while its neighbours Argentina and Brazil (as well as Chile) are in Group 1 above.23 The third group consists of those countries in the grey band, whose empirical PY/LY ratios fall within our "plausible range" for α: between 5 and 20. Given that range, we are unable or unwilling to select either poverty or mortality as the main culprit in lowering social welfare in these countries. In other words, these are countries where the relative contributions of the two were broadly similar. The group includes countries from every income category, from Nepal at the poorer end to Norway and the United Arab Emirates at the richer end. But Nepal and Tajikistan are the only two low-income countries in the group; all other LICs are in Group 2, where poverty dominated mortality as a source of declining well-being from the pandemic. Can we go beyond the relative contributions of mortality and poverty in each country and compute the aggregate welfare losses from the pandemic (arising both from deaths and additional destitution) in each country? Strictly speaking, of course, we suffer from the usual problem of choosing a suitable unit for measuring well-being. But if we are willing to abstract from measures of individual utility, Equation (1) tells us that the change in welfare in a country is proportional to the weighted sum of the number of years of life lost to and poverty years added by the pandemic, with the weight on life-years given by α: αLY + PY. Using the sum, as we do below, corresponds to using additional years spent in poverty as our social welfare metric. Of course, computing that weighted sum requires – once again – choosing both an approach to poverty identification (one or more poverty lines) as well as one or a range of values for α. For consistency, we use the same two sets of poverty lines used in Figure 2 (a constant line of $1.90, and the set of four income-classification poverty lines). For α we pick the bounds of our guidance range, namely α = 5 and α = 20. Figure 4 below plots aggregate social welfare losses, αLY + PY, against GDP per capita for all 145 countries in our sample: The first row (Panels A and B) uses the constant poverty line of $1.90: Panel A uses α = 5 and Panel B uses α = 20. The second row (Panels C and D) uses the income-classification poverty lines: once again with α = 5 in Panel C and α = 20 in Panel D. To control for population sizes, in all cases the welfare costs are expressed per 100,000 people.

Figure 4: Total welfare losses from the pandemic, and GDP per capita.

As expected, there is a great deal of cross-country variation in the welfare burden of the pandemic, regardless of which panel one looks at. The unit of measurement along the y-axis, as noted earlier, is additional person-years spent in poverty per 100,000 people. In Panel A, using the most stringent global poverty line ($1.90 per person per day) and a relatively low value of α, the burden ranges from 26 in Thailand and 62 in China, to 7,556 in Belgium and 9,811 in Peru.
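The quantity plotted in Figure 4, the population-adjusted welfare loss αLY + PY per 100,000 people, can be sketched in a few lines; the country numbers below are invented, not taken from the data.

```python
def welfare_per_100k(alpha: float, ly: float, py: float, population: float) -> float:
    """Population-adjusted welfare loss, in poverty-years per 100,000 people."""
    return (alpha * ly + py) / population * 100_000

for alpha in (5, 20):  # Panels A/C use alpha = 5, Panels B/D use alpha = 20
    burden = welfare_per_100k(alpha, ly=50_000, py=400_000, population=30e6)
    print(alpha, round(burden, 1))  # 5 -> 2166.7, 20 -> 4666.7
```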
The regression line across the scatter of countries in Panel A is downward sloping and that negative slope (–0.14) is statistically significant (p = 0.036), indicating a negative association between national income and the aggregate welfare losses from the deaths and destitution caused by the pandemic. Given the strong positive association between the mortality burden and GDP per capita seen in Figure 1, this is an important finding: the pandemic appears to have induced such large increases in extreme poverty in poor countries (Figure 2A) that the combined burden of poverty and mortality is, on average, greater for them than for richer countries.24 The pattern changes (and the overall numbers are mechanically higher) in Panel B, where each life-year lost is counted as being equivalent to 20, rather than 5, added poverty-years. The association with GDP per capita now becomes positive and statistically significant, because mortality was much higher in richer countries. Panels C and D would be preferred by those who take a more relative view of poverty. They use poverty lines that are typical of countries in their income ranges: $1.90 for low-income countries; $3.20 for lower middle-income countries; $5.50 for upper middle-income countries; and $21.70 for high-income countries. Naturally, this raises the number of poverty years in all but the low-income countries, and the positive slope of the relationship with GDP per capita strengthens further, first with α = 5, and even more with α = 20. Using more demanding poverty lines in upper middle-income countries means that ranks change at the bottom, with Tajikistan and Vietnam now reporting the lowest welfare burdens in the world, at 376 and 422 years per 100,000 people respectively (with α = 5). Fiji (15,855) and Peru (13,631) now occupy the top ranks in the distribution of population-adjusted welfare losses. Figure 4 tells us that whether total welfare losses (from deaths and short-term increases in poverty) are deemed to rise or fall with national income per capita depends on how one chooses to define and compare poverty across countries, and on the relative welfare weight between mortality and poverty. When one takes a more relative view of poverty, allowing for the fact that different (more expensive) bundles of goods are needed to escape poverty in richer countries, then the impact of the pandemic on poverty is uncorrelated with per capita income (Figure 2B). Since mortality is strongly correlated with income, this implies that overall losses are greater in richer than in poorer countries on average. This is still true even if a constant extreme poverty line is used, provided the welfare weight of mortality relative to poverty is high enough (Figure 4B). The positive association in Figure 4B, C and D is clearly related to Deaton's [8] finding that the effect of the pandemic on economic growth was negatively associated with GDP per capita – that is: on average, richer countries experienced larger proportional declines in real national income (as well as more deaths per capita).25 But, as the author was careful to point out, his "results say nothing about whether the degree of suffering has been larger or smaller in poor countries" [8, p4]. Our results are also far from capturing all the suffering caused by the pandemic: as discussed earlier, disutility from ill health among survivors and losses from increased malnutrition or paused schooling are ignored, among other things.
Nonetheless, if one takes falling into extreme poverty (relative to the $1.90 line) as an indication of absolute economic suffering, our results suggest that the positive association can be reversed – provided the weight placed on that kind of destitution is high relative to mortality. In this paper we have sought to address two sets of questions: First, what were the relative contributions of increased mortality and poverty to the welfare losses caused by the pandemic, and did those contributions vary systematically across countries? Second, how large were the aggregate welfare losses, and how were they distributed across countries? We focused on welfare losses caused by extreme outcomes along both the health and income dimensions: death and destitution (defined as falling into poverty). Following our earlier work in Decerf et al. [3], we have used years of human life (either lost to premature death or lived in poverty) as our unit of measurement. Measuring the mortality burden in terms of years of life lost, we found that this burden increases systematically and markedly with per capita GDP. There were approximately one hundred times as many years of life lost per capita among the high- and upper-middle income countries that lost the most, as in a typical low-income country. This massive disparity was driven by the fact that Covid-19 kills older people disproportionately, and considerably larger fractions of the population are elderly in richer than in poorer countries. Higher residual life-expectancies among the elderly in richer countries also contributed. The association between the poverty burden (measured in terms of additional years spent in poverty) and GDP per capita depends on how the poor are identified. Using a constant absolute poverty line such as the international (extreme) poverty line of $1.90, the poverty burden is strongly negatively associated with GDP per capita. The world's poorest countries experienced poverty burdens between one hundred and one thousand times greater than the richest. However, when poverty lines typical of each of the four income categories (low income, lower middle-income, upper middle-income and high income) are used instead, that relationship effectively disappears. Either way, the relative contribution of poverty (vis-à-vis mortality) to the aggregate welfare burden is much higher in poorer countries. In fact, the ratio declines systematically with GDP per capita across the whole range. This leads to an important first conclusion: the economic consequences of the pandemic in terms of increased poverty cannot be treated as being of secondary importance. Even at our most conservative rate for comparing life- and poverty-years (twenty of the latter to one of the former), there are 70 countries in our sample where poverty was a more important source of declining well-being than mortality. That number rises to 108 countries (three quarters of our sample) at the lower rate of five poverty-years to one life-year. Most (but not all) of those countries tend to be poor. They are not the countries where the medical and social scientists, the journalists and global civil servants that set the terms of the "global" public debate are located. The importance of the poverty consequences of the pandemic, relative to those of mortality, has not been given its proper weight in the global discussion. Our second main conclusion relates to the distribution of the aggregate welfare burden across countries – and it is nuanced. 
We show that the association between the welfare burden and initial per capita incomes is not always unambiguously positive or negative. Instead, the shape of the relationship depends on two key factors: how poverty is defined, and the welfare weight placed on it relative to mortality. When poverty in a country is assessed in terms of poverty lines typical of countries at similar levels of development, a positive association arises: richer countries have suffered a greater loss in welfare than poor ones in this pandemic. That conclusion still holds even if the international extreme poverty line (IPL) is used instead, provided sufficient weight is placed on mortality relative to poverty. But if the IPL is combined with a lower welfare weight for death relative to destitution, the association reverses and poorer countries bear a greater welfare loss from the pandemic on average. The fact that the association can be positive under plausible parameter configurations at all reflects once again the magnitude of the income gradient of Covid-mortality shown in Figure 1. Although this strong association between Covid-mortality and national income does reflect population age structures, including residual life-expectancies, it is important to note that demography is not destiny. Japan, by some measures the world's "oldest" country, suffered welfare losses orders of magnitude lower than Belgium, Germany and the US. China, South Korea, Norway and Australia did even better. This is probably one of those cases where the variation around the regression line matters more than the variation along it. That is the variation that reflects, among other things, differences in policy responses: the speed with which the virus was contained upon arrival, either by effective testing and tracing protocols or by early and well-enforced lockdowns, by early and widespread use of masks and social distancing, or some combination of the above. Our study has nothing to say about that fundamental source of variation. Four final caveats are warranted. First, and as noted earlier, our results are sensitive to measurement error. The possibility that Covid-related mortality is substantially under-estimated in many countries is of particular concern. If the underestimation is concentrated among poor countries, this could alter both of our main findings. Second – and also noted earlier – we have used "aggregate" or "total" welfare loss as a shorthand for losses arising from mortality and entry into poverty. The pandemic has undoubtedly had other effects on well-being, both current and future; both among the sick and among those only indirectly affected. Third, the pandemic is still ongoing, and the distribution of both the poverty and mortality burdens in 2021 may turn out to be very different from that of 2020 – particularly as access to vaccination is spreading unequally around the world. This could cause an eventual "post-pandemic" version of the global distribution of welfare losses to differ from all of those in our Figure 4, with poorer countries doing even worse. Fourth, we have nothing to say on the important issue of how losses in well-being are distributed within countries. If, as seems likely, richer countries have been better able to cushion the losses among their poorer residents than poor countries have, it is quite possible that the world's very poorest people have suffered the most. 
An investigation of the distributional consequences within countries will be extremely important, but it will require post-pandemic household survey data and goes beyond the remit of this paper. Despite these important caveats, our analysis does suggest that the poverty consequences of the pandemic should be given as much importance in the global policy conversation as its mortality consequences. For most poor and middle-income countries, greater economic deprivation has in fact been a more important source of loss in well-being than premature mortality. Ignoring the large welfare costs of destitution would lead us to the wrong conclusions about the distribution of the burden of the pandemic across countries, exaggerating the share of suffering visited on richer, older countries to the detriment of poorer ones.

Additional Files

The additional files for this article can be found as follows:
Appendix 1. Estimation procedure for the years of life lost to the pandemic in each country. DOI: https://doi.org/10.31389/lseppr.34.s1
Appendix 2. Age-specific mortality estimates; share of the population over 65; and residual life-expectancies at 65 all correlate with GDP per capita. DOI: https://doi.org/10.31389/lseppr.34.s2

Notes
1. Our World in Data, https://ourworldindata.org/grapher/cumulative-covid-deaths-region.
2. There was also an apparent increase in 1989, which is fully accounted for by China switching from an income to a consumption indicator. See https://data.worldbank.org/indicator/SI.POV.DDAY.
3. On 11 February 2021 a Google search for "dying of coronavirus or hunger" yielded 2,330,000 results. Some of the titles on the first page included "The pandemic pushes hundreds of millions of people toward starvation and poverty" (Washington Post, 25 September 2020) and "More people may die from hunger than from the Coronavirus this year…" (Forbes, 9 July 2020).
4. To answer these questions, we revisit and update some of our earlier findings in Decerf et al. [3]. In that earlier paper we did not address the second question above and, conversely, we do not explore counterfactual herd immunity scenarios here, as we did there. The first question above was also addressed in the earlier article using mortality and poverty estimates from June 2020; here they are updated to December 2020.
5. Even if data on excess mortality were more widely available, it would also include additional deaths caused by poverty, potentially confounding our comparisons.
6. With no aversion to inequality and no time discounting.
7. Simple restrictions imply that this is a step-function approximation to utility that is increasing and concave in incomes, but the coarseness we introduce means that people are insensitive to income gains or losses that do not entail a crossing of the poverty line. This simplification may be seen as the price we pay for converting to a life-year metric, but it is also consistent with our emphasis – discussed above – on the extreme outcomes of death and destitution (defined as falling into poverty).
8. Measurement error is likely to plague a number of these country estimates, with under-reporting being of particular concern. Deaton [8] singles out Burundi and Tanzania as likely candidates for under-reporting.
9. An increasing and concave function – or relationship – is one that increases at a decreasing rate.
10. Deaton [8] presents a similar figure that plots the log of lives – rather than life-years – lost against log GDP per capita. He notes that "there is no relationship […] within the OECD."
Goldberg and Reed [9] also document that, as of July 2020, the number of lives lost to Covid per million inhabitants was larger in advanced economies than in developing countries. They suggest that older populations and a greater prevalence of obesity in developed countries can partially explain this positive association between mortality and development.
11. See Equation A1 in Appendix 1.
12. Some of the actual variation in this ratio is missing because age-specific mortality data is, as discussed in Appendix 1, not widely available, so we use France's IFR for all HICs, and China's for all other countries. This accounts for the sharp jump between MICs and HICs in Figure A1.
13. We use data from 145 of these 166 countries, for which we can find the required mortality statistics.
14. It is assumed that 85 percent of growth in GDP per capita is passed through to growth in welfare observed in household surveys, in line with historical evidence (Lakner et al. [10]).
15. This method assumes that the adjusted growth rates in real GDP per capita accurately reflect the growth (or shrinkage) in household consumption. With ongoing globalization, the importance of tax havens, and so on, one might imagine that GDP has further decoupled from household consumption and that other variables from national accounts—or other economic indicators altogether—could be more informative. Yet Castaneda et al. [11] show that, out of more than a thousand variables, the change in real GDP per capita is the second most predictive variable of changes in household consumption. The only variable that does better is changes in employment, for which we do not have pre- and post-COVID forecasts for 2020 that are widely comparable across a large number of countries.
16. Note that this argument is unrelated to differences in prices, which are supposed to be addressed by the use of PPP exchange rates.
17. Papua New Guinea is the only country for which the October 2020 growth forecast for 2020 was higher than the January 2020 forecast.
18. The slope is –0.0286, with a p-value of 0.612.
19. A little more formally: if we denote the empirical ratio by rj = PYj ⁄ LYj, then an observer with an α such that rj > α will regard poverty as the principal source of welfare loss in country j. Conversely, if rj < α, then mortality is considered the greatest source of welfare loss.
20. Looking only at positive values, and thus excluding Papua New Guinea (see previous footnote) and Azerbaijan (zero PYs reported).
21. The slope is –1.043 (p-value = 0.000).
22. Preliminary results from surveys we have conducted in the US, UK and South Africa suggest very low values for α. The mean across the three samples was 2.6. Work on these surveys is ongoing, and these early results are tentative and should be treated with caution. Nonetheless, they suggest that, if anything, our plausible range for α is on the high side, which would lead the results that follow to place "too much" weight on mortality, relative to poverty.
23. In the Caribbean, Jamaica, St. Lucia and Trinidad and Tobago are also in Group 2.
24. Yet, there is so much variation around the regression line that, even with this parameter configuration, the greatest losses in welfare were recorded in upper middle-income countries such as Belize, Macedonia and Peru, alongside rich countries like Belgium. But poorer countries such as Burkina Faso, Sierra Leone and Zimbabwe are not far behind.
25. The fact that economic contractions were deeper in countries where the loss of life was greater is also consistent with the finding by Andersen and Gonzalez [16] that reductions in economic mobility (using data from Google's Community Mobility Reports) and the loss of life years were positively correlated across countries.

We are grateful to Lykke Andersen, Angus Deaton, Martin Ravallion, Andres Velasco and participants at the Lemann Center seminar at Stanford University for comments on an earlier version. All remaining errors are our own, as are all the views expressed. This paper underwent peer review using the Cross-Publisher COVID-19 Rapid Review Initiative.

References
1. World Bank. Global Economic Prospects, January 2021. Washington, DC: World Bank; 2021. Available from: https://openknowledge.worldbank.org/handle/10986/34710.
2. Lustig N, Neidhöfer G, Tommasi M. Short and Long-Run Distributional Impacts of COVID-19 in Latin America. Working Paper No. 2013. Tulane University, Department of Economics; 2020.
3. Decerf B, Ferreira FHG, Mahler DG, Sterck O. Lives and Livelihoods: Estimates of the Global Mortality and Poverty Effects of the COVID-19 Pandemic. Policy Research Working Paper 9277. Washington, DC: World Bank; 2020. DOI: https://doi.org/10.1596/1813-9450-9277
4. Viscusi WK. The value of risks to life and health. Journal of Economic Literature. 1993; 31(4): 1912–1946.
5. Becker GS, Philipson TJ, Soares RR. The quantity and quality of life and the evolution of world inequality. American Economic Review. 2005; 95(1): 277–291. DOI: https://doi.org/10.1257/0002828053828563
6. Adler MD, Bradley R, Ferranna M, Fleurbaey M, Hammitt J, Voorhoeve A. Assessing the wellbeing impacts of the Covid-19 pandemic and three policy types: Suppression, control, and uncontrolled spread. Saudi Arabia: Think20; 2020.
7. Dicker D, Nguyen G, Abate D, et al. Global, regional, and national age-sex-specific mortality and life expectancy, 1950–2017: A systematic analysis for the global burden of disease study 2017. The Lancet. 2018; 392(10159): 1684–1735. DOI: https://doi.org/10.1016/S0140-6736(18)31891-9
8. Deaton A. COVID-19 and global income inequality. National Bureau of Economic Research Working Paper 28392; 2021. DOI: https://doi.org/10.3386/w28392
9. Goldberg PK, Reed T. The effects of the coronavirus pandemic in emerging market and developing economies: An optimistic preliminary account. Brookings Papers on Economic Activity. 2020; 6(1). DOI: https://doi.org/10.1353/eca.2020.0009
10. Lakner C, Mahler DG, Negre M, Prydz EB. How much does reducing inequality matter for global poverty? Global Poverty Monitoring Technical Note 13; 2020. DOI: https://doi.org/10.1596/33902
11. Castaneda ARA, Mahler DG, Newhouse D. Nowcasting Global Poverty. Paper presented at the Special IARIW-World Bank Conference New Approaches to Defining and Measuring Poverty in a Growing World. Washington, DC, 2019 November 7–8.
12. Ferreira FHG, Chen S, Dabalen A, et al. A global count of the extreme poor in 2012: data issues, methodology and initial results. Journal of Economic Inequality. 2016; 14(2): 141–172. DOI: https://doi.org/10.1007/s10888-016-9326-6
13. Atkinson AB, Bourguignon F. Poverty and Inclusion from a World Perspective. In: Stiglitz J, Muet PA, editors. Governance, Equity and Global Markets. New York: Oxford University Press; 2001.
14. Ravallion M, Chen S. Weakly relative poverty. Review of Economics and Statistics. 2011; 93(4): 1251–1261. DOI: https://doi.org/10.1162/REST_a_00127
15. Jolliffe D, Prydz EB. Estimating international poverty lines from comparable national thresholds. Policy Research Working Paper 7606. Washington, DC; 2016. DOI: https://doi.org/10.1596/1813-9450-7606
16. Andersen LE, Gonzales Rocabado A. Life and Death during the First Year of the COVID-19 Pandemic: An analysis of cross-country differences in changes in quantity and quality of life. Latin American Journal of Economic Development. Forthcoming, May 2021; 35.
17. Salje H, Kiem CT, Lefrancq N, Courtejoie N, Bosetti P, Paireau J, Andronico A, Hoze N, Richet JJ, Dubost CL, et al. Estimating the burden of SARS-CoV-2 in France. Science. 2020; 369(6500): 208–211. DOI: https://doi.org/10.1126/science.abc3517
18. Verity R, Okell LC, Dorigatti I, et al. Estimates of the severity of coronavirus disease 2019: A model-based analysis. The Lancet Infectious Diseases. 2020; 20(6): 669–677. DOI: https://doi.org/10.1016/S1473-3099(20)30243-7
19. Heuveline P, Tzen M. Beyond deaths per capita: comparative COVID-19 mortality indicators. BMJ Open. 2021; 11: e042934. DOI: https://doi.org/10.1136/bmjopen-2020-042934
Charged Current Top Quark Couplings at the LHC (2013)
Bach, Fabian
The top quark plays an important role in current particle physics, from a theoretical point of view because of its uniquely large mass, but also experimentally because of the large number of top events recorded by the LHC experiments ATLAS and CMS, which makes it possible to directly measure the properties of this particle, for example its couplings to the other particles of the standard model (SM), with previously unknown precision. In this thesis, an effective field theory approach is employed to introduce a minimal and consistent parametrization of all anomalous top couplings to the SM gauge bosons and fermions which are compatible with the SM symmetries. In addition, several aspects and consequences of the underlying effective operator relations for these couplings are discussed. The resulting set of couplings has been implemented in the parton level Monte Carlo event generator WHIZARD in order to provide a tool for the quantitative assessment of the phenomenological implications at present and future colliders such as the LHC or a planned international linear collider. The phenomenological part of this thesis is focused on the charged current couplings of the top quark, namely anomalous contributions to the trilinear tbW coupling as well as quartic four-fermion contact interactions of the form tbff, both affecting single top production as well as top decays at the LHC. The study includes various aspects of inclusive cross section measurements as well as differential distributions of single tops produced in the t channel, bq → tq', and in the s channel, ud → tb. We discuss the parton level modelling of these processes as well as detector effects, and finally present the projected LHC reach for setting limits on these couplings with 10 and 100 fb−1 of data recorded at √s = 14 TeV, respectively.

Supersymmetry in a Sector of Higgsless Electroweak Symmetry Breaking (2009)
Knochel, Alexander Karl
Since its popularization due to Randall and Sundrum (RS) one decade ago, and in connection with the $AdS/CFT$ correspondence in particular, 5D warped background spacetime has been one of the most fruitful new ideas in physics beyond the standard model (SM), leading to new insights into symmetry breaking and the properties of strongly interacting theories inaccessible to direct perturbative calculations, while at the same time relating gravity to phenomenological model building. This has, among others, led to a renewed interest in models of electroweak symmetry breaking without physical scalar fields in the guise of so-called 'warped higgsless' models, which could provide an alternative to the famed Higgs mechanism of electroweak symmetry breaking which is part of the Standard Model of particle physics.
However, little emphasis was put on reconciling these models with the strong evidence from astrophysical observations that one or several new, as yet unknown, stable particle species exist which form the cold dark matter content of the universe. The nature of dark matter and electroweak symmetry breaking are among the most prominent puzzles subject to experimental scrutiny at the Tevatron, direct search experiments, and in the near future at the LHC, which compels us to believe that both issues should be addressed together in any alternative scenario beyond the Standard Model. In this thesis we have investigated phenomenological implications which arise for cosmology and collider physics when the electroweak symmetry breaking sector of warped higgsless models is extended to include warped supersymmetry with conserved $R$ parity. The goal was to find the simplest supersymmetric extension of these models which still has a realistic light spectrum including a viable dark matter candidate. To accomplish this, we have used the same mechanism which is already at work for symmetry breaking in the electroweak sector to break supersymmetry as well, namely symmetry breaking by boundary conditions. While supersymmetry in five dimensions contains four supercharges and is therefore directly related to 4D $\mathcal{N}=2$ supersymmetry, half of them are broken by the background, leaving us with an ordinary $\mathcal{N}=1$ theory in the massless sector after Kaluza-Klein expansion. We thus use boundary conditions to model the effects of a breaking mechanism for the remaining two supercharges. The simplest viable scenario to investigate is a supersymmetric bulk and IR brane without supersymmetry on the UV brane. Even though parts of the light spectrum are effectively projected out by this mechanism, we retain the rich phenomenology of complete $\mathcal{N}=2$ supermultiplets in the Kaluza-Klein sector. While the light supersymmetric spectrum consists of electroweak gauginos which get their $\mathcal{O}(100\mbox{ GeV})$ masses from IR brane electroweak symmetry breaking, the light gluinos and squarks are projected out on the UV brane. The neutralinos, as mass eigenstates of the neutral bino-wino sector, are automatically the lightest gauginos, making them LSP dark matter candidates with a relic density that can be brought to agreement with WMAP measurements without extensive tuning of parameters. For chargino masses close to the experimental lower bounds at around $m_{\chi^+}\approx 100\dots 110$ GeV, the dark matter relic density points to LSP masses of around $m_\chi\approx 90$ GeV. At the LHC, the standard particle content of our model shares most of the key features of known warped higgsless models. We have performed Monte Carlo simulations of warped higgsless LSP and NLSP production at a benchmark point using O'Mega/WHIZARD, concentrating on missing transverse momentum ($p_T^{\mathrm{miss}}$) in association with third generation quarks. After background reduction cuts on the quark momenta and angles, we get hadronic cross sections of $\sigma > 100$ fb at 14 TeV with characteristic $p_T^{\mathrm{miss}}$ distributions for $\chi\chi t\overline{t}$ final states, while the final states with $b\overline{b}$ pairs have much lower event rates and shapes which are hard to discern in experiments. Our results suggest that the discovery of warped higgsless LSP dark matter at the LHC via missing energy is within reach for the first few fb$^{-1}$ at 14 TeV if $b$ and in particular $t$ identification is reliable.
The standard model in 5D: theoretical consistency and experimental constraints (2004)
Mück, Alexander
The four-dimensional Minkowski space is known to be a good description for space-time down to the length scales probed by the latest high-energy experiments. Nevertheless, there is the viable and exciting possibility that additional space-time structure will be observable in the next generation of collider experiments. Hence, we discuss different extensions of the standard model of particle physics with an extra dimension at the TeV-scale. We assume that some of the gauge and Higgs bosons propagate in one additional spatial dimension, while matter fields are confined to a four-dimensional subspace, the usual Minkowski space. After compactification on an $S^1/Z_2$ orbifold, an effective four-dimensional theory is obtained where towers of Kaluza-Klein (KK) modes, in addition to the standard model fields, reflect the higher-dimensional structure of space-time. The models are elaborated from the 5D Lagrangian to the Feynman rules of the KK modes. Special attention is paid to an appropriate generalization of the $R_\xi$ gauge and the interplay between spontaneous symmetry breaking and compactification. Confronting the observables in 5D standard model extensions with combined precision measurements at the Z-boson pole and the latest data from LEP2, we constrain the possible size R of the extra dimension experimentally. A multi-parameter fit of all relevant input parameters leads to bounds for the compactification scale M = 1/R in the range 4–6 TeV at the 2 sigma confidence level and shows how the mass of the Higgs boson is correlated with the size of an extra dimension. Considering a future linear $e^+e^-$ collider, we outline the discovery potential for an extra dimension using the proposed TESLA specifications as an example. As a consistency check for the various models, we analyze Ward identities and the gauge boson equivalence theorem in W-pair production and find that gauge symmetry is preserved by a complex interplay of the Kaluza-Klein modes. In this context, we point out the close analogy between the traditional Higgs mechanism and mass generation for gauge bosons via compactification. Beyond the tree level, the higher-dimensional models studied extensively in the literature and in the first part of this thesis have to be extended. We modify the models by the inclusion of brane kinetic terms, which are required as counter terms. Again, we derive the corresponding 4D theory for the KK towers, paying special attention to gauge fixing and spontaneous symmetry breaking. Finally, the phenomenological implications of the new brane kinetic terms are investigated in detail.
A Novel Hybrid Krill Herd Algorithm for Improving the Performance of Electric Power Systems Operation

Aboubakr Khelifi* | Bachir Bentouati | Saliha Chettih | Ragab A. El-Sehiemy

Electrical Engineering Department, LMSF Laboratory, Amar Telidji University of Laghouat, Laghouat 03000, Algeria
Electrical Engineering Department, Kafrelsheikh University, Egypt
[email protected]

Solving the optimal power flow (OPF) problem is an urgent task for power system operators. It aims at finding the optimal scheduling of the control variables, subject to several operational constraints, in order to achieve certain economic, technical and environmental benefits. The OPF problem is mathematically expressed as a nonlinear optimization problem with conflicting objectives, subject to both equality and inequality constraints. In this work, a new hybrid optimization technique, which integrates the merits of the cuckoo search (CS) optimizer, is proposed to remedy the poor efficiency of the krill herd algorithm (KHA). The proposed hybrid CS-KHA is applied to single- and multi-objective formulations of the OPF problem through 8 case studies. The studied cases reflect various economic, technical and environmental requirements. These cases involve the following objectives: minimization of non-smooth generating fuel cost with valve-point loading effects, emission reduction, voltage stability enhancement and voltage profile improvement. The CS-KHA introduces krill updating (KU) and krill abandoning (KA) operators derived from cuckoo search (CS) into the krill updating procedure in order to greatly improve its effectiveness and reliability on the OPF problem. The viability of these improvements is examined on the IEEE 30-bus test system. The experimental results demonstrate the superior ability of the proposed hybrid meta-heuristic CS-KHA compared to other well-known methods.

Keywords: cuckoo search algorithm (CS), krill herd algorithm (KHA), optimal power flow, voltage stability (VS), valve-point effect, emission reduction

1. Introduction

The problem of optimal power flow (OPF) has received considerable attention in recent years and has established its position among the main tools for the operation and planning of modern power systems. OPF is a non-linear programming problem. Its major objective is to find the setting of the control variables that optimizes one or several specified objective functions while satisfying the equality and inequality operational constraints, at specified loading conditions and defined system parameters [1-3]. The OPF has been applied to regulate the real power outputs, generator terminal voltages, transformer tap settings, shunt reactors/capacitors and other control variables in order to improve power system performance: minimizing the fuel cost of generation, reducing the network active power losses, and enhancing voltage stability and the voltage profile at load buses. These requirements are achieved while all operational quantities, such as load-bus voltages, generator reactive power outputs, network power flows and all other state variables of the power system, are kept within their secure operational bounds. In its most popular formulation, the OPF is a static, non-convex, large-scale optimization problem with both discontinuous and continuous control variables.
Even in the absence of non-convex generator operating cost functions, prohibited operating zones (POZ) of generating units and discontinuous control variables, the OPF problem is non-convex because of the non-linear alternating-current power flow equality constraints. The existence of discontinuous control variables, like transformer tap positions, phase shifters and switchable shunt devices, adds further difficulty to the formulation and solution of the problem. The methods developed to solve the OPF problem can be categorized into two types: conventional and advanced optimization techniques. The conventional optimization techniques use derivatives and gradient operators. These techniques are usually not capable of finding the global optimum. Several mathematical assumptions, such as analytic, convex and differentiable objective functions, must be made to simplify the problem. Nevertheless, the OPF problem is in general a non-convex optimization problem with a non-smooth objective function. As a result, it is important to develop optimization methods that are effective in overcoming these disadvantages and that treat this difficulty efficiently. The evolution of computational resources in recent decades has motivated the development of advanced optimization methods, the so-called meta-heuristics. These techniques can overcome many disadvantages of the conventional techniques [4]. Several of these recent techniques have been applied to solve the OPF problem, such as: Simulated Annealing (SA) [5], Genetic Algorithm (GA) [6, 7], Differential Evolution (DE) [8], Tabu Search (TS) [9], Imperialist Competitive Algorithm (ICA) [10], Particle Swarm Optimization (PSO) [11], adaptive real coded biogeography-based optimization (ARCBBO) [12], Biogeography Based Optimization (BBO) [13, 14], multi-phase search algorithm [15], Gbest guided artificial bee colony algorithm (Gbest-ABC) [16], Gravitational Search Algorithm (GSA) [17], Artificial Bee Colony (ABC) [18], Multi-objective Grey Wolf Optimizer (MOGWO) [19], black-hole-based optimization (BHBO) [20], Teaching Learning based Optimization (TLBO) [21], Sine-Cosine Optimization algorithm (SCOA) [22], Group Search Optimization (GSO) [23], a hybrid algorithm of particle swarm optimizer with grey wolves (PSO-GWO) [24], and quasi-oppositional teaching-learning based optimization [25]. Meanwhile, many state-of-the-art meta-heuristic techniques, like Improved Colliding Bodies Optimization (ICBO) [26], Moth Swarm Algorithm (MSA) [27], Moth-Flame Optimization (MFO) [28], cuckoo search [29], firefly algorithm [30] and Backtracking Search Optimization Algorithm (BSA) [31], have also been applied. Surveys of the different meta-heuristics used to solve the OPF problem are offered in [32]. The application of these methods to systems of different sizes leads to competitive results and is therefore promising and encouraging for further study in this direction. Furthermore, because of the contrast among the objectives, where various functions can be envisaged for modeling the OPF problem, no single technique can be regarded as the best for solving all OPF problems. Hence, there is a constant need for novel techniques that can successfully solve several of the OPF problems. Optimization is an area of growing interest to researchers, particularly since a system's performance depends on an arrangement that can only be obtained through a suitable optimization technique.
Optimization is the process of finding the best solution by evaluating a cost function that expresses the relationship between the system configuration and its constraints. Presently, meta-heuristic algorithms are being developed in many directions, for example hybridization, multi-objective variants, binary variants, training of multi-layer perceptrons, and mechanisms such as Lévy flights, dedicated operators, and chaos theory. Most of these improvements arise because deterministic and evolutionary components are combined [23]. A proper integration of global and local search provides both intensive local exploitation and global exploration [25]. The krill herd (KH) method was first suggested by Gandomi and Alavi in 2012 [33] and, because it performs well, many optimization strategies, such as chaos theory [34, 35, 30], the Flower Pollination Algorithm (FPA) [36] and colonial competitive differential evolution (CCDE) [37], have been hybridized with the basic KH algorithm as mutation operators with the objective of further enhancing the performance of KHA. Furthermore, to make KHA perform in the best possible way, a parametric study has been conducted over an array of standard benchmark functions [38]. KHA is a recent population-based swarm algorithm [26], built on the Lagrangian and evolutionary behavior of krill individuals in the wild, that balances exploitation and exploration in an optimization problem. However, the KH algorithm is occasionally unable to escape local optima [27, 28]. As described here, an effective hybrid meta-heuristic cuckoo search krill herd (CS-KHA) technique, built on KHA and CS, is first proposed to accelerate convergence. In CSKH, we use the basic KHA to select a promising solution set. Subsequently, a krill updating (KU) and a krill abandoning (KA) operator, derived from the CS algorithm, are added to the method. The KU operator refines a promising solution, while the KA operator is used to further improve the exploration of CS-KHA by replacing a small fraction of the worse krill at the end of every generation. This approach is designed to avoid local optima and to reach the global optimum in minimal computational time, providing local-minimum avoidance and faster convergence, which makes it suitable for practical implementations when solving various constrained optimization problems. The purpose of this article is to develop an improved KHA, called CS-KHA, to solve the OPF problem. To demonstrate the improvement brought by CS-KHA, its performance is compared to CS, KHA and other well-known optimization methods. The rest of the article is structured as follows: the following section outlines the formulation of the OPF problem; section 3 describes the mathematical formulation of CS-KHA; section 4 presents the simulation results and discussion; and the conclusion of this paper is given in section 5. 2. Formulation of Optimal Power Flow (OPF) The OPF problem aims at finding the optimal setting of the control variables by minimizing/maximizing a predefined objective function while satisfying a collection of equality and inequality constraints. Because OPF takes the system's operating limits into account, it can be defined as a non-linear constrained optimization problem. Minimize: $f(x, u)$ (1) Subject to: $\begin{array}{l} h(x, u)=0 \\ g(x, u) \leq 0 \end{array}$ (2) where $u$ is the vector of independent (control) variables, $x$ is the vector of dependent (state) variables,
$f(x, u)$ is the objective function of the OPF, $g(x, u)$ is the set of inequality constraints, and $h(x, u)$ is the set of equality constraints. 2.1 Control variables The vector of power network control variables is expressed as follows [37]: $u=\left[P_{G_{2}} \cdots P_{G_{N G}}, V_{G_{1}} \cdots V_{G_{N G}}, Q_{C_{1}} \cdots Q_{C_{N C}}, T_{1} \cdots T_{N T}\right]$ (3) where $P_{G_{i}}$ is the active power of the generator at bus $i$. Bus 1 is taken as the swing bus for representation only; any one of the generator buses can serve as the swing bus. $V_{G_{i}}$ is the voltage magnitude at the $i$-th voltage-controlled generator bus, $T_j$ is the tap setting of the $j$-th tap-changing transformer, and $Q_{C_k}$ is the shunt compensation at the $k$-th bus. $NG$, $NC$ and $NT$ are the numbers of generators, shunt VAR compensators and regulating transformers, respectively. A control variable may assume any value within its range. In practice, transformer taps are not continuous. However, the tap settings are specified in p.u. and absolute voltage values are not represented. Consequently, for the purpose of this study, and for comparison with previously reported results, all control variables, including the tap settings, are treated as continuous in the general cases of study. 2.2 State variables The power system state variables can be expressed through the vector x as: $x=\left[P_{G_{1}}, V_{L_{1}} \ldots V_{L_{N L}}, Q_{G_{1}} \ldots Q_{G_{N G}}, S_{l_{1}} \ldots S_{l_{n l}}\right]$ (4) where $P_{G_{1}}$ is the active power of the generator at the slack bus, $Q_{G_{i}}$ is the reactive power of the generator connected to bus i, $V_{L_p}$ is the bus voltage of the $p$-th load bus (PQ bus) and $S_{l_q}$ is the line loading of the $q$-th line. $NL$ and $nl$ are the numbers of load buses and transmission lines, respectively [39, 40]. 2.3 Power system constraints As mentioned earlier, the OPF problem involves both equality and inequality operational constraints. These constraints are defined as follows: 2.3.1 Equality constraints In OPF, the real and reactive power balance equations represent the equality constraints of the system and are formulated for all system buses as: $P_{G_{i}}-P_{D_{i}}-V_{i} \sum_{j=1}^{N B} V_{j}\left[G_{i j} \cos \left(\delta_{i j}\right)+B_{i j} \sin \left(\delta_{i j}\right)\right]=0$ (5) $Q_{G_{i}}-Q_{D_{i}}-V_{i} \sum_{j=1}^{N B} V_{j}\left[G_{i j} \sin \left(\delta_{i j}\right)-B_{i j} \cos \left(\delta_{i j}\right)\right]=0$ (6) where $\delta_{i j}=\delta_{i}-\delta_{j}$ is the voltage angle difference between bus i and bus j, NB is the number of buses, and $Q_{D i}$ and $P_{D i}$ are the reactive and real load demands. $G_{i j}$ is the transfer conductance and $B_{i j}$ the transfer susceptance between bus i and bus j, respectively. 2.3.2 Inequality constraints The inequality constraints of the OPF reflect the operating limits of the equipment in the power system, as well as the limits on the lines and load buses that ensure the security of the system.
a) Generator constraints: $V_{G_{i}}^{\min } \leq V_{G_{i}} \leq V_{G_{i}}^{\max } \forall i \in N G$ (7) $P_{G_{i}}^{\min } \leq P_{G_{i}} \leq P_{G_{i}}^{\max } \forall i \in N G$ (8) $Q_{G_{i}}^{\min } \leq Q_{G_{i}} \leq Q_{G_{i}}^{\max } \forall i \in N G$ (9) b) Transformer constraints: $T_{j}^{\min } \leq T_{j} \leq T_{j}^{\max } \forall j \in N T$ (10) c) Shunt compensator constraints: $Q_{C_{k}}^{\min } \leq Q_{C_{k}} \leq Q_{C_{k}}^{\max } \forall k \in N C$ (11) d) Security constraints: $V_{L_{p}}^{\min } \leq V_{L_{p}} \leq V_{L_{p}}^{\max } \forall p \in N L$ (12) $S_{l_{q}} \leq S_{l_{q}}^{\max } \forall q \in n l$ (13) The control variables among the inequality constraints are self-limiting: the optimization technique chooses a feasible value for each such variable within its specified range. Efficient methods are used for handling the inequality constraints related to the dependent (state) variables. 3. Suggested Hybrid Technique 3.1 KH technique The KH technique is built on imitating the natural behavior of krill individuals in a krill swarm. It is motivated by three krill activities [26]: (1) motion induced by other krill individuals; (2) foraging activity; (3) random diffusion. The optimization technique is thereby able to search an unknown search space. The Lagrangian model is extended to an n-dimensional decision space: $\frac{d X_{k}}{d t}=N_{k}+F_{k}+D_{k}$ (14) where $N_{k}$ is the motion induced by the other krill members, $F_{k}$ is the foraging motion and $D_{k}$ is the physical diffusion of the $k_{th}$ krill. The induced motion expresses the maintenance of a high herd density by every individual. The mathematical formula reflecting this behavior is worded as follows: $N_{k}^{\text {next}}=N^{\max } \alpha_{k}+\omega_{d} N_{k}^{\text {present}}$ (15) $\alpha_{k}=\alpha_{k}^{l o c a l}+\alpha_{k}^{t \text { arget }}$ (16) where $N^{\max}$ is the maximum induced speed, $\omega_{d}$ is the inertia weight in [0, 1], $N_{k}^{\text{present}}$ is the induced motion from the previous step, and $a_{k}^{\text{local}}$ and $a_{k}^{\text{target}}$ denote the local effect of the neighbors and the effect of the best solution on the $k_{th}$ individual, respectively. $a_{k}^{\text{target}}$ is formulated by the following equations: $\alpha_{k}^{t \arg e t}=C^{b e s t} \hat{K}_{k, b e s t} \hat{X}_{k, b e s t}$ (17) $C^{\text {best}}=2\left(r_{1}+\frac{I}{I_{\max }}\right)$ (18) where $C^{\text{best}}$ is the effective coefficient of the krill individual with the best fitness relative to the $k_{th}$ krill, $\hat{K}_{k,\text{worst}}$ and $\hat{K}_{k,\text{best}}$ are the normalized worst and best fitness values of the $k_{th}$ krill so far, $r_1$ is a random number between 0 and 1 used to improve exploration, $I$ is the current iteration number, and $I_{\max }$ is the maximum number of iterations. The foraging activity/motion is calculated mathematically as follows. The foraging action involves two main components: the first is the location of the food, and the second is the previous experience $\beta_{k}$ about the food location.
$F_{k}^{\text {next}}=V_{f} \beta_{k}+\omega_{f} F_{k}^{\text {previous}}$ (19) $\beta_{k}=\beta_{k}^{f o o d}+\beta_{k}^{\text {best}}$ (20) where $V_{f}$ is the foraging speed, $\omega_{f}$ is the inertia weight of the foraging motion in [0, 1], $F_{k}^{\text {previous}}$ is the previous foraging motion, $\beta_{k}^{f o o d}$ is the food attraction and $\beta_{k}^{\text {best}}$ is the effect of the best fitness of the $k$-th krill so far. Based on measured values, the foraging speed is taken as 0.02 ($m s^{-1}$). The physical diffusion is a random process, expressed in terms of a maximum diffusion speed and a random direction vector: $D_{k}=D^{\max } \delta$ (21) which, with the speed decreasing linearly over the iterations, becomes: $D_{k}=D^{\max }\left(1-\frac{I}{I_{\max }}\right) \delta$ (22) where $D^{\max }$ is the maximum diffusion speed and $\delta$ is a random direction vector with entries in [0, 1]. Lastly, the position of each krill is updated as: $X_{k}^{\text {next }}=X_{k}^{\text {current }}+\Delta x_{k}(t)$ (23) $\Delta x_{k}(t)=N_{k}(t)+F_{k}(t)+D_{k}(t)$ (24) where $\Delta x_{k}(t)$ is the displacement of the $k$-th krill over the time interval t. 3.2 Cuckoo search CS, a swarm-intelligence technique for optimization problems, was proposed by modeling the brood-parasitic behavior of some cuckoo species. In CS, Lévy flights are incorporated to determine the cuckoo's walking steps. To simplify the description of CS, Yang and Deb adopted some idealized rules: every cuckoo lays just one egg at a time; the best nests are preserved and not destroyed; the number of available host nests is fixed; and an egg is recognized by the host bird with some probability. In CS, every egg in a nest represents a solution. The CS strategy is to replace comparatively poor solutions with newly created better solutions. In this research, we only consider the case where every nest holds a single egg; thus no distinction is made between a nest, an egg, and a solution. The CS technique strikes a good balance between a local random walk and a global exploratory random walk, controlled by a switching parameter. The former can be represented as $X_{i}^{t+1}=X_{i}^{t}+\beta_{s} \otimes H\left(p_{a}-\varepsilon\right) \otimes\left(X_{j}^{t}-X_{k}^{t}\right)$ (25) where $X_{j}^{t}$ and $X_{k}^{t}$ are two different solutions chosen at random, $\mathrm{H}(\mathrm{u})$ is the Heaviside function, $\varepsilon$ is a random number drawn from a uniform distribution, and $s$ is the step size. The global random walk is combined with Lévy flights as follows: $X_{i}^{t+1}=X_{i}^{t}+\beta L(s, \lambda), \quad L(s, \lambda)=\frac{\lambda \Gamma(\lambda) \sin \left(\frac{\pi \lambda}{2}\right)}{\pi} \frac{1}{s^{1+\lambda}}, \quad \left(s \gg s_{0}>0\right)$ (26) Here, $\beta>0$ is the step-size scaling factor. 3.3 Proposed hybrid CS-KHA procedure To improve the search capacity of the fundamental KH technique, genetic operators have previously been added to the method [26]. Numerical comparisons with other methods show that KH II (with only the crossover operator added) performed best. Even so, KH can sometimes struggle to find better solutions for several complicated problems. Consequently, in this article, a novel meta-heuristic technique obtained by introducing the KU operator and the KA operator into KH, named CS-KHA, is used to handle the OPF problem. The introduced KU/KA operators are inspired by the canonical CS algorithm. In other words, in this paper the cuckoo behavior used in CS is grafted onto the krill to create a kind of elite krill that can perform the KU/KA operations.
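To make the Lévy-flight machinery of Eq. (26) concrete, the sketch below draws heavy-tailed steps with Mantegna's algorithm, a standard way of implementing Lévy flights in CS codes. It is a minimal illustration, not the authors' implementation: the exponent value and the common scaling of the step by the distance to the best solution are assumptions.

import math
import numpy as np

def levy_step(dim, lam=1.5):
    # Draw one Levy-distributed step vector (Mantegna's algorithm), exponent lam
    sigma_u = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
               / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = np.random.normal(0.0, sigma_u, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / lam)

def levy_update(x, best, beta=0.01):
    # Global random walk in the spirit of Eq. (26); scaling the step by the
    # distance to the best solution follows common CS practice (an assumption here)
    return x + beta * levy_step(x.size) * (x - best)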
The contrast between CSKH and KH is that the KU operator is used as a local search tool to refine the new solution for every krill, instead of the random walks used in basic KH (whereas in KH II, genetic reproduction operators are employed), while the KA operator enhances the exploration ability of the method by randomly replacing some nests, thereby constructing new solutions. By blending CS and KH, CSKH can explore new regions of the search space with the standard KH technique and the KA operator, and exploit the population information with the KU operator. The main steps of the KU and KA operators used in the CSKH method are presented in Algorithms 1 and 2, respectively. Algorithm 1 KU operator 1. Take a krill i and update its solution using Lévy flights, Equation (26). 2. Evaluate its quality $F_{i}$. 3. Select a krill j at random. 4. If ($F_{i}<F_{j}$), replace j with the new solution. 5. Take the new solution as $X_{i+1}$ and update the position of the krill using Equation (23). Algorithm 2 KA operator 1. Begin 2. $K=rand(N P, D)>P_{a}$ 3. $P_{1}=P$; $P_{2}=P$ 4. For $i=1$ to $NP$ (all krill) do 5. $step=rand*(Y_i-Z_i)$ 6. $X_{new}=X_{i}+step \odot K(i,:)$ 7. End for 8. For $i=1$ to $NP$ (all krill) do 9. If $F\left(X_{new}\right) < F\left(X_{i}\right)$ then 10. $X_{i}=X_{new}$; $F\left(X_{i}\right)=F\left(X_{new}\right)$ 11. End if 12. End for Algorithm 3 CSKH algorithm Step 1: Initialization. Set $t=1$; initialize the population $P$ and the parameters $V_{f}$, $D^{\max}$, $N^{\max}$, $P_{a}$ and KEEP. Step 2: Fitness evaluation. Step 3: While $t<$ MaxGeneration do. Sort the population. Store the KEEP best krill. for $i=1: N_{p}$ (all krill) do Perform the three motions. Update the krill position by the KU operator (see Algorithm 1). Evaluate each krill at $X_{i+1}$. end for $i$ Discard the worse krill and build new ones by the KA operator (see Algorithm 2). Replace the KEEP worst krill with the KEEP best krill. t=t+1. Step 4: end while Firstly, in the proposed method, the standard KHA uses its three motions to search for the best solutions and employs these motions to guide the candidate solutions of the next generation. The KU operator is then employed to carry out an intensive local search to reach better solutions; it can do so because it exploits the search space through Lévy flights. Towards the end of each generation, the KA operator is employed to further improve the exploration of CS-KHA by replacing a fraction (pa) of the worse krill. In this way, the mechanisms used in CS-KHA both extend the strong exploration of the KHA and overcome its weak exploitation. Above all, this technique effectively relaxes the tension between exploration and exploitation. Another basic change is the introduction of an elitism scheme into CSKH. As with other population-based methodologies, we employ a focused elitism technique to retain the best solutions in the population. This elitism scheme prevents the best krill from being destroyed by the three motions and the KU/KA operators. Joining the aforementioned KU/KA operators and the focused elitism scheme to the original KH technique forms the new CSKH algorithm (see Algorithm 3). 4. Objective Functions and Studied Cases Several case studies with single and multiple objectives have been carried out for the IEEE 30-bus test system. The essential characteristics of this test system are given in [27].
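Before turning to the case studies, a compact skeleton of Algorithm 3 may help fix ideas. It is a sketch under stated assumptions (minimization, real-coded positions, bound handling by clipping), not the authors' code: it reuses levy_step/levy_update from the earlier sketch, and kh_motion is a deliberately simplified stand-in for the three krill motions of Eqs. (14)-(24).

def kh_motion(pop, fit, i):
    # Stand-in for the three krill motions of Eqs. (14)-(24): a small random
    # drift toward the current best krill plus a diffusion term (simplified)
    best = pop[np.argmin(fit)]
    drift = 0.01 * np.random.rand() * (best - pop[i])
    diffusion = 0.005 * (2.0 * np.random.rand(pop.shape[1]) - 1.0)
    return pop[i] + drift + diffusion

def cskh(fitness, lb, ub, n_pop=40, n_iter=200, pa=0.25, keep=2):
    # Skeleton of Algorithm 3: KH motions, KU operator (Algorithm 1) with
    # greedy selection, KA operator (Algorithm 2), then elitism
    dim = lb.size
    pop = lb + np.random.rand(n_pop, dim) * (ub - lb)
    fit = np.apply_along_axis(fitness, 1, pop)
    for _ in range(n_iter):
        order = np.argsort(fit)
        elite, elite_fit = pop[order[:keep]].copy(), fit[order[:keep]].copy()
        for i in range(n_pop):
            pop[i] = np.clip(kh_motion(pop, fit, i), lb, ub)   # three motions
            fit[i] = fitness(pop[i])
            cand = np.clip(levy_update(pop[i], pop[order[0]]), lb, ub)  # KU
            f_cand = fitness(cand)
            if f_cand < fit[i]:                                # greedy selection
                pop[i], fit[i] = cand, f_cand
        # KA operator: rebuild a fraction pa of the worse krill from random pairs
        mask = np.random.rand(n_pop, dim) > pa
        step = np.random.rand() * (pop[np.random.permutation(n_pop)]
                                   - pop[np.random.permutation(n_pop)])
        trial = np.clip(pop + step * mask, lb, ub)
        t_fit = np.apply_along_axis(fitness, 1, trial)
        better = t_fit < fit
        pop[better], fit[better] = trial[better], t_fit[better]
        worst = np.argsort(fit)[-keep:]                        # elitism
        pop[worst], fit[worst] = elite, elite_fit
    best = np.argmin(fit)
    return pop[best], fit[best]

# usage sketch: minimize a 5-D sphere function
# x_best, f_best = cskh(lambda x: float(np.sum(x**2)), np.full(5, -10.0), np.full(5, 10.0))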
4.1 IEEE 30-bus system results: studied cases A total of 8 case studies were implemented on the first test system (the IEEE 30-bus test system). The first two case studies minimize single-objective OPF functions. The rest are multi-objective optimizations, translated into a single objective with weighting factors, as in numerous past studies, and reproduced here. The studied cases are defined as follows: Case 1: minimization of fuel cost This is the fundamental objective function of the OPF in all studies. The relationship between fuel cost ($/h) and generated power (MW) is generally given by a quadratic relationship, so the objective function to be minimized is: $f(x, u)=\sum_{i=1}^{N G} a_{i}+b_{i} P_{G_{i}}+c_{i} P_{G_{i}}^{2}$ (27) where $a_{i}, b_{i}, c_{i}$ are the cost coefficients of the $i$-th generator producing power $P_{G_i}$. The cost coefficients of the IEEE 30-bus system generators can be found in [39]. Case 2: minimization of fuel cost taking into account the valve-point effect The valve-point effect should be taken into account for a more practical and accurate modeling of the fuel cost function. Generating units with multi-valve steam turbines exhibit a larger variation in their fuel-cost functions [32]. The valve-loading effect of multi-valve steam turbines is modeled as a sinusoidal function whose absolute value is added to the basic cost function; the actual cost curve of the steam plant then becomes non-continuous. The objective of minimizing the generating fuel cost with the valve-point effect is presented by [40]: $f(x, u)=\sum_{i=1}^{N G}\left(a_{i}+b_{i} P_{G_{i}}+c_{i} P_{G_{i}}^{2}\right)+\left|d_{i} \times \sin \left(e_{i} \times\left(P_{G_{i}}^{\min }-P_{G_{i}}\right)\right)\right|$ (28) where $d_{i}$ and $e_{i}$ are the coefficients of the valve-point loading effect. The coefficients used in the calculations are given in [37]. Case 3: minimization of fuel cost and enhancement of voltage stability Voltage stability issues are receiving growing attention in power systems, as network collapses have been experienced in the past due to voltage instability. Under normal conditions, and after being subjected to a disturbance, the stability of a power system is characterized by its ability to keep all bus voltages within acceptable boundaries. A system enters a state of voltage instability when a disturbance, an increase in load demand, or a change in system conditions causes a progressive and uncontrollable drop in voltage [14]. Systems with long transmission lines and heavy loading are more prone to voltage instability. Enhancing voltage stability is therefore a vital aspect of power system operation. The L-index of each bus serves as a good indicator of power system stability [42]. Its value lies between 0 and 1, where 0 corresponds to the no-load case and 1 to voltage collapse. If a power system has NL load (PQ) buses and NG generator (PV) buses, the L-index $L_j$ of bus j can be expressed as: $L_{j}=\left|1-\sum_{i=1}^{N G} F_{j i} \frac{V_{i}}{V_{j}}\right|$ $j=1,2, \dots, N L$ (29) $F_{j i}=-\left[Y_{L L}\right]^{-1}\left[Y_{L G}\right]$ where the sub-matrices $Y_{L L}$ and $Y_{L G}$ are obtained from the YBUS system matrix after separating the load (PQ) buses and the generator (PV) buses, as shown in Eq. (30).
$\left[\begin{array}{l} I_{L} \\ I_{G} \end{array}\right]=\left[\begin{array}{ll} Y_{L L} & Y_{L G} \\ Y_{G L} & Y_{G G} \end{array}\right]\left[\begin{array}{l} V_{L} \\ V_{G} \end{array}\right]$ (30) $L_{\max }=\max \left(L_{j}\right), \quad j=1,2, \ldots, N L$ (31) The indicator $L_{\max}$ varies between 0 and 1; the smaller the indicator, the more stable the system. Thus, voltage stability can be enhanced by reducing $L_{\max}$, and the objective function can be formulated as: $f(x, u)=\left(\sum_{i=1}^{N G} a_{i}+b_{i} P_{G_{i}}+c_{i} P_{G_{i}}^{2}\right)+\lambda_{L} \times L_{\max }$ (32) where the value of the weight factor $\lambda_{L}$ associated with $L_{\max}$ is chosen as 100. Case 4: minimization of fuel cost and emission Generating electrical power from traditional energy sources releases gases that are dangerous for the environment. The amount of nitrogen oxides (NOx) and sulfur oxides (SOx) emitted, in tonnes per hour (t/h), increases with the generated power (in p.u. MW) following the relationship presented in Eq. (33): $\text { Emission }=\sum_{i=1}^{N G}\left[\left(\alpha_{i}+\beta_{i} P_{G_{i}}+\gamma_{i} P_{G_{i}}^{2}\right) \times 0.001+\omega_{i} e^{\left(\mu_{i} P_{G_{i}}\right)}\right]$ (33) where $\alpha_{i}, \beta_{i}, \gamma_{i}, \omega_{i}$ and $\mu_{i}$ are the emission coefficients provided in [41]. Therefore, the objective function of this case is given by: $f(x, u)=\left(\sum_{i=1}^{N G} a_{i}+b_{i} P_{G_{i}}+c_{i} P_{G_{i}}^{2}\right)+\lambda_{E} \times \text {Emission }$ (34) with the weight factor chosen as $\lambda_E = 100$ in this case. Case 5: minimization of fuel cost and voltage deviation Voltage deviation is a measure of the voltage quality in the network. The deviation index is also important from the security standpoint. The indicator is expressed as the cumulative deviation of the voltages of all load buses in the network from the nominal unity value. Mathematically it is formulated as: $V D=\left(\sum_{p=1}^{N L}\left|V_{L_{p}}-1\right|\right)$ (35) The combined objective function of fuel cost and voltage deviation is: $f(x, u)=\left(\sum_{i=1}^{N G} a_{i}+b_{i} P_{G_{i}}+c_{i} P_{G_{i}}^{2}\right)+\lambda_{V D} \times V D$ (36) where the weight factor is given a value of 100 as in [32, 33]. Case 6: minimization of fuel cost and active power loss Power loss in the transmission system is unavoidable because the lines have inherent resistance. The active power loss to be minimized is formulated as: $P_{\text {loss}}=\sum_{i=1}^{n l} \sum_{j=1, j \neq i}^{n l} G_{i j}\left[V_{i}^{2}+V_{j}^{2}-2 V_{i} V_{j} \cos \left(\delta_{i j}\right)\right]$ (37) The multi-objective case that aims at minimizing fuel cost and active power loss simultaneously is transformed into the single objective: $f(x, u)=\sum_{i=1}^{N G} a_{i}+b_{i} P_{G_{i}}+c_{i} P_{G_{i}}^{2}+\lambda_{p} \times P_{l o s s}$ (38) where $P_{l o s s}$ is the active power loss and the value of the factor $\lambda_{p}$ is chosen as 40. Case 7: minimization of fuel cost, with valve-point effect, and enhancement of voltage stability Combining the fuel cost with the valve-point effect and the voltage stability index, the objective function of this case can be expressed as: $f(x, u)=\sum_{i=1}^{N G}\left(a_{i}+b_{i} P_{G_i}+c_{i} P_{G_i}^{2}\right)+\left|d_{i} \times \sin \left(e_{i} \times\left(P_{G_i}^{\min }-P_{G_i}\right)\right)\right|+\lambda_{L} \times L_{\max }$ (39) The weight factor $\lambda_L$ is again chosen as 100.
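Looking back at the voltage-stability index used in Cases 3 and 7, the L-index of Eqs. (29)-(31) reduces to a few lines of linear algebra once the partitioned admittance matrix of Eq. (30) is available. A minimal sketch, assuming the complex bus voltages and the $Y_{LL}$, $Y_{LG}$ blocks are given:

import numpy as np

def l_max(Y_LL, Y_LG, V_L, V_G):
    # L-index, Eqs. (29)-(31): F = -inv(Y_LL) @ Y_LG, L_j = |1 - (F V_G)_j / V_Lj|
    F = -np.linalg.solve(Y_LL, Y_LG)   # NL x NG complex matrix, Eq. (29)
    L = np.abs(1.0 - (F @ V_G) / V_L)  # one L_j per load bus
    return L.max()                     # L_max, Eq. (31)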
Case 8: minimization of fuel cost, emission, voltage deviation and losses Four objectives are combined in this case study: fuel cost, emission, voltage deviation and network active power loss are all minimized together. The objective function is: $f(x, u)=\left(\sum_{i=1}^{N G} a_{i}+b_{i} P_{G_i}+c_{i} P_{G_i}^{2}\right)+\lambda_{E} \times \text {Emission }+\lambda_{V D} \times V D+\lambda_{p} \times P_{\text {loss}}$ (40) The weight factors are chosen as in [33], with $\lambda_{E}=19$, $\lambda_{V D}=21$ and $\lambda_{p}=22$, to balance the objectives; a compact sketch of this weighted-sum construction is given after the results below. For Case 1, the minimization of the basic fuel cost, the CS-KHA algorithm produces a fuel cost of 799.0595 $\$/h$. The results, shown in Table 1, satisfy all the system constraints, complying with the critical inequality constraints on generator reactive power, load bus voltage and line capacity. Among all the inequality constraints, the load bus voltage constraint was found to be critical, as the operating voltages of the load buses are sometimes close to their boundaries. The results obtained with the three methods (CS, KHA and CS-KHA) are compared with those reported in recent studies in Table 2. The valve-point effect is studied in Case 2, leading to a higher cost than in Case 1, with a final value of 830.0981 $\$/h$ obtained by CS-KHA. In short, despite the variation in efficiency observed among the three methods, one or more of the techniques used in our work outperform most of the results reported in the previous literature on the OPF problem, as shown in Table 2. Table 1. The control variables' optimal settings for Cases 1-3 (CS-KHA): PG1 (MW), PG11 (MW), V1 (p.u), V11 (p.u), Qc10 (Mvar), fuel cost ($/h), emission (ton/h), Ploss (MW). [table values not preserved in this extraction] Table 2. Comparison of the results obtained for Cases 1-3 (fuel cost, $/h) against BHBO [20], BSA [37], Gbest-ABC [16], ARCBBO [12], ICBO [32], MSA [33], CBO [32], ECBO [32], BBO [37], DE [37] and MDE [33]. [table values not preserved in this extraction] Cases 3 to 8 concern the multi-objective OPF for the 30-bus system. In these case studies, the fitness of the combined objective function is the main factor in ranking the outcomes of the different optimization techniques. For a meaningful comparison, the fitness values of the other techniques are computed and provided here using the same objective functions and weight factors. In the multi-objective cases, with the chosen weight factors (e.g. the elevated weight factor on fuel cost in Case 3), CS-KHA gives the best values of both the fuel cost and the $Lmax$ of the system load buses, 799.5625 and 0.1251 respectively, superior to the other comparable algorithms, as shown in Table 2. The two objectives of cost and emission are minimized concurrently in Case 4. Along with the fitness value, CS-KHA attains the lowest values of cost and emission compared with the other techniques presented in Table 4. The minimization of cost and voltage deviation (VD) in Case 5 is achieved by CS-KHA with the lowest values among all comparable techniques, as shown in Table 4. Case 6 minimizes cost and power loss. A quick review of Table 3 shows that one or more of the techniques CS, KHA and CS-KHA give the best fitness values in all the cases. Although the best fitness in Case 6 is delivered by CS-KHA, an intermediate value of the fuel cost, one of the constituent objectives, is obtained, while the other goal, the active power loss, is the lowest compared with the other methods, as shown in Table 4.
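Since all the multi-objective cases above are scalarized in the same way, a small helper makes the construction explicit. This is a sketch of the weighted-sum formulas of Eqs. (27), (28) and (40); the coefficient arrays are placeholders, not the paper's IEEE 30-bus data.

import numpy as np

def fuel_cost(pg, a, b, c, d=None, e=None, pg_min=None):
    # Quadratic fuel cost, Eq. (27); adds the valve-point term of Eq. (28) if d, e given
    cost = np.sum(a + b * pg + c * pg ** 2)
    if d is not None:
        cost += np.sum(np.abs(d * np.sin(e * (pg_min - pg))))
    return cost

def case8_objective(pg, coeffs, emission, vd, p_loss,
                    lam_e=19.0, lam_vd=21.0, lam_p=22.0):
    # Weighted single objective of Case 8, Eq. (40), with the paper's weights
    return fuel_cost(pg, *coeffs) + lam_e * emission + lam_vd * vd + lam_p * p_loss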
A significant improvement in fuel cost is seen (with CS-KHA) in Case 7, the multi-objective optimization in which both the cost considering the valve-point effect and $L_{max}$ are minimized; the results of this case are presented in Table 5 and are preferable to those of the other comparable algorithms, as shown in Table 6. The four objectives of cost, real power loss, emission and voltage deviation are minimized concurrently in Case 8; the resulting control variables are given in Table 5. Along with the fitness value, CS-KHA attains the lowest values of cost and loss in contrast with MSA [33] and FPA [33], as shown in Table 6. A graphical comparison of the convergence of the three proposed techniques for Case 1 and Case 2, for the objective functions related to the fuel cost, is shown in Figures 1 and 2 respectively. The convergence speeds do not differ markedly between the techniques. However, fast and remarkable convergence is seen for both KHA and CS-KHA during the first phase of the search process, and KHA converges to the optimal solution more steadily. The convergence of the two-objective cases is given in Figure 3 (3.a and 3.b), Figure 4, and Figure 5 (5.a and 5.b). For clarity, only the convergence of the technique achieving the best fitness value is shown in each graph. Figure 1. Convergence curves of Case 1 Table 3. Optimal settings of the control variables for Cases 4-6: V1 (p.u), V11 (p.u), Qc10 (Mvar), $L_{\max }$, $p_{l o s s}(M W)$. [table values not preserved in this extraction] Table 4. Comparison of the results obtained for Cases 4-6 (emission (t/h), Ploss (MW)) against FPA [33], GA-MPC [41], MOGWO [19], MFO [33], NSGA-II [19]. [table values not preserved in this extraction] Table 5. Optimal settings of the control variables for Case 7 and Case 8, including $p_{\text {loss}}(M W)$. [table values not preserved in this extraction] Table 6. Comparison of the results obtained for Cases 7-8 (VD (p.u)) against MDE [33]. [table values not preserved in this extraction] Figure 3. Convergence curves of Case 3 (bi-objective): (a) fuel cost, (b) $L_{max}$ Figure 5. Convergence curves of the objectives of Case 5: (a) fuel cost, (b) voltage deviation 5. Conclusion In the present study, a new hybrid meta-heuristic CSKH technique has been suggested for solving the OPF problem by merging the merits of the KU/KA operators of the CS technique with the KH technique. The KH is thereby improved, and the CSKH algorithm is evaluated numerically. A detailed formulation of this new variant of the KH algorithm is given, in which the KU operator is adjusted dynamically during the updating process. In the proposed hybrid CSKHA, a greedy selection is used, which often surpasses standard CS and KH. Moreover, to further improve the exploration of CSKH, at the end of each generation the KA operator discards a small number of poor krill and replaces them with new randomly generated krill. The OPF problem has been expressed as a constrained optimization problem in which several objective functions are considered, namely decreasing the fuel cost, enhancing the voltage stability and improving the voltage profile; a non-smooth piecewise quadratic cost objective function has also been considered. The feasibility of the suggested CS-KHA technique for solving OPF problems is confirmed by applying it to standard test power systems. The simulation results prove the effectiveness and robustness of the suggested method in solving the OPF problem in small and large test systems. [1] Shaheen, A.M., El-Sehiemy, R.A., Farrag, S.M. (2016). Solving multi-objective optimal power flow problem via forced initialised differential evolution algorithm. IET Generation, Transmission and Distribution, 10(7): 1634-1647. https://doi.org/10.1049/iet-gtd.2015.0892 [2] Huneault, M., Galiana, F.D. (1991). A survey of the optimal power flow literature.
IEEE Transactions on Power Systems, 6(2): 762-770. https://doi.org/10.1109/59.76723 [3] Chowdhury, B.H., Rahman, S. (1990). A review of recent advances in economic dispatch. IEEE Trans Power Syst, 5(4): 1248-1259. https://doi.org/10.1109/59.99376 [4] Niknam, T., Narimani, M.R., Jabbari, M., Malekpour, A.R. (2011). A modified shuffle frog leaping algorithm for multi-objective optimal power flow. Energy, 36: 6420-6432. https://doi.org/10.1016/j.energy.2011.09.027 [5] Roa-Sepulveda, C.A., Pavez-Lazo, B.J. (2003). A solution to the optimal power flow using simulated annealing. Int J Electr Power Energy Syst, 25: 47-57. https://doi.org/10.1109/PTC.2001.964733 [6] Lai, L.L., Ma, J.T. (1997). Improved genetic algorithms for optimal power flow under both normal and contingent operation states. Electrical Power and Energy Systems, 19: 287-292. https://doi.org/10.1016/S0142-0615(96)00051-8 [7] Paranjothi, S.R., Anburaja, K. (2002). Optimal power flow using refined genetic algorithm. Electr. Power Components Syst, 30: 1055-1063. https://doi.org/10.1080/15325000290085343 [8] Shaheen, A.M., Farrag, S.M., El-Sehiemy, R.A. (2017). MOPF solution methodology. IET Generation, Transmission and Distribution, 11(2): 570-581. https://doi.org/10.1049/iet-gtd.2016.1379 [9] Abido, M. (2002). Optimal power flow using Tabu search algorithm. Electric Power Syst. Res, 30: 469-483. [10] Ghasemi, M., Ghavidel, S., Ghanbarian, M.M., Massrur, H.R., Gharibzadeh, M. (2014). Application of imperialist competitive algorithm with its modified techniques for multi-objective optimal power flow problem: a comparative study. Inf Sci (Ny), 281: 225-247. https://doi.org/10.1016/j.ins.2014.05.040 [11] Abido, M.A. (2002). Optimal power flow using particle swarm optimization. Electrical Power and Energy Systems, 24. https://doi.org/10.1016/s0142-0615(01)00067-9 [12] Kumar, A.R., Premalatha, L. (2015). Optimal power flow for a deregulated power system using adaptive real coded biogeography-based optimization. Electrical Power and Energy Systems, 73: 393-399. https://doi.org/10.1016/j.ijepes.2015.05.011 [13] Roy, P.K., Ghoshal, S.P., Thakur, S.S. (2010). Biogeography based optimization for multi-constraint optimal power flow with emission and non-smooth cost function. Expert Syst. Appl, 37: 8221-8228. https://doi.org/10.1016/j.eswa.2010.05.064 [14] Bhattacharya, A., Chattopadhyay, P.K. (2011). Application of biogeography-based optimization to solve different optimal power flow problems. IET Generation, Transmission & Distribution, 5(1): 70-80. https://doi.org/10.1049/iet-gtd.2010.0237 [15] El-Sehiemy, R.A., Shafiq, M.B., Azmy, A.M. (2014). Multi-phase search optimization algorithm for constrained optimal power flow problem. International Journal of Bio-Inspired Computation, 6(4): 275-289. [16] Roy, R., Jadhav, H.T. (2015). Optimal power flow solution of power system incorporating stochastic wind power using Gbest guided artificial bee colony algorithm. Electrical Power and Energy Systems, 64: 562-578. https://doi.org/10.1016/j.ijepes.2014.07.010 [17] Bhowmik, A.R., Chakraborty, A.K. (2015). Solution of optimal power flow using non dominated sorting multi objective opposition based gravitational search algorithm. Int. J. Electr. Power Energy Syst, 64: 1237-1250. https://doi.org/10.1016/j.ijepes.2014.09.015 [18] Ayan, K., Kılıc, U., Baraklı, B. (2015). Chaotic artificial bee colony algorithm based solution of security and transient stability constrained optimal power flow. Int. J. Electr. Power Energy Syst, 64: 136-147.
https://doi.org/10.1016/j.ijepes.2014.07.018 [19] Dilip, L., Bhesdadiya, R., Jangir, P. (2018). Optimal power flow problem solution using multi-objective grey wolf optimizer algorithm. Springer Nature Singapore, Pte. Ltd. https://doi.org/10.1007/978-981-10-5523-2_18 [20] Bouchekara, H.R.E.H. (2014). Optimal power flow using black-hole-based optimization approach. Appl Soft Comput J, 24: 879-888. https://doi.org/10.1016/j.asoc.2014.08.056 [21] Bouchekara, H.R.E.H., Abido, M.A., Boucherma, M. (2014). Optimal power flow using teaching-learning-based optimization technique. Electr Power Syst Res, 114: 49-59. https://doi.org/10.1016/j.epsr.2014.03.032 [22] Attia, A.F., El Sehiemy, R.A., Hasanien, H.M. (2018). Optimal power flow solution in power systems using a novel Sine-Cosine algorithm. International Journal of Electrical Power and Energy Systems, 99: 331-343. https://doi.org/10.1016/j.ijepes.2018.01.024 [23] Daryani, N., Hagh, M.T., Teimourzadeh, S. (2016). Adaptive group search optimization algorithm for multi-objective optimal power flow problem. Appl Soft Comput, 38: 1012-1024. https://doi.org/10.1016/j.asoc.2015.10.057 [24] Khelifi, A., Chettih, S., Bentouati, B. (2018). A new hybrid algorithm of particle swarm optimizer with grey wolves' optimizer for solving optimal power flow problem. Leonardo Electronic Journal of Practices and Technologies, 249-270. [25] AlRashidi, M., El-Hawary, M. (2009). Applications of computational intelligence techniques for solving the revived optimal power flow problem. Electr. Power Syst. Res, 79: 694-702. https://doi.org/10.1016/j.epsr.2008.10.004 [26] Gandomi, A.H., Alavi, A.H. (2012). Krill herd: a new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simulat, 17: 4831-4845. https://doi.org/10.1016/j.cnsns.2012.05.010 [27] Wang, G.G., Guo, L., Gandomi, A.H., Hao, G.S., Wang, H. (2014). Chaotic krill herd algorithm. Inf Sci, 274: 17-34. https://doi.org/10.1016/j.ins.2014.02.123 [28] Bentouati, B., Chettih, S., El-Sehiemy, R. (2018). A chaotic firefly algorithm framework for non-convex economic dispatch problem. Electrotehnica, Electronica, Automatica (EEA), 66(1): 172-179. [29] Yang, X.S. (2012). Flower pollination algorithm for global optimization. In: Unconventional Computation and Natural Computation, Vol. 7445, Springer Lecture Notes in Computer Science, Berlin, Heidelberg, 240-249. [30] Ghasemia, M., Taghizadeh, M., Ghavidel, S., Abbasian, A. (2016). Colonial competitive differential evolution: an experimental study for optimal economic load dispatch. Appl Soft Comput, 40: 342-363. https://doi.org/10.1016/j.asoc.2015.11.033 [31] Mandal, B., Roy, P.K. (2014). Multi-objective optimal power flow using quasi-oppositional teaching-learning-based optimization. Appl. Soft Comput, 21: 590-606. https://doi.org/10.1016/j.asoc.2014.04.010 [32] Bouchekara, H.R.E.H., Chaib, A.E., Abido, M.A., El-Sehiemy, R.A. (2016). Optimal power flow using an Improved Colliding Bodies Optimization algorithm. Applied Soft Computing, 42: 119-131. https://doi.org/10.1016/j.asoc.2016.01.041 [33] Mohamed, A.A.A., Mohamed, Y.S., El-Gaafary, A.A., Hemeida, A.M. (2017). Optimal power flow using moth swarm algorithm. Electric Power Systems Research, 142: 190-206. https://doi.org/10.1016/j.epsr.2016.09.025 [34] Mirjalili, S. (2015). Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowledge Based Syst, 89: 228-249. https://doi.org/10.1016/j.knosys.2015.07.006 [35] Bentouati, B., Chettih, S., El Sehiemy, R., Wang, G.G. (2017).
Elephant herding optimization for solving non-convex optimal power flow problem. Journal of Electrical and Electronics Engineering, 10(1): 31-36. [36] Bentouati, B., Chettih, S., Djekidel, R., El-Sehiemy, R.A. (2017). An efficient chaotic cuckoo search framework for solving non-convex optimal power flow problem. International Journal of Engineering Research in Africa, 33: 84-99. [37] Chaib, A.E., Bouchekara, H.R.E.H., Mehasni, R., Abido, M.A. (2016). Optimal power flow with emission and non-smooth cost functions using backtracking search optimization algorithm. International Journal of Electrical Power & Energy Systems, 81: 64-77. [38] Wang, G., Gandomi, A.H., Alavi, A.H. (2014). An effective krill herd algorithm with migration operator in biogeography-based optimization. Appl Math Model, 38(9-10): 2454-2462. https://doi.org/10.1016/j.apm.2013.10.052 [39] El-Hosseini, M.A., El-Sehiemy, R.A., Haikal, A.Y. (2014). Multi-objective optimization algorithm for secure economical/emission dispatch problems. Journal of Engineering and Applied Science, 61(1): 83-103. [40] Biswas, P.P., Suganthan, P.N., Amaratunga, G.A. (2017). Optimal power flow solutions incorporating stochastic wind and solar power. Energy Conversion and Management, 148: 1194-1207. https://doi.org/10.1016/j.enconman.2017.06.071 [41] Bouchekara, H.R.E.H., Chaib, A.E., Abido, M.A. (2016). Optimal power flow using GA with a new multi-parent crossover considering prohibited zones, valve-point effect, multi-fuels and emission. Electr Eng. https://doi.org/10.1007/s00202-016-0488-9 [42] Kessel, P., Glavitsch, H. (1986). Estimating the voltage stability of a power system. IEEE Transactions on Power Delivery, 1(3): 346-354. https://doi.org/10.1109/TPWRD.1986.4308013
Set in present-day New York City: an unknown spacecraft of alien origin has expelled millions of micro black holes, each with the mass of a grape, into the Earth's atmosphere. I'd like to know what happens if these millions of micro black holes were to fall on building structures such as skyscrapers. Would it trigger an extinction-level event? apocalypse weapon-mass-destruction black-holes extinction Given the aliens could easily send waves of asteroids to destroy Earth's surface completely with practically trivial effort (at their tech level), mucking around with micro black holes (or any black holes) seems quite daft. – StephenG Apr 5 at 13:33 @user6760 what do you want to happen or expect to happen? I presume you chose black holes for a reason. – Snyder005 Apr 5 at 19:26 @StephenG - an entire can of micro black holes fits in the storage cupboard in the corner of the spacecraft's kitchen (which has a stasis field to keep food fresh, and keep black holes from evaporating); going out and dragging waves of asteroids is a lot more work than just opening a can of micro black holes and sprinkling them out a hatch. – Johnny Apr 5 at 23:10 @Johnny As explained in answers, in an instant of time after you open the "can" (remove the magic stasis field) so short you could not measure it, all the micro black holes evaporate (with a huge out-pouring of radiation like a nuke). Dragging asteroids is what we in engineering call "safer", at least for the aliens - but still kills the pesky humans. :-) – StephenG Apr 6 at 1:18 Just one point - whatever effect you're thinking about (and don't forget, there is no effect - small black holes just evaporate), it would happen to the air before happening to anything else the black hole is moving towards. – Fattie Apr 6 at 16:09 "Would it trigger an extinction level event?" Since they'd evaporate more or less instantaneously (a process known as Hawking radiation), releasing energy according to the famous equation beginning E=, the spaceship would last a few microseconds at best; Earth would be fine. Yes, the aliens in the ship would become extinct. answered Apr 5 at 13:25 by Confounded by beige fish. According to this calculator (eguruchela.com/physics/calculator/…), they would last 1.6581375e-29 seconds. There is also the fact that their radius would be so small that they wouldn't even interact with atoms most of the time. – Tyler S. Loeper Apr 5 at 13:28 @TylerS.Loeper Gosh, we don't have an SI multiplier to express that; atto is feeling left out and lonely. – Confounded by beige fish. Apr 5 at 13:31 How about they just pick the right size of black holes so that they evaporate with a boom after they reach Earth? That shouldn't be too hard. They'd only be grape-sized for a tiny while, but that's fine. – John Dvorak Apr 5 at 14:17 @JohnDvorak a 1,000 metric ton black hole will have a lifespan of 84 seconds. It may be enough to reach the surface, but it's going to release an energy amount equivalent to teratons of TNT. – Alexander Apr 5 at 20:30 @reirab As long as you're outside the event horizon, a black hole's effects are more or less the same as a non-black-hole object of the same mass (Hawking radiation notwithstanding).
So, if you can work out how to manipulate the black hole without touching the event horizon, it's just like having a thousand-tonne marble on board. – anaximander Apr 6 at 7:16 Black holes evaporate by emitting Hawking radiation; a black hole with a lifetime of 1 second has a mass of $2.28 \cdot 10^5 \ \mathrm{kg}$. A grape has far less mass than that, so such a black hole would evaporate far faster. An intelligent life form dropping micro black holes on Earth would thus quickly annihilate its own bombing squad in a shower of gamma rays, proving that they were not as intelligent as we thought. So, basically, aliens playing with tech they don't quite fully understand yet, resulting in unintended consequences. Basically an alien version of the early Cold War period. – reirab Apr 5 at 21:51 @reirab, I mean, we managed not to blow ourselves up during the Cold War, so... score one for humanity I guess? – Gryphon Apr 6 at 1:29 @Gryphon let's be real though: we came damned close. – Sora Tamashii Apr 10 at 6:32 @SoraTamashii yeah, but we managed not to commit self-genocide at the last second every time we almost did. So it's still a point for us. – Gryphon Apr 10 at 12:34 @Gryphon A-hecking-men. – Sora Tamashii Apr 12 at 0:30 The electromagnetic force of one electron on another and the gravitational force of this micro black hole both follow an inverse square law. A grape about 1.5 cm in radius would have a mass of about 0.015 kg. When does the gravitational force of the black hole exceed the electromagnetic force between electrons? It's when: $$\frac r R < \sqrt{\frac {4\pi \epsilon_0 G m_e m_h}{e^2}} = 6.3\times 10^{-8}$$ meaning the black hole would have to pass at less than one ten-millionth of the distance between electrons to have a significant influence on one. Away from that range the electron will happily go about its business, hardly disturbed at all. Even if a black hole passes this close, the effect is only temporary. You're still nowhere near the event horizon of that black hole, so the electron will, at worst, be pulled away from its normal motion, and after some brief period, when the black hole moves away, it will simply recombine in some way with the bulk of atoms around it. It might cause a minute amount of damage on a molecular level (even allowing for millions of these micro black holes), but the net effect would be tiny, probably less than someone hitting a wall with their hand. "How about they expel them at a fraction of c so we take length contraction into question?" You seem to mean that, to avoid Hawking evaporation destroying these black holes before they even reach the target, they could be ejected at a high fraction of the speed of light. So how high a speed is needed to keep them from evaporating before they travel 100 meters, assuming your aliens like low-level flying? The fraction of the speed of light needed is: $$\frac v c > \frac 1 { \sqrt{ 1 + \left( \frac {Tc} L \right)^2 } }$$ where $L$ is the distance they must travel and $T$ is the lifetime of the micro black hole before it evaporates. This works out at $\frac v c \approx 1 - 2\times 10^{-19}$. That's insanely close to the speed of light. A million grapes of mass 0.015 kg will have a mass of 15,000 kg. But the energy required to get them moving at this insane fraction of the speed of light would be enormous. It equates to a mass about $2\times 10^9$ times 15,000 kg.
Or to put it another way, the ship firing these micro black holes would need to have a mass-energy of about $3\times 10^{13}$ kg. The asteroid Vesta is substantially larger than this. So this is actually a small mass in terms of asteroids, and you could probably destroy Earth a lot more easily simply by grabbing some handy largish asteroids and sending them on their merry way towards Earth at some modest speed that's easily imparted with your spaceship. There is no need at all to mess around with ultra-relativistic micro black holes when the universe provides you with much simpler and easier-to-handle "ammunition" in the form of basic asteroids. answered by StephenG I was wondering about that speed, thanks for working it out. – Kevin Apr 5 at 20:30 Ultra-relativistic black holes would be considerably less effective than throwing Vesta at the Earth: as you note, they'll mostly just pass through Earth without doing anything. – Mark Apr 5 at 22:23 Interestingly, an actual grape going at that speed would be far more devastating than a grape-massed black hole. I don't have the time to do the math at the moment, but I'd guess it'd be enough to overcome the gravitational binding energy of the Earth. – Gryphon Apr 6 at 1:34 👍 for addressing my comment too – user6760 Apr 6 at 2:15 All the answers so far take as read the veracity of Hawking radiation. If we assume, for a moment, that this is false and that some undiscovered process prevents black hole evaporation (perhaps there is a layer of physics underlying quantum mechanics in the same way that QM underlies classical physics...), then what happens? The black holes would fall to Earth like grapes (I assume you've removed their orbital velocity so that they don't just stay in orbit). They would accelerate like any falling body but, because of their tiny size, would not experience any air resistance. So they would arrive at the surface going at a fair old clip. If dropped from orbit, say about 400 km up, this would be about 3 km/s. At the surface, what would happen? Nothing much, I'd guess... They're still so small that "solid" matter is practically a vacuum to them, so they go straight through, down past the crust, mantle and core, and then up the other side, out through Western Australia, back up to about 400 km, where they stop - and then tumble back down again. Eventually, they'd settle into a highly elliptical orbit around the centre of the Earth. The Coriolis force would make it look like the stream was scanning round the Earth every 24 hours. Occasionally, one of them would strike a nucleus head-on and capture some of its quarks, so it would grow slightly. This process would have some positive feedback (the bigger the event horizon gets, the more chance of interaction) but I'm not sure what the time constant would look like. Anyone fancy doing the calculation? I'm guessing it might be aeons before it eats the Earth... answered Apr 6 at 9:57 by Oscar Bravo With $10^{51}$ neutron-sized particles in the $10^{21}$ m3 of Earth ($10^{30}$ per cubic meter), each about $10^{-15}$ m across (a volume of about $10^{-45}$ m3 each, or $10^{-15}$ m3 per cubic meter of earth), we should have a chance per meter of about $10^{-15}$ of hitting one of those neutron-sized particles with one black hole.
That's not a lot, but it's per meter and for only one black hole, so a million black holes orbiting inside the Earth (at about $10^5$ m/s) should accrue some hits. – bukwyrm Apr 9 at 16:51 That works out to a 1/10,000 chance of a hit per second with the above values, and a one-in-three chance every few hours. So not a lot after all... – bukwyrm Apr 9 at 17:01 If we take "size of a grape" to mean a gram, then a million of them would be 1 metric ton. When they evaporate, they release energy several orders of magnitude greater than a nuclear bomb. So New York would be devastated, but the Earth as a whole would not see large effects, although the event might produce radioactive elements that would increase cancer rates. answered by Acccumulation You forget that those micro black holes won't make it to New York. They'll evaporate the alien ship instead. – cmaster Apr 6 at 9:22 @cmaster The question says "an unknown spacecraft of alien origin expelled millions of micro blackholes each with the mass of a grape in the earth atmosphere." So the black holes making it to the atmosphere is stated as being part of the hypothetical. The equivalent of nuclear bombs exploding in the atmosphere is going to cause problems. – Acccumulation Apr 6 at 16:55 Yes, but that's already the part that simply doesn't work out. The aliens won't manage to get such tiny black holes into the atmosphere. It's not like you can create them directly where you need them out of nothing. If you want a black hole that can travel from the alien ship into the atmosphere before exploding, it needs to be many orders of magnitude bigger, and then it'll explode with such tremendous force that you won't need a second black hole to destroy the city. – cmaster Apr 6 at 18:03 @cmaster - They could get them into the atmosphere if the containment system used to transport the microholes is designed to be ejected from the ship and wait until it enters the atmosphere (or descends to some lower altitude) before releasing them. Of course, at that point, you've basically turned it into a bomb anyhow, and there are probably easier ways to make a non-black-hole-based bomb which would be just as effective. – Dave Sherohman Apr 8 at 8:15 @DaveSherohman You cannot contain black holes. Black holes cannot have a meaningful electrical charge, so you can only move them by gravity, i.e. by moving masses within their vicinity. And you most certainly cannot contain black holes of grape mass; they'll simply blow any containment away. A BH with a mass of 1000 tons evaporates on the timescale of a second. Lifetime grows with the cube of its mass, so the last 100 tons take only a millisecond to evaporate. 100 tons worth of energy within $1ms$! Even if those numbers don't blow your mind away, they will blow away any containment. – cmaster Apr 8 at 19:10
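The evaporation times quoted in this thread all follow from the standard Hawking lifetime formula $t = 5120\pi G^2 M^3 / (\hbar c^4)$. A quick sketch (the masses are figures used by posters above; the formula assumes emission into photons only, so real lifetimes would be somewhat shorter):

import math

G, HBAR, C = 6.674e-11, 1.055e-34, 2.998e8   # SI values

def hawking_lifetime(mass_kg):
    # Evaporation time of a Schwarzschild black hole, t = 5120*pi*G^2*M^3/(hbar*c^4)
    return 5120 * math.pi * G ** 2 * mass_kg ** 3 / (HBAR * C ** 4)

def rest_energy(mass_kg):
    # E = m c^2, in joules
    return mass_kg * C ** 2

for m in (0.005, 2.28e5, 1.0e6):   # a 5 g grape, the "one second" mass, 1000 t
    print(f"M = {m:9.3g} kg: t = {hawking_lifetime(m):8.3g} s, "
          f"E = {rest_energy(m):8.3g} J")

Running this reproduces the ~1 s lifetime for $2.28\cdot10^5$ kg and the ~84 s figure for 1,000 metric tons quoted in the comments.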
The explosion into Hawking radiation can be considered effectively as very much like the instant detonation of an antimatter or nuclear explosive which ends up with the conversion of that same total mass to energy. By $$E = mc^2$$ that is about $4.5 \times 10^{20}\ \mathrm{J}$ released, or 450 EJ. For comparison, the TSAR (largest nuclear device ever exploded) was about 0.21 EJ, so this is roughly 2000 times more explosive. While not as large as the Chicxulub asteroid strike (about 400 000 EJ), this is still a considerable bang, more than enough that were it to occur on or near Earth's surface (i.e. the ship is "hovering" just above the skyscraper in question), it would lead to the complete annihilation of not only all of New York City (so way worse than just "a skyscraper"), but probably also the whole surrounding states, if not the entire Northwestern US due to the blast and thermal waves. So insofar as the "skyscraper" is concerned, the answer for it is: complete, instantaneous vaporization into a high-energy ball of plasma, similar to that from a very, very, very, very big nuclear explosive, along with a large amount of the surrounding city and probably also the ground it is sitting on. The expansion of this huge ball of plasma generates a very large blast wave that they lays waste to the surrounding countryside, out probably to a radius larger than New York State itself, plus formation of a large crater similar to that from an asteroid impact and resultant release of ejecta. The other plausible alternative is if we imagine the craft is in orbit. For Low Earth orbit, we are talking about a height of 400 km above the surface. From geometry, we can then figure the amount of energy deposited at a point by using the inverse-square law, since the source will be approximately pointlike at this distance: $$I(r) = \frac{I_0}{4\pi r^2}$$ where $I_0$ is the initial intensity, $I(r)$ that at distance $r$. Taking $r = 400\ \mathrm{km} = 400 000\ \mathrm{m}$, we get that the point on Earth's surface directly below the craft gets struck with about 223 MJ of energy per square meter, delivered effectively instantaneously. The farthest point that will experience irradiation of energy by the explosion is that for which it is just on the horizon, something we can calculate by considering when the line from the exploding craft to the point on the ground is at a right angle (so the tangent) to the line from the Earth's center to the same ground point. Geometrically, this forms a right triangle with the right angle that at the observation point, the hypotenuse is the line from the Earth's center to the craft (thus equal to $R_E + 400\ \mathrm{km}$) and the adjacent side is the line from the Earth's center to the observation point itself (thus equal to $R_E$). The length of the opposite side is then $\sqrt{(R_E + 400\ \mathrm{km})^2 - R_E^2}$ which with $R_E = 6371\ \mathrm{km}$ gives the straight-line distance as ~2300 km. To get the precise ground distance we need to take into account the curvature of the Earth's surface, and we can do that by taking the angle at the Earth's centre: since we have the hypotenuse and adjacent, we get $\cos(\theta) = \frac{\mathrm{adj}}{\mathrm{hyp}} = \frac{R_E}{R_E + 400\ \mathrm{km}}$ which gives $\theta \approx 345\ \mathrm{mrad}$ and multiplying it by $R_E$ to get the circular arc length ($s = r\theta$), which gives the ground distance as still being pretty close: ~2200 km. 
Thanks to the cosine law of the angle of incidence, of course, radiation at this farthest point will be effectively zero, so we can estimate that a radius of roughly 1000 km will be subjected to radiation levels exceeding 100 MJ/m², delivered virtually instantly, chiefly as hard X/gamma rays. This will, for the most part, be absorbed in the atmosphere, but may cause interesting shock heating and chemical effects that I imagine cannot be good for anyone who happens to be underneath them. At the very least, you get a huge cloud of oxides of nitrogen ($\mathrm{N_2O}$, $\mathrm{NO_2}$) produced immediately, like smog - poison. I'm less sure how to calculate how much, and moreover what the effects will be once it is dispersed globally, but I can't imagine they'd be too good, either. This effect can be considered similar to that of a nearby gamma-ray burst (GRB) impinging on the planet, cited as a possible cause of the Ordovician mass extinction (though I might have also heard something more recently that this has been tracked to a different cause) - though here affecting only a considerably smaller area, where a GRB would have affected an entire hemisphere. Nonetheless, it might give you some clue that this is probably not going to be too good a day (week, month, year) for anyone. It may not kill everybody, but it's also not going to be anywhere close to as innocent and harmless as so many other answers and comments here seem to be painting it to be. And this also, I should note, varies with just how much "millions" actually is. This was for one million. If it's 100 million, then we are getting to around 10% of Chicxulub, and talking a lot worse. – The_Sympathizer
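As a sanity check on the evaporation timescales argued over in the comment thread above, here is a minimal sketch using the leading-order Hawking lifetime $t \approx 5120\pi G^2 M^3/(\hbar c^4)$. Bear in mind that this textbook formula counts photon-like emission only; a hole this hot radiates many particle species, so realistic lifetimes are shorter, plausibly by an order of magnitude or more:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34   # reduced Planck constant, J s
C = 2.998e8        # speed of light, m/s

def hawking_lifetime(mass_kg: float) -> float:
    """Leading-order evaporation time of a Schwarzschild black hole."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

for m in (0.005, 100e3, 1000e3):            # a grape, 100 t, 1000 t
    print(f"M = {m:12.3f} kg -> t ~ {hawking_lifetime(m):.2e} s")
# The grape-mass hole is gone in ~1e-23 s: it cannot travel anywhere
# before releasing its ~4.5e14 J (roughly a 100 kt warhead).
```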
SECM101: An Introduction to Scanning Electrochemical Microscopy

Last updated: December 3, 2021

What is Scanning Electrochemical Microscopy?

Scanning Electrochemical Microscopy (SECM) is a scanning probe technique which measures the local electrochemical activity of a sample in solution. The most common form of SECM, feedback mode, measures the Faradaic current of a redox mediator which interacts with the sample; this gives the technique an inherent chemical selectivity. SECM was introduced in 1989 by A. J. Bard [1], based on earlier work demonstrating that electroactive species diffusing from a biased macroelectrode could be measured by a microelectrode held within its diffusion layer [2]. Building on this work, it was shown that when in close proximity to the sample the signal measured by a probe biased to interact with a redox mediator in solution was affected even without biasing the sample, and was also affected by close proximity to an insulating sample. Furthermore, the signal measured by SECM depends on the gap between the probe and the sample. Scanning Electrochemical Microscopy exploits this phenomenon to map both the electrochemical activity and topography of a sample. Since its introduction and subsequent commercialization, SECM has grown to become the most popular scanning probe electrochemistry technique. The flexibility afforded by the Direct Current (dc)-SECM form originally introduced has been expanded in recent years by the introduction of Alternating Current (ac)-SECM and a number of constant-distance SECM modes. While this article will only consider the simplest form of SECM, the feedback mode in which only the probe is biased, further information on the other SECM types can be found in our previous Learning Center article dealing with the wide array of Scanning Electrochemical Microscopy types.

How does Scanning Electrochemical Microscopy work?

In Scanning Electrochemical Microscopy an UltraMicroElectrode (UME) probe, biased at a known potential, is held near a sample to measure the Faradaic current due to an electrochemically active species, the redox mediator, diffusing in the gap and being reduced or oxidized at the UME. The Faradaic current measured reflects the electrochemical activity of the sample. In the simplest form of SECM, feedback mode, the sample is left unbiased at Open Circuit Potential (OCP). The UME probe is key to the working principle of the technique. In SECM a diameter of 25 µm or smaller is typically used. When a UME is used, hemispherical diffusion to the electrode occurs, Fig. 1, and the steady-state current measured at the probe in bulk electrolyte is determined by the diffusion of the redox species to the UME probe. The steady-state Faradaic current is therefore given by the equation below: $i_{\mathrm{ss}}=4nFDCr$ where $i_{\mathrm{ss}}$ is the steady-state current, $n$ the number of electrons transferred, $F$ the Faraday constant in C mol⁻¹, $D$ the diffusion coefficient in cm² s⁻¹, $C$ the bulk concentration in mol cm⁻³ and $r$ the radius of the UME in cm. This equation demonstrates that the measured current is directly related to the concentration of the redox species. Fig 1. Hemispherical diffusion to the ultramicroelectrode probe is shown.
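To get a feel for the magnitude of this steady-state current, here is a minimal Python sketch of the equation above. The mediator values are illustrative assumptions (roughly a 1 mM solution of a common mediator such as ferrocyanide, with a 25 µm diameter probe), not figures taken from this article:

```python
# Steady-state current at a disc UME: i_ss = 4 n F D C r
n = 1          # electrons transferred per mediator molecule
F = 96485      # Faraday constant, C/mol
D = 6.7e-6     # diffusion coefficient, cm^2/s (assumed, typical mediator)
C = 1.0e-6     # bulk concentration, mol/cm^3 (= 1 mM, assumed)
r = 12.5e-4    # UME radius, cm (25 um diameter probe)

i_ss = 4 * n * F * D * C * r
print(f"i_ss = {i_ss * 1e9:.2f} nA")  # a few nA, typical of SECM probes
```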
When a biased UME probe is brought into close proximity with an insulating sample, negative feedback occurs: the diffusion of the redox mediator to the probe is blocked, Fig. 2. This results in a lower concentration of redox mediator at the probe, and therefore a lower current compared with that measured in bulk solution. When the SECM probe is instead over a conducting sample, the sample acts as a bipolar electrode, even if it is not biased, recycling the redox mediator in the region under the probe, Fig. 3, and locally reaching the Nernst equilibrium potential. This mediator recycling causes a local increase in the concentration of mediator detected by the probe, in turn causing an increase in the current measured. Figure 2: Negative feedback at an SECM probe over an insulator sample is shown. Figure 3: Positive feedback at an SECM probe over a conductor sample is shown. As the probe-to-insulating-sample gap decreases, the degree to which mediator diffusion is blocked increases. This results in a decrease in the absolute value of the current on decreasing the probe-to-sample distance over an insulator. On the other hand, as the probe-to-conducting-sample gap decreases, the positive feedback from the sample increases, resulting in an increase in the absolute value of the current on decreasing the probe-to-sample distance over a conductor. These responses are compared in Fig. 4. From this comparison it can be seen that two sample properties affect the signal measured in Scanning Electrochemical Microscopy: (1) sample activity and (2) sample topography. These different contrasts can be further demonstrated by considering two different sample types: one a homogeneously insulating sample with topography variations, and the other a completely flat sample with heterogeneous activity. Both sample types will show variations in the SECM probe signal, but for different reasons. When the homogeneously insulating sample with topography variations is measured, the probe-to-sample distance changes throughout the measurement. This results in a current signal whose absolute value is higher over valleys and lower over peaks, due to the negative feedback at an insulator, Fig. 5. For the completely flat sample with heterogeneous activity, fluctuations in the current signal are also seen. In this case the probe-to-sample distance remains the same throughout the measurement; as the probe moves from regions of positive feedback to those with negative feedback, and vice versa, the current changes, with higher absolute values of current over the most active regions, Fig. 6. As a result, for very rough samples it can be beneficial to perform SECM measurements with a controlled probe-to-sample distance. A number of approaches exist to achieve constant-distance SECM, though discussion of these is outside the scope of this article. Figure 4: The differences in probe current with decreasing probe-to-sample distance over a conductor and an insulator are compared. Figure 5: When SECM is used to measure a homogeneously insulating sample, the variations in topography cause variations in the current measured at the probe. Figure 6: When SECM is used to measure a flat sample with varying conductivity (yellow is fully conducting, blue is fully insulating), the variations in conductivity cause variations in the current measured at the probe.
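The positive-feedback branch of Fig. 4 is often described with an analytical fit from the wider SECM literature. A minimal sketch is given below, assuming the classic Kwak-and-Bard-style approximation for a disc probe with RG ≈ 10 over an ideal conductor; the coefficients come from that literature, not from this article, and should be treated as illustrative:

```python
import math

def i_positive_feedback(L: float) -> float:
    """Normalized probe current i_T/i_ss over an ideal conductor.

    L is the tip-to-sample distance normalized by the tip radius.
    Coefficients are the widely quoted fit for RG ~ 10 (Kwak & Bard).
    """
    return 0.68 + 0.78377 / L + 0.3315 * math.exp(-1.0672 / L)

# Current rises steeply as the gap closes, and tends towards ~1 in bulk:
for L in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"L = {L:4.1f} -> i_T/i_ss = {i_positive_feedback(L):.2f}")
```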
Basic Scanning Electrochemical Microscopy measurement types

There are two basic experiment types in Scanning Electrochemical Microscopy: the SECM approach curve, and the line scan, which can be used to build up an area scan of the sample. The effect of probe-to-sample distance on the measured current is key to the probe approach curve experiment, in which the probe response is measured as a function of its position in z, i.e. the probe-to-sample distance. The approach curve serves a number of functions in SECM. Most often it is used for positioning the probe relative to the sample for a line or area scan measurement. It can also be used as a standalone experiment type. For example, approach curves have been demonstrated as an effective technique for measuring sample swelling over time [3] and under changing sample bias conditions [4]. They are also used to investigate the kinetics of reactions occurring at samples which are not easily measured by bulk electrochemistry, for example living cells and 2D materials. The line scan experiment is an SECM measurement in which the probe is approached close to the substrate and scanned horizontally in x or y, with the probe signal plotted as a function of position. An area scan is built up through a series of x line scans performed at incremented y positions. This results in the familiar SECM area map correlating probe signal, usually current, with position; an example is given in Fig. 7. The area scan is particularly useful for visualizing the local electrochemical features of a sample, and when performed over time it can be used to follow the evolution of electrochemical processes. Figure 7: SECM area scan of a spider plant leaf measured in 0.1 × 10⁻³ mol L⁻¹. The 1 µm Pt probe was biased at -0.75 V to measure O₂. Though less popular than the imaging experiments, Scanning Electrochemical Microscopy is also used in measurements where the probe is approached into close proximity to the sample surface to investigate the adsorption, desorption and dissolution of species at the sample. These measurements are performed in a stationary position using standard electrochemical measurements, and they depend on the diffusion of species from the sample to the probe.

What are the components of a Scanning Electrochemical Microscope?

A Scanning Electrochemical Microscope is composed of a number of components, which are annotated in Fig. 8. To allow the x,y,z scanning required, all SECMs have an x,y,z scanning stage capable of achieving the sub-micron step sizes required for the highest-resolution measurements. Although feedback mode only requires the probe to be biased, and is therefore a single-potentiostat measurement, SECM is commonly performed in other modes which also require the sample to be biased as a second working electrode. For this reason, SECMs use a bipotentiostat. The sample is held in an electrochemical cell, on a stage, with provisions made for adjusting sample tilt. Finally, SECM requires the use of a UME probe, typically acting as the main working electrode in the measurement. Figure 8: A Scanning Electrochemical Microscope is annotated to show all of the components. The UME used in Scanning Electrochemical Microscopy is composed of an active electrode material surrounded by an insulating sheath, with a carefully controlled diameter. The probe tip is polished flat, resulting in an active region which is a flat disc electrode.
In SECM, the UME should be carefully considered due to its significant influence on the resulting measurement. The resolution of an SECM measurement ultimately depends on the diameter of the active region of the probe. Typically, the probe will be selected to be on a similar scale to, or smaller than, the features of interest, although for active features which are well spaced the probe can detect features which are much smaller. The diameter also dictates how close the probe should be to the sample surface throughout a measurement. The rule of thumb is that the probe should be within 1-2 probe diameters for the best measurements; as a result, probes with smaller active diameters require a smaller probe-to-sample distance. Also important is the ratio of the radius of the insulating sheath to that of the active material. This is known as the RG ratio and is given by the equation below: $$RG=\frac{R}{r}$$ where $R$ is the radius of the insulating sheath and $r$ is the radius of the active area of the electrode. The RG ratio affects the sharpness of the resulting SECM image. If the RG ratio is too large, diffusion of the redox mediator to the active region of the probe is blocked at larger probe-to-sample distances over an insulating sample. While it may seem that an RG ratio of 1 would give the best results, this is not the case. When the RG ratio is too small, the diffusion of the redox mediator over an insulator is not effectively blocked. The resulting poor negative-feedback response can lead to a lack of distinction between conducting and insulating regions. Typically, SECM probes have an RG ratio of 10, though this can vary. For example, the smallest probes may be made with a larger RG ratio to improve durability. A final key characteristic of the probe is the active material used. This affects the accessible electrochemical window of the probe, and the electrochemistry which can be performed at it. In the majority of SECM publications, Pt is the probe material of choice [5].

Why use Scanning Electrochemical Microscopy?

Scanning Electrochemical Microscopy has a number of advantages. Using SECM it is possible to detect sample electrochemical activity locally. In bulk electrochemistry the measurement is an average over the entire sample, which means that local features and processes, which may have a large effect on the measurement, cannot be distinguished from other features. By using SECM it is possible to visualize these effects. An example is the use of SECM to measure the electrochemical characteristics of the grains and grain boundaries of novel materials. These features typically have different effects on the overall electrochemical characteristics of a system, which would not otherwise be easily distinguished. Furthermore, SECM is a non-contact technique which can be performed at larger probe-to-sample distances than other local measurement techniques, such as Atomic Force Microscopy (AFM). This is particularly advantageous when measuring fragile samples which can easily be damaged through contact, for example the solid electrolyte interface which forms on battery electrodes, or flakes of 2D materials. SECM also has no conductivity requirement for the sample: it can be used to measure anything from fully insulating to fully conducting samples, and even those with conducting regions isolated in an insulating matrix. This makes SECM highly flexible and applicable to a wide range of application areas.
Furthermore, it means SECM can be used to measure samples which could be difficult to measure by bulk electrochemistry, particularly because electrical contact cannot easily be made. Examples include liquid-liquid interfaces, biological materials, 2D materials, and electrode arrays for which each point can be individually examined. Finally, SECM has an inherent selectivity towards the mediator used in the measurement. This means the interaction of specific chemicals with the sample can be probed. This is of importance, for example, in the study of catalysts, antigen-sensor interactions, the release of metabolites from biological cells, and the intercalation/deintercalation of Li⁺ at battery electrodes.

What is Scanning Electrochemical Microscopy used for?

Figure 9: The application areas of SECM are shown. Scanning Electrochemical Microscopy can be applied to any system measured by bulk electrochemistry for which the local electrochemical characteristics are of interest; some of the fields SECM has been used in are shown in Fig. 9. Because of its wide applicability, SECM has been used in a very wide range of applications, including:
Studying corrosion processes [6]
Analysis of coatings breakdown [7]
Investigation of the formation of the Solid Electrolyte Interface (SEI) on battery electrodes [8]
Screening of fuel cell catalysts [9]
Measurement of the morphology of living cells [10]
Investigation of ion flow through membranes [11]
Studying the electronic properties of 2D materials [12]
These uses are covered in more detail in a series of Learning Center articles focusing on the different research questions which can be answered by scanning probe electrochemistry. The latest article in this series focuses on the use of scanning probe electrochemistry in fuel cell research. BioLogic has a number of resources available dedicated to taking your understanding of Scanning Electrochemical Microscopy further. To learn more about the use of SECM in different application areas, please consult the scanning probe workstations application notes. For information on achieving the best images in SECM, please see our two-part tutorial: How to get clear images in SECM.

Glossary

Faradaic current: The current related to an oxidation or reduction reaction.
Feedback mode Scanning Electrochemical Microscopy: The simplest form of SECM, in which the probe is biased to measure the interaction of a redox mediator in solution with an unbiased sample.
Positive feedback: The increase in redox mediator concentration, and associated increase in probe current, which occurs over a conducting sample due to mediator recycling.
Negative feedback: The decrease in redox mediator concentration, and associated decrease in probe current, which occurs over an insulating sample due to blocked mediator diffusion.
Redox mediator: An electrochemically active compound which transfers electrons in SECM, allowing measurement of Faradaic current.
RG ratio: The ratio of the radius of the insulating sheath to the radius of the active probe region.
Scanning probe electrochemistry: Any of the family of scanning probe techniques which measures the local electrochemistry of a sample. This includes SECM, SKP, LEIS, SDC, and SVET.

References

[1] A. J. Bard, F. R. F. Fan, J. Kwak, O. Lev, Anal. Chem. 61 (1989) 132-138
[2] C. Engstrom, M. Weber, D. J. Wunder, R. Burgess, S. Winquist, Anal. Chem. 58 (1986) 844-848
[3] Elkebir, S. Mallarino, D. Trinh, S. Touzain, Electrochim. Acta 337 (2020) 135766
[4] Fic, A. Płatek, J. Piwek, J. Menzel, A. Ślesiński, P. Bujewska, P. Galek, E. Frąckowiak, Energy Storage Mater. 22 (2019) 1-14
[5] Polcari, P. Dauphin-Ducharme, J. Mauzeroll, Chem. Rev. 116 (2016) 13234-13278
[6] Kong, C. Dong, Z. Zheng, F. Mao, A. Xu, X. Ni, C. Man, J. Yao, K. Xiao, X. Li, Appl. Surf. Sci. 440 (2018) 245-257
[7] Morimoto, A. Fujimoto, H. Katoh, H. Kumazawa, AIAA Scitech 2019 Forum, 7-11 January 2019, San Diego, California
[8] Liu, Q. Yu, S. Liu, K. Qian, S. Wang, W. Sun, X.-Q. Yang, F. Kang, B. Li, J. Phys. Chem. C 123 (2019) 12797-12806
[9] Black, J. Cooper, P. McGinn, Meas. Sci. Technol. 16 (2005) 174
[10] Razzaghi, J. Seguin, A. Amar, S. Griveau, F. Bedioui, Electrochim. Acta 157 (2015) 95-100
[11] J. Jackson, J. M. Sanderson, R. Kataky, Sens. Actuators B Chem. 130 (2008) 630-637
[12] Henrotte, T. Bottein, H. Casademont, K. Jaouen, T. Bourgeteau, S. Campidelli, V. Derycke, B. Jousselme, R. Cornut, ChemPhysChem 18 (2017) 2777-2781
Probability and Statistics

Internal DLA in $\mathbb{Z}^2$ with early points colored red and late points colored blue (taken from Logarithmic functions for internal DLA by Jerison, Levine, and Sheffield).

"Probability is both a fundamental way of viewing the world, and a core mathematical discipline, alongside geometry, algebra, and analysis." Many of the leading probabilists of the twentieth century worked at Cornell, including M. Kac, W. Feller, G. Hunt, F. Spitzer, H. Kesten, K. Ito, and E. Dynkin. Today the research interests of the probability group center around random walks on groups, Dirichlet forms, potential theory, statistical physics and abelian networks. Mathematical statistics concerns the logical arguments underlying the justification of statistical methods and inference. Changes in technology are creating an exponential increase in the amount of data available to science and business, but the size and complexity of modern data sets require new mathematical theory. Cornell's mathematical statistics dates back to 1951 with the arrival of J. Wolfowitz, joined by J. Kiefer a year later. Current interests of the mathematical statistics faculty include the theory of statistical experiments (using the concepts of local asymptotic normality and of Le Cam equivalence), quantum statistics, statistical model selection, dimension reduction, high-dimensional data (high dimension and small sample size) and empirical process theory. Notable graduate students who pursued an academic career in probability and statistics include: Henry Lewis Rietz (1902), the first president of the Institute of Mathematical Statistics; Robert Cameron (1932); Murray Rosenblatt (1949); Daniel Ray (1953); Robert M. Blumenthal (1956); Harry Kesten (1958); Lawrence Brown (1964).

Field Members
Lionel Levine - Probability and combinatorics
Michael Nussbaum - Mathematical statistics
Laurent Saloff-Coste - Analysis, potential theory, probability and stochastic processes
Gennady Samorodnitsky
Philippe Sosoe
Marten Wegkamp - Mathematical statistics, empirical process theory, high dimensional statistics and statistical learning theory

Emeritus and Other Faculty
Roger H. Farrell - Mathematical statistics, measure theory
J.T. Gene Hwang - Statistics, confidence set theory
Harry Kesten - Probability theory, limit theorems, percolation theory

Activities and Resources
Activities are organized in collaboration with the departments of ORIE and Statistics, the graduate fields of statistics and applied mathematics, and the Center for Applied Mathematics.
Probability Seminar
Statistics Seminar
Probability Summer School
Probability in the Department of Mathematics at Cornell: a brief history
A brief history of statistics at Cornell
Department of Statistical Sciences
NSF report of current and emerging research opportunities in probability (source of the quote in the first sentence)
The Cayley-Oguiso automorphism of positive entropy on a K3 surface

Dino Festi 1, Alice Garbagnati 2, Bert van Geemen 2 and Ronald van Luijk 1
1 Mathematisch Instituut, Universiteit Leiden, Niels Bohrweg 1, 2333 Leiden, Netherlands
2 Dipartimento di Matematica, Università di Milano, Via Saldini 50, 20133 Milano, Italy

Journal of Modern Dynamics, January 2013, 7(1): 75-97. doi: 10.3934/jmd.2013.7.75
Received August 2012; Published May 2013

Recently Oguiso showed the existence of K3 surfaces that admit a fixed-point-free automorphism of positive entropy. The K3 surfaces used by Oguiso have a particular rank-two Picard lattice. We show, using results of Beauville, that these surfaces are therefore determinantal quartic surfaces. Long ago, Cayley constructed an automorphism of such determinantal surfaces. We show that Cayley's automorphism coincides with Oguiso's free automorphism. We also exhibit an explicit example of a determinantal quartic whose Picard lattice has exactly rank two and for which we thus have an explicit description of the automorphism.

Keywords: automorphism, K3 surfaces, positive entropy, dynamics. Mathematics Subject Classification: Primary: 14J28, 14J50, 37F10; Secondary: 32H50, 32J15, 32Q1.

Citation: Dino Festi, Alice Garbagnati, Bert van Geemen, Ronald van Luijk. The Cayley-Oguiso automorphism of positive entropy on a K3 surface. Journal of Modern Dynamics, 2013, 7 (1): 75-97. doi: 10.3934/jmd.2013.7.75

References:
M. F. Atiyah and I. G. Macdonald, "Introduction to Commutative Algebra," Addison-Wesley Publishing Co., (1969).
W. P. Barth, K. Hulek, C. A. M. Peters and A. Van de Ven, "Compact Complex Surfaces," Second edition, 4 (2004).
L. Bădescu, "Algebraic Surfaces," Translated from the 1981 Romanian original by Vladimir Maşek and revised by the author, (1981).
A. Beauville, Determinantal hypersurfaces, Michigan Math. J., 48 (2000), 39. doi: 10.1307/mmj/1030132707.
S. Cantat, A. Chambert-Loir and V. Guedj, "Quelques Aspects des Systèmes Dynamiques Polynomiaux," Panoramas et Synthèses, 30 (2010).
A. Cayley, A memoir on quartic surfaces, Proc. London Math. Soc., 3 (1869).
I. Dolgachev, "Classical Algebraic Geometry: A Modern View," Cambridge University Press, (2012). doi: 10.1017/CBO9781139084437.
W. Fulton, "Intersection Theory," Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)], 2 (1984).
D. Festi, A. Garbagnati, B. van Geemen and R. van Luijk, Computations for Sections 4 and 5. Available from: http://www.math.leidenuniv.nl/~rvl/CayleyOguiso.
A. Grothendieck, Éléments de géométrie algébrique. IV. Étude locale des schémas et des morphismes de schémas. II, Inst. Hautes Études Sci. Publ. Math., 24 (1965).
R. Hartshorne, "Algebraic Geometry," Graduate Texts in Mathematics, (1977).
Q. Liu, "Algebraic Geometry and Arithmetic Curves," Translated from the French by Reinie Erné, 6 (2002).
R. van Luijk, An elliptic K3 surface associated to Heron triangles, J. Number Theory, 123 (2007), 92. doi: 10.1016/j.jnt.2006.06.006.
R. van Luijk, K3 surfaces with Picard number one and infinitely many rational points, Algebra and Number Theory, 1 (2007), 1. doi: 10.2140/ant.2007.1.1.
W. Bosma, J. Cannon and C. Playoust, The Magma algebra system. I. The user language, J. Symbolic Comput., 24 (1997), 235. doi: 10.1006/jsco.1996.0125.
J. S. Milne, "Étale Cohomology," Princeton Mathematical Series, 33 (1980).
K. Oguiso, Free automorphisms of positive entropy on smooth Kähler surfaces, to appear in Adv. Stud. Pure Math.
T. G. Room, Self-transformations of determinantal quartic surfaces. I, Proc. London Math. Soc. (2), 51 (1950), 348. doi: 10.1112/plms/s2-51.5.348.
T. G. Room, Self-transformations of determinantal quartic surfaces. II, Proc. London Math. Soc. (2), 51 (1950), 362. doi: 10.1112/plms/s2-51.5.362.
T. G. Room, Self-transformations of determinantal quartic surfaces. III, Proc. London Math. Soc. (2), 51 (1950), 383. doi: 10.1112/plms/s2-51.5.383.
T. G. Room, Self-transformations of determinantal quartic surfaces. IV, Proc. London Math. Soc. (2), 51 (1950), 388. doi: 10.1112/plms/s2-51.5.388.
B. Saint-Donat, Projective models of K-3 surfaces, Amer. J. Math., 96 (1974), 602. doi: 10.2307/2373709.
F. Schur, Über die durch collineare Grundgebilde erzeugten Curven und Flächen, Math. Ann., 18 (1881), 1. doi: 10.1007/BF01443653.
V. Snyder and F. R. Sharpe, Certain quartic surfaces belonging to infinite discontinuous Cremonian groups, Trans. Amer. Math. Soc., 16 (1915), 62. doi: 10.1090/S0002-9947-1915-1501000-2.
J. T. Tate, Algebraic cycles and poles of zeta functions, in "Arithmetical Algebraic Geometry," (1965), 93.
Effect of single intralesional treatment of surgically induced equine superficial digital flexor tendon core lesions with adipose-derived mesenchymal stromal cells: a controlled experimental trial

Florian Geburek (ORCID: orcid.org/0000-0002-3161-9055)1, Florian Roggel1, Hans T. M. van Schie2, Andreas Beineke3, Roberto Estrada2, Kathrin Weber4, Maren Hellige1, Karl Rohn5, Michael Jagodzinski6, Bastian Welke7, Christof Hurschler7, Sabine Conrad8, Thomas Skutella9, Chris van de Lest2, René van Weeren2 & Peter M. Stadler1

Adipose tissue is a promising source of mesenchymal stromal cells (MSCs) for the treatment of tendon disease. The goal of this study was to assess the effect of a single intralesional implantation of adipose tissue-derived mesenchymal stromal cells (AT-MSCs) on artificial lesions in equine superficial digital flexor tendons (SDFTs). During this randomized, controlled, blinded experimental study, either autologous cultured AT-MSCs suspended in autologous inactivated serum (AT-MSC-serum) or autologous inactivated serum alone (serum) was injected intralesionally 2 weeks after surgical creation of centrally located SDFT lesions in both forelimbs of nine horses. Healing was assessed clinically and with ultrasound (standard B-mode and ultrasound tissue characterization) at regular intervals over 24 weeks. After euthanasia of the horses the SDFTs were examined histologically, biochemically and by means of biomechanical testing. AT-MSC implantation did not substantially influence clinical and ultrasonographic parameters. Histology, biochemical and biomechanical characteristics of the repair tissue did not differ significantly between treatment modalities after 24 weeks. Compared with macroscopically normal tendon tissue, the content of the mature collagen crosslink hydroxylysylpyridinoline did not differ after AT-MSC-serum treatment (p = 0.074), while it was significantly lower (p = 0.027) in lesions treated with serum alone. Stress at failure (p = 0.048) and the modulus of elasticity (p = 0.001) were significantly lower after AT-MSC-serum treatment than in normal tendon tissue. The effect of a single intralesional injection of cultured AT-MSCs suspended in autologous inactivated serum was not superior to treatment of surgically created SDFT lesions with autologous inactivated serum alone in a surgical model of tendinopathy over an observation period of 22 weeks. AT-MSC treatment might have a positive influence on collagen crosslinking of remodelling scar tissue. Controlled long-term studies including naturally occurring tendinopathies are necessary to verify the effects of AT-MSCs on tendon disease.

Tendon injuries are common in both human [1,2,3] and equine [4,5,6] athletes. In horses, the superficial digital flexor tendon (SDFT), which is located at the palmar aspect of the limb, acts to store and release elastic energy and is subject to strains close to its functional limits [7, 8]. Due to gradual accumulation of degenerative damage during intensive training leading to partial rupture, this tendon is prone to failure, especially in racehorses [5]. Because of the high incidence, prolonged recovery period and high re-injury rate, a plethora of physical, medical and surgical interventions have been applied over the years to improve the quality of the repair tissue; however, to date the ideal treatment concept has not been found [9,10,11]. Potentially regenerative therapies, in particular cell- and blood-based substrates, have gained interest over the last decade [12].
The administration of multipotent cells, in particular autologous mesenchymal stromal cells (MSCs) [13], or totipotent embryonic stem cells (ESCs) [14, 15] into tendon defects is suggested to have direct and indirect influences on tendon healing. It is thus hypothesized that the injected cells may either differentiate into cells capable of synthesizing tendon matrix—that is, have a direct regenerative effect [16,17,18,19,20]—or act by a paracrine effect through the release of trophic mediators, growth factors and immunomodulatory, angiogenic as well as anti-apoptotic substances [21,22,23,24,25,26]. To date it is not clear which cell source is the ideal choice to enhance tendon regeneration [27]. Comparing different adipose tissue-based cell-rich substrates, adipose-derived nucleated cells (ADNCs) are, by contrast with adipose tissue-derived mesenchymal stromal cells (AT-MSCs), a mixture of different cell types. The advantage is that ADNCs are readily available within hours after tissue harvest without the demand of cost and time for culture. Multipotency has been proven for pericytes [28], which form a subset of the ADNCs. However, AT-MSC culture leads to a higher cell dose and theoretically to a greater effect [29]. To the knowledge of the authors, however, it is not clear which cell type contributes most to tendon healing [28, 30]. In an experimental equine collagenase model study ADNCs had a limited effect on tendon healing but led to histologically improved tendon organization and an increase in cartilage oligomeric matrix protein (COMP) expression [30]. The clinically most relevant sources of MSCs in equine orthopaedics are bone marrow, adipose tissue and umbilical cord blood [27, 31]. The advantages of adipose tissue over bone marrow are that it is widely available and easily accessible, and its MSC content is higher with a higher proliferation capacity of AT-MSCs [32] and a slower senescence than that of bone marrow mesenchymal stromal cells (BM-MSCs) [27, 33, 34]. Despite the lack of a definite set of surface markers to characterize equine tenocytes, a recent study demonstrated that, compared with BM-MSCs, umbilical cord blood MSCs and AT-MSCs express collagen 1A2, collagen 3A1 and decorin at the highest levels with the highest collagen type 1A2:3A1 ratio [32]. AT-MSCs show high expressions of the tendon markers COMP and scleraxis [20, 32]. Furthermore, AT-MSCs have already been used with promising results to treat equine tendinopathies, as described in several uncontrolled case series [35,36,37,38]. In a controlled in-vivo experimental study, the implantation of AT-MSCs into collagenase-induced SDFT core lesions resulted in improved tendon fibre organization and decreased inflammatory infiltrate, as well as increased collagen type I gene expression compared with the control limbs, whereas no differences were seen in clinical parameters and with B-mode ultrasonography [39]. In another study from the same group using the collagenase-gel model of tendinopathy, intralesional treatment with AT-MSCs suspended in platelet concentrate prevented progression of SDFT lesions and resulted in better organization of collagen fibrils, less inflammation and increased vascularity [40]. However, the relevance of this model is questioned due to the strong acute inflammatory response after collagenase injection, which is unlike naturally occurring degenerative tendon lesions, and due to difficulties in standardization of lesions [41,42,43,44]. 
A recently introduced surgical model of centrally located SDFT (core) lesions is thought to mimic the characteristics of overuse tendinopathy more realistically [43, 45,46,47]. Since the 1980s, B-mode ultrasonography has been considered the gold standard for the diagnosis of tendinopathy [48, 49]. Because of its limitations, such as the lack of axial information, operator dependence, influence of ultrasound beam angle and limited resolution, however, the value of quantification of B-mode ultrasound images for the adequate assessment of repair is questioned [50, 51]. Ultrasound tissue characterization (UTC) is a new technique that was developed to analyse echo pattern stability on a computerized basis [52, 53]. Transverse ultrasound (US) images are captured at regular distances over the long axis of the tendon, and are reconstituted to a three-dimensional block of US information with the help of custom-designed software. Depending on the echo pattern stability over contiguous images, four different echo types can be discriminated with histo-morphology as a reference test [44, 52]. UTC has been shown a viable diagnostic tool to monitor experimental tendinopathies [44, 47, 54] and natural tendon disease [55, 56] in horses and is increasingly used for the assessment of (Achilles) tendon integrity in humans [57,58,59]. Histologic examination is still the gold standard to assess tendon healing [60] and commonly yields the most important results at the end of terminal experimental studies when findings can be correlated to those from other examination modalities (e.g. diagnostic imaging) [30, 44, 45]. Another major component of experimental tendon studies should be biomechanical testing of tendon specimens to quantify functional properties of the repair tissue which are potentially representative for the resistance to strain in a clinical setting [45, 61, 62]. To our knowledge, the effect of AT-MSCs has not been tested experimentally in horses over a 6-month period using a controlled blinded, randomized study design with non-invasive monitoring and multimodal end-stage evaluation of the repair tissue. Therefore, the aim of this study was to assess whether a single intralesional implantation of AT-MSCs suspended in autologous inactivated serum into surgically created lesions leads to a reduction of inflammatory signs and improved ultrasonographic, biochemical, biomechanical and histologic characteristics compared with the application of autologous inactivated serum alone. Nine horses (eight warmbloods, one trotter) aged 3–6 years (mean 4 years) with a mean bodyweight of 545 kg (range 498–607 kg) were included in this study. All horses were housed in boxes and fed hay and cereals. Prior to the study, none of the horses showed clinical and/or ultrasonographic (B-mode, UTC) signs of forelimb SDFT disorders. Surgical creation of lesions and adipose tissue harvest Core lesions were surgically created in the SDFTs of both forelimbs using the model introduced by Little and Schramme [63] and modified by Bosch et al. [45] and Schramme et al. [43]. All horses received meloxicam (0.6 mg/kg bwt (IV)) preoperatively and 2 days postoperatively. No perioperative antimicrobials were administered. A standard protocol for inhalation anaesthesia was used and the horses were positioned in lateral recumbency. 
After clipping and aseptic preparation, a 1.5-cm incision was made in the palmar midline, 2 cm proximal to the proximal end of the common digital flexor tendon sheath through the skin and the mesotendon into the tendon core. A 2.5-mm blunt conical obturator (Karl Storz, Tuttlingen, Germany) was inserted and guided proximally inside the tendon core under ultrasonographic guidance over a distance of 7 cm. Subsequently, a 3.5-mm burr (Abrador Burr 28200RN; Karl Storz) was inserted in the tunnel that had been created, activated and gradually pulled backwards over 20 seconds, while pressing the tendon against the tip of the burr. Care was taken not to damage the dorsal epitenon. Epitenon incisions were sutured in a simple interrupted pattern with polyglactin 910 (Vicryl® 2-0 USP; Ethicon, Norderstedt, Germany), and skin incisions were closed in a vertical mattress pattern with polyamide (Dafilon® 2-0 USP; Braun Melsungen, Melsungen, Germany). Double-layer bandages were applied and changed every 1–2 days until intralesional injection. During the same general anaesthesia, the right paraxial gluteal region was clipped and aseptically prepared. A 4-cm skin incision was made in the cranio-caudal direction and 20 g of adipose tissue was excised from the subcutis. The skin was sutured with a vertical mattress pattern with polyamide (Dafilon® 1 USP; Braun Melsungen). Adipose tissue was stored in MSC medium containing 1 g/l Dulbecco's Modified Eagle's Medium (DMEM) with 25 mM 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES) and 1% l-glutamine (DMEM with 25 mM HEPES, with l-glutamine; PAA Laboratories, Pasching, Austria), fetal bovine serum (PAA Laboratories) and 1% penicillin/streptomycin (Penicillin-Streptomycin 100; PAA Laboratories) at 5 °C and shipped to the laboratory overnight. AT-MSC isolation and culture Adipose tissue was mechanically separated using a scalpel blade and a tissue chopper. The tissue was digested with 0.05% type IV collagenase (Sigma Aldrich, Taufkirchen, Germany) and 0.025% protease (Dispase II; Sigma-Aldrich) at 37 °C for 30 min until it was neutralized by 10% fetal bovine serum (PAA Laboratories). After centrifugation (200 × g, 10 min), the pellet was suspended in 10 ml of Hanks' buffer (HBSS 1× with Ca and Mg; PAA Laboratories), centrifuged again with the same settings, and the pellet containing the MSCs was suspended in MSC medium in 75-ml polypropylene flasks (T75 flask; BD Falcon, Franklin Lakes, NJ, USA) and cultured at 37 °C and 5% CO2. The MSCs were selected by plastic adhesion. MSCs were transfected with recombinant lentivirus particles expressing sequences of the promoting region for transcription factor hUbiC (copGFP; System Biosciences, Mountain View, CA, USA) to be able to track the MSCs via autofluorescence, which was part of another study presented elsewhere. After 95–100% confluence of cells was reached, the medium was removed and MSCs were washed once with PBS (Dulbecco's PBS without Ca and Mg; PAA Laboratories). Adherent cells were separated with 3 ml of Trypsin/EDTA solution (Trypsin/EDTA 1×; PAA Laboratories) for 5 min at 37 °C, which was checked by light microscopy. After neutralization with 6 ml of MSC medium, the suspension was transferred into a 15-ml polypropylene tube (Conical tubes 15 ml; BD Falcon), centrifuged (200 × g, 5 min) and the pellet suspended in 10 ml MSC medium. 
Cells were counted using a Neubauer counting chamber (Zählkammer Neubauer; LO Laboroptik, Friedrichsdorf, Germany), and 10 × 10⁶ MSCs were transferred into a new 15-ml polypropylene tube and washed with 10 ml of 1 g/l DMEM with 25 mM HEPES and 1% l-glutamine. After centrifugation, the pellet was suspended in 1 ml of autologous serum. The serum had been inactivated previously at 56 °C for 30 min. The MSC-serum suspension was transferred into a plastic tube (S-Monovette® 9 ml; Sarstedt, Nümbrecht, Germany), stored on ice, shipped overnight and kept at 4 °C until injection. AT-MSC culture and processing was performed as described previously [46]. Immunophenotyping of AT-MSCs by flow cytometric analysis After isolation and expansion (passage 3) the AT-MSCs were incubated with monoclonal and polyclonal antibodies against CD14 (C2265-36; US Biological), CD29 (303015; Biozol), CD34 (555820; BD Pharmingen), CD44 (103005; Bio Legend), CD45 (555480; BD Pharmingen), CD90 (ab225; abcam) and CD117 (ab5616; Biozol) (Fig. 1). AT-MSCs stained only with secondary antibodies (ab150105 (abcam), DAB087583 and DAB087693 (Dianova)) were used as negative controls. Antibody binding was measured using flow cytometric analysis (BD FACSCanto™ II with BD FACSDiva™ 8.0.1 software). Flow cytometric analysis of cultured AT-MSCs from a representative study horse. Histograms indicate the immunophenotype of AT-MSCs for CD14, CD29, CD34, CD44, CD45, CD90 and CD117. Results are displayed for the distribution of immunostained (green) and unstained (red) AT-MSCs. All stained cells were positive for CD29, CD44 and CD90, while the signal for CD14 was weaker. No signal was detected for CD34, CD45 and CD117 (Colour figure online) Adipogenic, osteogenic and chondrogenic differentiation of AT-MSCs Differentiation of AT-MSCs was induced at passage 2 to adipogenic, osteogenic and chondrogenic lineages. For adipogenic differentiation, cells were seeded at a density of 2 × 10⁴ cells/cm² in basal medium. After 24 hours, medium was switched to Adipogenic Induction Medium, consisting of DMEM High Glucose supplemented with 10% FBS, 1% l-glutamine, 1% penicillin/streptomycin, 1 μM dexamethasone, 1 μM indomethacin, 500 μM 3-isobutyl-1-methylxanthine (IBMX) and 10 μg/ml human recombinant insulin. Medium was changed twice a week for 14 days. Lipid production, being specific for adipocytes, was made visible by Oil Red staining (Fig. 2). Adipogenic differentiation of equine AT-MSCs from a representative study horse. Photomicrographs of AT-MSCs (passage 2) taken 28 days after induction of adipogenic differentiation (a). After Oil Red staining a high number of intracellular lipid-containing vesicles was detected compared with the control without differentiation medium (b) (Colour figure online) To induce osteogenic differentiation, AT-MSCs were cultured in 24-well plates with DMEM Low Glucose with 10% FBS (MSC tested; Gibco), 1% l-glutamine (PAA Laboratories), 0.1 μM dexamethasone (Sigma Aldrich) and 1 mM β-glycerophosphate (Sigma Aldrich) for 4 weeks. Controls for osteogenic differentiation were treated under the same conditions without dexamethasone and β-glycerophosphate. The production of mineralized matrix produced by osteoblasts was made evident by alkaline phosphatase, Alizarin red and von Kossa staining (Fig. 3). Osteogenic differentiation of equine AT-MSCs from a representative study horse. Photomicrographs of AT-MSCs (passage 2) taken on day 28 after induction of osteogenic differentiation (A1–A3; C1–C3).
By contrast to controls without differentiation medium (B1–B3; D1–D3), deposition of extracellular calcium was detected by alkaline phosphatase (A1, C1), von Kossa staining (A2, C2) and Alizarin red staining (A3, C3) (Colour figure online) To demonstrate chondrogenic differentiation, the cells were trypsinized and 5 × 10⁵–1 × 10⁶ cells/ml were resuspended in DMEM High Glucose, 1% penicillin/streptomycin, 1% l-glutamine, 10% FBS, with 1× Insulin–Transferrin–Selenium (ITS) supplement, 1 mM sodium pyruvate, 100 nM dexamethasone, 40 μg/ml proline, 50 μg/ml l-ascorbic acid-2-phosphate and 10 ng/ml TGF-β1. Cells were centrifuged for 10 min at 200 × g in 15-ml Falcon tubes. The tubes were incubated with filter tops in a rack at 37 °C and 5% CO2. After 2–4 days the pellets condensed. The cells were further incubated in these tubes for 21 days. The medium was changed every 2–3 days. The production of proteoglycans, being specific for cartilage, was visualized with Toluidine Blue and Safranin-O staining (Fig. 4). Chondrogenic differentiation of equine AT-MSCs from a representative study horse. Photomicrographs of AT-MSCs (passage 2) taken on day 21 after induction of chondrogenic differentiation. The presence of glycosaminoglycans and collagen was detected by Toluidine Blue (a) and Safranin O (b) (Colour figure online) Intralesional treatment of tendons with AT-MSCs Fourteen days after creation of the lesions, horses were sedated with detomidine hydrochloride (0.015 mg/kg bwt (IV)) and butorphanol (0.025 mg/kg bwt (IV)), the hair over the palmar metacarpal region was clipped, the skin was prepared aseptically and the Nn. palmares lateralis and medialis were anaesthetized with 2.5 ml of 2% mepivacaine solution. The core lesion of one randomly assigned SDFT of each horse was injected with 10 × 10⁶ AT-MSCs suspended in 1 ml of inactivated autologous serum, whereas the lesion in the contralateral SDFT was injected with 1 ml of inactivated autologous serum to serve as an intra-individual control. Randomization was carried out by flipping a coin, and the operator was not blinded to the treatment modality. Limbs were positioned manually to ensure equal weight bearing. For the ultrasound-guided intralesional injection, a 22-G needle was inserted from lateral at two sites (3 and 5 cm proximal to the surgical scar in the skin) and 0.5 ml of the inactivated serum containing AT-MSCs (AT-MSC-serum group) or of inactivated serum alone (serum group) was injected intralesionally per site. Care was taken that the injection proceeded without resistance. A bandage was applied for 10 days and changed every second day. Exercise programme All horses were subject to a standardized hand-walking exercise programme as described previously by Bosch et al. [45] (Additional file 1) on firm flat ground, mainly in straight lines. Horses were turned to the left and to the right equally often. Trotting exercise was carried out on a treadmill at 3.1 m/s. Clinical and ultrasonographic examinations A general clinical examination (body temperature, heart rate, respiratory rate, appetite, limb function and comfort level) was performed daily. Preoperatively, prior to intralesional injection at 2 weeks after surgery, and 3, 4, 5, 6, 8, 10, 12, 18, 21 and 24 weeks postoperatively, limbs were assessed clinically, via B-mode ultrasonography and with UTC.
SDFT swelling was determined by palpation as an increase in diameter relative to normal tendon (0 = no increase; 1 = increase by up to factor 1.5; 2 = increase by factor 1.5–2; 3 = increase by more than factor 2) [56, 64, 65], skin temperature over the SDFT was assessed manually (0 = no abnormality; 1 = mild abnormality; 2 = moderate abnormality; 3 = severe abnormality) and surgical skin wounds and injection sites were inspected. Lameness was evaluated at walk during the first 18 weeks post surgery, and additionally at trot from weeks 19 to 24, by an experienced equine clinician blinded to the treated limb (five-grade score) [66]. Prior to ultrasonographic examinations, horses were sedated with romifidine (0.04–0.08 mg/kg bwt (IV)) and butorphanol (0.02 mg/kg bwt (IV)), and the hair on the palmar aspect of the metacarpus was clipped and shaved. The skin was washed with soap and degreased with alcohol, and contact gel for ultrasound examination was applied copiously. B-mode ultrasound examination was carried out with a 6–15 MHz ultrasound probe (GE ML 6-15; GE Healthcare, Wauwatosa, WI, USA) connected to a Logiq E9 (GE Healthcare) using a standoff pad and constant settings (frequency 13 MHz, gain 52, depth 25 mm, single focal zone set at 15 mm depth). The palmar metacarpus was divided into examination zones from proximal to distal in the transverse (1A, 1B, 2A, 2B, 3A, 3B, 3C) and longitudinal plane (1, 2, 3) as described earlier [67, 68]. Images were stored digitally and analysed with a DICOM workstation programme (easyIMAGE®, easyVET®; IFS Informationssysteme, Hannover, Germany). The cross-sectional area (CSA) of the SDFT was determined on all transverse images. Values for each examination zone were added to calculate the total cross-sectional area (TCSA). Echogenicity and fibre alignment were graded semi-quantitatively on longitudinal images of each zone. Echogenicity was assigned a score of 0 (normoechoic), 1 (hypoechoic), 2 (mixed echogenicity) or 3 (anechoic), and fibre alignment was graded according to the estimated percentage of parallel fibre bundles in the lesion: 0 (>75%), 1 (50–74%), 2 (25–49%) and 3 (<25%). Scores for all levels were summed to calculate the total echo score (TES) and the total fibre alignment score (TFAS), respectively [68]. All measurements were performed by two experienced examiners blinded to treatment modalities (FR, MH). Values for TCSA and scores for TES and TFAS from both examiners were averaged (a brief aggregation sketch follows the next paragraph). For the UTC examination, a 10-MHz ultrasound probe (10 L5 Smartprobe; Terason Ultrasound, Teratech Corporation, Burlington, MA, USA) connected to a laptop computer (MacBook Pro® 17 inch; Apple, Cupertino, CA, USA) loaded with software for data acquisition and analysis (UTC™ Software V.1.0.1 2010; UTC Imaging, Stein, the Netherlands) was used. The probe was fixed in a motorized tracking device with built-in standoff pad (UTC-Tracker™; UTC Imaging). Settings (depth, gain, focal zone) were standardized, and all examinations were performed by the same operator (FG) with the horse bearing weight equally on both forelimbs. With the help of the tracking device, the probe moved automatically from proximal to distal at constant speed over a distance of 12 cm. Sampling of transverse images was conducted every 0.2 mm, including the surgical site as the reference point for the analysis. The compiled 600 transverse US images were reconstructed into a three-dimensional data block of US information and stored digitally until the end of the examination period.
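As a purely illustrative rendering of the B-mode scoring scheme described above, the sketch below aggregates per-zone values into TCSA, TES and TFAS and averages the two blinded observers. Zone names follow the text; all numbers are placeholders, not study data:

```python
ZONES_TRANSVERSE = ("1A", "1B", "2A", "2B", "3A", "3B", "3C")  # CSA zones
ZONES_LONGITUDINAL = ("1", "2", "3")                           # scoring zones

def totals(csa, echo, alignment):
    """Sum per-zone values into TCSA (cm^2), TES and TFAS."""
    return sum(csa.values()), sum(echo.values()), sum(alignment.values())

# Hypothetical readings of one limb by the two observers:
obs1 = totals({z: 1.1 for z in ZONES_TRANSVERSE},
              {z: 2 for z in ZONES_LONGITUDINAL},
              {z: 3 for z in ZONES_LONGITUDINAL})
obs2 = totals({z: 1.0 for z in ZONES_TRANSVERSE},
              {z: 2 for z in ZONES_LONGITUDINAL},
              {z: 2 for z in ZONES_LONGITUDINAL})

tcsa, tes, tfas = ((a + b) / 2 for a, b in zip(obs1, obs2))
print(f"TCSA = {tcsa:.2f} cm^2, TES = {tes:.1f}, TFAS = {tfas:.1f}")
```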
The stability of the echo pattern of corresponding pixels in contiguous transverse images was analysed (UTC2011® Analyser V1.0.1; UTC Imaging). The following echo types were discriminated: those generated by intact and fully aligned fascicles (echo type I, green), those generated by discontinuous and less aligned fascicles (echo type II, blue), those generated by a mainly fibrillary matrix with accumulation of collagen fibrils not (yet) organized into fascicles (echo type III, red) and those generated by an amorphous matrix and fluid (echo type IV, black). A 4-cm-long tendon segment from 2 to 6 cm proximal to the scar in the epitenon was selected for analysis by one examiner (FR), being blinded to the treatment modality. Within this segment, every fifth colour-coded transverse image (distance between images 1 mm) was used to place a circular cursor (∅ 5 mm) in or around the central core lesion, depending on its size. These contours were interpolated and ratios of echo types were analysed quantitatively as fractions of the determined volume. Mean values for the proportion of each echo type were calculated for all horses and for all time points. Ten scans obtained at different time points and from different horses were chosen randomly and analysed twice by the same examiner to determine the intra-observer reliability and by three examiners to determine the inter-observer reliability. Euthanasia and tissue harvest Twenty-four weeks after creation of the lesion, all horses were sedated and anaesthesia was induced using a standard protocol. Thereafter, the horses were euthanized with pentobarbital (90 mg/kg bwt (IV)). Each SDFT was excised at the level of the carpus and the fetlock and the scar of the former surgical entrance to the tendon was identified. A transverse segment of 1 cm was harvested from the lateral half of the SDFT between 2 and 3 cm proximal to the entrance portal. The half-core of the lesion could easily be identified in this segment and was excised with a microtome blade and cut into three pieces that were snap frozen and stored at –80 °C for biochemical analysis. Macroscopically normal reference tissue was harvested from an equivalent 1-cm-long segment taken between 12 and 13 cm proximal to the entrance portal; that is, in the proximal metacarpal region at least 5 cm away from the proximal end of the core lesion. The medial halves of the SDFT segments 2–7 cm proximal and 13–18 cm proximal to the epitenon scar (macroscopically normal control tissue) were obtained for biomechanical testing and stored in PBS-soaked gauze at –20 °C until analysis. The lateral half of the segment 4–6 cm proximal to the epitenon scar was collected for histological analysis, fixed in 4% paraformaldehyde for 7 days and stored in PBS at 4 °C until embedding in paraffin. Later, 5-μm longitudinal slices of the core lesion including adjacent tissue were cut starting from the centre of the tendon and stained with haematoxylin and eosin (H&E). Histologic examination The centre of each tendon was independently judged by two observers blinded to horse and treatment (FG, FR). In total, five high-power fields (40× magnification) per section were examined with a light microscope (Leitz Laborlux 12; Leica, Wetzlar, Germany) using an established score [45, 60] (Additional file 2). Score values determined by each observer were calculated for each parameter before score values of both examiners were averaged. 
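For the observer-reliability analyses mentioned above, and the ICC thresholds given under Statistics below, a minimal one-way random-effects ICC(1,1) sketch follows. The study itself used SAS variance-component procedures, so this is an illustration of the quantity rather than the authors' exact computation; the data are simulated:

```python
import numpy as np

def icc_oneway(scores: np.ndarray) -> float:
    """One-way random-effects ICC(1,1) for an n_subjects x k_raters array."""
    n, k = scores.shape
    row_means = scores.mean(axis=1)
    msb = k * ((row_means - scores.mean()) ** 2).sum() / (n - 1)      # between subjects
    msw = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
true = rng.normal(50.0, 10.0, size=10)                    # e.g. ten UTC scans
obs = true[:, None] + rng.normal(0.0, 2.0, size=(10, 3))  # three examiners
icc = icc_oneway(obs)
label = "excellent" if icc > 0.75 else "fair to good" if icc >= 0.4 else "poor"
print(f"ICC = {icc:.2f} ({label})")
```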
Biochemical analysis

Glycosaminoglycans and DNA analysis

After lyophilization of the tendon samples, the dry weight was determined and the samples were digested overnight at 60 °C in 400 μl papain solution (2 mM cysteine, 1 U/ml papain, 50 mM NaH2PO4 and 2 mM EDTA, pH 6.5). A modified 1,9-dimethylmethylene blue (DMMB) dye-binding assay was used to analyse the sulphated GAG concentration [45]. To a 20 μl sample, 10 μl of 3% (w/v) bovine serum albumin and 250 μl of reagent (46 μM DMMB, 40 mmol/l glycine and 42 mmol/l NaCl, adjusted to pH 3.0 with HCl (hydrochloric acid)) were added, and the absorbance at 525 and 570 nm was measured after 30 min. The assay was standardized with shark chondroitin sulfate (1–100 μg/ml). Total DNA was quantified using a fluorescent dye-binding reaction [69]. Briefly, 2 ml Hoechst 33258 (Molecular Probes, Leiden, the Netherlands) fluorescent dye solution (0.1 μg/ml in 10 mM Tris, 1 mM ethylenediaminetetraacetate (EDTA), 0.1 M NaCl, pH 7.4) was added to an 80 μl sample and, immediately after mixing, fluorescence was measured using an LS-50B fluorimeter (Perkin Elmer, Norwalk, CT, USA), with excitation at 352 nm and emission at 455 nm. Salmon sperm DNA (0–20 μg/ml) was used as a standard. All results were expressed as μg/mg dry weight of tendon.

Collagen and crosslink analysis

After lyophilization for 24 hours, tendon samples were hydrolysed (110 °C, 18–20 h) in 600 μl of 6 M HCl for mass spectrometric determination. Hydroxyproline (Hyp) was determined as a measure of total collagen content; the amino acids lysine (Lys) and hydroxylysine (HLys, a measure of the degree of lysine hydroxylation) and the pyridinoline crosslinks hydroxylysylpyridinoline (HP) and lysylpyridinoline (LP) were determined as measures of the post-translational modifications of collagen. To the hydrolysed tendon samples, 200 μl of 2.4 mM homo-arginine was added, after which the samples were vacuum-dried and dissolved in 20% acetonitrile containing 1.2 mM tridecafluoroheptanoic acid. Samples were centrifuged at 13,000 × g for 10 min. Supernatants were subjected to high-performance liquid chromatography (HPLC) and mass spectrometry (MS), using a 4000 Q-TRAP mass spectrometer (MDS Sciex, Foster City, CA, USA) at a source temperature of 300 °C and a spray voltage of 4.5 kV. Amino acids were separated on a Synergi MAX-RP 80A column (250 × 3 mm, 4 μm; Phenomenex Inc., Torrance, CA, USA) at a flow rate of 400 μl/min, using a gradient from MilliQ® water to acetonitrile, both containing 1.2 mM tridecafluoroheptanoic acid and 2.5 mM ammonium acetate. Amino acids were identified by MS in multiple reaction monitoring mode using the mass transitions 429.3/82.0 (HP), 413.3/84.0 (LP), 189.2/143.7 (homo-arginine), 147.2/130.2 (Lys), 163.2/128.1 (HLys) and 131.8/67.8 (Hyp). Data were corrected for the recovery of the internal standard. Collagen content was calculated as follows:

$$ \text{Collagen}\ (\upmu\text{g}) = \frac{\text{Hyp}\ (\text{pmol})}{300} \times 0.3 $$

where 300 is the number of Hyp residues in one collagen triple helix and 0.3 converts pmol of collagen to μg, based on the molecular weight of collagen (300,000 Da).

Biomechanical testing

Tendon specimens were thawed at room temperature for approximately 2 hours before trimming. Two strips of 2 × 2 mm width and 50 mm length were cut from the visible lesion, and a single strip was harvested from a tendon segment with macroscopically normal tissue (further proximal to the lesion site) of each tendon, using a custom-designed cutting device [45, 70].
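For reference, the collagen calculation given above is a two-step unit conversion: pmol of Hyp are first converted to pmol of collagen (300 Hyp residues per triple helix), then to mass (0.3 µg per pmol at a molecular weight of 300,000 Da). A minimal worked sketch; the function name is illustrative, not part of the authors' analysis pipeline.

```python
def collagen_ug_from_hyp(hyp_pmol, hyp_per_helix=300, collagen_mw_da=300_000):
    """Convert a measured hydroxyproline amount (pmol) to collagen mass (ug)."""
    collagen_pmol = hyp_pmol / hyp_per_helix      # 300 Hyp residues per helix
    return collagen_pmol * collagen_mw_da * 1e-6  # pmol x Da = pg; pg -> ug

# Worked example: 150,000 pmol Hyp -> 500 pmol collagen -> 150 ug collagen.
assert abs(collagen_ug_from_hyp(150_000) - 150.0) < 1e-9
```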
Specimens were marked, stored in PBS-soaked gauze at 4 °C and tested within 2 hours. Before testing, the exact CSA was measured at four locations along the length of each specimen using a laser device (Laser Micro Diameter LDM-110; Takikawa Engineering, Japan) and the average CSA was calculated. Strips were fixed in the loading device of a universal materials testing machine (Zwick 1445; Zwick, Germany) in a PBS bath and preconditioned with 3% strain at 1 Hz for seven cycles. Thereafter, specimens were loaded to failure at 0.5 mm/s. For each specimen, force (F) and displacement were recorded. The stress at failure (σmax) was calculated as

$$ \sigma_{\max} = F_{\max} / \mathrm{CSA} \quad (\mathrm{N/mm^2} = \mathrm{MPa}) $$

and the modulus of elasticity was derived from the linear part of the stress–strain curve [45, 61]. Results of the two specimens taken from macroscopically normal tissue (left and right SDFTs) were pooled in one group.

Statistical analysis

Clinical, B-mode ultrasound, UTC, histology and biochemistry data were analysed using SAS® 9.3 (SAS Institute, Cary, NC, USA). The assumption of a normal distribution of quantitative parameters was examined using the Shapiro–Wilk test and visual assessment of distributions. For normally distributed samples, parametric methods were used: the t test for paired observations and analysis of variance for the calculation of variance components (intra-class correlation coefficient (ICC)). Otherwise, non-parametric tests (Kruskal–Wallis test and Wilcoxon two-sample test) were applied. Data from biomechanical testing, which were in part not normally distributed, were tested using the Wilcoxon signed-rank test (R version 3.0.2; The R Foundation for Statistical Computing, Vienna, Austria). The Kruskal–Wallis test was used to compare the CSAs of specimens, stress at failure and modulus of elasticity among the three tested groups. Individual comparisons of the three groups (AT-MSC-serum, serum, macroscopically normal tissue) were performed using the Wilcoxon signed-rank test with Bonferroni correction for multiple testing. A significance level of α = 0.05 was applied. Intra-observer and inter-observer repeatability of UTC measurements and histology scores were calculated using the ICC. A repeatability value >0.75 was considered excellent, 0.75–0.4 fair to good and <0.4 poor [71]. Proc NESTED and Proc VARCOMP were used for the calculation of the coefficient of variation.

Results

Lesion creation, adipose tissue harvest and intralesional treatment

Surgical creation of core lesions was successful in all limbs; the epitenon was not inadvertently damaged in any case. Harvest of adipose tissue and isolation and culture of AT-MSCs were carried out without complications. Immunophenotyping and trilineage differentiation of cultured cells showed typical characteristics of MSCs (Figs. 1, 2, 3 and 4) [72]. Intralesional injections were uneventful in all horses.

No lameness was evident at walk until week 17, or at walk and trot from week 18 to 24. All horses developed an increase in skin temperature over the SDFT during the course of the experiment, without differences between the groups. Mild to moderate swelling of the palmar metacarpal region was palpable in both groups after induction of lesions. This swelling decreased markedly until week 3 and increased markedly again until week 8 after injection in all limbs.
Scores for palpable swelling remained consistently high in the serum group, while they decreased continuously in the AT-MSC-serum group. This difference was weakly significant (p = 0.0497) at 21 weeks after surgery. At the end of the observation period, palpable SDFT swelling was mild in all horses. Two horses developed cellulitis in the region of the surgical entrance, which healed without pharmaceutical intervention.

B-mode ultrasonography

All horses developed bilateral tendon core lesions with an ultrasonographic morphology similar to naturally occurring tendon disease. There were no significant differences in TCSA, TFAS (Fig. 5) or TES between the groups at any point in time. Mean TCSA increased markedly in both groups until 4 weeks and reached its maximum at 8 weeks after lesion induction (AT-MSC-serum: 803 mm²; serum: 793 mm²). TFAS increased similarly and peaked at week 5 after lesion induction (Fig. 5). TCSA and TFAS decreased again until week 10.

Fig. 5 B-mode ultrasonographic parameters. Development of ultrasonographic scores (mean ± SD) of SDFTs treated with AT-MSCs suspended in autologous inactivated serum (solid lines) and autologous inactivated serum alone (dotted lines) over 24 weeks after surgical induction of tendon injuries. a TCSA. b TFAS. AT-MSC adipose-derived mesenchymal stromal cell

Ultrasound tissue characterization

Intra-class correlation for UTC measurements was excellent for intra-observer reliability (0.99) and inter-observer reliability (0.80–0.97, depending on echo type). The development of echo type ratios within SDFT lesions over time is shown in Fig. 6. There was a strong decrease of the structure-related echo types I and II until week 5 after lesion induction (Fig. 6a, b) and a concomitant increase of the non-structure-related echo types III and IV in both groups (Fig. 6c, d), indicative of tissue damage, loss of structural organization and an inflammatory response. Echo type III ratios, indicating fibrillogenesis, peaked between weeks 6 and 8 postoperatively in both groups. Echo type II ratios increased from weeks 6 to 18, indicative of a fibrillar matrix being organized into fascicles that are not yet aligned properly. The significantly higher ratio of echo type II in the AT-MSC-serum group in week 12 post surgery (p = 0.0326) may be indicative of increased remodelling (Fig. 6b). Simultaneously, there was an increase in echo type I ratios without any differences between the groups. From week 12 post surgery onwards, none of the echo type ratios changed markedly until the end of the study.

Fig. 6 Ultrasound tissue characterization. Development of echo type ratios for SDFTs treated with AT-MSCs suspended in autologous inactivated serum (solid lines) and autologous inactivated serum alone (dotted lines) over 24 weeks after surgical induction of tendon injuries. a Echo type I. b Echo type II. c Echo type III. d Echo type IV. *Significant difference between groups. AT-MSC adipose-derived mesenchymal stromal cell

Gross pathologic examination

At post-mortem gross examination, injection sites were still visible as two unilateral swellings of the lateral aspect of the tendon. Scar tissue was visible within the centre of the tendons as the core lesion (Fig. 7). Macroscopically, there were no differences visible between the groups.

Fig. 7 Gross pathologic examination of SDFTs. a Transverse and b longitudinal section of a serum-treated tendon segment 24 weeks after surgical induction of tendinopathy, 2 and 2–5 cm proximal to the surgical entrance into the tendon, respectively.
The centrally located scar tissue is pale to intensively pink, partly reddish, and well demarcated from the surrounding ivory-coloured tendon tissue. The scar is nearly circular on the transverse section and appears as an oblong area on the longitudinal section.

DNA, GAG, collagen and crosslink content

The dry weights of samples from AT-MSC-serum-treated and serum-treated tendon lesions, as well as of samples from macroscopically normal tissue from the same tendons, did not differ significantly (p > 0.05). The contents of GAG, DNA, Hyp, total collagen, HP, LP and HLys did not differ between lesions from the AT-MSC-serum group and the serum group. Tendon lesions treated with AT-MSC-serum and those treated with serum alone both contained more GAG (AT-MSC-serum, p = 0.0078; serum, p = 0.0039) and DNA (AT-MSC-serum, p = 0.0039; serum, p = 0.0039) and less Hyp (AT-MSC-serum, p = 0.0195; serum, p = 0.0078) and total collagen (AT-MSC-serum, p = 0.0195; serum, p = 0.0078) than macroscopically normal tissue from the same tendons (Table 1). LP and HLys contents were the same in tendon lesions treated with AT-MSC-serum and serum alone and in macroscopically normal tissue (p > 0.05). Tendons treated with AT-MSC-serum had the same HP content as tendons treated with serum alone (p = 0.25) and as macroscopically normal tendon tissue (p = 0.0742). By contrast, normal tendon tissue contained more HP than tendon treated with autologous inactivated serum alone (p = 0.0273).

Table 1 Biochemical parameters of SDFTs treated with AT-MSCs suspended in autologous inactivated serum (AT-MSC-serum) and autologous inactivated serum alone (serum) and macroscopically normal tendon tissue (normal) from the same tendons 22 weeks after treatment

Intra-class correlation for histology scores was excellent for inter-observer reliability (0.77–0.97, depending on score/sub-score). The lesions could be identified clearly on all H&E-stained slices. Histology was indicative of an incomplete restoration of structural integrity and a high metabolic activity (Fig. 8). There were no significant differences (p > 0.05) between the AT-MSC-serum and serum-alone groups with respect to total scores, scores for fibre arrangement, scores for metabolic activity or sub-scores for fibre structure, fibre alignment, morphology of tenocyte nuclei, variations in cell density or vascularization. Scores for structural integrity and metabolic activity remained high after the 24-week observation period (Fig. 9).

Fig. 8 Histology of surgically induced SDFT lesions 22 weeks after treatment. a–f Longitudinal specimens of tendon lesions treated with AT-MSCs suspended in autologous inactivated serum (a, c, e) and autologous inactivated serum alone (control; b, d, f) stained with H&E (a, b, scale bar = 200 μm; c, d, e, f, scale bar = 100 μm). Fibril arrangement was mostly unidirectional (a–d), with some cases showing large regions without any regular fibre arrangement (e, f; yellow asterisk) in both groups. Specimens showed variations in cell density (a–d) and regions with high cellularity and vascularization after both treatment modalities (e, f; black asterisks)

Fig. 9 Histomorphological scores (Aström and Rausing [60], modified by Bosch et al. [45]). Mean histomorphological scores (± SD) for surgically induced SDFT lesions treated with AT-MSCs suspended in autologous inactivated serum (AT-MSC) and with autologous inactivated serum alone (control) 22 weeks after treatment.
Total scores (green) include values from sub-scores for fibre structure (struct.), fibre alignment (align.), morphology (morph.) of tenocyte nuclei, variations (variat.) in cell density and vascularization (vascul.). Scores for structural (struct.) integrity and metabolic (metab.) activity summarize the values from the sub-scores displayed in blue and red, respectively. Scores and sub-scores did not differ between treatment modalities (p > 0.05). AT-MSC adipose-derived mesenchymal stromal cell

Cross-sectional areas of tendon strips were not significantly different between AT-MSC-serum-treated lesion tissue, serum-treated lesion tissue and macroscopically normal tendon tissue (p = 0.11). Stress at failure and modulus of elasticity did not differ between AT-MSC-serum-treated tendons and those treated with serum alone (p = 1) (Fig. 10). Compared with macroscopically normal tendon tissue, stress at failure and modulus of elasticity were significantly lower in AT-MSC-serum-treated lesion tissue (p = 0.048 and p = 0.001, respectively) as well as in serum-treated lesion tissue (p = 0.004 and p = 0.002, respectively).

Fig. 10 Biomechanical parameters. Stress at failure and modulus of elasticity (a measure of the tensile stiffness of materials) of surgically induced SDFT lesion tissue treated with AT-MSCs suspended in autologous inactivated serum (AT-MSC + serum) and autologous inactivated serum alone (serum) 24 weeks after creation of lesions and 22 weeks after treatment. Macroscopically normal tendon tissue (normal) was harvested from a proximal segment of the same tendons. AT-MSC adipose-derived mesenchymal stromal cell

Discussion

Standardized central lesions were successfully induced in the SDFTs of horses and followed longitudinally with standard and more sophisticated monitoring techniques for 24 weeks before in-depth histological, biochemical and biomechanical tissue analysis. The current study shows that a single treatment with 10 × 10⁶ AT-MSCs suspended in inactivated autologous serum does not have a lasting effect on signs of inflammation and does not substantially improve ultrasonographic, histologic, biochemical or biomechanical characteristics of surgically created SDFT core lesions over a 24-week period compared with inactivated autologous serum alone. However, the fact that the hydroxylysylpyridinoline (HP) content in the AT-MSC-serum treatment group was closer to the normal situation may potentially indicate improved collagen crosslinking. Hydroxylysine (HLys) is produced during a post-translational enzymatic modification process in tendon and is needed for the formation of the mature crosslinks HP and LP [73, 74]. Newly formed collagen within a lesion is less crosslinked than that in mature tissue, most likely because of enzymatic cleavage after the surgical trauma; the subsequent formation of more stable crosslinks during the remodelling phase results in repair tissue with higher tensile strength and stiffness [75]. The HP crosslink concentration per molecule of collagen in our study was the same in normal tissue as in AT-MSC-treated lesion tissue, while lesion tissue treated with inactivated serum alone contained significantly less HP than macroscopically normal tissue. Despite the lack of significance between the AT-MSC-serum and serum groups, this might be interpreted as a sign of superior crosslinking, that is, a better or more advanced repair after AT-MSC treatment.
The HLys and LP contents of lesion tissue, expressed per molecule of collagen, were the same in tendon lesions treated with AT-MSC-serum and serum alone and in macroscopically normal tissue, showing that, by contrast, these post-translational modifications were not affected by creation of the lesions or by the treatment modalities. An increased GAG content of healing tendon lesions reflects a high tenocyte metabolism and an increased production of extracellular matrix. There is controversy about increased GAG content in tendon repair: on the one hand, degenerated tendon regions including fibrous scar tissue have been shown to contain higher concentrations of sulphated GAGs than normal tendons in humans and horses [76,77,78]. On the other hand, an increased GAG content expressed per DNA has been interpreted as a sign of improved tendon healing in a recent equine study after intralesional platelet-rich plasma (PRP) treatment [45]. Interestingly, the GAG contents of saline-treated control tendons in the latter study and in another study testing a plasma product [79], both using the same surgical model of tendinopathy, were similar to the GAG contents in both groups of the current study. This also corresponds well to findings in naturally injured SDFTs that were saline treated and contained significantly more GAG than corresponding BM-MSC-treated as well as untreated control tendons, which were relatively uninjured [13]. In the current study, GAG contents in tendon lesions were higher than in normal SDFT tissue from more proximal regions of the injured tendons. These control GAG contents in turn were very similar to those of mid-metacarpal SDFT tissue from mature horses in another study [80], suggesting that the proximal metacarpal region is suitable as a control for comparing biochemical parameters of mid-metacarpal SDFTs, and that the SDFTs had not regained a normal GAG content by the end of the observation period in the current study [80, 81], which implies that the end stage of healing had not been reached.

In the current study, lesions treated with AT-MSCs had the same total collagen and Hyp contents as tendons treated with autologous inactivated serum alone. Although replacement of tenocytes with new ones resulting from tenogenic differentiation after engraftment of MSCs, with consecutive production of extracellular matrix (e.g. collagen, GAG), is a potential mechanism of the therapy used [28], the findings do not support the notion that this mechanism of action played a major role in the current study. Collagen and Hyp contents, however, were significantly lower than in macroscopically normal tendon tissue, which contained as much collagen as unaltered tissue in a previous study [82]. The finding that lesion tissue contains less collagen than normal tendon is in accordance with a previous study using the same model [47]. Collagen could have been further differentiated into type I and the more elastic type III in the current study, and MSCs as well as autologous serum have the potential to influence the expression of these collagen types during tendon repair [32, 62, 65]. However, the time frame in which the collagen type I to III ratio reaches its optimum during the remodelling phase has not been clearly defined [41], so it remains unclear whether differentiation of collagen types would have helped to show which treatment modality was more beneficial in the current study.
Collagen content correlated positively with the modulus of elasticity, representing stiffness, and with the stress at failure of the tendon strips in the current study: both parameters were significantly lower in lesion tissue than in macroscopically normal tendon tissue. This is a typical characteristic of scar tissue with low maturity [61] and also shows that tendon healing was still ongoing at the terminal stage of the current experiment. It has to be concluded that AT-MSCs did not significantly influence the biomechanical properties of surgically created SDFT lesions 22 weeks after treatment, in contrast to the effect of PRP in a similar model [45]. Stress at failure, one of the most important indicators of the tensile strength of tendons [61], was significantly lower after AT-MSC-serum treatment and after treatment with serum alone than in macroscopically normal tendon tissue harvested from a proximal segment of the same tendons. Based on the knowledge that intact tendon tissue from the proximal metacarpal region normally has biomechanical properties similar to those of tissue from the mid-metacarpal region [83], this finding suggests that the scar tissue had not regained its original strength 22 weeks after AT-MSC treatment. Findings from the present study show that it is beneficial to include macroscopically normal tissue as a second control group during biomechanical testing, as an intra-individual reference for normal tendon tissue; this was not reported in similar studies previously [45, 79].

Biomechanical properties of the repair tissue are closely related to functionality [45, 61, 84], which becomes clinically manifest in the recurrence rate of natural tendon disease in horses and, to a lesser extent, in persisting lameness [41]. Horses in the present study did not show signs of lameness until the end of the study period, which is in accordance with the observation that, in contrast to human patients with Achilles tendinopathy, chronic tendon pain does not play a major role in horses [23]. However, the sensitivity of clinical examination alone for monitoring lameness is limited, especially in cases of bilateral tendon injury [3]. This could have been improved by using computerized gait analysis, but this was not available during the experiment. The study period and the exercise regimen could have been extended to determine the recurrence rate of tendinopathy, one of the most reliable parameters of long-term functionality. However, this was considered ethically unacceptable in an experimental study. Swelling of the SDFTs, another clinical parameter of inflammation, was lower (with weak significance) in AT-MSC-treated tendons than in control tendons only at a single time point (i.e. at 21 weeks after lesion induction). Because the ultrasonographically determined tendon cross-sectional area (TCSA) did not differ between the groups at any time, the findings during palpation might be attributed to a decrease in subcutaneous swelling rather than in tendon diameter. Swelling was determined by palpation, a diagnostic modality subject to some inaccuracy, so it remains questionable whether AT-MSCs really influenced subcutaneous swelling during the remodelling phase (e.g. by reflux of the cell suspension during treatment).
In the current study, no significant reduction of clinical signs of inflammation and no reduction of fluid and/or cell accumulation on ultrasonography were recorded after injection of AT-MSC-serum during the acute inflammatory phase when compared with the effect of serum alone, which corroborates earlier in-vivo findings [39], although an increased echogenicity of collagenase-induced SDFT lesions was seen after AT-MSC application in a later study from the same group [40]. Another experimental in-vivo study, using the same surgical model as the current study, showed a significant reduction of fluid and cell accumulation by means of UTC early after intralesional treatment with PRP, which was interpreted as a reduction of inflammation [45]. The authors of the current study would have expected similar effects after AT-MSC injection, because these cells are known to exert anti-inflammatory effects [85,86,87]: human AT-MSCs led to a reduced inflammatory response in synovial cells from patients with osteoarthritis by inhibition of pro-inflammatory cytokines such as IL-1β and IL-6 [85], and it has recently been shown in a canine model of tendon transection that collagen sheets seeded with AT-MSCs promote an anti-inflammatory M2-type macrophage phenotype [87].

Immunomodulatory actions of MSCs are thought to be dose and time dependent [29], so a higher MSC dose, implantation at an earlier or later time point after lesion creation, or repeated injections might have led to different results. To the authors' knowledge, no experimental data exist concerning the ideal dose and timing of MSC implantation [28, 88]. On the one hand, it is hypothesized that there is a direct positive dose–effect relation [89]; on the other, experimental data show cytotoxic effects of high MSC doses [90]. In a rabbit experimental study in which BM-MSC–collagen composites were used to repair patellar tendon defects, MSC concentrations of 4 × 10⁶ and 8 × 10⁶ cells/ml did not lead to additional biomechanical and histological improvement compared with 1 × 10⁶ cells/ml [91]. In the current study, the cell dose of 10 × 10⁶ AT-MSCs was based on recommendations from previous clinical and experimental trials [23, 39, 92]. The time of injection might have been too late to induce early anti-inflammatory effects, which were achieved with PRP injections as early as 7 days after lesion creation in the same model of tendinopathy [45]. Although the ideal point of time for MSC treatment is not known [28], it is hypothesized that tendons should be treated after the initial inflammatory phase and before fibrous tissue has formed [88]. Advised intervals between lesion creation and treatment with adipose tissue-derived cells range from 7 to 30 days in experimental studies [30, 39, 40]. When high numbers of pure MSCs are used for treatment, cell culture per se implies a delay before injection. In a large clinical study, horses treated <5 weeks after injury had a lower re-injury rate than horses treated later, but this difference was not significant [93]. In the current study, the authors decided to postpone the MSC injection for another week because the lesions seemed less developed on ultrasonography 7 days after induction than in previous studies using the same model [43, 45, 47], which might be attributed to the use of a different burr during creation of the lesions, to confinement of the horses to box rest instead of exercising them after lesion induction, or to the model itself.
The surgical model ideally creates a compartment similar to that typically seen in degenerative strain-induced tendon disease [43, 47]. Creating these lesions requires general anaesthesia, which is a disadvantage, but this model is better standardized and considered less painful for horses than the induction of core lesions with collagenase, a model which rather reflects the inflammatory component of tendinopathy. The response to the intervention, in terms of expansion of the lesion, also seems to be less pronounced in the surgical model [42, 43]. The inflammatory response in the current study was most prominent relatively late, that is, at 5 weeks after surgery and 3 weeks after cell injection, as shown by UTC (peak in echo type IV), so at this time potential anti-inflammatory effects ascribed to AT-MSCs [85,86,87] might already have been weaker than the endogenous inflammatory response. In other words, the AT-MSCs could possibly not exert their effect adequately in an environment of only mild to moderate inflammation at the time of injection. Interestingly, in another in-vivo study using a collagenase model, AT-MSCs implanted 30 days after lesion creation (i.e. approximately 2 weeks later than in the current study) led to decreased inflammatory cell infiltration, as determined by histology of biopsies at 30 and 120 days after treatment [39]. This finding suggests that later AT-MSC implantation, at the end of a pronounced inflammatory phase, might be more efficient, but the substantial differences between the two experimental models of tendinopathy may also play a role here.

Depending on the mode of action of MSCs implanted into tendon lesions (i.e. paracrine effects versus de-novo synthesis of tendon tissue), a decline of the MSC population over time during the inflammatory and proliferative phases may be an explanation for the lack of substantial differences between the AT-MSC and control groups. Death of the MSCs is a potential reason: in an equine surgical model of tendinopathy, less than 5% of BM-MSCs survived more than 10 days and only 0.02% survived beyond 90 days after implantation [14]. Allogeneic embryonic stem-like cells (ESCs), by contrast, were detectable at a constant level over 90 days in the same study, which shows that the fate of stem cells from different sources may vary after implantation. High numbers of AT-MSCs could be detected with different modalities as long as 9 weeks after implantation into surgically created SDFT lesions [94]. Labelled AT-MSCs injected into experimentally induced equine tendon lesions were partly found to remain viable and integrated in the lesion tissue after 24 weeks, although cell numbers decreased over time, and MSCs could even be detected in SDFT lesions of the contralateral limb [95]. Together with the observation that labelled cells were retrieved in the peritendinous tissue near the injection site [94, 95], this implies a potential effect of the MSC treatment on the contralateral defect and a loss of MSCs effectively available in the treated lesion. Both phenomena may have limited the efficacy of the treatment.

It would have been valuable to test in the current trial whether the therapy exerted an anti-apoptotic effect on tenocytes, an effect which has been attributed to AT-MSCs [87]. However, given the duration of the study, addressing this would have required taking tendon biopsies during the inflammatory phase for immunostaining with markers for apoptotic cells.
This was rejected because of the potential impact of the procedure on tendon healing. Further, the attraction of precursor cells, another potential mechanism of AT-MSC therapy, is more relevant during the early phases of tendon healing. Even with tendon biopsies, it would have been a challenge to estimate this effect merely via determination of cellularity, because there is no precursor cell-specific marker.

For injection, cells must be suspended in a medium, which may consist of phosphate-buffered saline (PBS) [30] or autologous blood serum, the latter being considered a more adequate suspension medium by some authors [14, 96]. However, it has been shown that autologous serum incubated in glass tubes at 37 °C for 24 hours contains significant amounts of IL-1ra and IL-10 [97], which implies that serum alone might influence tendon healing [65]. Instead of fresh serum [39], thermally inactivated autologous serum was used in the present study and was also injected into control group lesions. Alternatively, control lesions could have been left without puncture and treatment, but puncture alone has been shown to support drainage of early fluid accumulation and could theoretically guide peritendinous precursor cells into the lesion and thereby have a therapeutic effect [98, 99]. For the same reason, and in contrast to previous studies [40], repeat biopsies were not taken in the current study. Only two additional control groups (one without any puncture and one with puncture alone but without injection) could have ruled out these effects; however, a much larger study population would have been needed.

Twelve weeks after surgery, the ratio of type II echoes, representing discontinuous fascicles not yet aligned into lines of stress, was significantly higher in AT-MSC-treated lesions than in control lesions, which might be indicative of a pro-aligning influence of implanted AT-MSCs on the early organization of tendon matrix into tendon bundles [44, 54], potentially mediated by direct or paracrine coordinating effects of AT-MSCs on fibroblasts. However, the difference did not last until the end of the observation period, and no effects on the percentage of type I echoes (i.e. intact tendon tissue) were evident in the AT-MSC group, so it remains unclear whether this finding is biological reality or due to chance. Its clinical relevance seems limited. Ratios for type I echoes, representative of fully aligned tendon bundles, did not reach those for intact tendon tissue inside the core lesion in the terminal UTC scans. This indicates that there was still no complete restoration of perfectly aligned tendon bundles after 24 weeks, as expected for this timeframe [82]. The finding that the ratios of all echo types changed only mildly after week 12 in the current study differs from that of Bosch et al. [45], who used a similar model of tendinopathy and found further improvement, that is, a more pronounced decrease of the echo type IV ratio and an increase of the echo type I ratio, between weeks 12 and 24 after PRP treatment of surgically induced tendon lesions.

Results of histology, biochemical analyses and biomechanical testing, which were only performed post mortem 22 weeks after treatment, correlate well with the UTC findings observed beforehand [44, 45]. None of these diagnostic modalities showed differences between AT-MSC-serum-treated tendons and serum-treated controls.
Compared with macroscopically normal SDFT tissue from the same horses, there were significantly higher DNA and GAG contents and lower Hyp, total collagen and HP contents. At the same time, tenocyte nuclei were rounded, indicating the more tenoblastic cell type, and cell density showed high local variations. Also, histologic scores for fibre structure and fibre alignment were increased in both groups. All of these findings are indicative of relatively high metabolic activity and ongoing tissue repair and remodelling at the end of the observation period (i.e. in the remodelling or maturation phase) [41]. It can be argued that an observation period of 1 year would have been even more appropriate for evaluating the end stage of tendon repair [82]. However, it has been shown previously that the phases of early remodelling or maturation, when newly formed collagen type I fibres are organized into bundles and orientate themselves along the lines of stress, take place between approximately days 45 and 120 after lesion induction [100], and it is hardly conceivable that differences would have been noted at 1 year but not after 24 weeks.

In AT-MSC-treated lesions, no difference in vascularization was found histologically using a subjective five-point scale in this study. However, in a more detailed analysis reported elsewhere [46], in which all clearly identifiable vessels on the entire tissue section were counted in the same specimens, there was a significantly higher number of vessels in AT-MSC-treated versus control tendons. This finding, together with the increase in Doppler signal detected at 2 weeks post treatment [46], shows that the methodology used in the current study was evidently not sensitive enough to detect these relatively minor differences. In other equine experimental studies, an increase in Doppler signal could be detected at 6 weeks after implantation of AT-MSCs suspended in platelet concentrate into artificial SDFT lesions [40], whereas intralesional PRP injection led to an almost continuous increase in Doppler signal in another investigation [101]. Stimulation of angiogenesis by AT-MSCs has also been postulated by several authors [22, 86], while others [90] showed endothelial cell apoptosis and capillary degeneration after application of rat BM-MSCs in vitro. Neovascularization is of utmost importance to assure transportation of cells and growth factors towards and away from the lesion site during the inflammatory and proliferative phases of extrinsic tendon healing [41]. However, there is controversy about the effects of a prolonged increase in vascularity, which is discussed elsewhere [46, 60, 102, 103].

Conclusions

The effect of a single intralesional injection of cultured AT-MSCs suspended in autologous inactivated serum was not superior to treatment of surgically created SDFT lesions with autologous inactivated serum alone in a surgical model of tendinopathy over an observation period of 22 weeks. AT-MSC treatment might have a positive influence on collagen crosslinking and therefore possibly on the tensile stress resistance of remodelling scar tissue. Ultrasound tissue characterization (UTC) was a viable tool to monitor tendon healing non-invasively, and its findings correlated well with the results of end-stage histology, biochemistry and biomechanical testing, which showed that, despite the onset of the remodelling phase, tendon healing was not complete after a period of 24 weeks.
Randomized controlled long-term studies including naturally occurring tendinopathies are necessary to put the results of the current study into perspective before intralesional AT-MSC injection can be recommended (or rejected) as a viable option for the treatment of tendon disease.

Abbreviations

ADNC: Adipose-derived nucleated cell; AT-MSC: Adipose-derived mesenchymal stromal cell; COMP: Cartilage oligomeric matrix protein; CSA: Cross-sectional area; DICOM: Digital Imaging and Communications in Medicine; DMEM: Dulbecco's Modified Eagle's Medium; EDTA: Ethylenediaminetetraacetate; GAG: Glycosaminoglycan; H&E: Haematoxylin and eosin; HEPES: 4-(2-Hydroxyethyl)-1-piperazineethanesulfonic acid; HLys: Hydroxylysine; HP: Hydroxylysylpyridinoline; Hyp: Hydroxyproline; ICC: Intraclass correlation coefficient; LP: Lysylpyridinoline; Lys: Lysine; MSC: Mesenchymal stem cells; SDFT: Superficial digital flexor tendon; TCSA: Total cross-sectional area; TES: Total echogenicity score; TFAS: Total fibre alignment score; UTC: Ultrasound tissue characterization

References

Cassel M, Baur H, Hirschmuller A, Carlsohn A, Frohlich K, Mayer F. Prevalence of Achilles and patellar tendinopathy and their association to intratendinous changes in adolescent athletes. Scand J Med Sci Sports. 2015;25(3):e310–8. Maffulli N, Wong J. Rupture of the Achilles and patellar tendons. Clin Sports Med. 2003;22(4):761–76. Ross MW. Movement. In: Ross MW, Dyson SJ, editors. Diagnosis and Management of Lameness in the Horse. 2nd ed. St. Louis: Elsevier Saunders; 2011. p. 64–80. O'Meara B, Bladon B, Parkin TD, Fraser B, Lischer CJ. An investigation of the relationship between race performance and superficial digital flexor tendonitis in the Thoroughbred racehorse. Equine Vet J. 2010;42(4):322–6. Williams RB, Harkins LS, Hammond CJ, Wood JL. Racehorse injuries, clinical problems and fatalities recorded on British racecourses from flat racing and National Hunt racing during 1996, 1997 and 1998. Equine Vet J. 2001;33(5):478–86. Lam KH, Parkin TD, Riggs CM, Morgan KL. Descriptive analysis of retirement of Thoroughbred racehorses due to tendon injuries at the Hong Kong Jockey Club (1992–2004). Equine Vet J. 2007;39(2):143–8. Riemersma DJ, Schamhardt HC. In vitro mechanical properties of equine tendons in relation to cross-sectional area and collagen content. Res Vet Sci. 1985;39(3):263–70. Stephens PR, Nunamaker DM, Butterweck DM. Application of a Hall-effect transducer for measurement of tendon strains in horses. Am J Vet Res. 1989;50(7):1089–95. Geburek F, Stadler P. Regenerative therapy for tendon and ligament disorders in horses. Terminology, production, biologic potential and in vitro effects. Tieraerztl Prax G N. 2011;39(6):373–83. Dyson SJ. Medical management of superficial digital flexor tendonitis: a comparative study in 219 horses (1992–2000). Equine Vet J. 2004;36(5):415–9. Goodship AE, Birch HL, Wilson AM. The pathobiology and repair of tendon and ligament injury. Vet Clin North Am Equine Pract. 1994;10(2):323–49. Stewart MC, Stewart AA. Cell-based therapies in orthopedics. Vet Clin North Am Equine Pract. 2011;27(2):xiii–xiv. Smith RK, Werling NJ, Dakin SG, Alam R, Goodship AE, Dudhia J. Beneficial effects of autologous bone marrow-derived mesenchymal stem cells in naturally occurring tendinopathy. PLoS One. 2013;8(9):e75697. Guest DJ, Smith MR, Allen WR. Equine embryonic stem-like cells and mesenchymal stromal cells have different survival rates and migration patterns following their injection into damaged superficial digital flexor tendon. Equine Vet J. 2010;42(7):636–42. Watts AE, Yeager AE, Kopyov OV, Nixon AJ.
Fetal derived embryonic-like stem cells improve healing in a large animal flexor tendonitis model. Stem Cell Res Ther. 2011;2(1):4. Violini S, Ramelli P, Pisani LF, Gorni C, Mariani P. Horse bone marrow mesenchymal stem cells express embryo stem cell markers and show the ability for tenogenic differentiation by in vitro exposure to BMP-12. BMC Cell Biol. 2009;10:29. Park A, Hogan MV, Kesturu GS, James R, Balian G, Chhabra AB. Adipose-derived mesenchymal stem cells treated with growth differentiation factor-5 express tendon-specific markers. Tissue Eng Part A. 2010;16(9):2941–51. Lovati AB, Corradetti B, Cremonesi F, Bizzaro D, Consiglio AL. Tenogenic differentiation of equine mesenchymal progenitor cells under indirect co-culture. Int J Artif Organs. 2012;35(11):996–1005. Tan SL, Ahmad RE, Ahmad TS, Merican AM, Abbas AA, Ng WM, Kamarul T. Effect of growth differentiation factor 5 on the proliferation and tenogenic differentiation potential of human mesenchymal stem cells in vitro. Cells Tissues Organs. 2012;196(4):325–38. Raabe O, Shell K, Fietz D, Freitag C, Ohrndorf A, Christ HJ, Wenisch S, Arnhold S. Tenogenic differentiation of equine adipose-tissue-derived stem cells under the influence of tensile strain, growth differentiation factors and various oxygen tensions. Cell Tissue Res. 2013;352(3):509–21. Rehman J, Traktuev D, Li J, Merfeld-Clauss S, Temm-Grove CJ, Bovenkerk JE, Pell CL, Johnstone BH, Considine RV, March KL. Secretion of angiogenic and antiapoptotic factors by human adipose stromal cells. Circulation. 2004;109(10):1292–8. Caplan AI, Dennis JE. Mesenchymal stem cells as trophic mediators. J Cell Biochem. 2006;98(5):1076–84. Richardson LE, Dudhia J, Clegg PD, Smith R. Stem cells in veterinary medicine—attempts at regenerating equine tendon after injury. Trends Biotechnol. 2007;25(9):409–16. Caplan AI. Why are MSCs therapeutic? New data: new insight. J Pathol. 2009;217(2):318–24. Sorrell JM, Baber MA, Caplan AI. Influence of adult mesenchymal stem cells on in vitro vascular formation. Tissue Eng Part A. 2009;15(7):1751–61. Herrero C, Perez-Simon JA. Immunomodulatory effect of mesenchymal stem cells. Braz J Med Biol Res. 2010;43(5):425–30. Alves AG, Stewart AA, Dudhia J, Kasashima Y, Goodship AE, Smith RK. Cell-based therapies for tendon and ligament injuries. Vet Clin North Am Equine Pract. 2011;27(2):315–33. Koch TG, Berg LC, Betts DH. Current and future regenerative medicine—principles, concepts, and therapeutic use of stem cell therapy and tissue engineering in equine medicine. Can Vet J. 2009;50(2):155–65. Peroni JF, Borjesson DL. Anti-inflammatory and immunomodulatory activities of stem cells. Vet Clin North Am Equine Pract. 2011;27(2):351–62. Nixon AJ, Dahlgren LA, Haupt JL, Yeager AE, Ward DL. Effect of adipose-derived nucleated cell fractions on tendon repair in horses with collagenase-induced tendinitis. Am J Vet Res. 2008;69(7):928–37. Taylor SE, Clegg PD. Collection and propagation methods for mesenchymal stromal cells. Vet Clin North Am Equine Pract. 2011;27(2):263–74. Burk J, Gittel C, Heller S, Pfeiffer B, Paebst F, Ahrberg AB, Brehm W. Gene expression of tendon markers in mesenchymal stromal cells derived from different sources. BMC Res Notes. 2014;7:826. Vidal MA, Kilroy GE, Lopez MJ, Johnson JR, Moore RM, Gimble JM. Characterization of equine adipose tissue-derived stromal cells: adipogenic and osteogenic capacity and comparison with bone marrow-derived mesenchymal stromal cells. Vet Surg. 2007;36(7):613–22. Kern S, Eichler H, Stoeve J, Kluter H, Bieback K. 
Comparative analysis of mesenchymal stem cells from bone marrow, umbilical cord blood, or adipose tissue. Stem Cells. 2006;24(5):1294–301. Dahlgren LA. Fat derived mesenchymal stem cells for equine tendon repair. Regen Med. 2009;4(6 Suppl 2):S14. Leppänen M, Miettinen S, Mäkinen S, Wilpola P, Katiskalahti T, Heikkilä P, Tulamo RM. Management of equine tendon & ligament injuries with expanded autologous adipose derived mesenchymal stem cells: a clinical study. Regen Med. 2009;4(6 Suppl 2):S21. Leppänen M, Heikkilä P, Katiskalahti T, Tulamo RM. Follow up of recovery of equine tendon & ligament injuries 18–24 months after treatment with enriched autologous adipose-derived mesenchymal stem cells: a clinical study. Regen Med. 2009;4(6 Suppl 2):S21. Ricco S, Renzi S, Del Bue M, Conti V, Merli E, Ramoni R, Lucarelli E, Gnudi G, Ferrari M, Grolli S. Allogeneic adipose tissue-derived mesenchymal stem cells in combination with platelet rich plasma are safe and effective in the therapy of superficial digital flexor tendonitis in the horse. Int J Immunopathol Pharmacol. 2013;26(1 Suppl):61–8. Carvalho AD, Alves ALG, de Oliveira PGG, Alvarez LEC, Amorim RL, Hussni CA, Deffune E. Use of adipose tissue-derived mesenchymal stem cells for experimental tendinitis therapy in equines. J Equine Vet Sci. 2011;31(1):26–34. Carvalho AD, Badial PR, Alvarez LEC, Yamada ALM, Borges AS, Deffune E, Hussni CA, Alves ALG. Equine tendonitis therapy using mesenchymal stem cells and platelet concentrates: a randomized controlled trial. Stem Cell Res Ther. 2013;4(85):1–13. Patterson-Kane JC, Firth EC. The pathobiology of exercise-induced superficial digital flexor tendon injury in Thoroughbred racehorses. Vet J. 2009;181(2):79–89. Lui PP, Maffulli N, Rolf C, Smith RK. What are the validated animal models for tendinopathy? Scand J Med Sci Sports. 2011;21(1):3–17. Schramme M, Hunter S, Campbell N, Blikslager A, Smith R. A surgical tendonitis model in horses: technique, clinical, ultrasonographic and histological characterisation. Vet Comp Orthopaed. 2010;23(4):231–9. van Schie HT, Bakker EM, Cherdchutham W, Jonker AM, van de Lest CH, van Weeren PR. Monitoring of the repair process of surgically created lesions in equine superficial digital flexor tendons by use of computerized ultrasonography. Am J Vet Res. 2009;70(1):37–48. Bosch G, van Schie HT, de Groot MW, Cadby JA, van de Lest CH, Barneveld A, van Weeren PR. Effects of platelet-rich plasma on the quality of repair of mechanically induced core lesions in equine superficial digital flexor tendons: a placebo-controlled experimental study. J Orthop Res. 2010;28(2):211–7. Conze P, van Schie HT, Weeren RV, Staszyk C, Conrad S, Skutella T, Hopster K, Rohn K, Stadler P, Geburek F. Effect of autologous adipose tissue-derived mesenchymal stem cells on neovascularization of artificial equine tendon lesions. Regen Med. 2014;9(6):743–57. Cadby JA, David F, van de Lest C, Bosch G, van Weeren PR, Snedeker JG, van Schie HT. Further characterisation of an experimental model of tendinopathy in the horse. Equine Vet J. 2013;45(5):642–8. Rantanen NW. The use of diagnostic ultrasound in limb disorders of the horse: a preliminary report. J Equine Vet Sci. 1982;2(2):62–4. Reef VB. Musculoskeletal ultrasonography. In: Reef VB, editor. Equine Diagnostic Ultrasound. 1st ed. Philadelphia: W.B. Saunders; 1998. p. 39–186. van Schie JT, Bakker EM, van Weeren PR.
Ultrasonographic evaluation of equine tendons: a quantitative in vitro study of the effects of amplifier gain level, transducer-tilt, and transducer-displacement. Vet Radiol Ultrasound. 1999;40(2):151–60. van Schie HT, Bakker EM, Jonker AM, van Weeren PR. Ultrasonographic tissue characterization of equine superficial digital flexor tendons by means of gray level statistics. Am J Vet Res. 2000;61(2):210–9. van Schie HT, Bakker EM, Jonker AM, van Weeren PR. Computerized ultrasonographic tissue characterization of equine superficial digital flexor tendons by means of stability quantification of echo patterns in contiguous transverse ultrasonographic images. Am J Vet Res. 2003;64(3):366–75. van Schie HT, Bakker EM, Jonker AM, van Weeren PR. Efficacy of computerized discrimination between structure-related and non-structure-related echoes in ultrasonographic images for the quantitative evaluation of the structural integrity of superficial digital flexor tendons in horses. Am J Vet Res. 2001;62(7):1159–66. Bosch G, Rene van Weeren P, Barneveld A, van Schie HT. Computerised analysis of standardised ultrasonographic images to monitor the repair of surgically created core lesions in equine superficial digital flexor tendons following treatment with intratendinous platelet rich plasma or placebo. Vet J. 2011;187(1):92–8. Docking SI, Daffy J, van Schie HT, Cook JL. Tendon structure changes after maximal exercise in the Thoroughbred horse: use of ultrasound tissue characterisation to detect in vivo tendon response. Vet J. 2012;194(3):338–42. Geburek F, Gaus M, van Schie HT, Rohn K, Stadler PM. Effect of intralesional platelet-rich plasma (PRP) treatment on clinical and ultrasonographic parameters in equine naturally occurring superficial digital flexor tendinopathies—a randomized prospective controlled clinical trial. BMC Vet Res. 2016;12(1):191. Docking SI, Rosengarten SD, Cook J. Achilles tendon structure improves on UTC imaging over a 5-month pre-season in elite Australian football players. Scand J Med Sci Sports. 2016;26(5):557–63. de Jonge S, Rozenberg R, Vieyra B, Stam HJ, Aanstoot HJ, Weinans H, van Schie HT, Praet SF. Achilles tendons in people with type 2 diabetes show mildly compromised structure: an ultrasound tissue characterisation study. Br J Sports Med. 2015;49(15):995–9. de Vos RJ, Heijboer MP, Weinans H, Verhaar JA, van Schie JT. Tendon structure's lack of relation to clinical outcome after eccentric exercises in chronic midportion Achilles tendinopathy. J Sport Rehabil. 2012;21(1):34–43. Aström M, Rausing A. Chronic Achilles tendinopathy. A survey of surgical and histopathologic findings. Clin Orthop Relat Res. 1995;(316):151–64. Crevier-Denoix N, Collobert C, Pourcelot P, Denoix JM, Sanaa M, Geiger D, Bernard N, Ribot X, Bortolussi C, Bousseau B. Mechanical properties of pathological equine superficial digital flexor tendons. Equine Vet J Suppl. 1997;23:23–6. Majewski M, Ochsner PE, Liu F, Fluckiger R, Evans CH. Accelerated healing of the rat Achilles tendon in response to autologous conditioned serum. Am J Sports Med. 2009;37(11):2117–25. Little D, Schramme MC. Ultrasonographic and MRI evaluation of a novel tendonitis model in the horse. Vet Surg. 2006;35(6):E15. Schmidt H. Die Behandlung akuter und chronischer Sehnenerkrankungen beim Pferd mit hochmolekularer Hyaluronsäure [Treatment of acute and chronic tendon disease in the horse with high-molecular-weight hyaluronic acid]. Dr. med. vet. thesis. Hannover: Tierärztliche Hochschule Hannover; 1989. Geburek F, Lietzau M, Beineke A, Rohn K, Stadler PM.
Effect of a single injection of autologous conditioned serum (ACS) on tendon healing in equine naturally occurring tendinopathies. Stem Cell Res Ther. 2015;6:126. Edinger J. Orthopädische Untersuchung der Gliedmaßen und der Wirbelsäule [Orthopaedic examination of the limbs and spine]. In: Wissdorf H, Gerhards H, Huskamp B, Deegen E, editors. Praxisorientierte Anatomie und Propädeutik des Pferdes. Hannover: M. u. H. Schaper; 2010. p. 890–926. Genovese RL, Rantanen NW, Hauser ML, Simpson BS. Diagnostic ultrasonography of equine limbs. Vet Clin North Am Equine Pract. 1986;2(1):145–226. Rantanen NW, Jorgensen JS, Genovese RL. Ultrasonographic evaluation of the equine limb: technique. In: Ross MW, Dyson SJ, editors. Diagnosis and Management of Lameness in the Horse. 1st ed. St. Louis: Elsevier; 2003. p. 166–88. Kim YJ, Sah RL, Doong JY, Grodzinsky AJ. Fluorometric assay of DNA in cartilage explants using Hoechst 33258. Anal Biochem. 1988;174(1):168–76. Dudhia J, Scott CM, Draper ER, Heinegard D, Pitsillides AA, Smith RK. Aging enhances a mechanically-induced reduction in tendon strength by an active process involving matrix metalloproteinase activity. Aging Cell. 2007;6(4):547–56. Fleiss JL. The design and analysis of clinical experiments. New York: Wiley; 1986. Marx C, Silveira MD, Beyer NN. Adipose-derived stem cells in veterinary medicine: characterization and therapeutic applications. Stem Cells Dev. 2015;24(7):803–13. Eyre D. Collagen cross-linking amino acids. Methods Enzymol. 1987;144:115–39. Last JA, Armstrong LG, Reiser KM. Biosynthesis of collagen crosslinks. Int J Biochem. 1990;22(6):559–64. Eyre DR, Koob TJ, Van Ness KP. Quantitation of hydroxypyridinium crosslinks in collagen by high-performance liquid chromatography. Anal Biochem. 1984;137(2):380–8. Fu SC, Chan KM, Rolf CG. Increased deposition of sulfated glycosaminoglycans in human patellar tendinopathy. Clin J Sport Med. 2007;17(2):129–34. Parkinson J, Samiric T, Ilic MZ, Cook J, Feller JA, Handley CJ. Change in proteoglycan metabolism is a characteristic of human patellar tendinopathy. Arthritis Rheum. 2010;62(10):3028–35. Birch HL, Bailey AJ, Goodship AE. Macroscopic 'degeneration' of equine superficial digital flexor tendon is accompanied by a change in extracellular matrix composition. Equine Vet J. 1998;30(6):534–9. Estrada RJ, van Weeren R, van de Lest CHA, Boere J, Reyes M, Ionita JC, Estrada M, Lischer CJ. Effects of Autologous Conditioned Plasma® (ACP) on the healing of surgically induced core lesions in equine superficial digital flexor tendon. Pferdeheilkunde. 2014;30(6):633–42. Lin YL, Brama PA, Kiers GH, DeGroot J, van Weeren PR. Functional adaptation through changes in regional biochemical characteristics during maturation of equine superficial digital flexor tendons. Am J Vet Res. 2005;66(9):1623–9. Batson EL, Paramour RJ, Smith TJ, Birch HL, Patterson-Kane JC, Goodship AE. Are the material properties and matrix composition of equine flexor and extensor tendons determined by their functions? Equine Vet J. 2003;35(3):314–8. Silver IA, Brown PN, Goodship AE, Lanyon LE, McCullagh KG, Perry GC, Williams IF. A clinical and experimental study of tendon injury, healing and treatment in the horse. Equine Vet J Suppl. 1983;(1):1–43. Crevier N, Pourcelot P, Denoix JM, Geiger D, Bortolussi C, Ribot X, Sanaa M. Segmental variations of in vitro mechanical properties in equine superficial digital flexor tendons. Am J Vet Res. 1996;57(8):1111–7. Caniglia CJ, Schramme MC, Smith RK.
The effect of intralesional injection of bone marrow derived mesenchymal stem cells and bone marrow supernatant on collagen fibril size in a surgical model of equine superficial digital flexor tendonitis. Equine Vet J. 2012;44(5):587–93. Manferdini C, Maumus M, Gabusi E, Piacentini A, Filardo G, Peyrafitte JA, Jorgensen C, Bourin P, Fleury-Cappellesso S, Facchini A, et al. Adipose-derived mesenchymal stem cells exert antiinflammatory effects on chondrocytes and synoviocytes from osteoarthritis patients through prostaglandin E2. Arthritis Rheum. 2013;65(5):1271–81. Ceserani V, Ferri A, Berenzi A, Benetti A, Ciusani E, Pascucci L, Bazzucchi C, Cocce V, Bonomi A, Pessina A, et al. Angiogenic and anti-inflammatory properties of micro-fragmented fat tissue and its derived mesenchymal stromal cells. Vascular Cell. 2016;8:3. Shen H, Kormpakis I, Havlioglu N, Linderman SW, Sakiyama-Elbert SE, Erickson IE, Zarembinski T, Silva MJ, Gelberman RH, Thomopoulos S. The effect of mesenchymal stromal cell sheets on the inflammatory stage of flexor tendon healing. Stem Cell Res Ther. 2016;7(1):144. Dahlgren LA. Management of tendon injuries. In: Robinson NE, Sprayberry KA, editors. Current Therapy in Equine Medicine. 6th ed. St. Louis: Saunders Elsevier; 2009. p. 518–23. Frisbie DD, Smith RK. Clinical update on the use of mesenchymal stem cells in equine orthopaedics. Equine Vet J. 2010;42(1):86–9. Otsu K, Das S, Houser SD, Quadri SK, Bhattacharya S, Bhattacharya J. Concentration-dependent inhibition of angiogenesis by mesenchymal stem cells. Blood. 2009;113(18):4197–205. Awad HA, Boivin GP, Dressler MR, Smith FN, Young RG, Butler DL. Repair of patellar tendon injuries using a cell-collagen composite. J Orthop Res. 2003;21(3):420–31. Taylor SE, Smith RK, Clegg PD. Mesenchymal stem cell therapy in equine musculoskeletal disease: scientific fact or clinical fiction? Equine Vet J. 2007;39(2):172–80. Godwin EE, Young NJ, Dudhia J, Beamish IC, Smith RK. Implantation of bone marrow-derived mesenchymal stem cells demonstrates improved outcome in horses with overstrain injury of the superficial digital flexor tendon. Equine Vet J. 2012;44(1):25–32. Geburek F, Mundle K, Conrad S, Hellige M, Walliser U, van Schie HT, van Weeren R, Skutella T, Stadler PM. Tracking of autologous adipose tissue-derived mesenchymal stromal cells with in vivo magnetic resonance imaging and histology after intralesional treatment of artificial equine tendon lesions – a pilot study. Stem Cell Res Ther. 2016;7:21. Burk J, Berner D, Brehm W, Hillmann A, Horstmeier C, Josten C, Paebst F, Rossi G, Schubert S, Ahrberg AB. Long-term cell tracking following local injection of mesenchymal stromal cells in the equine model of induced tendon disease. Cell Transplant. 2016;25(12):2199–211. Pacini S, Spinabella S, Trombi L, Fazzi R, Galimberti S, Dini F, Carlucci F, Petrini M. Suspension of bone marrow-derived undifferentiated mesenchymal stromal cells for repair of superficial digital flexor tendon in race horses. Tissue Eng. 2007;13(12):2949–55. Hraha TH, Doremus KM, McIlwraith CW, Frisbie DD. Autologous conditioned serum: the comparative cytokine profiles of two commercial methods (IRAP and IRAP II) using equine blood. Equine Vet J. 2011;43(5):516–21. Dabareiner RM, Carter GK, Chaffin MK. How to perform ultrasound-guided tendon splitting and intralesional tendon injections in the standing horse. In: 46th Annual Convention of the AAEP. Lexington: American Association of Equine Practitioners; November 26–29, 2000. Avella CS, Smith RKW.
Diagnosis and management of tendon and ligament disorders. In: Auer JA, Stick JA, editors. Equine Surgery. 4th ed. St. Louis: Elsevier Saunders; 2011. p. 1157–79. Jann H, Stashak TS. Equine wound management. In: Stashak TS, Theoret CL, editors. Equine Wound Management. Ames: Wiley-Blackwell; 2008. p. 489–508. Bosch G, Moleman M, Barneveld A, van Weeren PR, van Schie HT. The effect of platelet-rich plasma on the neovascularization of surgically created equine superficial digital flexor tendon lesions. Scand J Med Sci Sports. 2011;21(4):554–61. Ohberg L, Lorentzon R, Alfredson H. Neovascularisation in Achilles tendons with painful tendinosis but not in normal tendons: an ultrasonographic investigation. Knee Surg Sports Traumatol Arthrosc. 2001;9(4):233–8. Kristoffersen M, Ohberg L, Johnston C, Alfredson H. Neovascularisation in chronic tendon injuries detected with colour Doppler ultrasound in horse and man: implications for research and treatment. Knee Surg Sports Traumatol Arthrosc. 2005;13(6):505–8. The authors would like to thank Prof. Dr. Paul Becher, Dr. Astrid von Velsen-Zerweck, Dr. Philipp Conze, Mrs Petra Grünig, Mrs Lena Kaiser, Mr Christoph Meister and all grooms of the Equine Clinic of the University of Veterinary Medicine Hannover, Foundation for their much appreciated help throughout the study period. The authors are grateful to Dr. Klaus Hopster, Dipl. ECVAA, for performing general anaesthesia in all horses included in this study and to Dipl.-Ing. Michael Schwarze, Laboratory for Biomechanics and Biomaterials, Department of Orthopaedic Surgery, Hannover Medical School, for his support during statistical analysis of the biomechanical data. The authors thank Dr. Monika Langlotz, ZMBH Zentrum für Molekulare Biologie der Universität Heidelberg, Flow Cytometry & FACS Core Facility, for providing flow cytometric analyses. The datasets supporting the conclusions of this article are available in the figshare repository (https://figshare.com/s/abb27aea2694f60dffa3). FG had the idea of performing the study, designed and coordinated the study, performed the surgical interventions, participated in the collection of clinical, ultrasonographic, biomechanical and histologic data and their analyses, and wrote the manuscript. FR participated in the collection of clinical, ultrasonographic, histologic and biomechanical data, in their analyses and in writing the manuscript. HTMvS participated in the design of the study, supervised the collection and analysis of ultrasonographic data and revised the manuscript critically. AB instructed and supervised the histologic examinations and revised the manuscript critically. RE performed the biochemical analyses and revised the manuscript critically. KW participated in the collection of clinical and ultrasonographic data and revised the manuscript critically. MH participated in the analysis of the ultrasonographic data and revised the manuscript critically. KR performed the statistical analysis and revised the manuscript critically. MJ contributed to the study design and revised the manuscript critically. BW contributed to the collection and analysis of biomechanical data and revised the manuscript critically. CH supervised the collection and analysis of biomechanical data and revised the manuscript critically. SC processed adipose tissue and cultured AT-MSCs. TS supervised processing of adipose tissue and AT-MSC culture, and revised the manuscript critically. CvdL supervised biochemical analyses and revised the manuscript critically. 
RvW participated in coordination of the study and revised the manuscript critically. PMS participated in the design of the study, contributed to the analyses of the data and revised the manuscript critically. All authors read and approved the manuscript for publication. Results of the current study are part of the Dr. med. vet. thesis submitted by FR to the University of Veterinary Medicine Hannover, Foundation, Germany. HTMvS is the inventor of the UTC device. He has not given any financial support for this study and has no financial interests in relation to this study. No non-financial conflicts of interests exist for any of the authors. The study was approved by the animal welfare officer of the University of Veterinary Medicine Hannover, Foundation, Germany and the ethics committee of the responsible German federal state authority in accordance with the German Animal Welfare Law (Lower Saxony State Office for Consumer Protection and Food Safety, reference number 33.9-42502-04-08/1622). Equine Clinic, University of Veterinary Medicine Hannover, Foundation, Bünteweg 9, 30559, Hannover, Germany Florian Geburek , Florian Roggel , Maren Hellige & Peter M. Stadler Department of Equine Sciences, Faculty of Veterinary Medicine, Utrecht University, Yalelaan 112, 3584 CM, Utrecht, The Netherlands Hans T. M. van Schie , Roberto Estrada , Chris van de Lest & René van Weeren Institute for Pathology, University of Veterinary Medicine Hannover, Foundation, Bünteweg 17, 30559, Hannover, Germany Andreas Beineke Pferdeklink Kirchheim, Nürtinger Straße 200, 73230, Kirchheim unter Teck, Germany Kathrin Weber Institute for Biometry, Epidemiology and Information Processing, University of Veterinary Medicine Hannover, Foundation, Bünteweg 2, 30559, Hannover, Germany Karl Rohn Department of Orthopedic Trauma, Hannover Medical School, Carl-Neuberg-Straße 1, 30625, Hannover, Germany Michael Jagodzinski Laboratory for Biomechanics and Biomaterials, Department of Orthopaedic Surgery, Hannover Medical School, Anna-von-Borries-Straße 1-7, 30625, Hannover, Germany Bastian Welke & Christof Hurschler P.O. Box 1243, 72072, Tübingen, Germany Sabine Conrad Institute for Anatomy and Cell Biology, University of Heidelberg, Im Neuenheimer Feld 307, 69120, Heidelberg, Germany Thomas Skutella Search for Florian Geburek in: Search for Florian Roggel in: Search for Hans T. M. van Schie in: Search for Andreas Beineke in: Search for Roberto Estrada in: Search for Kathrin Weber in: Search for Maren Hellige in: Search for Karl Rohn in: Search for Michael Jagodzinski in: Search for Bastian Welke in: Search for Christof Hurschler in: Search for Sabine Conrad in: Search for Thomas Skutella in: Search for Chris van de Lest in: Search for René van Weeren in: Search for Peter M. Stadler in: Correspondence to Florian Geburek. Table presenting the gradually increasing exercise programme adapted from Bosch et al. [45] with permission. (DOCX 14 kb) Semi-quantitative four-point scale according to Aström and Rausing [60], modified by Bosch et al. [45]. (DOCX 15 kb) Geburek, F., Roggel, F., van Schie, H.T.M. et al. Effect of single intralesional treatment of surgically induced equine superficial digital flexor tendon core lesions with adipose-derived mesenchymal stromal cells: a controlled experimental trial. Stem Cell Res Ther 8, 129 (2017) doi:10.1186/s13287-017-0564-8 Revised: 15 March 2017 MSC, mesenchymal stem cells
Cost-effectiveness of preventive case management for parents with a mental illness: a randomized controlled trial from three economic perspectives

Henny J. Wansink, Ruben M. W. A. Drost, Aggie T. G. Paulus, Dirk Ruwaard, Clemens M. H. Hosman, Jan M. A. M. Janssens and Silvia M. A. A. Evers

BMC Health Services Research 2016;16:228. © Wansink et al. 2016

The children of parents with a mental illness (COPMI) are at increased risk for developing costly psychiatric disorders because of multiple risk factors which threaten parenting quality and thereby child development. Preventive basic care management (PBCM) is an intervention aimed at reducing risk factors and addressing the needs of COPMI-families in different domains. The intervention may lead to financial consequences in the healthcare sector and in other sectors, also known as inter-sectoral costs and benefits (ICBs). The objective of this study was to assess the cost-effectiveness of PBCM from three perspectives: a narrow healthcare perspective, a social care perspective (including childcare costs) and a broad societal perspective (including all ICBs). Effects on parenting quality (as measured by the HOME) and costs during an 18-month period were studied in a randomized controlled trial. Families received PBCM (n = 49) or care as usual (CAU) (n = 50). For all three perspectives, incremental cost-effectiveness ratios (ICERs) were calculated. Stochastic uncertainty in the data was dealt with using non-parametric bootstraps. Sensitivity analyses included calculating ICERs excluding cost outliers, and making an adjustment for baseline cost differences. Parenting quality improved in the PBCM group and declined in the CAU group, and PBCM was shown to be more costly than CAU. ICERs ranged from 461 Euros (healthcare perspective) to 215 Euros (social care perspective) to 175 Euros (societal perspective) per one-point improvement on the HOME T-score. The results of the sensitivity analyses, based on complete cases and excluding cost outliers, support the finding that the ICER is lower when adopting a broader perspective. The subgroup analysis and the analysis with baseline adjustments resulted in higher ICERs. This study is the first economic evaluation of family-focused preventive basic care management for COPMI in psychiatric and family services. The effects of the chosen perspective on determining the cost-effectiveness of PBCM underscore the importance of economic studies of interdepartmental policies. Future studies focusing on the cost-effectiveness of programs like PBCM in other sites and studies with more power are encouraged as this may improve the quality of information used in supporting decision making.

Trial registration: NTR2569, date of registration 2010-10-12.

Keywords: Inter-sectoral costs and benefits; Children of parents with a mental illness

Children of parents with a mental illness (COPMI) have an increased risk of developing mental health disorders such as depression, anxiety disorders, personality disorders and alcohol dependence [1–3]. Across different studies, relative risks of 1.5 to 8.0 have been found [2, 4–6] for COPMI in comparison with children of parents without a mental illness. Apart from the burden this may pose on children and caregivers, COPMI put a substantial burden on youth mental health services and child health expenditures [7].
Case registers of the Dutch Youth Mental Health Services show that COPMI consume five times as much mental healthcare as other children, and that they are overrepresented in clinical care [8]. Furthermore, COPMI use more costly specialized youth care and youth protection services [9, 10] than do other children. The emotional, social, and economic burden of mental illness has also led to growing awareness, among professionals worldwide, of the impact that mental illness has on patients' families and children in particular [11]. It is estimated that more than half of the male and two-thirds of female patients have minor children [12]. Epidemiological studies in the Netherlands and Norway show that between one in six and one in three children have a parent with a mental illness [13, 14]. Parental mental illness is often accompanied by many adversities, such as a history of being abused or neglected in childhood, poverty, divorce, isolation, and children having special needs or behavioral problems. In fact, it is the accumulation of such adversities that forms the greatest threat to parenting quality and healthy child development [3, 4]. Parenting quality is defined as the quality and quantity of stimulation and support available to a child in his/her home environment. This accumulation of adversities calls for preventive and proactive family support. Since families of COPMI have a variety of needs in different domains, interventions aimed at improving parenting quality should include a variety of services; accordingly, this requires a comprehensive coordinated approach. One such approach is preventive basic care management (PBCM). PBCM is a preventive program targeting threats to parenting quality [15]. By assessing multiple risk factors for poor parenting and the needs of families in different domains, facilitating access to preventive services, tailoring services to assessed needs and coordinating psychiatric and preventive services, PBCM aims to support effective parenting by maintaining a good balance between the adversities, vulnerabilities, and strengths of parents. Ultimately PBCM aims thereby to promote the socio-emotional development of COPMI and to reduce the risk of developing behavioral problems. The effects of PBCM on parenting outcomes (parenting quality, parenting skills and parenting stress) were studied in an RCT [16]. Evidence was found that PBCM had a statistically significant positive effect on parenting skills (η² = .055, p < 0.05). Significant effects on the quality of parenting, and the frequency and intensity of parenting stress were not found, although findings did suggest trends toward improved parenting quality (η² = .026, p < 0.10) and reduced frequency and intensity of parenting stress (η² = .029, p < 0.10 and η² = .011, p < 0.10). Serving the needs of families of COPMI within the available financial resources is a major issue in health systems worldwide [17, 18]. Furthermore, within governmental health policies there is a growing emphasis on coherent, efficient and cost-effective health systems [19]. In addition to the effectiveness of preventive interventions, the outcomes of cost-effectiveness analyses (CEAs) are becoming more and more important within healthcare decision making [20, 21]. However, to our knowledge, no CEAs on COPMI interventions have yet been performed [22, 23]. Since one of the aims of PBCM is to improve parenting quality and prevent child behavioral problems, it might diminish the need for costly services in the long run.
Other studies on preventive parenting programs for vulnerable families (not specifically designed for families of COPMI) have shown long-term economic benefits. For example, Karoly and colleagues [24] reported governmental savings of up to $18,000 for the home visitation program Nurse-Family Partnership, related to better maternal and children's health and effects on the life course such as maternal income, youth criminality and substance abuse. However, short-term benefits, e.g. fewer emergency room visits and better child development, could potentially already outweigh costs. By creating customized, efficient and optimized basic care packages for families, PBCM may lead to a reduction in costs by reducing overlap among services, which means PBCM is potentially already cost-effective in the short run. The services which COPMI may encounter are widespread and include both services within the healthcare sector and services in other sectors, such as social (child) care, the educational sector and the criminal justice system. For example, the higher risk of academic underachievement, when borne out, may result in the need for special educational services, and alcohol misuse may result in police contact and arrests [4, 25]. Accordingly, although interventions may present financial expenses in the healthcare sector, considerable costs or benefits (i.e. cost savings) can be expected in other sectors. These are known collectively as inter-sectoral costs and benefits (ICBs). Drost et al. [26] identified over seventy different ICBs which can be included in health-related economic evaluations, depending on the type of intervention and the population of the program under study. Including ICBs within a CEA might affect the outcome of an evaluation, which, in turn, can affect decision making on interventions. The aim of this study was two-fold. First, the study examined the costs and cost-effectiveness of PBCM in comparison with care as usual (CAU) - i.e. basic information about available COPMI-interventions, such as consultation and COPMI groups along with psychiatric treatment. A second aim of this study was to answer the question whether a shift from a narrow (healthcare) perspective to broader perspectives, in which either childcare costs (social care perspective) or childcare costs and other ICBs (societal perspective) were included, results in a change in the cost-effectiveness of PBCM. In a randomized controlled trial (RCT), participants were randomized to either the PBCM condition or the control condition [16]. Participants in the PBCM condition received preventive service coordination, while participants in the control condition received information about COPMI-interventions and had the opportunity to make use of COPMI consultations and COPMI support groups in addition to psychiatric treatment (CAU). The time horizon of the study was eighteen months. Data on the quality of parenting and costs were recorded at baseline (T0) and after nine (T1) and eighteen months (T2). The CEAs in this study were conducted from three perspectives: a) the healthcare perspective, which included costs for health and child/family support services, b) the social care perspective, which also included costs for childcare and c) the societal perspective, which was the most comprehensive and included all measured use of services, including ICBs within the educational sector, the criminal justice system and services for debt restructuring. All analyses included intervention costs. 
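Since the three perspectives are nested, the analysis can be pictured as summing progressively larger sets of cost categories per family. The following minimal Python sketch illustrates this idea; the category names and amounts are hypothetical placeholders, not the study's data (the study's actual cost categories are listed in Table 3 below).

```python
# Minimal sketch of the three nested analysis perspectives.
# Category names and amounts are illustrative, not the study's data.

FAMILY_COSTS = {
    "intervention": 1685.0,        # intervention costs
    "mental_healthcare": 8000.0,   # healthcare sector
    "youth_care": 1200.0,
    "informal_childcare": 900.0,   # childcare (added in the social care perspective)
    "professional_childcare": 2100.0,
    "education": 500.0,            # other inter-sectoral costs (ICBs)
    "criminal_justice": 0.0,
    "debt_restructuring": 150.0,
}

HEALTHCARE = {"intervention", "mental_healthcare", "youth_care"}
SOCIAL_CARE = HEALTHCARE | {"informal_childcare", "professional_childcare"}
SOCIETAL = SOCIAL_CARE | {"education", "criminal_justice", "debt_restructuring"}

def total(costs: dict, categories: set) -> float:
    """Sum a family's costs over the categories included in a perspective."""
    return sum(costs[c] for c in categories)

for name, cats in [("healthcare", HEALTHCARE),
                   ("social care", SOCIAL_CARE),
                   ("societal", SOCIETAL)]:
    print(f"{name:12s}: {total(FAMILY_COSTS, cats):8.2f} Euros")
```

Because each perspective is a superset of the previous one, a family's total costs can only stay equal or grow when the perspective widens; whether the incremental cost difference between conditions grows or shrinks depends on where the savings occur.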
Participants were outpatients of a community mental health institute located in the urban, western part of the Netherlands. Patients with longstanding psychiatric problems and an accumulation of risk factors for poor parenting were selected. Inclusion criteria were: being treated for a psychiatric disorder, being a caregiver for a child aged between three and ten years, the parents being interested in PBCM, and the family being exposed to three or more of a list of sixteen risk factors for poor parenting. This list (see Table 1) was based on a literature review on the impact of parental mental illness on parenting quality, and on risk and protective factors for poor parenting, child abuse and neglect [15]. The age range was restricted to primary school age so that the group was more homogeneous. In order to study preventive effects in children, children with a mental health diagnosis (e.g. ADHD or conduct disorder) were excluded. Other exclusion criteria were an expected duration of less than three months for further therapy, living outside the catchment area and previous help utilizing PBCM. Recruitment took place between September 2010 and April 2012; the last follow-up was between March 2012 and November 2013.

Table 1 Risk factors for poor parenting

1. single parenthood
2. little support from spouse
3. little network support
4. relational problems
5. partner with mental health problems
6. children with poor health/handicaps/difficult temperament
7. changes in family structure/housing
8. two or more life events in the past two years
9. housing problems
10. poverty or debts
11. parents having been abused as a child
12. severe psychiatric symptoms
13. low compliance with psychiatric treatment
14. impulse control problems
15. alcohol or drug problems
16. low intelligence

Using a family-focused, strength-oriented rehabilitation model, the focus was on strengthening positive parenting and providing community and network support [15, 27]. The PBCM intervention consisted of five steps: 1) the enrolment procedure, in which families were referred by the parent's therapist; 2) a systematic assessment of the strengths and vulnerabilities regarding parenting and children's development, based on information from parents, children, school, therapists, and other services involved; 3) the design of an integrated plan for tailored preventive care, which was discussed in a meeting with the parents and the services involved; 4) linking families to and coordinating services for childcare for young children, clubs for older children, community health services, and services for debt restructuring and financial resources; and, finally, 5) monitoring the implementation of the plan and evaluating its effects in regular meetings with parents and services. Every family had its own tailored plan and a personal PBCM coordinator, who monitored whether indicated services were provided. Fidelity was systematically supervised in meetings with colleague-coordinators. The PBCM program ended when parenting and the children's development were sufficient according to the PBCM coordinator and the continuity of the necessary services over a longer period was secured. Further information on the PBCM intervention can be found elsewhere [15].
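The inclusion rule above (a treated parent, a child aged three to ten, interest in PBCM, and exposure to three or more of the sixteen Table 1 risk factors) is mechanical enough to sketch in code. The following Python check is illustrative only: the field names are hypothetical, and the exclusion criteria (child diagnosis, short remaining therapy, catchment area, previous PBCM) are omitted for brevity.

```python
from dataclasses import dataclass, field

# Risk factors from Table 1.
RISK_FACTORS = {
    "single parenthood", "little support from spouse", "little network support",
    "relational problems", "partner with mental health problems",
    "children with poor health/handicaps/difficult temperament",
    "changes in family structure/housing", "two or more life events in past two years",
    "housing problems", "poverty or debts", "parents having been abused as a child",
    "severe psychiatric symptoms", "low compliance with psychiatric treatment",
    "impulse control problems", "alcohol or drug problems", "low intelligence",
}

@dataclass
class Family:
    in_psychiatric_treatment: bool
    child_ages: list
    interested_in_pbcm: bool
    risk_factors: set = field(default_factory=set)

def eligible(f: Family) -> bool:
    """Inclusion rule: treated parent, a child aged 3-10, interest in PBCM,
    and three or more of the sixteen risk factors (exclusion criteria omitted)."""
    return (f.in_psychiatric_treatment
            and any(3 <= age <= 10 for age in f.child_ages)
            and f.interested_in_pbcm
            and len(f.risk_factors & RISK_FACTORS) >= 3)

print(eligible(Family(True, [4, 8], True,
                      {"single parenthood", "poverty or debts", "housing problems"})))
# -> True
```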
In the control condition, all parents received a brochure about the impact of parental problems on children and information about available services, such as free consultations by a COPMI-expert or COPMI groups for parents and children in which they can exchange experiences and learn about coping with the challenges of living with the parental illness. Participation was optional. Parents could refer themselves or their children by calling the COPMI team.

Outcome measure

The primary outcome measure was quality of parenting. This was measured using the Home Observation for Measurement of the Environment (HOME) Inventory [28, 29]. The HOME is an instrument used widely and internationally to measure the quality and quantity of stimulation and support available to a child in the home environment. This instrument measures the availability and impact of objects, events and interactions with parents and covers four dimensions, namely responsiveness, learning materials, stimulation, and harsh parenting. The HOME has been used worldwide in studies in different cultures, sometimes adapted to local child rearing beliefs and practices. These studies showed consistent relations between most items and children's adaptive functioning [30]. We used the 'Infant-Toddler', 'Early Childhood', 'Middle Childhood' and 'Early Adolescent' versions of Vedder, Eldering and Bradley [31], which were used in studies with ethnic minorities in the Netherlands. Items and content differ for different age groups. Items were scored as binary (yes/no) by a trained interviewer. The score was based on observations and a semi-structured interview with the parent and focal child during a home visit of one hour (in Dutch, Turkish or Moroccan). Following the recommendation in the HOME manual [28], three interviewers were trained in vivo by the first author. We reached an inter-observer agreement of 96 % (i.e. the percentage of items that both observers scored the same in a joint observation). Furthermore, several sample characteristics were assessed at T0. These included primary patient (mother and/or father), family structure (single-, two-parent family), diagnosis and disease progression of parent(s) (depressive and anxiety disorders, other Axis I disorders, personality disorders, comorbidity, severity of illness, chronic course of illness), ethnicity (Dutch, Moroccan, Turkish, Surinamese, Netherlands Antilles, other), children (number of children, age and gender of index child), number of risk factors and receiving social benefits (yes/no).

Resource usage and costing

Costs were related to running PBCM or CAU (intervention costs) and to utilization of services. Costs were measured irrespective of who bore them and were indexed (in Euros) for the reference year 2012 using price indices from Statistics Netherlands [32]. Cost prices used for calculation can be obtained via supplementary material which is published online (Additional file 1). Intervention costs were calculated based on the average time spent by human resources needed to execute PBCM or CAU. The measurement of PBCM intervention costs was based on the time investment of the PBCM coordinator, plus the time investment by other professionals in the meetings. Information on the time invested in PBCM was retrieved from the medical records, counting all telephone calls, reported e-mail exchanges, home visits, face-to-face contact of the PBCM coordinator with parents or the family, and coordination meetings.
Time spent by the coordinator on telephone calls and e-mails was valued at 23.90 Euros per contact. Series of several telephone calls or e-mails (three or more) were valued at 95.61 Euros, face-to-face contacts were valued at 119.51 Euros, home visits by PBCM including traveling time at 191.22 Euros and coordination meetings were valued at 191.22 Euros. The price rate of PBCM is the tariff as billed by the organization for integral costs, which includes gross salary costs plus overhead. We used one standard tariff for professionals participating in the coordination meetings, namely 95.61 Euros. The costs of the control intervention included optional participation in consultation and COPMI groups. Cost units for COPMI were the number of consultations as reported in the medical records (95.61 Euros) and participation in the COPMI groups by parents or children (350 Euros). Costs for psychiatric treatment are included in the healthcare service costs (see below).

Costs related to utilization of services

Costs related to the family's utilization of services (healthcare costs, childcare costs, and other inter-sectoral costs) were measured by interviewing the parents, using a study-specific family support questionnaire (Dutch Services and Support Questionnaire, Vragenlijst Hulp en Ondersteuning, VHO). The VHO was based on the Trimbos/iMTA questionnaire for Costs associated with Psychiatric Illness (TiC-P) [33, 34], with an appended list of services from the PBCM manual [27]. The questionnaire was tested on five families and adapted to make it feasible in practice. Within the questions, we used a three-month time frame for highly frequent, inexpensive services, such as childcare services, and a six-month time frame for less frequent, highly expensive services, such as hospital admissions. The total service costs for each family were estimated by multiplying the quantity of each type of resource with its relevant cost price [35].

Health service costs

Health service costs included costs related to the use of mental healthcare, other primary and secondary care, and youth care, such as youth care agencies and preventive family support. Most costs were calculated by multiplying the units (contacts, sessions, hours) with the standard cost prices as noted in the Dutch guidelines for health economic research and the manual of the iMTA questionnaire on intensive youth care [36, 37]. When these sources did not report prices for specific services, cost prices were drawn from reports of the Dutch Healthcare Authority and the National Health Tariffs Act or the Netherlands Youth Institute [38, 39]. When these reports did not provide cost prices for measured services, costs were estimated based on equivalent services for which cost prices were available. Childcare included day care (professional childcare) and babysitting (informal childcare). Cost prices for professional and informal childcare were drawn from the Dutch guidelines for health economic research [37].

Inter-sectoral costs

In addition to childcare services, other ICBs were measured. These included services in the educational sector, such as costs for special education, services in the criminal justice sector, such as costs for court proceedings and police services, and costs for debt restructuring services. These were calculated by multiplying the units (contacts, sessions, hours) with the prices provided by a Dutch manual for ICBs [40].
When the manual did not provide the required cost prices, these cost prices were estimated based on valuation techniques described in the manual or, if available, drawn from the manual of the iMTA questionnaire on intensive youth care [36]. After having given written informed consent, ninety-nine families were randomized in a 50–50 ratio, by drawing an envelope from a container; the envelopes contained either information about the PBCM condition or information about the control condition. After randomization, 49 families were assigned to the PBCM condition and 50 were assigned to the control condition by the researcher.

Data preparation for analysis

Missing values and invalid scores of the items of the HOME and VHO were checked with the interviewer. Of the entered data, 10 % were double scored and checked for differences. Outliers and missing values in the total scores on the HOME were analyzed using the Missing Values Analysis in SPSS. Less than 5 % of the items of the HOME were missing. No outliers were found. Missing items of the HOME were imputed with the mean of the scores at T0, T1 and T2. Missing assessments of the HOME at T1 and T2 were imputed using the expectation maximization technique (EM) in SPSS. Because of differences in content and number of items in each age version of the HOME, we calculated standardized T-scores (mean = 50, SD = 10, range 0–100), as suggested by Bradley (2009, February 12, personal communication) and De Beurs [41]. A higher T-score means better parenting quality. If costing data were missing for T1 or T2, the mean costs of the other two measures (T0 and T1 or T2) for that family were imputed. If a family dropped out after baseline, the mean costs of the total group at T1 and T2 were imputed. Subsequently, measured costs were extrapolated [42]. To cover each period of nine months, costs were extrapolated by multiplying the costs related to highly frequent, inexpensive services (measured over a three-month time frame) by three and the costs related to less frequent, highly expensive services (measured over a six-month time frame) by 1.5. Extrapolated costs for services measured at T1 and T2 were aggregated to cover the whole follow-up period of eighteen months, and these aggregated costs were used for the analyses.

Descriptive statistics were used to describe the characteristics of the sample at baseline. Differences between the groups were assessed using t-tests for continuous variables and chi-square tests for discrete variables in SPSS. From all three perspectives, for both conditions the costs were significantly skewed to the right (p < 0.01); skewness scores for the control and intervention condition were respectively 2.46 and 1.69 (healthcare perspective), 1.93 and 1.20 (social care perspective), and 1.67 and 1.05 (societal perspective). Skewed data are common among costing studies [43]. To determine the cost-effectiveness of PBCM, incremental cost-effectiveness ratios (ICERs) were calculated from all three perspectives (healthcare, social care and societal). Results are presented in cost-effectiveness planes and cost-effectiveness acceptability curves (CEACs) [35, 44].

Box 1 The incremental cost-effectiveness ratio, the cost-effectiveness plane and the cost-effectiveness acceptability curve

The ICER is a ratio comparing the additional costs and effects of the experimental intervention with those of the control intervention.
ICERs were calculated using the formula:

\( ICER = \frac{C_i - C_c}{E_i - E_c} \)

In this study, C represents the average total costs per family during the whole follow-up period of eighteen months, and E represents the mean difference between the HOME score at T2 and the HOME score at T0 in the PBCM condition (subscript i) and control condition (subscript c). Stochastic uncertainty in the data was dealt with using non-parametric bootstraps. By using the bootstrapping technique in Excel, the original sample was re-sampled, which resulted in 5000 simulated ICERs per scenario. These were plotted in cost-effectiveness planes (Fig. 2a,b,c). These planes provide a visual representation of the probability of PBCM being cost-effective in comparison with the control condition (the 0,0 coordinate) by showing the distribution of simulated ICERs across four quadrants: 1) the Northeast (NE) quadrant, which means that the intervention is more effective and more costly than CAU, 2) the Southeast (SE) quadrant, indicating that the intervention is more effective and less costly, 3) the Southwest (SW) quadrant, indicating that the intervention is less effective and less costly and 4) the Northwest (NW) quadrant, indicating that the intervention is less effective and more costly. An ICER in the SE and NW quadrant is negative, which represents the situation in which the intervention is either clearly dominant over (SE) or inferior to (NW) CAU. An ICER in the SW or NE quadrant is positive, which means, from a cost-effectiveness perspective, that the intervention is more favorable than the control condition only when the ICER is lower than the maximum willingness to pay (WTP max) per unit effect. The WTP max is the maximum expense a society is willing to pay for better outcomes (parenting quality, in this study). Since no acknowledged threshold, i.e. WTP max, is available for the HOME outcome measure, a CEAC was created for each perspective (Fig. 2d,e,f). The CEAC shows the likelihood of PBCM being favorable over the control intervention for several different hypothetical maximum WTPs.

For each perspective, several additional sensitivity analyses were performed to test the robustness of the ICERs calculated in the base case scenario. First, to examine the impact of cost outliers (i.e. high cost families) on the calculated cost-effectiveness, ICERs were calculated based on data in which the top 5 % cost outliers were excluded (alternative scenario A). Second, to assess the impact of imputation, the same analyses were conducted on complete cases (alternative scenario B). Third, to examine the effects of implementing the intervention, a subgroup analysis (alternative scenario C) was carried out on the sample that actually received PBCM (N = 38) (see flow chart, Fig. 1). Finally, apart from the routine unadjusted base case scenario, CEAs should include an alternative scenario in which baseline cost differences are adjusted [43]. To adjust for baseline cost differences between the two conditions in this study, ICERs were calculated based on mean difference adjustments (alternative scenario D). By using this method, the mean difference in costs between conditions at baseline is first extrapolated to equal the length of the follow-up period (i.e. 18 months), and subsequently subtracted from the total post-randomization costs (intervention costs and costs for services after randomization) of the condition with the highest baseline costs [43].
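As an illustration of the procedure described above (point-estimate ICER, non-parametric bootstrap, CEAC and the scenario D adjustment), a minimal Python sketch follows. The per-family data are simulated placeholders with right-skewed costs, not the study's data; the net-monetary-benefit rule used for the CEAC is the standard construction and is assumed here, and the three-month baseline window in the adjustment function is likewise an assumption. As a rough arithmetic check against the figures reported in the Results: the base case incremental cost of 1,793 Euros divided by the incremental effect of 3.82 HOME points gives a point estimate of roughly 469 Euros per point from the healthcare perspective, close to the reported bootstrap median of 461 Euros.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical per-family data: 18-month costs (Euros, right-skewed as in the
# study) and HOME T-score changes (T2 - T0). Not the study's actual data.
cost_pbcm = rng.gamma(2.0, 7000.0, size=49)
cost_cau = rng.gamma(2.0, 6500.0, size=50)
eff_pbcm = rng.normal(1.93, 10.0, size=49)
eff_cau = rng.normal(-1.89, 8.0, size=50)

# Point-estimate ICER = (Ci - Cc) / (Ei - Ec).
icer = (cost_pbcm.mean() - cost_cau.mean()) / (eff_pbcm.mean() - eff_cau.mean())
print("point-estimate ICER:", icer)

def bootstrap_means(cost, eff, n_rep, rng):
    """Resample families with replacement; return replicate means of cost and effect."""
    idx = rng.integers(0, len(cost), size=(n_rep, len(cost)))
    return cost[idx].mean(axis=1), eff[idx].mean(axis=1)

# 5000 simulated incremental cost/effect pairs, as plotted on the planes.
ci, ei = bootstrap_means(cost_pbcm, eff_pbcm, 5000, rng)
cc, ec = bootstrap_means(cost_cau, eff_cau, 5000, rng)
d_cost, d_eff = ci - cc, ei - ec
print("median bootstrap ICER:", np.median(d_cost / d_eff))

# CEAC: share of replicates with non-negative net monetary benefit,
# WTP * dE - dC >= 0, for a grid of willingness-to-pay thresholds.
for wtp in (0, 500, 1000, 2500):
    print(wtp, float((wtp * d_eff - d_cost >= 0).mean()))

def mean_difference_adjustment(post_costs, baseline_diff, window_months=3,
                               follow_up_months=18):
    """Scenario D: extrapolate the baseline cost difference to the follow-up
    length and subtract it from the post-randomization costs of the condition
    with the higher baseline costs. The 3-month window is an assumption."""
    return post_costs - baseline_diff * (follow_up_months / window_months)

print(mean_difference_adjustment(18000.0, 300.0))  # 18000 - 300 * 6 = 16200.0
```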
The base case scenario and alternative scenarios resulted in a total of fifteen ICERs. Finally, we compared reported contacts with registered community mental health service contacts to estimate the reliability of self-reporting.

Fig. 1 Flow chart of the participating families through recruitment and the study. Information about excluded patients and decliners: In step 1, 106 families were not contacted by the researcher due to lack of continuity or ending of contact between therapist and patient, or not being able to contact them in person by phone. In step 2, 32 families were found to be ineligible because the children were not in the required age category or because the child had been diagnosed with mental health problems; 24 families were referred by the researchers to relevant parental support services or child services; and 101 families declined to participate, mostly because they were not interested in support or in participating in a research project.

Participant flow

As can be seen in the flow chart (Fig. 1), families were recruited in two steps. In the first step, researchers screened each therapist's caseload for eligible families, using the exclusion criteria. This resulted in 497 patients, who were approached by letter, in which the therapists asked the patients for permission to be contacted by the researchers. In the second step, the researchers contacted 256 eligible and interested families, checked whether the parent(s) were interested in PBCM, and checked all inclusion and exclusion criteria. Ninety-nine families were included and randomly allocated to either PBCM (n = 49) or to the control condition (n = 50). Of the 49 families allocated to PBCM, 38 (77 %) actually did receive the intervention. The reasons for not receiving PBCM were: PBCM was not indicated according to the PBCM coordinator, treatment was terminated, the parents withdrew consent at the start, or the PBCM coordinator was not able to contact parents. Of the 50 families in the control group, 22 (44 %) made use of the COPMI team for consultation or of COPMI groups, and two were also referred to the PBCM intervention. Dropout was low in both arms (Fig. 1), namely four of the 49 families in the PBCM group and three of the 50 in the control group (χ2 = .18, df = 1, p = 0.68), and dropout was not related to characteristics or outcome measures. At baseline, 99 files were available, 86 files were available at the second assessment, and 88 files at the third assessment. A total of 82 families (83 %) had complete datasets for the HOME.

Baseline data

As shown in Table 2, in most families the mother was the primary patient, and most parents were diagnosed with depressive or anxiety disorders. Half of the families included were single parents, and two-thirds were of ethnic minorities. The mean number of children was 2.1, and most children were of primary school age. The mean T-score on HOME was 50 (an average score compared with the population norm in the manual [28]), and the mean number of risk factors was five on a scale of sixteen. The PBCM group contained significantly more single parent families and more families from ethnic minorities, and the mean age of the index child was significantly higher than in the control group. The groups did not differ on other aspects.
Table 2 Baseline characteristics and baseline scores of families in the experimental group (n = 49) and in the control group (n = 50). [The cell values of this table were lost in extraction. Rows covered: primary patient and family structure (mother/single; mother/two-parent family; father/two-parent family; mother and father), diagnosis (depressive and anxiety disorders; other Axis I disorders; personality disorders), comorbidity, severity of illness (CGI) and chronic course of illness (> 2 years), ethnic minority status (Morocco, Turkey, Surinam, Netherlands Antilles, other country), number of children and children aged 0-3, 4-12 and 13-20 years, gender and age of the index child, HOME total score at baseline, baseline healthcare, childcare and inter-sectoral costs (Euros, 2012), number of risk factors, and receiving social benefits. Recoverable test statistics include χ2 = 4.45 (1), p = 0.035ab* for primary patient and family structure, t = 0.79 (93) for CGI severity, t = -0.29 (97) for number of children, and one comparison with p = 0.976bc whose row attachment was lost.]
* p < 0.05. a Tested for single versus two parents. b There were 47 mothers and 6 fathers in the experimental group; there were 48 mothers and 6 fathers in the control group. This is the reason that the sum of the figures in the first three rows is not 49 and 50. c Tested for mothers and not for fathers, as both groups had only 6.

The mean intervention costs for PBCM (n = 49) were 1,685 Euros, and mean costs for the control condition (n = 50) were 229 Euros (Table 3). Intervention costs for the subgroup of allocated families who did receive the intervention were 2,053 Euros in the PBCM group (n = 38) and 285 Euros for the control group (n = 22) (data not shown). Therefore, depending on the approach, the intervention costs of PBCM were 1,456 Euros (n = 49) or 1,768 Euros (n = 38) higher than those of CAU.

Table 3 Mean per-family costs by condition and measurement (in Euros, indexed for 2012), for the follow-up periods T0-T1 (first 9 months), T1-T2 (10 to 18 months) and total T0-T2 (full 18 months). [The cell values of this table were lost in extraction. Rows covered: PBCM service costs; primary care (other); secondary care (other); preventive family support; specialized child services; total healthcare perspective; informal childcare; professional childcare; total social care perspective; costs outside the care sector (educational sector, criminal justice sector); and total societal perspective.]

During the whole follow-up period of eighteen months, the mean healthcare costs per family in the PBCM condition were 11,327 Euros, which was higher than in the control condition (10,990 Euros). Childcare costs were lower in the PBCM condition, namely 4,705 Euros versus 5,760 Euros in the control condition. The same goes for costs in other sectors, where mean costs in the PBCM condition were 2,086 Euros and mean costs in the control condition were 2,230 Euros. Table 3 also provides the mean per-family costs from each perspective (intervention costs plus costs for use of services), which were used for calculating the ICERs. Differences in costs between T1 and T2, such as differences in costs in the educational sector, can be explained by irregular use of services.

Incremental costs

Table 4 (upper panel) shows costs per condition for the base case scenario. The difference in average per-family costs between the PBCM and control condition varies for each of the three perspectives, namely 1,793 Euros from the healthcare perspective, 738 Euros from the social care perspective and 596 Euros from the societal perspective.
For each perspective, costs were higher in the PBCM condition.

Table 4 Summary statistics of the base case analyses and sensitivity analyses from three perspectives. [The cell values of this table were lost in extraction. For the control (n = 50) and PBCM (n = 49) conditions, the table reported per-perspective costs (€)b, effectsc, the median ICERd and the distribution of simulated ICERs across the quadrants, including Northwest (inferior) and Southeast (dominant), for the base case scenario (imputed data, including cost outlierse) and the alternative scenarios A (imputed data, excluding cost outliers), B (complete cases, including cost outliers; dominantf from the societal perspective), C (imputed data, including cost outliers, PBCM-families who received the intervention) and D (imputed data, including cost outliers, mean difference adjustment).]
a In the analyses either 1) intervention and healthcare costs (healthcare perspective), 2) intervention, healthcare and childcare costs (social care perspective) or 3) all measured costs (societal perspective) were included. b Costs per family at 2012 prices. c Average effectiveness (T-score) compared with the baseline assessment. d The presented median ICER is the 50th percentile of 5000 bootstrap replications of the ICER. e Differences in effects between the three perspectives are caused by the exclusion of cost outliers, which differed among the three perspectives. f Lower incremental costs and a positive incremental effect of PBCM in comparison with the control condition lead to a negative ICER, which means that PBCM is superior to the control condition on cost-effectiveness.

Incremental effects

Table 4 (upper panel) shows the effects per condition for the base case scenario. PBCM had a positive effect on parenting quality, with an increase of the HOME T-score of 1.93 from 48.59 (SD 10.79) at baseline to 50.52 (SD 11.92) after eighteen months. In the control condition the HOME T-score decreased by 1.89 points, from 51.38 (SD 9.05) to 49.49 (SD 6.48). The mean incremental effect per family between the PBCM and control condition was, therefore, 3.82, and did not change with perspective, since the change of perspective within the base case scenario stipulated only a change in costs.

Incremental cost-effectiveness

From all three perspectives, costs per unit of the outcome measure (HOME T-score) were higher for the PBCM condition in comparison with the control condition. Since PBCM was more effective than CAU, this resulted in positive ICERs (Table 4, upper panel). However, ICERs differ for each perspective, varying from 461 Euros (healthcare perspective) to 215 Euros (social care perspective) to 175 Euros (societal perspective) per one-point improvement on the HOME T-score. Differences can be explained by healthcare costs being higher and childcare costs and costs in other sectors being lower for the PBCM condition in comparison with the control condition (Table 3). The cost-effectiveness planes (Fig. 2a,b,c) show differences in the distributions of the 5,000 simulated ICERs across the four quadrants between the CEAs carried out from the three perspectives. Corresponding with the median ICERs presented in Table 4, the majority of simulated ICERs are located in the NE quadrant. However, the distribution of the simulated ICERs among the two eastern quadrants differs among the perspectives. Notable is the shift of the cloud of ICERs towards the SE quadrant in the analysis carried out from the societal perspective (39 %) and the social care perspective (37 %) in comparison with the analysis carried out from the healthcare perspective (20 %).

Fig. 2 Cost-effectiveness planes and CEACs from the three perspectives.
Scatterplots of simulated incremental cost-effectiveness ratios (n = 5000) on cost-effectiveness planes (a, b, c) and CEACs (d, e, f) for the PBCM versus the control condition from the healthcare perspective (a, d), social care perspective (b, e) and societal perspective (c, f).

The percentages mentioned above equal the probabilities of PBCM being cost-effective at a WTP max of 0 Euros - i.e. the situation in which one is not willing to pay for this intervention - in the CEACs (Fig. 2d,e,f), and explain why for low WTP thresholds the probability of PBCM being cost-effective over the control intervention is lower from a healthcare perspective than it is from broader perspectives. However, since for all three perspectives the vast majority of simulated incremental effects are in the NE quadrant, all CEACs rise when the WTP max increases and all asymptote close to 100 % around 2,500 Euros. The probabilities of PBCM being cost-effective do not differ among perspectives for WTP thresholds higher than 2,500 Euros (Fig. 2d,e,f).

The results of the sensitivity analyses are presented in the second to fifth panels of Table 4. In scenario A (second panel), ICERs were higher than in the base case scenario. This can be explained by the fact that in all three perspectives, the majority of cost outliers - three or four out of the five excluded - were families in the control condition. In scenario B (third panel), in which incomplete cases were removed before the data were analyzed, ICERs were lower than in the base case scenario. The analysis conducted from a societal perspective resulted in an ICER of -143, with 58 % of the cloud situated in the SE quadrant, and was therefore marked as 'dominant' in Table 4. Scenario C, i.e. the subgroup analysis (fourth panel), resulted in ICERs higher than in the base case scenario. In all these scenarios, ICERs were highest from the healthcare perspective and lowest from the societal perspective. In scenario D (fifth panel), in which the analyses were performed based on mean baseline difference adjustments, the ICERs were highest of all scenarios, varying from 1,031 Euros (healthcare perspective) to 1,313 Euros (social care perspective) to 1,059 Euros (societal perspective). This can be explained by the higher baseline costs in the control condition for all three perspectives. Cost-effectiveness planes and CEACs of the sensitivity analyses can be obtained via supplementary material which is published online (Additional file 2).

To estimate the reliability of self-reporting, we compared reported contacts with registered community mental health service contacts. These showed a significant underreporting of 1,543 Euros in the follow-up period (t = 4.06, df = 87, p < 0.001). No differences in underreporting were found between the intervention and control condition (t = 1.09, df = 86, p = 0.278). No correction for underreporting was made in the analyses of costs and ICERs.

The aim of this study was to (a) examine the costs and cost-effectiveness of PBCM and (b) answer the question whether shifting from a narrow (healthcare) perspective to broader perspectives, in which either childcare costs (social care perspective) or childcare costs and other ICBs (societal perspective) were included, results in a change in the cost-effectiveness of PBCM. Comparing the total costs (intervention costs plus costs of service utilization) in the PBCM group and the control group, the conclusion is that PBCM is more costly.
The extra costs of PBCM ranged from 1,793 Euros from a healthcare perspective to 738 Euros from a social care perspective to 596 Euros from a societal perspective. The savings in the last two perspectives can be attributed to lower costs for childcare, debt restructuring and the educational sector in the PBCM group in comparison with the control group. PBCM had better effects on parenting quality than CAU, but also had higher costs. Therefore, ICERs were positive. The cost differences among perspectives are reflected in the ICERs; the ICER is highest in the analysis conducted from the narrowest perspective (healthcare, 461 Euros), lower in the analysis conducted from a broader perspective (social care, 215 Euros), and lowest in the analysis conducted from the broadest perspective (societal, 175 Euros). Sensitivity analyses based on excluding cost outliers, excluding incomplete cases and the subgroup analysis confirmed that a broader perspective leads to a lower ICER. It can be concluded that, for this study, the choice of perspective has had an impact on the outcome of the analysis. However, the difference between ICERs is larger between the healthcare perspective and the social care perspective (246 Euros) than it is between the social care perspective and the societal perspective (40 Euros). This shows that the impact of including ICBs other than childcare on the outcomes of this CEA was fairly limited. Nevertheless, they did show an impact on the results. Whether PBCM is considered cost-effective over CAU depends on the WTP max per point gain on the HOME T-score (Fig. 2d,e,f). The probabilities of PBCM being cost-effective start at 20 % (healthcare perspective), 37 % (social care perspective) and 39 % (societal perspective) at a WTP max of 0 Euros and increase with an increasing WTP max. For thresholds lower than 2,500 Euros, the chances of PBCM being favorable over the control intervention are higher when a broader perspective is adopted. For thresholds higher than 2,500 Euros, there is a near 100 % probability of PBCM being cost-effective regardless of the perspective chosen. This study was the first to assess the costs of a preventive family intervention for COPMI families and relate them to parenting outcomes. The strengths of this study are the randomized controlled design and the broad range of sensitivity analyses conducted to test the robustness of the analysis in the base case scenario. The sensitivity analyses were limited to costs and not to effects; the analyses showed no outliers on effects and no significant baseline differences in the HOME T-scores. Furthermore, the real-world setting strengthens the generalizability of the results. The PBCM method and the population in this study represent the state of the art. The study has several limitations, which should be addressed for the interpretation of the findings. First, no adequate instruments were available to assess the quality adjusted life years (QALYs) of young COPMI. However, the HOME instrument is a valid instrument, used widely and internationally to measure parenting quality, and it can be interpreted as a proxy for quality of life; the HOME measures many aspects of parenting and the home environment which are suggested as being essential within the concept of quality of life for COPMI's physical, emotional, social and material well-being [22]. Nevertheless, it should be noted that the HOME has ceiling effects [31], which may have reduced sensitivity for effects and for PBCM's cost-effectiveness.
Second, although the HOME T-score was a clinically relevant outcome measure for parenting quality, its use within a CEA is new. The lack of clinical cut-off scores impedes interpretation of improvement in parenting quality, in terms of its economic value, for policy making. Also, no thresholds for WTP on costs per unit effect are available for the HOME T-score, unlike for widely used utility-based outcome measures such as the QALY [45, 46]. Since the intervention is both more costly and more effective than CAU, the lack of WTP thresholds makes it hard to interpret the economic value of the improvement of parenting quality. However, the CEACs provide decision supportive information because they provide cost-effectiveness probabilities for a wide range of hypothetical thresholds for all analyses. Furthermore, looking at effects, prospective studies on the long-term outcomes of parenting quality (measured by the HOME) showed positive health or societal outcomes. These studies showed low to moderate correlations with (later) child development such as intelligence, academic achievement, school performance, language development, social competence, classroom behavior, peer acceptance, and emotional health [47]. Furthermore, HOME scores were shown to be related to such health issues as malnutrition, failure-to-thrive, and child abuse [48]. Third, given limitations regarding the feasibility of assessing the costs for vulnerable parents within an RCT, we chose to focus on services which are important partners for PBCM, such as youth care, childcare, education, and the justice and social systems. Productivity costs in parents were not measured. Including this ICB within the analysis conducted from a societal perspective might have had an influence on the cost-effectiveness. Also, self-reported service utilization may have distorted the calculation of costs. As no differences in underreporting were found between the intervention and control condition, the effect of self-reporting on the cost-effectiveness is not obvious. Furthermore, we did not closely monitor the occurrence of waiting lists for the families during the study, though none was reported in the PBCM files. Waiting lists might nonetheless have obscured the results of this study. Fourth, differences in the baseline costs of both groups substantially affected ICERs. After adjusting for differences in baseline costs, ICERs climbed to more than 1,000 Euros. The differences in costs are probably related to differences in family composition, such as being a one-parent family, and the age of the children. The needs and barriers for different kinds of services might vary depending on the family composition. For instance, savings in childcare might also be related to differences in family composition, since the control group contained more young preschool children (35 versus 27). However, it is hard to predict how this affects the total costs. We found no relation between baseline total costs and one/two-parent families, the number of children under the age of four or ethnicity (data not shown). Still, incorporating family characteristics (such as composition and ages of family members) in CEAs remains a challenge, especially in multi-ethnic samples. Fifth, the study was conducted on a relatively small and rather heterogeneous sample (e.g. parental diagnosis, family composition, ethnicity, and source of income).
The scores of individual families, such as outliers, may have had a large influence on the variance in effects and costs, and thereby on the cost-effectiveness found in this study. This is reflected in the differences between the ICERs in the base case scenario and alternative scenario A, where ICERs were calculated excluding cost outliers. Finally, the chosen time frame of eighteen months might not have been long enough to study all meaningful effects and costs, such as long-term ICBs related to the school career, work or criminality of youngsters. Moreover, the young age of the children and the absence of evident behavioral problems may have reduced the chance of finding these ICBs. The need for a long time frame for cost-effectiveness studies on preventive family support has been shown in the Nurse-Family Partnership study [49]. Long-term prospective studies are needed to explore the effects in children and co-occurring costs in the long run. As a consequence of the limitations described above, it is difficult to determine whether PBCM provides "value for money". Nevertheless, in this study PBCM showed better effects on parenting quality than CAU and this study gives an overall estimate of the additional costs. This study is the first economic evaluation of a family-focused preventive COPMI approach in psychiatric and family services. The results of this study show, from both a healthcare and a societal perspective, that the intervention is both more costly and more effective than CAU. Since no WTP study was conducted, no conclusive 'yes' or 'no' can be provided to the question whether the intervention is cost-effective. However, as mentioned earlier, the CEACs provide decision supportive information. Furthermore, the observed effect size and the savings in several sectors support focusing on prevention and on the health of vulnerable children and families in all policies. The results of our study may be of interest for community policy makers and stakeholders in health policy and youth care when optimizing service systems for COPMI families within a framework of restricted financial resources. It underscores the importance of evaluating costs and benefits in other sectors when planning and evaluating innovative integrative services for children or families at risk. However, before implementing PBCM on a wider scale, replication studies, preferably along with cost-utility analyses measuring costs, benefits and QALYs of young COPMI, and multi-center studies of case management programs for COPMI families are needed. These studies could also help to gain insight into the various effects and the economic costs and benefits in subgroups, to better indicate which families are best served. Studies in systems with lower provision of and/or accessibility to services in different countries are needed, since the current Dutch service system is one of the richest and most egalitarian ones in the world, with good accessibility for poor families. This study underscores the importance of choosing a broad societal perspective in economic evaluations. ICBs should be, and already are increasingly, considered in underpinning (the financing of) health policies.
CAU, care as usual; CEA, cost-effectiveness analysis; CEAC, cost-effectiveness acceptability curve; COPMI, Children of Parents with a Mental Illness; HOME, Home Observation for Measurement of the Environment; ICBs, inter-sectoral costs and benefits; ICER, incremental cost-effectiveness ratio; PBCM, preventive basic care management; QALY, quality-adjusted life year; VHO, Vragenlijst Hulp en Ondersteuning (family support questionnaire); WTP, willingness to pay.

We thank Mathijs Deen of Parnassia Academy for his assistance with data analysis. This RCT was supported by the Dutch Organisation for Health Research and Development (ZonMw), The Hague, Grant 157003002, awarded to Clemens Hosman, and Funds Nuts Ohra Grant 0903-060, awarded to Henny Wansink. The economic study was supported by Grant 200400010 from ZonMw. Neither funding body participated in the analysis or interpretation of the data or in preparing the manuscript.

Data are not available for online access; however, readers who wish to gain access to the data can write to the first author, Henny Wansink, at [email protected] with their requests. Access can be granted subject to the Dutch central medical ethics committee (Centrale Commissie Mensgebonden Onderzoek, CCMO) and the research collaborative agreement guidelines of the Parnassia Group.

CM, JJ and HW conceived and designed the effect study. RD, AP, DR and SE designed the CEA and led the economic analyses. HW collected the data. RD, HW and SE analyzed the economic data. HW and RD wrote the draft article, co-authored the article, and share first authorship as they contributed equally. All authors made revisions and participated in interpreting the results. All authors read and approved the final manuscript. All authors declare that they have no competing interests.

Consent to participate: Patients received written information regarding the study. Participants gave their written consent to participate. Consent to publish is not applicable. Approval for the ethics of this research was provided by the Dutch Medical Ethics Committee for Mental Health Centres (Medisch Ethische Toetsing Instellingen Geestelijke Gezondheidszorg, METiGG; Ref: 09.143). The study is registered with the Netherlands Trial Register (NTR2569). Before writing this article, we consulted the Consolidated Health Economic Evaluation Reporting Standards [50].

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Additional file 1: Services and sources for prices. This categorization of services into Health Care, Youth Care, Childcare, and services in other sectors follows the current system in the Netherlands. Health Care and Youth Care are services financed by health insurance and the government for the prevention and treatment of somatic, mental, and developmental problems. Childcare is financed by parents themselves for babysitting or kindergarten. Other sectors include the educational sector and the criminal justice sector. For each service, sources for pricing are given.
(DOCX 23 kb)
Additional file 2: Cost-effectiveness planes and CEACs for alternative scenarios. This figure shows the scatterplots of simulated incremental cost-effectiveness ratios (n = 5000) on cost-effectiveness planes and CEACs for PBCM versus the control condition in four alternative scenarios: 1) excluding outliers (alternative scenario A), 2) based on complete cases (alternative scenario B), 3) the sample that actually received the intervention (alternative scenario C), and 4) corrected for baseline cost differences (alternative scenario D). (PDF 3713 kb)

Context, Prevention Department of the Parnassia Group, Lijnbaan 4, The Hague, 2512 VA, The Netherlands
Department of Health Services Research, School for Public Health and Primary Care (CAPHRI), Faculty of Health, Medicine and Life Sciences, Maastricht University, Duboisdomein 30, Maastricht, 6229 GT, The Netherlands
Department of Clinical Psychology, Radboud University, Postbox 9104, Nijmegen, 6500 HE, The Netherlands
Department of Health Promotion, School for Public Health and Primary Care (CAPHRI), Faculty of Health, Medicine and Life Sciences, Maastricht University, Peter Debeyeplein 1, Maastricht, 6229 HA, The Netherlands
Department of Developmental Psychopathology, Radboud University, Postbox 9104, Nijmegen, 6500 HE, The Netherlands
Trimbos, Netherlands Institute of Mental Health and Addiction, Da Costakade 45, Utrecht, 3521 VS, The Netherlands

1. Beardslee WR, Versage EM, Gladstone TR. Children of affectively ill parents: a review of the past 10 years. J Am Acad Child Adolesc Psychiatry. 1998;37(11):1134-41.
2. Bijl RV, Cuijpers P, Smit F. Psychiatric disorders in adult children of parents with a history of psychopathology. Soc Psychiatry Psychiatr Epidemiol. 2002;37(1):7-12.
3. Rutter M, Quinton D. Parental psychiatric disorder: effects on children. Psychol Med. 1984;14(4):853-80.
4. Sameroff AJ. Ecological perspectives on developmental risk. In: WAIMH handbook of infant mental health: Infant mental health in groups at high risk, vol. 4. New York: Wiley; 2000.
5. Weissman MM, Wickramaratne P, Nomura Y, Warner V, Pilowsky D, Verdeli H. Offspring of depressed parents: 20 years later. Am J Psychiatry. 2006;163(6):1001-8.
6. McLaughlin KA, Gadermann AM, Hwang I, Sampson NA, Al-Hamzawi A, Andrade LH, Angermeyer MC, Benjet C, Bromet EJ, Bruffaerts R, et al. Parent psychopathology and offspring mental disorders: results from the WHO World Mental Health Surveys. Br J Psychiatry. 2012;200(4):290-9.
7. Olfson M, Marcus SC, Druss B, Pincus HA, Weissman MM. Parental depression, child mental health problems, and health care utilization. Med Care. 2003;41(6):716-21.
8. Sytema S, Gunther N, Reelick F, Drukker M, Pijl B, Van't Land H. Verkenningen in de kinder- en jeugdpsychiatrie. Een bijdrage uit de psychiatrische casusregisters Rijnmond, Zuid-Limburg en Noord-Nederland. [Explorations in child and adolescent psychiatry, contributions from the psychiatric case registers Rijnmond, Zuid-Limburg and Noord-Nederland.] Utrecht: Trimbos Institute; 2006.
9. Oyserman D, Mowbray CT, Meares PA, Firminger KB. Parenting among mothers with a serious mental illness. Am J Orthopsychiatry. 2000;70(3):296-315.
10. Cobussen M, Hammink A, De Graaf I, Wits E, De Mheen D. Toeleiding naar zorg bij kindermishandeling. [Child abuse and reference to care.] Rotterdam: IVO; 2014.
11. WHO. Mental Health Action Plan 2013-2020. Geneva: World Health Organization; 2013.
12. Nicholson J, Biebel K, Williams VF, Katz-Leavy J. Prevalence of parenthood in adults with mental illness: implications for state and federal policy, programs, and providers, vol. 153. Rockville: Psychiatry Publications and Presentations; 2002.
13. Lauritzen C. The importance of intervening in adult mental health services when patients are parents. Journal of Hospital Administration. 2014;3(6):1-10.
14. De Graaf R, Ten Have M, Van Dorsselaer S. De psychische gezondheid van de Nederlandse bevolking. NEMESIS-2: Opzet en eerste resultaten. [The mental health of the Dutch population. NEMESIS-2: Design and first results.] Utrecht: Trimbos Netherlands Institute for Mental Health and Addiction; 2010.
15. Wansink HJ, Hosman CM, Janssens JM, Hoencamp E, Willems WJ. Preventive family service coordination for parents with a mental illness in the Netherlands. Psychiatr Rehabil J. 2014;37(3):216-21.
16. Wansink HJ, Janssens JMAM, Hoencamp E, Middelkoop BJC, Hosman CMH. Effects of preventive family service coordination for parents with mental illnesses and their children, a RCT. Fam Syst Health. 2015;33(2):110-9.
17. Nicholson J, Henry AD. Achieving the goal of evidence-based psychiatric rehabilitation practices for mothers with mental illnesses. Psychiatr Rehabil J. 2003;27(2):122-30.
18. Falkov A. The family model handbook, an integrated approach to supporting mentally ill parents and their children. Hove: Pavilion Publishing; 2012.
19. OECD. Health care systems: getting more value for money. Paris: Organisation for Economic Co-operation and Development, Economics Department; 2010.
20. Power EJ, Eisenberg JM. Are we ready to use cost-effectiveness analysis in health care decision-making? A health services research challenge for clinicians, patients, health care systems, and public policy. Med Care. 1998;36(5 Suppl):MS10-MS147.
21. Russell LB, Gold MR, Siegel JE, Daniels N, Weinstein MC. The role of cost-effectiveness analysis in health and medicine. Panel on cost-effectiveness in health and medicine. JAMA. 1996;276(14):1172-7.
22. Bee P, Bower P, Byford S, Churchill R, Calam R, Stallard P, Pryjmachuk S, Berzins K, Cary M, Wan M, et al. The clinical effectiveness, cost-effectiveness and acceptability of community-based interventions aimed at improving or maintaining quality of life in children of parents with serious mental illness: a systematic review. Health Technol Assess. 2014;18(8):1-250.
23. Woolderink M, Smit F, Van der Zanden R, Beecham J, Knapp M, Paulus A, Evers S. Design of an internet-based health economic evaluation of a preventive group-intervention for children of parents with mental illness or substance use disorders. BMC Public Health. 2010;10:470.
24. Karoly LA, Greenwood PW, Everingham SS, Hoube J, Kilburn MR, Rydell CP, et al. Investing in our children: what we know and don't know about the costs and benefits of early childhood interventions. Santa Monica: RAND Corporation; 1998.
25. Solis JM, Shadur JM, Burns AR, Hussong AM. Understanding the diverse needs of children whose parents abuse substances. Curr Drug Abuse Rev. 2012;5(2):135-47.
26. Drost RMWA, Paulus ATG, Ruwaard D, Evers SMAA. Inter-sectoral costs and benefits of mental health prevention: towards a new classification scheme. J Ment Health Policy Econ. 2013;16(4):179-86.
27. Wansink HJ, Hosman CMH, Verdoold CJ. Basiszorg, een handleiding: Preventieve zorgcoördinatie voor ouders met psychiatrische problemen. [Basic care, a manual: preventive basic care management for parents with psychiatric problems.] The Hague: Parnassia Bavo Group, Prevention Department; 2010.
28. Caldwell BM, Bradley RH. Home observation for measurement of the environment: administration manual. Tempe: Family & Human Dynamics Research Institute, Arizona State University; 2003.
29. HOME inventory [http://fhdri.clas.asu.edu/home/inventory.html]. Accessed 12 Feb 2016.
30. Bradley RH, Corwyn RF. Caring for children around the world: a view from HOME. Int J Behav Dev. 2005;29(6):468-78.
31. Vedder P, Eldering L, Bradley RH. The home environments of at risk children in the Netherlands. In: Advances in family research. Amsterdam: Thesis Publishers; 1995. p. 69-76.
32. CBS Statline [http://statline.cbs.nl/statweb/]. Accessed 12 Feb 2016.
33. Bouwmans C, De Jong K, Timman R, Zijlstra-Vlasveld M, Van der Feltz-Cornelis C, Tan Swan S, Hakkaart-van Roijen L. Feasibility, reliability and validity of a questionnaire on healthcare consumption and productivity loss in patients with a psychiatric disorder (TiC-P). BMC Health Serv Res. 2013;13:217.
34. Trimbos/iMTA questionnaire for costs associated with psychiatric illness (TiC-P adults). Update 2012 [http://www.bmg.eur.nl/fileadmin/ASSETS/bmg/english/iMTA/Publications/Manuals___Questionnaires/Vragenlijsten_2013/Questionnaire_TiC-P_initial_version_in_English.pdf]
35. Drummond M, Sculpher MJ, Claxton K, Stoddart G, Torrance GW. Methods for the economic evaluation of health care programmes. New York: Oxford University Press; 2015.
36. iMTA Questionnaire Intensive Youth Care [https://www.bmg.eur.nl/fileadmin/ASSETS/bmg/Onderzoek/Onderzoeksrapporten___Working_Papers/2012.06_-_Handleiding_Vragenlijst_Intensieve_Jeugdzorg.pdf]. Accessed 12 Feb 2016.
37. Hakkaart-van Roijen L, Tan SS, Bouwmans CA. Handleiding voor kostenonderzoek. Methoden en referentieprijzen voor economische evaluaties in de gezondheidszorg. Geactualiseerde versie 2010. [Manual for studying costs. Methods and reference prices for economic evaluations in health care. Update 2010.] Diemen: College voor Zorgverzekeringen; 2011.
38. Normprijzenonderzoek jeugd & opvoedhulp Noord-Brabant. [Standard pricing research for youth care and parenting support Noord-Brabant.] [http://www.nji.nl/nl/Normprijzen_jeugd_opvoedhulp_Brabant.pdf]. Accessed 12 Feb 2016.
39. NZA. Prestatiebeschrijvingen en tarieven extramurale zorg 2012. [Standards and rates for outpatient care in 2012.] In: Policy letter CA-300-487. 2012.
40. Drost RMWA, Paulus ATG, Ruwaard D, Evers SMAA. Handleiding intersectorale kosten en baten van (preventieve) interventies: Classificatie, identificatie en kostprijzen. [Manual on intersectoral costs and benefits of (preventive) interventions: classification, identification and cost prices.] Maastricht: Maastricht University, Department of Health Services Research; 2014.
41. De Beurs E. De genormaliseerde T-score: Een euro voor testuitslagen. [The normalized T-score, a 'euro' for test results.] Maandblad Geestelijke volksgezondheid. 2010;65(9):684-95.
42. Hendriks MR, Al MJ, Bleijlevens MH, Van Haastregt JC, Crebolder HF, Van Eijk JT, Evers SM. Continuous versus intermittent data collection of health care utilization. Med Decis Making. 2013;33(8):998-1008.
43. Van Asselt AD, Van Mastrigt GA, Dirksen CD, Arntz A, Severens JL, Kessels AG. How to deal with cost differences at baseline. Pharmacoeconomics. 2009;27(6):519-28.
44. Manca A, Rice N, Sculpher MJ, Briggs AH. Assessing generalisability by location in trial-based cost-effectiveness analysis: the use of multilevel models. Health Econ. 2005;14(5):471-85.
45. Donaldson C, Baker R, Mason H, Jones-Lee M, Lancsar E, Wildman J, Bateman I, Loomes G, Robinson A, Sugden R, et al. The social value of a QALY: raising the bar or barring the raise? BMC Health Serv Res. 2011;11(1):8.
46. Neumann PJ, Cohen JT, Weinstein MC. Updating cost-effectiveness - the curious resilience of the $50,000-per-QALY threshold. N Engl J Med. 2014;371(9):796-7.
47. Wen-Jui H, Leventhal T, Linver MR. The Home Observation for Measurement of the Environment (HOME) in middle childhood: a study of three large-scale data sets. Parenting: Science and Practice. 2004;4(2):189-210.
48. Bradley RH. The HOME Inventory: review and reflections. Adv Child Dev Behav. 1994;25:241-88.
49. Olds DL. The Nurse-Family Partnership: an evidence-based preventive intervention. Infant Mental Health Journal. 2006;27(1):5-25.
50. Husereau D, Drummond M, Petrou S, Carswell C, Moher D, Greenberg D, Augustovski F, Briggs AH, Mauskopf J, Loder E, et al. Consolidated health economic evaluation reporting standards (CHEERS) statement. BMC Med. 2013;11(1):80.
Return on Risk-Adjusted Capital – RORAC Definition
By Marshall Hargrave

What Is Return on Risk-Adjusted Capital – RORAC?
The return on risk-adjusted capital (RORAC) is a rate of return measure commonly used in financial analysis, where various projects, endeavors, and investments are evaluated based on capital at risk. Projects with different risk profiles are easier to compare with each other once their individual RORAC values have been calculated. The RORAC is similar to return on equity (ROE), except the denominator is adjusted to account for the risk of a project.

The Formula for RORAC Is
$$\text{Return on Risk-Adjusted Capital}=\frac{\text{Net Income}}{\text{Risk-Weighted Assets}}$$
where Risk-Weighted Assets = allocated risk capital, economic capital, or value at risk.

How to Calculate Return on Risk-Adjusted Capital – RORAC
Return on Risk-Adjusted Capital is calculated by dividing a company's net income by the risk-weighted assets.

What Does Return on Risk-Adjusted Capital (RORAC) Tell You?
Return on risk-adjusted capital takes into account the capital at risk, whether it be related to a project or company division. Allocated risk capital is the firm's capital, adjusted for a maximum potential loss based on estimated future earnings distributions or the volatility of earnings. Companies use RORAC to place greater emphasis on firm-wide risk management. For example, different corporate divisions with unique managers can use RORAC to quantify and maintain acceptable risk-exposure levels. This calculation is similar to risk-adjusted return on capital (RAROC). With RORAC, however, the capital is adjusted for risk, not the rate of return. RORAC is used when the risk varies depending on the capital asset being analyzed.

- RORAC is commonly used in financial analysis, where various projects or investments are evaluated based on capital at risk.
- It allows for an apples-to-apples comparison of projects with different risk profiles.
- It is similar to risk-adjusted return on capital, but RAROC adjusts the return for risk, not the capital.

Example of How to Use RORAC
Assume a firm is evaluating two projects it has engaged in over the previous year and needs to decide which one to eliminate. Project A had total revenues of $100,000 and total expenses of $50,000. The total risk-weighted assets involved in the project is $400,000. Project B had total revenues of $200,000 and total expenses of $100,000. The total risk-weighted assets involved in Project B is $900,000. The RORAC of the two projects is calculated as:
$$\begin{aligned} \text{Project A RORAC}&=\frac{\$100{,}000-\$50{,}000}{\$400{,}000}=12.5\%\\ \text{Project B RORAC}&=\frac{\$200{,}000-\$100{,}000}{\$900{,}000}=11.1\%\end{aligned}$$
Even though Project B had twice as much revenue as Project A, once the risk-weighted capital of each project is taken into account, it is clear that Project A has a better RORAC.
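The same arithmetic is easy to script. Here is a minimal Python sketch of the article's example; the function and variable names are my own, not part of any standard library:

```python
def rorac(net_income: float, risk_weighted_assets: float) -> float:
    """Return on risk-adjusted capital, as a fraction."""
    return net_income / risk_weighted_assets

# The article's two projects: net income = revenues - expenses
project_a = rorac(100_000 - 50_000, 400_000)
project_b = rorac(200_000 - 100_000, 900_000)
print(f"Project A RORAC: {project_a:.1%}")   # 12.5%
print(f"Project B RORAC: {project_b:.1%}")   # 11.1%
```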
The Difference Between RORAC and RAROC
RORAC is similar to, and easily confused with, two other measures. Risk-adjusted return on capital (RAROC) is usually defined as the ratio of risk-adjusted return to economic capital. In this calculation, instead of adjusting the risk of the capital itself, it is the risk of the return that is quantified and measured. Often, the expected return of a project is divided by value at risk to arrive at RAROC (a small numeric sketch of the contrast follows below). Another statistic similar to RORAC is the risk-adjusted return on risk-adjusted capital (RARORAC). This statistic is calculated by taking the risk-adjusted return and dividing it by economic capital, adjusting for diversification benefits. It uses guidelines defined by the international risk standards covered in Basel III.

Limitations of Using Return on Risk-Adjusted Capital – RORAC
Calculating the risk-adjusted capital can be cumbersome, as it requires understanding the value at risk calculation. For related insight, read more about how risk-weighted assets are calculated based on capital risk.
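As a rough illustration of the RORAC/RAROC distinction described above, the sketch below adjusts the denominator in one case and the numerator in the other. All figures, including the 5,000 expected-loss adjustment, are invented for illustration:

```python
def rorac(net_income: float, risk_weighted_assets: float) -> float:
    # Risk adjustment sits in the denominator (capital at risk).
    return net_income / risk_weighted_assets

def raroc(risk_adjusted_return: float, economic_capital: float) -> float:
    # Risk adjustment sits in the numerator (return net of expected loss).
    return risk_adjusted_return / economic_capital

income, capital, expected_loss = 50_000, 400_000, 5_000
print(f"RORAC: {rorac(income, capital):.1%}")                  # 12.5%
print(f"RAROC: {raroc(income - expected_loss, capital):.1%}")  # about 11.2%
```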
Existence of multiple spike stationary patterns in a chemotaxis model with weak saturation
Kazuhiro Kurata and Kotaro Morimoto
Department of Mathematics and Information Sciences, Tokyo Metropolitan University, 1-1 Minami-Ohsawa, Hachioji, Tokyo 192-0397, Japan
Discrete & Continuous Dynamical Systems, March 2011, 31(1): 139-164. doi: 10.3934/dcds.2011.31.139
Received January 2010; Revised March 2011; Published June 2011

Abstract: We are concerned with a multiple boundary spike solution to the steady-state problem of a chemotaxis system: $P_t=\nabla \cdot \big( P\nabla ( \log \frac{P}{\Phi (W)})\big)$, $W_t=\varepsilon^2 \Delta W+F(P,W)$, in $\Omega \times (0,\infty)$, under the homogeneous Neumann boundary condition, where $\Omega\subset \mathbb{R}^N$ is a bounded domain with smooth boundary, $P(x,t)$ is a population density, and $W(x,t)$ is a density of the chemotaxis substance. We assume that $\Phi(W)=W^p$, $p>1$, and we are interested in the cases $F(P,W)=-W+\frac{PW^q}{\alpha+\gamma W^q}$ and $F(P,W)=-W+\frac{P}{1+ k P}$ with $q>0$ and $\alpha, \gamma, k\ge 0$, which have a saturating growth. Existence of a multiple spike stationary pattern is related to a weak saturation effect of $F(P,W)$ and the shape of the domain $\Omega$. In this paper, we assume that $\Omega$ is symmetric with respect to each hyperplane $\{ x_1=0\},\cdots ,\{ x_{N-1}=0\}$. For the two classes of $F(P,W)$ above with a saturation effect, we show the existence of multiple boundary spike stationary patterns on $\Omega$ under a weak saturation condition on the parameters $\alpha$, $\gamma$, and $k$. Based on the method developed in [14] and [10], we present a technique to construct a multiple boundary spike solution to a reduced nonlocal problem on such domains systematically.

Keywords: chemotaxis, saturation effect, spike patterns, nonlinear elliptic system.
Mathematics Subject Classification: Primary: 35K50, 35Q80; Secondary: 92C1.
Citation: Kazuhiro Kurata, Kotaro Morimoto. Existence of multiple spike stationary patterns in a chemotaxis model with weak saturation. Discrete & Continuous Dynamical Systems, 2011, 31 (1) : 139-164. doi: 10.3934/dcds.2011.31.139

1. P. Bates, E. N. Dancer and J. Shi, Multi-spike stationary solutions of the Cahn-Hilliard equation in higher-dimension and instability, Adv. Differential Equations, 4 (1999), 1-69.
2. P. Bates and J. Shi, Existence and instability of spike layer solutions to singular perturbation problems, J. Funct. Anal., 196 (2002), 211-264. doi: 10.1016/S0022-1236(02)00013-7.
3. H. Berestycki and P.-L. Lions, Nonlinear scalar field equations I. Existence of a ground state, Arch. Rational Mech. Anal., 82 (1983), 313-345. doi: 10.1007/BF00250555.
4. H. Berestycki, T. Gallouët and O. Kavian, Nonlinear Euclidean scalar field equations in the plane, C. R. Acad. Sci. Paris Sér. I Math., 297 (1983), 307-310.
5. M. A. del Pino, Radially symmetric internal layers in a semilinear elliptic system, Trans. Amer. Math. Soc., 347 (1995), 4807-4837. doi: 10.2307/2155064.
6. P. C. Fife, Semilinear elliptic boundary value problems with small parameters, Arch. Rational Mech. Anal., 52 (1973), 205-232. doi: 10.1007/BF00247733.
7. D. Horstmann, From 1970 until present: The Keller-Segel model in chemotaxis and its consequences. I, Jahresber. Deutsch. Math.-Verein., 105 (2003), 103-165.
8. D. Iron, M. Ward and J. Wei, The stability of spike solutions to the one-dimensional Gierer-Meinhardt model, Phys. D, 150 (2001), 25-62. doi: 10.1016/S0167-2789(00)00206-2.
9. T. Kolokolnikov, W. Sun, M. Ward and J. Wei, The stability of a stripe for the Gierer-Meinhardt model and the effect of saturation, SIAM J. Appl. Dyn. Syst., 5 (2006), 313-363. doi: 10.1137/050635080.
10. K. Kurata and K. Morimoto, Construction and asymptotic behavior of multi-peak solutions to the Gierer-Meinhardt system with saturation, Commun. Pure Appl. Anal., 7 (2008), 1443-1482. doi: 10.3934/cpaa.2008.7.1443.
11. M. K. Kwong and Y. Li, Uniqueness of radial solutions of semilinear elliptic equations, Trans. Amer. Math. Soc., 333 (1992), 339-363. doi: 10.2307/2154113.
12. M. K. Kwong and L. Q. Zhang, Uniqueness of the positive solutions of $\Delta u+f(u)=0$ in an annulus, Differential Integral Equations, 4 (1991), 583-599.
13. H. A. Levine and B. D. Sleeman, A system of reaction diffusion equations arising in the theory of reinforced random walks, SIAM J. Appl. Math., 57 (1997), 683-730. doi: 10.1137/S0036139995291106.
14. W.-M. Ni and I. Takagi, Point condensation generated by a reaction-diffusion system in axially symmetric domains, Japan J. Indust. Appl. Math., 12 (1995), 327-365. doi: 10.1007/BF03167294.
15. W.-M. Ni, Qualitative properties of solutions to elliptic problems, in Stationary Partial Differential Equations, I, Handb. Differ. Equ., North-Holland, Amsterdam, (2004), 157-233. doi: 10.1016/S1874-5733(04)80005-6.
16. H. G. Othmer and A. Stevens, Aggregation, blowup, and collapse: The ABCs of taxis in reinforced random walks, SIAM J. Appl. Math., 57 (1997), 1044-1081. doi: 10.1137/S0036139995288976.
17. T. Ouyang and J. Shi, Exact multiplicity of positive solutions for a class of semilinear problems. II, J. Differential Equations, 158 (1999), 94-151. doi: 10.1016/S0022-0396(99)80020-5.
18. X. Ren and J. Wei, Oval shaped droplet solutions in the saturation process of some pattern formation problems, SIAM J. Appl. Math., 70 (2009), 1120-1138. doi: 10.1137/080742361.
19. K. Sakamoto, Internal layers in high-dimensional domains, Proc. Roy. Soc. Edinburgh Sect. A, 128 (1998), 359-401.
20. T. Senba and T. Suzuki, "Applied Analysis. Mathematical Methods in Natural Science," Imperial College Press, London, 2004.
21. B. D. Sleeman, M. J. Ward and J. Wei, The existence and stability of spike patterns in a chemotaxis model, SIAM J. Appl. Math., 65 (2005), 790-817. doi: 10.1137/S0036139902415117.
22. T. Suzuki, "Free Energy and Self-Interacting Particles," Progress in Nonlinear Differential Equations and their Applications, 62, Birkhäuser Boston, Inc., Boston, MA, 2005.
23. J. Wei, "Existence and Stability of Spikes for the Gierer-Meinhardt System," Handbook of Differential Equations: Stationary Partial Differential Equations, V, Handb. Differ. Equ., Elsevier/North-Holland, Amsterdam, (2008), 487-585. doi: 10.1016/S1874-5733(08)80013-7.
24. J. Wei and M. Winter, On the two-dimensional Gierer-Meinhardt system with strong coupling, SIAM J. Math. Anal., 30 (1999), 1241-1263. doi: 10.1137/S0036141098347237.
25. J. Wei and M. Winter, Spikes for the Gierer-Meinhardt system in two dimensions: The strong coupling case, J. Differential Equations, 178 (2002), 478-518. doi: 10.1006/jdeq.2001.4019.
26. J. Wei and M. Winter, On the Gierer-Meinhardt system with saturation, Commun. Contemp. Math., 6 (2004), 259-277. doi: 10.1142/S021919970400132X.
27. J. Wei and M. Winter, Existence, classification and stability analysis of multiple-peaked solutions for the Gierer-Meinhardt system in $R^1$, Methods Appl. Anal., 14 (2007), 119-163.
28. J. Wei and M. Winter, Stationary multiple spots for reaction-diffusion systems, J. Math. Biol., 57 (2008), 53-89. doi: 10.1007/s00285-007-0146-y.
29. E. Zeidler, "Nonlinear Functional Analysis and its Applications. I, Fixed-Point Theorems," Translated from the German by Peter R. Wadsack, Springer-Verlag, New York, 1986.
Generalized Hamiltonian Dynamics

Generalized Coordinates
The name generalized coordinates is given to any set of quantities which completely specifies the state of a system. The generalized coordinates are customarily written as $q_{1},q_{2},q_{3},\ldots$ or simply as the $q_{j}$. A set of independent generalized coordinates whose number equals the number $s$ of degrees of freedom of the system and which are not restricted by the constraints will be called a proper set of generalized coordinates. In certain instances it may be advantageous to use generalized coordinates whose number exceeds the number of degrees of freedom, through the use of the Lagrange undetermined multipliers. We shall consider a general mechanical system which consists of a collection of $n$ discrete point particles. In order to specify the state of such a system at a given time, it is necessary to use $n$ radius vectors. If there exist equations of constraint which relate some of these coordinates to others, then not all of the $3n$ coordinates are independent. In fact, if there are $m$ equations of constraint, then $3n-m$ coordinates are independent, and the system is said to possess $3n-m$ degrees of freedom. In addition to the generalized coordinates, we may define a set of quantities consisting of the time derivatives of the $q_{j}$, written $\dot{q}_{1},\dot{q}_{2},\dot{q}_{3},\ldots$ or simply $\dot{q}_{j}$. In analogy with the nomenclature for rectangular coordinates we call the $\dot{q}_{j}$'s the generalized velocities. We may represent the state of such a system by a point in an $s$-dimensional space called configuration space, each point specifying the configuration of the system at a particular instant. A dynamical path in a configuration space consisting of proper generalized coordinates is automatically consistent with the constraints on the system.

Principle of least action
In physics, the principle of least action is a variational principle that, when applied to the action of a mechanical system, can be used to obtain the equations of motion for that system. The principle of least action states that for each mechanical system there exists a certain integral $S$, called the action, which has a minimum value for the actual motion, so that its variation $\delta S$ is zero. To determine the action integral for a free material particle, the integrand must be a differential of the first order. But the only scalar of this kind that one can construct for a free particle is the interval $ds$, or $\alpha\, ds$, where $\alpha$ is some constant. So for a free particle the action must have the form
\begin{equation} S=\alpha\int_{a}^{b}ds\label{eq:2-1} \end{equation}
The integral $\int_{a}^{b}$ is taken along the world line of the particle, between the initial position and the final position at definite times $t_{1}$ and $t_{2}$, and $\alpha$ is some constant characterizing the particle.
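For completeness, a standard result from relativistic mechanics (as in Landau and Lifshitz; this value of the constant is not derived in the post itself): the nonrelativistic limit fixes $\alpha=-mc$, so that
\begin{equation*} S=-mc\int_{a}^{b}ds,\qquad L=-mc^{2}\sqrt{1-\frac{v^{2}}{c^{2}}}, \end{equation*}
which for $v\ll c$ reduces, up to an additive constant, to the familiar $\frac{1}{2}mv^{2}$.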
So the dynamics of a system is governed by the stationarity of the action integral, which can be represented as an integral with respect to time,
\begin{equation} S=\int_{a}^{b}L(q_{i}(t),\dot{q}_{i}(t))\,dt\label{eq:2-2} \end{equation}
where $L=L(q,\dot{q})$ is the Lagrange function of the mechanical system in the generalized coordinates $q_{i}\;(i=1,2,\ldots,N)$ and the velocities $\dot{q_{i}}=\frac{dq_{i}}{dt}$. We assume that the system has $N$ degrees of freedom. Now if $q_{i}(t)=q_{i}^{classical}(t)+\varepsilon\,\delta q_{i}(t)$, then since $S=S(\varepsilon)$ has a minimum at $\varepsilon=0$ we have
\begin{eqnarray} \frac{dS(0)}{d\varepsilon} & = & \int_{t_{i}}^{t_{f}}\left(\frac{\partial L}{\partial q_{i}}\delta q_{i}+\frac{\partial L}{\partial\dot{q_{i}}}\delta\dot{q_{i}}\right)dt\nonumber \\ & = & \int_{t_{i}}^{t_{f}}\left(\frac{\partial L}{\partial q_{i}}-\frac{d}{dt}\frac{\partial L}{\partial\dot{q_{i}}}\right)\delta q_{i}\,dt\;=\;0\quad\text{(integration by parts)}\label{eq:2-3} \end{eqnarray}
As $\delta q_{i}$ is arbitrary, the variation of the action integral leads to the Lagrange equation of motion
\begin{equation} \frac{d}{dt}\left(\frac{\partial L}{\partial\dot{q}_{i}}\right)-\frac{\partial L}{\partial q_{i}}=0\label{eq:2-4} \end{equation}
We consider a dynamical system of $N$ degrees of freedom, described in terms of generalized coordinates $q_{n}\;(n=1,2,\ldots,N)$ and velocities $\dot{q}_{n}$ or $\dfrac{dq_{n}}{dt}$. We assume a Lagrangian $L$, which for the present can be any function of the coordinates and velocities,
\begin{equation} L\equiv L(q,\dot{q})\label{eq:2-5} \end{equation}
If the Lagrangian is expressed in generalized coordinates, we define generalized momenta according to
\begin{equation} p_{i}=\frac{\partial L}{\partial\dot{q}_{i}}\label{eq:2-6} \end{equation}
For the development of the theory we introduce a variation procedure, varying each of the quantities $q_{n}$, $\dot{q}_{n}$ and $p_{n}$ independently by small quantities $\delta q_{n}$, $\delta\dot{q}_{n}$ and $\delta p_{n}$ of order $\epsilon$ and working to the accuracy of $\epsilon$. As a result of this variation procedure equation \eqref{eq:2-6} will get violated, as its left-hand side will be made to differ from its right-hand side by a quantity of order $\epsilon$. The Hamiltonian $H$ is defined by the equation
\begin{equation} H\equiv p_{i}\dot{q}_{i}-L\label{eq:2-7} \end{equation}
where a summation is understood over all values of a repeated suffix in a term. From equation \eqref{eq:2-7} it would appear that the Hamiltonian $H$ is a function of both positions and velocities, with $p_{i}(q,\dot{q})$ defined by equation \eqref{eq:2-6}.
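As a concrete check of the definition \eqref{eq:2-7} (a standard textbook example, not part of the original post), take a single particle in one dimension with $L=\frac{1}{2}m\dot{q}^{2}-V(q)$:
\begin{equation*} p=\frac{\partial L}{\partial\dot{q}}=m\dot{q},\qquad H=p\dot{q}-L=\frac{p^{2}}{2m}+V(q), \end{equation*}
and the Lagrange equation \eqref{eq:2-4} reduces to Newton's law $m\ddot{q}=-V'(q)$.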
In fact $H$ does not depend on the velocities at all, since
\begin{align} \frac{\partial H}{\partial\dot{q}_{i}} & =p_{i}-\frac{\partial L}{\partial\dot{q}_{i}}\nonumber \\ & =p_{i}-p_{i}\quad[\text{using \eqref{eq:2-6}}]\nonumber \\ & =0\label{eq:2-8} \end{align}
For a small variation of the Hamiltonian, $\delta H$, we have
\begin{eqnarray} \delta H & = & \delta(p_{i}\dot{q_{i}})-\delta L\nonumber \\ & = & \dot{q_{i}}\delta p_{i}+p_{i}\delta\dot{q_{i}}-\frac{\partial L}{\partial q_{i}}\delta q_{i}-\frac{\partial L}{\partial\dot{q_{i}}}\delta\dot{q_{i}}\nonumber \\ & = & \dot{q_{i}}\delta p_{i}-\frac{\partial L}{\partial q_{i}}\delta q_{i}+\left(p_{i}-\frac{\partial L}{\partial\dot{q_{i}}}\right)\delta\dot{q_{i}}\nonumber \\ & = & \dot{q_{i}}\delta p_{i}-\frac{\partial L}{\partial q_{i}}\delta q_{i}\label{eq:2-9} \end{eqnarray}
From \eqref{eq:2-9} we can see that $\delta H$ does not depend on the $\delta\dot{q}$'s.

Equations of motion
Equations of motion are equations that describe the behaviour of a physical system in terms of its motion as a function of time. More specifically, the equations of motion describe the behaviour of a physical system as a set of mathematical functions in terms of dynamic variables. Normally spatial coordinates and time are used, but others are also possible, such as momentum components and time. The most general choice are generalized coordinates, which can be any convenient variables characteristic of the physical system. Once the dynamics of a system is known, its motion is obtained as the solution of the differential equations describing that dynamics. If the potential energy of a system is velocity-independent, then the linear momentum components in rectangular coordinates $x_{i}$ are given by
\begin{equation} p_{i}=\frac{\partial L}{\partial\dot{x}_{i}}\label{eq:2-10} \end{equation}
By analogy we extend this result to the case in which the Lagrangian is expressed in generalized coordinates and define the generalized momenta according to
\begin{equation} p_{i}=\frac{\partial L}{\partial\dot{q}_{i}}\label{eq:2-11} \end{equation}
The Lagrange equations of motion are then expressed by
\begin{equation} \dot{p}_{i}=\frac{\partial L}{\partial q_{i}}\label{eq:2-12} \end{equation}
And from \eqref{eq:2-7} we can define the Hamiltonian as
\begin{equation} H=\sum_{i}p_{i}\dot{q}_{i}-L\label{eq:2-13} \end{equation}
Now the Lagrangian is considered to be a function of the generalized coordinates, the generalized velocities, and possibly the time. The dependence of $L$ on the time may arise either if the constraints are time dependent or if the transformation equations connecting the rectangular and generalized coordinates explicitly contain the time. We may solve \eqref{eq:2-11} for the generalized velocities and express them as
\begin{equation} \dot{q}_{i}=\dot{q}_{i}(p_{j},q_{j},t)\label{eq:2-14} \end{equation}
Thus in \eqref{eq:2-13} we may express the Hamiltonian as
\begin{equation} H(p_{i},q_{i},t)=\sum_{j}p_{j}\dot{q}_{j}-L(\dot{q}_{i},q_{i},t)\label{eq:2-15} \end{equation}
This equation is written in a manner which stresses the fact that the Hamiltonian is always considered as a function of the set $(p_{i},q_{i},t)$, whereas the Lagrangian is a function of the set $(\dot{q}_{i},q_{i},t)$. Therefore the total differential of $H$ may be calculated as
\begin{equation} dH=\sum_{i}\left(\frac{\partial H}{\partial q_{i}}dq_{i}+\frac{\partial H}{\partial p_{i}}dp_{i}\right)\label{eq:2-15-a} \end{equation}
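The elimination step \eqref{eq:2-14}-\eqref{eq:2-15} can also be carried out mechanically. A minimal sympy sketch for the harmonic oscillator (an assumed example with my own symbol names, not from the post):

```python
import sympy as sp

m, k, p = sp.symbols('m k p', positive=True)
q = sp.Symbol('q')
qd = sp.Symbol('qdot')   # treat the velocity as an independent symbol

L = sp.Rational(1, 2) * m * qd**2 - sp.Rational(1, 2) * k * q**2

# Equation (2-11): p = dL/d(qdot); solve for the velocity, equation (2-14)
qd_of_p = sp.solve(sp.Eq(p, sp.diff(L, qd)), qd)[0]   # -> p/m

# Equation (2-15): H = p*qdot - L, with the velocity eliminated
H = sp.simplify((p * qd - L).subs(qd, qd_of_p))
print(H)   # k*q**2/2 + p**2/(2*m)
```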
Hamilton's equations of motion
From \eqref{eq:2-9} and \eqref{eq:2-15-a}, identifying the coefficients of $\delta p_{i}$ and $\delta q_{i}$ (and using \eqref{eq:2-12} to write $\frac{\partial L}{\partial q_{i}}=\dot{p}_{i}$), we obtain
\begin{align} \dot{q}_{i} & =\frac{\partial H}{\partial p_{i}}\label{eq:2-16}\\ \dot{p}_{i} & =-\frac{\partial H}{\partial q_{i}}\label{eq:2-17} \end{align}
where the dot denotes the ordinary derivative with respect to time of the generalized momenta $p_{i}=p_{i}(t)$ and the generalized coordinates $q_{i}=q_{i}(t)$, with $i=1,2,\ldots,N$. Equations \eqref{eq:2-16} and \eqref{eq:2-17} are Hamilton's equations of motion. Because of their symmetrical appearance, they are also known as the canonical equations of motion.

Poisson Bracket
In canonical coordinates on phase space, the Poisson bracket of two functions $A(p_{i},q_{i})$ and $B(p_{i},q_{i})$ is defined by
\begin{equation} \{A,B\}_{PB}=\sum_{i=1}^{N}\left(\frac{\partial A}{\partial q_{i}}\frac{\partial B}{\partial p_{i}}-\frac{\partial A}{\partial p_{i}}\frac{\partial B}{\partial q_{i}}\right)\label{eq:2-18} \end{equation}
The Poisson bracket is antisymmetric and satisfies the Jacobi identity; under quantization it deforms into the quantum commutator on Hilbert space. Hamilton's equations of motion have an equivalent expression in terms of the Poisson bracket. This may be most directly demonstrated in an explicit coordinate frame. Suppose $A(p_{i},q_{i})$ is a function on phase space with no explicit time dependence. Then, along a solution of Hamilton's equations,
\begin{align} \frac{d}{dt}A(p_{i},q_{i}) & =\frac{\partial A}{\partial q_{i}}\frac{dq_{i}}{dt}+\frac{\partial A}{\partial p_{i}}\frac{dp_{i}}{dt}\nonumber \\ & =\frac{\partial A}{\partial q_{i}}\dot{q}_{i}+\frac{\partial A}{\partial p_{i}}\dot{p}_{i}\nonumber \\ & =\frac{\partial A}{\partial q_{i}}\frac{\partial H}{\partial p_{i}}-\frac{\partial A}{\partial p_{i}}\frac{\partial H}{\partial q_{i}}\nonumber \\ & =\{A,H\}\label{eq:2-19} \end{align}
So in general \eqref{eq:2-19} implies
\begin{equation} \frac{dA}{dt}=\dot{A}=\{A,H\}\label{eq:2-20} \end{equation}
Further, by taking $p=p(t)$ and $q=q(t)$ to be solutions to Hamilton's equations,
\begin{align} \dot{q}_{i} & =\frac{\partial H}{\partial p_{i}}=\{q_{i},H\}\label{eq:2-21}\\ \dot{p}_{i} & =-\frac{\partial H}{\partial q_{i}}=\{p_{i},H\}\label{eq:2-22} \end{align}

Strong and Weak Equations
We shall now have to distinguish between two kinds of equations. When we apply the variation, equation \eqref{eq:2-5} remains valid to the accuracy $\epsilon$. On the other hand, equation \eqref{eq:2-6} gets violated by a quantity of order $\epsilon$ under the variation. The former kind of equation is called a strong equation; the latter kind is called a weak equation. At this stage let us introduce the weak equality sign '$\approx$' for constraint equations, and for strong equations the sign '$\equiv$'. We have the following rules governing algebraic work with weak and strong equations:
\begin{align} \text{if }A & \equiv0\text{ then }\delta A=0;\label{eq:2-23}\\ \text{if }X & \approx0\text{ then }\delta X\neq0;\label{eq:2-24} \end{align}
in general. The relation $X\approx0$ emphasizes that $X$ is numerically restricted to be zero but does not identically vanish throughout phase space. This means, in particular, that it has nonzero Poisson brackets with the canonical variables.
We can also deduce that
\begin{equation} \delta X^{2}\approx2X\delta X\approx0\label{eq:2-25} \end{equation}
On the other hand, since the variation of $X^{2}$ is of higher order in $\epsilon$, the equation for $X^{2}$ holds not just on the submanifold $X\approx0$ but may be treated as a strong equation,
\begin{equation} X^{2}\equiv0\label{eq:2-26} \end{equation}
Similarly, from two weak equations $X_{1}\approx0$ and $X_{2}\approx0$ we can deduce the strong equation
\begin{equation} X_{1}X_{2}\equiv0\label{eq:2-27} \end{equation}
It may be that the $N$ quantities $\dfrac{\partial L}{\partial\dot{q}_{i}}$ on the right-hand side of \eqref{eq:2-6} are all independent functions of the $N$ velocities $\dot{q}_{i}$. In this case equations \eqref{eq:2-6} determine each $\dot{q}$ as a function of the $q$'s and $p$'s. This case will be referred to as the standard case, and is the only one usually considered in dynamical theory. If the $\dfrac{\partial L}{\partial\dot{q}}$ are not independent functions of the velocities, we can eliminate the $\dot{q}$'s from the equations \eqref{eq:2-6} and obtain one or more equations
\begin{equation} \phi(q,p)\approx0\label{eq:2-28} \end{equation}
involving only $q$'s and $p$'s (a simple example is sketched below). We may suppose equation \eqref{eq:2-28} to be written in such a way that the variation procedure changes $\phi$ by a quantity of order $\epsilon$, since if it changes $\phi$ by a quantity of order $\epsilon^{k}$, we have only to replace $\phi$ by $\phi^{\frac{1}{k}}$ in \eqref{eq:2-28} and the desired condition will be fulfilled. We now have equation \eqref{eq:2-28} violated by the order $\epsilon$ when we apply the variation, so it is correctly written as a weak equation. We shall need to use a complete set of independent equations of this type, say
\begin{equation} \phi_{i}(q,p)\approx0\label{eq:2-29} \end{equation}
where $i=1,2,\ldots,n$. The condition of independence means that none of the $\phi$'s is expressible linearly in terms of the others, with functions of the $q$'s and $p$'s as coefficients. The condition of completeness means that any function of the $q$'s and $p$'s which vanishes on account of equations \eqref{eq:2-6} and changes by the order $\epsilon$ under the variation procedure is expressible as a linear function of the $\phi_{i}$, with functions of the $q$'s and $p$'s as coefficients.
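To make the non-standard case concrete, here is a minimal example of a singular Lagrangian (a standard illustration from the constrained-dynamics literature, not taken from the post). With two degrees of freedom, let
\begin{equation*} L=\tfrac{1}{2}\left(\dot{q}_{1}-q_{2}\right)^{2}. \end{equation*}
Then $p_{1}=\partial L/\partial\dot{q}_{1}=\dot{q}_{1}-q_{2}$, while $p_{2}=\partial L/\partial\dot{q}_{2}=0$: the momenta are not independent functions of the velocities, $\dot{q}_{2}$ cannot be recovered from the $p$'s, and eliminating the velocities leaves the primary constraint
\begin{equation*} \phi(q,p)=p_{2}\approx0 \end{equation*}
of the type \eqref{eq:2-28}. Note that $\{\phi,q_{2}\}=-1\neq0$, illustrating the earlier remark that a weak equation has nonzero Poisson brackets with the canonical variables.

Hamilton's equations \eqref{eq:2-21}-\eqref{eq:2-22} themselves are also easy to verify symbolically. A short sympy check for the one-dimensional oscillator (again an assumed example with my own symbol names):

```python
import sympy as sp

q, p, m, k = sp.symbols('q p m k', positive=True)
H = p**2 / (2 * m) + k * q**2 / 2   # harmonic-oscillator Hamiltonian

def pb(A, B):
    """Poisson bracket {A, B} for one degree of freedom, equation (2-18)."""
    return sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)

print(pb(q, p))               # 1    : the canonical bracket {q, p}
print(pb(q, H))               # p/m  : qdot, matching equation (2-21)
print(pb(p, H))               # -k*q : pdot, matching equation (2-22)
print(sp.simplify(pb(H, H)))  # 0    : H is conserved, by antisymmetry
```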
Matlab Empty Project

Matlab Empty Project, 2015. I had an early morning lightwork Wednesday (Feb 25, 2016) at my apartment, walking in a state of confusion throughout. While waiting for my car or my current smartphone to arrive I had a quick check on the distance to the time of random events. I should have noticed something odd, because I haven't been sitting in an open window holding a keyboard for 10 years. I was trying to put this more to the test. I was at a dinner with friends in Chicago last week. I had an extremely good time, so the time was always closing in and I might be missing something or taking a nap. I was going to be driving all afternoon to work. I'm not. I'm not sure if this is some kind of normal or an accident. But I do know I have a habit. I don't feel like I've been hit by something. There was a clear plan. I was driving or in motion on a freeway path. I was standing in the middle of the freeway where I could easily see that. The freeway had a low profile heading toward the city center. I would probably hang up and hold down the accelerator rather than hit the pedal. My car might come to my headlights and I just had an excessive amount of light work on the passenger side. I would be out the front window and not leaning forward. This was an environment for safety and I would never feel safe because I'm a sensitive person. It would open up my mind to other things. My gut told me there was definitely a problem. That was it. Under the hood I wouldn't really know what I was doing and would have to do whatever I wanted to do. It was the car that was causing these events. There was an overpass right where I was and I also had to take a quick nap. I was getting cranky and wasn't sure if my boyfriend was on my phone. My mind was probably well-laid. I did all I could. I was going to be so tired all the more that I wasn't and would feel a lot better with time left. People wanted me to be excited so I just kept this process going strong and it would feel great. It is not like anybody I spend time with may have been a non-expert and it turns out I have an overpass. You get the idea. I have friends and I have my friends and I obviously had friends who also worked on the other day. I don't have the bad feelings. When I was in the car with my partners we were going to talk about the worst day ever. Then we had a great stop at our favorite local McDonald's. Because I was located in Chicago again and had no lunch we had the following conversation about this week. It was Tuesday, March 13. During a walk in the rain the other day a driver called us. Was this a weather event or was it Saturday? We talked by radio about weather. Like I mentioned earlier this week we were in the middle of the other Saturday morning at The Bluff. We wanted to think about what happened the week that we were driving when the traffic on one side of the road collapsed. It was extremely dark outside and even worse in front of a large crowd. Had a kid at school and a huge number of

Matlab Empty Project
Kai Tisch, M.S. (A.C.) is a Japanese text, novel of Chinese origin; it was first published in its role as a text for the first time in early 2005 but has been reprinted multiple times since and remains popular with fans, publishers, and children's book nerds worldwide.
Kai Tisch books are written in Chinese, sometimes red and sometimes white, and usually used in magazines, bookstores, trade publications, and independent publishers with some themes of romance and culture, both as fiction and historical fiction. A Kai Tisch book is as "positive as we are" for the journal Zhejiang, which, by comparison, was previously the journal of the official Zingjiang-Hebei newspaper.

Origins and early chapters
Kai Tisch was conceived during the early 1980s by an expert on English transliterations. Following this, there were few planned chapters published on Zingjiang's Laogihi-Lingin-Luan book as yet (although hundreds of them were planned for the publication of their first chapter published in 1959), so there was only one chapter in the Zingjiang-Hebei book published in 2005, and then a few chapters in early stage chapters (mainly, the two book chapters published by Luo Jing Lai in 2005). The only known chapter assigned to the author was for Tian Tong Tze Zing, which he had written as the chapter to be published five years earlier: first, "Te Satsu" ("Shu Yizi") was included as the title of the chapter published in early 2009. Tian and a year-old Tze described four primary elements of the narrative: storytelling, drama, style, and sense of humor. Tze said that every time they were asked "do you really like this one?" they said the translation could have been different from that of Tian. He said that he considered it "a great surprise" that Zingjiang's Tze had been translated. Tian Tong would later add a new phrase, "Oh no" and "Oh no", in an arrangement with Zingjiang. Later, Tze's initial conclusion was that it had been translated in other ways before "the book is still in its second edition" was confirmed: that the first edition of the novel was in general "written in English," and that Tze had successfully translated the Satsu chapter, even though the translator could "cheat" with "twisty" information. "The second edition" would be published on 15 June 2014. However, it was only partially possible to read that sentence in English as the first chapter was not translated into English, because Tze was unable to translate it into English. Tze's initial confusion was in Tze's understanding of the writing style and interpretation of the chapters in Wangxiang Xinyi Zinghan and Yuzhang Zui's Zhingyuan text. For example, when Tze wrote his chapter Satsu, an additional paragraph called "Make" was added with a translation of the chapter "Yuzhou" (in English) as noted in the chapter of Yuzhou; the paragraph said to be "This chapter is not translated into English; it should be translated.". This chapter did not make the section in Tze's appendix and was never translated into English. The description of the chapter

Matlab Empty Project {#sec:ff_bounding_factor}
====================

Figure (fig:ffd_fitstat): Conte-et-Maldonado curve fitting accuracy. (a) and (b) are single-measured curves derived from the prediction of the \#X4200 experiment [@cadm92]; the curve parameters are given in terms of $90^{\circ}$, $30^\circ$, or $90^{\circ}$ (the $90^{\circ}$ axis indicates that the input was at least $90$). (c) and (d) are triple-measured curves derived from the \#X4200 experiment; the curve parameters listed are $1000\%$, $50\%$, or $100\%$ accuracy.
\[sec:ffd\_bound\] Bound-factor calculation
===========================================

We propose an approach for modeling the error of the \#XM4200 experiment [@hansen97; @bordag06]. The error of the local equilibrium correction distance $d_{loc}(\textbf{r},\textbf{r}',\textbf{N})$ is measured between the input and output values, and the model is statistically approximated for simplicity. An ideal model should minimize the error of the local equilibrium correction distance based on smoothness of the curve. The model assumes that the input image contains pixels that have a small influence on the local equilibrium correction distance. Hence, the method would most likely fail to consider a local correction distance that exceeds the local equilibrium value. An ideal solution to this problem was proposed in [Maranello+Ferrara]{} and [Conte+Ferrara]{} for the local equilibrium correction distance based on the distance between two images [@maranello+Ferrara]. However, as shown in Fig. \[fig:ffd\_bound\], the model fails because of the convergence of the local equilibrium correction distance between the input image and the output curve. The local correction distance is found by solving a series of linear or log-linear least-squares problems, one for each curve, and vice versa, in less than $\mathcal{O}(15\%)$ for $q\geq 20$. Although the error of the local equilibrium correction distance model is quite insensitive to the influence of the input image, an ideal solution to the problem for fitting single-pixel plots would also fail to reduce the error. A simplified model proposed in [Maranello+Ferrara]{} for estimating the local equilibrium correction distance is shown in Fig. \[fig:ffd\_construction\]. A model of global fitting from input curves is then further discussed.

\[sec:ffd\_construction\] Complementary to local equilibrium correction distance
================================================================================

Therefore, we would need to specify an image at which pixels lie outside the local equilibrium distribution, and in which the local equilibrium correction distance is smaller than a local minimum. In this section, we use a simple model to solve the least-squares problem for recovering the local equilibrium distance from the input image. We take the input image of the \#X4200 experiment [@hansen97; @jess95; @poth98] as the input image for fitting, and the local equilibrium correction distance as the internal data vector. For convenience, we first discuss the point of view of the equation for the \#X4200 experiment [@hansen97] and the local correction distance, and then state our model for fitting the \#XM4 themselves.

Funding for this work is provided by the European Community Framework Programme (ERC INRE IAM: REIF 16605/94-1-2), Grant Numbers EH-171873-01, EH 175799-01 and EH 175778-01, ENSER grants \# 9641714B, EH 185921G, IH 1134869, EH 18B022610 and EHC-01. All remaining funds are gratefully acknowledged. This work greatly benefited from
Optimized method for extraction of exosomes from human primary muscle cells

Laura Le Gall, Zamalou Gisele Ouandaogo, Ekene Anakor, Owen Connolly, Gillian Butler Browne, Jeanne Laine, William Duddy & Stephanie Duguez (ORCID: orcid.org/0000-0001-6510-5426)

Skeletal Muscle volume 10, Article number: 20 (2020)

Skeletal muscle is increasingly considered an endocrine organ secreting myokines and extracellular vesicles (exosomes and microvesicles), which can affect physiological changes with an impact on different pathological conditions, including regenerative processes, aging, and myopathies. Primary human myoblasts are an essential tool to study the muscle vesicle secretome. Since their differentiation in conditioned media does not induce any signs of cell death or cell stress, artefactual effects from those processes are unlikely. However, adult human primary myoblasts senesce in long-term tissue culture, so a major technical challenge is posed by the need to avoid artefactual effects resulting from pre-senescent changes. Since these cells should be studied within a strictly controlled pre-senescent division count (<21 divisions), and yields of myoblasts per muscle biopsy are low, it is difficult or impossible to amplify sufficiently large cell numbers (some 250 × 10^6 myoblasts) to obtain sufficient conditioned medium for the standard ultracentrifugation approach to exosome isolation. Thus, an optimized strategy to extract and study secretory muscle vesicles is needed.

In this study, conditions are optimized for the in vitro cultivation of human myoblasts, and the quality and yield of exosomes extracted using an ultracentrifugation protocol are compared with a modified polymer-based precipitation strategy combined with extra washing steps. Both vesicle extraction methods successfully enriched exosomes, as vesicles were positive for CD63, CD82, and CD81, floated at identical density (1.15–1.27 g.ml−1), and exhibited similar size and cup shape using electron microscopy and NanoSight tracking. However, the modified polymer-based precipitation was a more efficient strategy to extract exosomes, allowing their extraction in sufficient quantities to explore their content or to isolate a specific subpopulation, while requiring >30 times fewer differentiated myoblasts than what is required for the ultracentrifugation method. In addition, exosomes could still be integrated into recipient cells such as human myotubes or iPSC-derived motor neurons. Modified polymer-based precipitation combined with extra washing steps optimizes exosome yield from a lower number of differentiated myoblasts and less conditioned medium, avoiding senescence and allowing the execution of multiple experiments without exhausting the proliferative capacity of the myoblasts.

In addition to its classical role in locomotion, skeletal muscle is increasingly recognized to have a role in signaling via its secretory functions. Interleukin-6 (IL-6) [1] and musculin [2] have been identified to originate and be secreted from skeletal muscle in vivo, and the secretomic profiles of muscle cells in vitro, such as C2C12 myotubes [3, 4], human myotubes [5], and rat muscle explants [6], include growth factors (e.g., follistatin-like protein 1, IGF2, TGF), cytokines, and inhibitors of collagenase (e.g., TIMP2). These studies suggest that skeletal muscle can be viewed as an endocrine organ.
Secreted proteins—also named myokines [2]—may act in an autocrine/paracrine manner on muscle cells or other types of cell and contribute to muscle growth and regeneration, body-wide metabolism, and other functions [see [7] for review]. In addition to proteins exiting the cell by classical secretory pathways, muscle cells also release protein-associated vesicles [5]. These extracellular vesicles (EVs) are widely studied in different physiological and pathological contexts, and are known to play a key role in tissue homeostasis [8], embryogenesis and development [9], cell survival [10], inflammatory and metabolic diseases [11, 12], and cancer metastasis [13]. EVs are broadly classified as exosomes, ectosomes, or apoptotic bodies. Exosomes (40–120 nm) are formed from the endolysosomal pathway and are released into the extracellular space when multivesicular bodies containing intraluminal vesicles undergo exocytosis [14]. Ectosomes (100–1000 nm) encompass microvesicles, microparticles, or shedding vesicles and are formed from the direct budding of the plasma membrane [15]. Finally, apoptotic bodies (500–2000 nm) result from the outward bulge of the cell membrane due to cytoskeleton dysfunction and usually contain a part of the cytoplasm [16]. Human skeletal muscle cells are known to secrete two categories of vesicle: exosomes and microvesicles [5]. Both types of muscle cell vesicle can fuse and deliver functional proteins into target cells, as shown by the delivery of alkaline phosphatase through vesicles to human dermal fibroblasts, which have no endogenous alkaline phosphatase activity [5]. Exosomes and microvesicles from other cell types have been described to play a role in intercellular communication and to induce physiological changes in recipient cells, such as induction of cellular oncogenic transformation [17] or T-cell activation [18].

While the role of cytokines (e.g., [19,20,21]) and vesicles (e.g., [18, 22]) originating from inflammatory cells is well documented, the role of their secretion by myoblasts or differentiating myotubes is relatively unexplored, particularly concerning regenerative processes in injury and aging, and inflammatory and fibrotic processes in various muscle pathologies. Primary human myoblasts obtained from muscle biopsies are an invaluable in vitro tool for studying a pure human muscle secretome, but this poses a technical challenge relating to the volume of conditioned media required per data point and their limited proliferative capacity [23]. Since primary human myoblasts should be studied within a strictly controlled pre-senescent division count (<21 divisions), and yields of myoblasts per muscle biopsy can be low, it can be difficult or impossible to amplify sufficiently large cell numbers (some 250 × 10^6 myoblasts) to obtain sufficient conditioned medium for certain approaches to exosome isolation. The isolation of exosomes from cell culture has been achieved by ultracentrifugation-based methods [24, 25], size-based techniques [24, 26, 27], polymer-based precipitation [28], and immunoaffinity capture-based techniques [24]. Ultracentrifugation is considered the gold standard and is the most reported exosome isolation technique [29]. However, ultracentrifugation has several shortcomings, including the need for a large volume of biological fluid or conditioned cell culture media, long run-time, and limited reproducibility [30].
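As a side note on the EV classification quoted earlier in this section, the size ranges can be tabulated directly. The short Python sketch below is illustrative only (the class labels and ranges are exactly those given in the text); it makes one practical point explicit: the ranges overlap, so diameter alone rarely identifies a single vesicle class.

    # Extracellular vesicle size ranges (nm), as quoted in the text above.
    EV_SIZE_RANGES_NM = {
        "exosome": (40, 120),
        "ectosome/microvesicle": (100, 1000),
        "apoptotic body": (500, 2000),
    }

    def compatible_classes(diameter_nm):
        """Every EV class whose reported size range contains this diameter."""
        return [name for name, (lo, hi) in EV_SIZE_RANGES_NM.items()
                if lo <= diameter_nm <= hi]

    print(compatible_classes(110))  # ['exosome', 'ectosome/microvesicle']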
In this study, we highlight the challenges surrounding the study of vesicles secreted by primary human muscle cells, and we compare two strategies—(1) ultracentrifugation-based isolation and (2) a modified polymer-based precipitation approach—in terms of quality and yield of exosomes. We define an optimized protocol to extract exosomes from primary muscle cells without exhausting the number of pre-senescent divisions, thereby enabling a larger number of experiments to be carried out on a given cell line.

Primary cell extractions
Six deltoid muscle biopsies were obtained from ALS patients (50.0 ± 6.5 years old) who attended the Motor Neuron Diseases Center (Pitié-Salpêtrière, Paris), and 17 muscle biopsies from healthy subjects (51.4 ± 18.2 years old) from the BTR (Bank of Tissues for Research, a partner in the EU network EuroBioBank), in accordance with European recommendations and French legislation. The protocol (NCT01984957) was approved by the local Ethical Committee. Written informed consent was obtained from all patients. All biopsies were isolated from the deltoid muscle.

Cell culture proliferation and differentiation
Primary human myoblasts were extracted from fresh muscle biopsies as described previously [31]. Briefly, myoblasts were sorted using CD56 magnetic beads (Miltenyi®) and expanded in 0.22-μm filtered proliferating medium containing DMEM/M199 medium supplemented with 20% FBS, 25 μg/ml Fetuin, 0.5 ng/ml bFGF, 5 ng/ml EGF, and 5 μg/ml insulin, and incubated at 5% CO2, 37 °C. The number of cell divisions was calculated using the formula below. The myogenicity of the culture was determined by counting the number of nuclei positive for desmin against the total number of nuclei using the primary antibody anti-desmin (D33, 1:100, Dako). The secondary antibody was goat anti-mouse IgG1 AlexaFluor 594 (1:400, Invitrogen™), and counterstaining was performed with 1 μg.ml−1 DAPI as described below. After CD56 MACS sorting, 91.78 ± 8.32% of the cells were myogenic.

$$ \mathrm{Division\ number}=\frac{\log \left(\frac{\mathrm{Cell\ number\ at\ day\ }n}{\mathrm{Cell\ number\ plated}}\right)}{\log 2} $$

For differentiation into myotubes, 7.5 × 10^6 myoblasts were plated in a 225 cm^2 flask (Falcon™) and left to adhere overnight. Seeded myoblasts were then washed six times with supplement-free DMEM and differentiated in DMEM for 72 h. The conditioned medium was then collected and used for exosome extraction.

Beta-galactosidase staining
The senescence level was assessed using a Senescence β-Galactosidase Staining Kit (Cell Signaling Technology®).

Cell immunostaining
The cells were fixed with 3.6% formaldehyde, permeabilized, blocked, and stained as described previously [32]. Primary antibody anti-myosin heavy chain (MF20, 1:50, DSHB) and secondary antibody goat anti-mouse IgG2b AlexaFluor 594 (1:400, Invitrogen™) were used to determine the formation of myotubes. The slides were washed and counter-stained with 1 μg.ml−1 DAPI for 2 min and then rinsed twice with PBS before being mounted with ibidi mounting medium (ibidi®).

Protein extraction from cells
Myoblasts were scraped into 50 μl of chilled RIPA lysis buffer (Invitrogen™) supplemented with 1× Halt™ protease inhibitor cocktail (Thermo Scientific™) and incubated on ice for 10 min. Cell lysates were then centrifuged at 14,000g for 10 min at 4 °C, and protein supernatants were collected and stored at −80 °C for downstream SDS-PAGE and immunoblotting.
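As a quick numeric illustration of the division-number formula in the cell culture subsection above, here is a minimal Python sketch; the cell counts are hypothetical examples, not measurements from this study.

    import math

    def division_number(cells_at_day_n, cells_plated):
        """Population doublings: log(cells at day n / cells plated) / log 2."""
        return math.log(cells_at_day_n / cells_plated) / math.log(2)

    # Hypothetical counts: 470,000 cells plated, 3,760,000 counted at day n.
    print(division_number(3_760_000, 470_000))  # 3.0 divisions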
Conditioned culture media clearance
At the time of collection, the conditioned medium was centrifuged at 200g for 10 min. The subsequent supernatant was then centrifuged at 4000g for 20 min. The resulting supernatant was centrifuged for 70 min at 4 °C at 20,000g and then filtered through a 0.22-μm filter. The cleared medium was then stored at −80 °C prior to exosome extraction.

Muscle exosome extraction using ultracentrifugation
Cleared media were centrifuged at 100,000g for 70 min at 4 °C following a method described previously [24]. The subsequent pellet was resuspended in PBS and washed three times by centrifugation at 100,000g for 70 min at 4 °C. The clean pellet was then resuspended in 100 μl of PBS or in NuPAGE™ LDS sample buffer for Western blot experiments.

Exosome extraction using polymer precipitation
Cleared culture medium was mixed with the Total Exosome Isolation kit (LifeTechnologies™) at a 2:1 volume ratio and incubated at 4 °C overnight. The mixture was then centrifuged at 10,000g for 60 min at 4 °C. The subsequent pellet was resuspended in 500 μl of PBS and washed three times using a 100 kDa Amicon® filter column. The exosomes were then resuspended in 100 μl of PBS or in NuPAGE™ LDS sample buffer for Western blot experiments.

Exosome protein extraction
Exosomes were lysed in 8 M urea supplemented with 1× Halt™ Protease Inhibitor cocktail (Thermo Scientific™) and 2% SDS. Samples were incubated at 4 °C for 15 min, and exosome lysates were centrifuged at 14,000g for 10 min at 4 °C. Supernatants containing soluble proteins were stored at −80 °C.

SDS-PAGE and Western blotting
SDS-PAGE was performed as follows. For cell lysates, protein concentrations were measured at 562 nm using the bicinchoninic acid assay kit (Pierce™), and 20 μg of protein was mixed with 4× NuPAGE™ LDS sample buffer. For exosome extracts, proteins were also mixed with 4× NuPAGE™ LDS sample buffer. For reducing conditions, samples were supplemented with 10× NuPAGE™ reducing agent. For the immunoblotting of tetraspanins, samples were prepared similarly but for the omission of reducing agents. All samples were then denatured at 70 °C for 10 min before being loaded on a 4–12% polyacrylamide Bis-Tris gel (Life Technologies™) and electrophoresed at 200 V for 70 min in MOPS SDS Running buffer (LifeTechnologies™). Following electrophoresis, the gel was incubated in 20% ethanol for 10 min and proteins were transferred onto a polyvinylidene fluoride (PVDF) membrane using the iBlot™ 2 Dry Blotting system (LifeTechnologies™) according to the manufacturer's instructions. Immunoblotting was performed using the iBind™ Flex western system following the manufacturer's instructions (Life Technologies™). The PVDF membrane was probed with primary antibodies for PARP-1 (9542, Cell Signaling, rabbit IgG, 1:1000), CD63 TS63 (10628D, Life Technologies™, mouse, 2 μg/ml), CD81 (MA5-13548, Life Technologies™, mouse IgG, 1:100, v:v dilution), Flotillin (PA5-18053, Life Technologies™, 0.3 μg/ml), HSPA8 (MABE1120, Millipore, mouse IgG, 1:1000), or Alix (SC-53540, Santa Cruz, 1:1000), and goat anti-mouse or goat anti-rabbit secondaries conjugated with HRP (LifeTechnologies™, 1:400 and 1:10,000, respectively). The membrane was then incubated with Amersham ECL Prime Western Blotting Detection Reagent for 5 min at room temperature, and images were subsequently acquired using the UVP ChemiDoc-It™2 Imager and UVP software.

Electron microscopy and immunogold
Extracted vesicles were whole-mounted and processed as described in [24].
Observations were made using a CM120 transmission electron microscope (Philips, Eindhoven, The Netherlands) at 80 kV, and images were recorded with a Morada digital camera (Olympus Soft Imaging Solutions GmbH, Münster, Germany).

Determination of the exosome density
Exosomes extracted from the cell culture medium using either ultracentrifugation or polymer-based precipitation were resuspended in 100 μl of PBS and loaded on top of the iodixanol gradient as previously described [5, 32]. Samples were then centrifuged at 100,000g for 17 h at 4 °C. Twelve fractions were sequentially collected, diluted in 12 ml PBS, and centrifuged at 100,000g for 70 min at 4 °C. Each pellet was then resuspended in non-reducing NuPAGE™ LDS sample buffer and used for western blot analyses as described above. The density of each fraction was determined using the method described by [33], by measuring the absorbance at 244 nm:

$$ \mathrm{Density}\left(\mathrm{g.cm}^{-3}\right)=\frac{\mathrm{Absorbance\ at\ }244\ \mathrm{nm}+5.7283}{5.7144} $$

Nanoparticle tracking analysis (NTA)
Exosome pellets were resuspended in 100 μl of filtered PBS. The exosome suspension was then diluted 10× in PBS. The size and distribution of exosomes secreted by primary muscle cells were evaluated by a NanoSight LM10 instrument (NanoSight) equipped with NTA analytic software (version 2.3 build 2.3.5.0033.7-Beta7). Three videos of 30 s were recorded as previously described [34, 35], with the temperature set to 22.5 °C. The minimum particle size, track length, and blur were set to "automatic".

Proteomic analysis
The exosome pellets were re-suspended in 25 μl of 8 M urea, 50 mM ammonium bicarbonate, pH 8.5, and reduced with DTT for 1 h at 4 °C. Protein concentrations were then quantified using the Pierce BCA Protein Assay kit (ThermoFisher®). Exosomal proteins were kept at −80 °C. Proteome profile determined by mass spectrometry: 20 μg of exosome protein were trypsin digested using a SmartDigest column (Thermo) for 2 h at 70 °C and centrifuged at 1400 rpm. Peptides were then fractionated into 8 fractions using a high pH reverse phase spin column (Thermo). Fractionated peptides were vacuum dried, resuspended, and analysed by data-dependent mass spectrometry on a Q Exactive HF (Thermo) with the following parameters: positive polarity, m/z 400–2000, MS resolution 70,000, AGC 3e6, 100 ms IT, MS/MS resolution 17,500, AGC 5e5, 50 ms IT, isolation width 3 m/z, NCE 30, and cycle count 15. Database search and quantification: The MS raw data sets were searched for protein identification against the Uniprot human database for semi-tryptic peptides, including common contaminants, using MaxQuant software (version 1.6.2.1) (https://www.biochem.mpg.de/5111795/maxquant). We used default parameters for the searches: mass tolerances were set at ±20 ppm for the first peptide search and ±4.5 ppm for the main peptide search, with a maximum of two missed cleavages, and the peptide and resulting protein assignments were filtered based on a 1% protein false discovery rate (thus a 99% confidence level). A total of 1254 proteins were detected in at least 1 sample. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD015736.
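The absorbance-to-density calibration above is easy to apply per fraction; here is a minimal Python sketch (the absorbance readings are placeholders, not data from the study):

    def fraction_density(a244):
        """Fraction density (g/cm^3) from absorbance at 244 nm:
        (A244 + 5.7283) / 5.7144, as in the formula above."""
        return (a244 + 5.7283) / 5.7144

    # Placeholder absorbance readings for three gradient fractions:
    for a244 in (0.85, 1.00, 1.53):
        print(f"A244 = {a244:.2f} -> {fraction_density(a244):.3f} g/cm^3")
    # ~1.151, 1.177, and 1.270 g/cm^3, spanning the exosome density range
    # (1.15-1.27 g/cm^3) reported in the Results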
To test for overlap with known exosome proteins from previous studies, all proteins detected in at least 1 proteomic sample were entered into the FunRich tool for vesicle functional analysis [36,37,38,39], and a Venn diagram was generated against the subset of the Vesiclepedia database comprising previously observed exosomal proteins detected by mass spectrometry in human samples.

mRNA extraction from polymer-precipitated exosomes
Exosomes were first dissolved in 900 μl TRIzol® (Invitrogen™), then 200 μl of chloroform was added. After 5 min of incubation at RT, samples were centrifuged at 12,000g for 15 min at 4 °C. The aqueous phase containing the RNA was transferred into a collection tube and mixed with 75% ethanol (1:1, v:v). mRNA was then purified using the PureLink® RNA Mini Kit (LifeTechnologies™) following the manufacturer's instructions. RNA eluates were stored at −80 °C until use. The concentration of each RNA sample was determined by NanoDrop® spectrophotometer ND-1000 (NanoDrop Technologies, Wilmington, DE). The quality of RNA samples was assessed with the Agilent 2100 Bioanalyzer (Agilent Technologies Inc., Santa Clara, CA).

Immunoprecipitation of muscle exosome subpopulation
Polymer-precipitated exosomes were immunoprecipitated using anti-CD63 magnetic beads (Invitrogen™) overnight according to the manufacturer's instructions. Magnetically captured beads were then washed 3 times in PBS, and CD63-positive exosomes were eluted in 4× NuPAGE™ LDS sample buffer. Samples were then used for western blot analyses as described above.

Exosome functionality assessment
The exosomes were labeled with the PKH26 kit (Sigma-Aldrich®). Briefly, 100 μl of Diluent C was added to the exosome suspension, which was labeled with 100 μl of 4 μM PKH26 solution. After 5 min of incubation, samples were washed 3 times in PBS using a 100 kDa Amicon® filter column and centrifuged at 12,000×g at 4 °C for 15 min. Muscle exosomes extracted from 3000 differentiated myoblasts were either added to 3000 human iPSC-derived motor neurons or to 3000 differentiated human myoblasts. Human iPSC-derived motor neurons were differentiated from human neuron progenitors as described in [40]. Uptake of muscle exosomes by recipient cells was observed after 24-h incubation in living cells using an Olympus IX170 inverted microscope with a 40×/0.60 Ph2 objective equipped with an AxiocamMR camera.

All values are presented as means ± SD. One-factor ANOVA followed by Tukey's post-hoc test was used to compare the differences between the different cell density conditions. Differences were considered statistically significant at P < 0.05.

Determination of the window of cell divisions suitable to study the muscle secretome in non-senescent stages
Previously published studies on muscle cells using the ultracentrifugation method [5, 32] showed that 250 × 10^6 cells were needed in order to have enough material for 1 single data point for proteomic and transcriptomic analysis. However, primary muscle cells can only execute a limited number of divisions, ~30–40 divisions, with several outliers as low as 22 divisions (Fig. 1a, b, [31]), before they stop dividing and become senescent. The maximum number of divisions is not age-dependent, which is consistent with our previous study showing that myoblasts extracted from aged subjects have the same proliferative capacity as myoblasts extracted from young adults [41] (Fig. 1b).
Senescent cells can secrete factors, including exosomes, that can impact surrounding cells, as observed with senescent endothelial cells [42], cerebroendothelial cells [43], or fibroblasts [44]. In order to avoid potential artifacts arising from cells that are nearing, or have reached, senescence, we suggest that myoblasts under 21 divisions should be used to study the muscle secretome (Fig. 1), and we therefore sampled cells within this window for all subsequent experiments.

Maximum number of divisions reached by primary human muscle cells, and the number of divisions required to obtain sufficient cell numbers: a Distribution of the maximum number of divisions that human muscle cells can execute. Each point represents one sample. Based on this number, a safe window to analyze fully active and proliferative muscle cells is under 21 divisions. Light blue: age 20–30, dark blue: age 30–40, gray: 40–50, black: 50–75 years old. b Absence of correlation between age and the maximum number of divisions. c Table showing the number of primary muscle cells obtained at different phases of cell culture. Typically, 470,000 CD56+ve muscle cells can be purified from a muscle biopsy culture after ~10 divisions (first row, light green). The number of cells after each division and the number of divisions is given by row. Pink indicates the pre-senescence stage (based on the data in a) when cells may start to slow down their capacity to proliferate and then senesce (the rate of division drops from an average of 0.58 to 0.25 divisions per day). Importantly, for some subjects, the cells will not reach 30 divisions, as shown in plot 1a. Typical measurements of the number of days of expansion and of the average number of divisions per day are given on the right side of the table. d Myoblasts under 21 divisions were negative for beta-galactosidase. Top right panel: positive control of senescent cells positive for beta-galactosidase. Scale bar = 100 μm. e No cleaved PARP-1 was observed by Western blot, suggesting that myoblasts under 21 divisions do not show any sign of necrosis or apoptosis.

Optimization of the muscle cell culture conditions
Muscle exosomes were extracted from myoblasts that had undergone between 16 and 20 divisions, seeded at a density of 33,400 cells.cm−2, and differentiated into myotubes for 3 days. Ninety-five percent of the myoblasts were differentiated into myotubes in DMEM after 3 days (Fig. 2a–c), covering over ~80% of the petri dish (Fig. 2d, e). Differentiated myoblasts were negative for beta-galactosidase (Fig. 2d), confirming that they were not in a senescent state. Neither necrosis nor apoptosis was observed, as PARP-1 was not cleaved (Fig. 2e). These data suggest that human muscle myoblasts which have made less than 20 divisions can differentiate efficiently into myotubes, are not senescent, and are therefore suitable for the study of the myotube secretome.

Myoblasts at under 20 divisions differentiate efficiently and are not senescent. A total of 12 separate primary cell lines were cultured to under 21 divisions. a Dot-plot showing the percentage of primary human myoblasts fused into myotubes for 12 separate cell cultures, with an average fusion index calculated as 95.14 ± 4.28%. b Representative images of myotubes positive for myosin heavy chain (in red), a marker of differentiation. Scale bar = 100 μm. c Over 80% of the flask is covered and no obvious signs of cell death are observed. Scale bar = 100 μm.
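Returning to the expansion numbers quoted in the Fig. 1c legend, a back-of-the-envelope projection can be written down directly. The sketch below assumes 470,000 CD56+ cells at division 10 and the pre-senescence average of 0.58 divisions per day quoted above; it is purely illustrative.

    START_CELLS = 470_000  # typical CD56+ yield after sorting (~division 10)
    START_DIV = 10
    RATE = 0.58            # average divisions per day before pre-senescence

    for target in (16, 20, 21):
        cells = START_CELLS * 2 ** (target - START_DIV)
        days = (target - START_DIV) / RATE
        print(f"division {target}: ~{cells:,} cells after ~{days:.0f} more days")

On these assumptions, the ~250 × 10^6 cells needed for one ultracentrifugation data point are only reached around division 19–20, right at the edge of the safe window, which is the bottleneck the rest of the paper addresses.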
Optimization of muscle exosome extraction
Myoblasts were seeded at 7.5 × 10^6 cells per 225 cm^2 flask. Due to the large volume of medium (250 ml per sample) required for ultracentrifugation, a total of 14 flasks, thus ~100 million differentiated myoblasts, were cultured per data point and per experiment to compare the efficacy of the ultracentrifugation and polymer-based precipitation protocols. Myotubes were maintained in conditioned media for 3 days. After pre-clearing the media, as described in the materials and methods and as shown in Fig. 3, exosomes were extracted using either the ultracentrifugation strategy or polymer-based precipitation. Previous publications showed lower exosomal protein detection (e.g., CD63) by Western blot using polymer-based precipitation compared to ultracentrifugation, despite observing a greater number of vesicles by NanoSight using polymer-based precipitation [28, 45]. Based on these publications, we suspected that the polymer matrix was hiding epitopes. After rinsing the exosome extracts 3 times with PBS in 100 kDa Amicon® filter columns, the accessibility of antibodies to epitopes was rescued (Fig. 3).

Schema summarizing the protocols used to extract muscle exosomes from the primary human myotube culture medium, using either the ultracentrifugation or the modified polymer-precipitation strategy. For a single data point, 14 flasks of 225 cm^2 are plated with 7.5 × 10^6 myoblasts. After 24 h, once the myoblasts have attached to the flask, they are rinsed 6 times in DMEM and then differentiated into myotubes by cultivating them in DMEM. After 72 h, the conditioned medium is collected for muscle exosome extraction. After removing dead cells (200g, 10 min, RT), cell debris (4000g, 20 min, 4 °C), and ectosomes (20,000g, 70 min at 4 °C, and filtered at 0.22 μm), the cleared media is subjected to exosome extraction either by the ultracentrifugation protocol or by a modified polymer-precipitation protocol. Ultracentrifugation is at 100,000g (70 min, 4 °C), followed by washing the pellets three times with PBS (100,000g, 70 min, 4 °C). The subsequent pellet is then either resuspended in 100 μl of PBS or in NuPAGE™ LDS sample buffer for western blot. For the modified polymer-precipitation protocol, the polymer is added at half the volume of the pre-cleared media and incubated overnight at 4 °C. The mix is then centrifuged at 10,000g for 60 min at 4 °C. The subsequent pellet is then washed 3 times in PBS using a 100 kDa Amicon® filter column. Western blot shows the rescue of the epitope CD63 after 3 washes in PBS.

The ultracentrifugation-based and modified polymer-based precipitation approaches both extract exosomes from conditioned culture media of primary human myotubes, but the polymer-based approach is more efficient
Exosomes secreted by 100 × 10^6 differentiated myoblasts and extracted using either ultracentrifugation or polymer-based precipitation show the same cup-shaped structure by electron microscopy (Fig. 4a), are positive for CD63, CD82 (Fig. 4b, c), and CD81 (Fig. 4c), and float at the same density (Fig. 4c). Similar-sized vesicles were observed by electron microscopy and by NanoSight analysis (Fig. 4d). Importantly, the ultracentrifugation strategy was far less efficient than polymer precipitation at extracting exosomes, as shown in Fig. 4c and d.
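The flask arithmetic and pre-clearing spins in the Fig. 3 workflow can be summarized compactly; the sketch below simply re-encodes the parameters stated above (the step labels are mine).

    # Per data point (Fig. 3): 14 flasks of 225 cm^2, 7.5e6 myoblasts each.
    CELLS_PER_FLASK, FLASK_CM2, N_FLASKS = 7.5e6, 225, 14
    print(CELLS_PER_FLASK / FLASK_CM2)  # ~33,333 cells/cm^2 (the ~33,400 used)
    print(CELLS_PER_FLASK * N_FLASKS)   # 1.05e8 cells, i.e. ~100 million

    # Differential pre-clearing of the collected conditioned medium:
    CLEARING_STEPS = [
        (200, 10, "dead cells (RT)"),
        (4_000, 20, "cell debris (4 C)"),
        (20_000, 70, "ectosomes (4 C), then 0.22-um filtration"),
    ]
    for g, minutes, removes in CLEARING_STEPS:
        print(f"{g:>6} g, {minutes:>2} min -> removes {removes}")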
A proteomic analysis revealed that the protein profile of the muscle vesicles extracted using the modified polymer-based precipitation is enriched in proteins known to be present in exosomes (Fig. 4e), as given by Exocarta [36,37,38,39].

Validation of exosome extraction strategy. For each experiment, exosomes were extracted from the culture medium of 100 × 10^6 myoblasts differentiated into myotubes for 3 days, using either ultracentrifugation or polymer precipitation. The culture medium was non-supplemented DMEM (without serum). a Cup-shaped vesicles were observed by electron microscopy with both extraction protocols. Bar = 100 nm. b Both extractions show vesicles that are positive for CD63 and CD82 by electron microscopy. Bar = 100 nm. c Exosome extracts were loaded on iodixanol gradients as described in material and methods. Western blot results are shown for CD63 and CD81 in twelve fractions for the iodixanol gradient. Top panel: exosomes extracted by ultracentrifugation. Bottom panel: exosomes extracted by polymer-based precipitation. Exosomes were detected at a density of 1.15–1.27 g.ml−1. d NanoSight analyses show similar-sized vesicles using both strategies, from 100–200 nm, with a greater number of particles being detected when using the polymer extraction. e Proteomic analysis of muscle exosomes. Venn diagram showing the overlap between muscle exosomes and proteins known to be detected by mass spectrometry in exosomes (Vesiclepedia, Exocarta database [36,37,38,39]).

Working with >30 times fewer myoblasts, the modified polymer precipitation strategy still efficiently extracts vesicles that can be used for follow-up experiments
Previous publications suggested that cell density may affect exosome secretion [46]. We thus tested different densities of differentiated myoblasts per cm^2 and observed that the optimal condition was 33,400 cells.cm−2 (Fig. 5a), thus 7.5 × 10^6 myoblasts for a 225 cm^2 flask. Exosomes secreted by muscle cells were positive for CD63, CD81, Flotillin, HSPA8, and Alix (Fig. 5b). Exosomes extracted from 7.5 × 10^6 differentiated myoblasts could be used to explore exosome mRNA content (Fig. 5c) and to isolate a specific subpopulation of exosomes such as CD63-positive vesicles (Fig. 5d). In addition, polymer-precipitated exosomes can be stained with PKH26 and applied to recipient cells such as myotubes or iPSC-derived motor neurons (Fig. 5e).

Polymer-based precipitation efficiently extracts functional exosomes from 7.5 × 10^6 cells. a SDS-PAGE protein quantification showing that the greatest efficiency in terms of exosomal protein per cell plated was obtained when exosomes were extracted from differentiated myoblasts at a density of 33,400 cells.cm−2. Differentiated myoblasts were plated at 14,147 (lane 1), 33,400 (lane 2), or 106,100 (lane 3) cells per cm^2. Right panel: representative SDS-PAGE gel stained with Coomassie. Left panel: protein concentration measurements in secreted exosomes from cells plated at different densities. *, ***, P < 0.05 and P < 0.001, significantly different from 33,400 and 106,100 cells.cm−2 (n = 4, 3, 4 per condition). b Muscle exosomes were positive for CD63, CD81, Flotillin, HSPA8, and Alix. c mRNA was detectable with a clean profile from polymer-precipitated exosomes of 7.5 × 10^6 differentiated myoblasts. No 18S and 28S rRNA were detected, indicating that there were no RNA contaminants from dead cells. Inset panel: representative electropherogram obtained with the Agilent 2100 Bioanalyzer for myotube and exosome RNA extracts.
d Western blot showing that polymer-precipitated exosomes from 7.5 × 10^6 differentiated myoblasts can be used to pull down a specific subpopulation such as CD63-positive exosomes (+/−CD63 = with/without anti-CD63 antibody). e Polymer-precipitated exosomes (pre-stained with PKH26 following extraction; red channel) were capable of integrating into myotubes or into iPSC-derived motor neurons.

Although emphasis has been given to the role of the muscle tissue environment in regeneration (e.g., in parabiosis experiments [47, 48]), very little is known about the secretome of human muscle cells. The role of muscle as a secretory endocrine organ has been recently proposed, and a number of studies have characterized the secretory profiles of muscle cells [5, 7, 32, 49, 50], but the role of muscle vesicles is an underexplored field, as is the putative cross-talk between different cell types. Exploring the content and function of vesicles secreted by purified human myoblasts will improve our understanding of how muscle communicates with its environment in different physiological (e.g., aging) and pathological contexts (e.g., neuromuscular disorders, cachexia associated with cancer, etc.) [51,52,53,54]. It may also provide new insights regarding the pathological mechanisms underlying such conditions and may help in the identification of novel biomarkers and novel therapeutic targets for diseases.

Only a small number of human muscle cells can be obtained from muscle biopsies, and these cells have a very limited capacity to divide. These caveats, along with the fact that muscle cells do not secrete large quantities of vesicles despite muscle accounting for up to 50% of body mass, reinforce the importance of identifying strategies that allow for the most efficient extraction of muscle vesicles from a small quantity of starting material. Large amounts of starting material are required when using the ultracentrifugation-based technique [55], especially when there is an intention to perform downstream omics studies (e.g., proteomic, transcriptomic, metabolomic analyses; 250 × 10^6 muscle cells for one replicate [5]). Several commercial kits have been developed to improve isolation efficacy and speed. The purity of vesicles isolated using these kits is often questioned in comparison to the ultracentrifugation methodology, especially when extracting from serum/plasma [56, 57], but also in the in vitro context [58, 59]. However, it is important to note that while these studies do adhere strictly to the manufacturers' instructions for the usage of the kits, they often fail to carry out identical sample preparations prior to the comparison—for example, carrying out centrifugations and/or filtration steps to remove microvesicles and other contaminants before ultracentrifugation but neglecting to do so before using the kits. This, together with the epitope-hiding property of the polymer that is discussed below in the context of additional rinsing steps, may largely account for differences in observed contamination rates.

In the present study, muscle exosomes are extracted from differentiated human myoblasts that have been cultured in non-supplemented DMEM. This ensures that exosome preparations isolated using this method are fully depleted of any potential contaminants from culture medium additives such as fetal bovine serum. Furthermore, differentiated myoblasts cultured under these conditions undergo neither necroptosis nor apoptosis (current paper, [60]).
When collecting the conditioned media, differential centrifugation steps and a filtration step are included to remove potential cell debris, apoptotic vesicles, and microvesicles. All of these precautions are carried out prior to the addition of the polymer solution, thus eliminating most, if not all, potential contaminants and ensuring a highly purified isolation process, and we recommend that such steps are included no matter which subsequent exosome isolation method is used. The absence of medium supplementation and the lack of necroptosis and apoptosis mean that the culture medium of differentiated human muscle cells is a non-complex sample, and it is therefore well-suited to the protocol described here. Serum, by contrast, includes many different types of vesicle and a relatively complex molecular milieu, making it difficult to isolate exosomes by size and density alone and requiring additional approaches such as exosome pull-down to maximize purity [61, 62], which leads to the analysis of a specific circulating exosome subpopulation.

Looking at the literature, we noticed that the polymer kit consistently led to a greater number of vesicles detected by NanoSight [56, 59, 63], and yet led to a reduced detection of exosomal markers by Western blot [28, 45, 56, 59, 63, 64]. Interestingly, Rider et al., while optimizing a polymer for extracting extracellular vesicles, showed that rinsing exosomes that had been precipitated using the polymer resulted in an increase in exosome markers detected by Western blot [28]. Based on that study, we decided to use 100 kDa Amicon® filter columns to add extra washes after precipitating the vesicles from pre-cleared media. These additional steps removed the surplus of polymer [65], thereby rescuing the detection of exosomal markers (Fig. 3), and likely have the additional advantage of removing any cytokines [58] secreted by muscle cells. These extra rinsing steps may also improve the functionality of the exosome-like vesicles for experiments involving the incorporation of vesicles into recipient cells (Fig. 5e).

Pre-clearing the culture medium followed by polymer precipitation and three PBS washes allows the extraction of exosome-like vesicles while using 33 times less starting material than what is needed when the ultracentrifugation protocol is used, and the quality and functionality of extracted exosomes is retained. The option of being able to carry out proteomic and functional analyses on exosomes while requiring much smaller cell numbers as a starting point is a critically important asset, especially when dealing with primary cell cultures that quickly senesce [66, 67].

All data and materials will be available on demand.

Pedersen BK, Febbraio MA. Muscle as an endocrine organ: focus on muscle-derived interleukin-6. Physiol Rev [Internet]. 2008 [cited 2012 Oct 30];88:1379–406. Available from: http://www.ncbi.nlm.nih.gov/pubmed/18923185. Engler D. Hypothesis: Musculin is a hormone secreted by skeletal muscle, the body's largest endocrine organ. Evidence for actions on the endocrine pancreas to restrain the beta-cell mass and to inhibit insulin secretion and on the hypothalamus to co-ordinate the ne. Acta Biomed [Internet]. 2007 [cited 2012 Jul 30];78 Suppl 1:156–206. Available from: http://www.ncbi.nlm.nih.gov/pubmed/17465332. Chan CYX, Masui O, Krakovska O, Belozerov VE, Voisin S, Ghanny S, et al. Identification of differentially regulated secretome components during skeletal myogenesis. Mol Cell Proteomics [Internet].
2011 [cited 2012 Jul 30];10:M110.004804. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3098588&tool=pmcentrez&rendertype=abstract. Henningsen J, Rigbolt KTG, Blagoev B, Pedersen BK, Kratchmarova I. Dynamics of the skeletal muscle secretome during myoblast differentiation. Mol Cell Proteomics [Internet]. 2010 [cited 2012 Jul 30];9:2482–96. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2984231&tool=pmcentrez&rendertype=abstract. Le Bihan M-C, Bigot A, Jensen SS, Dennis J, Rogowska-Wrzesinska A, Lainé J, et al. In-depth analysis of the secretome identifies three major independent secretory pathways in differentiating human myoblasts. J Proteomics [Internet]. 2012 [cited 2012 Oct 27];77:344–56. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23000592. Roca-Rivada A, Al-Massadi O, Castelao C, Senín LL, Alonso J, Seoane LM, et al. Muscle tissue as an endocrine organ: comparative secretome profiling of slow-oxidative and fast-glycolytic rat muscle explants and its variation with exercise. J Proteomics [Internet]. 2012 [cited 2012 Oct 30];75:5414–25. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22800642. Yoon JH, Kim J, Song P, Lee TG, Suh P-G, Ryu SH. Secretomics for skeletal muscle cells: a discovery of novel regulators? Adv Biol Regul [Internet]. 2012 [cited 2012 Oct 31];52:340–50. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22781747. Hutcheson JD, Aikawa E. Extracellular vesicles in cardiovascular homeostasis and disease. Curr Opin Cardiol. 2018. Machtinger R, Laurent LC, Baccarelli AA. Extracellular vesicles: roles in gamete maturation, fertilization and embryo implantation. Hum Reprod Update. 2016. Kreger BT, Johansen ER, Cerione RA, Antonyak MA. The enrichment of survivin in exosomes from breast cancer cells treated with paclitaxel promotes cell survival and chemoresistance. Cancers (Basel). 2016. Buzas EI, György B, Nagy G, Falus A, Gay S. Emerging role of extracellular vesicles in inflammatory diseases. Nat Rev Rheumatol. 2014. Rome S, Forterre A, Mizgier ML, Bouzakri K. Skeletal muscle-released extracellular vesicles: state of the art. Front Physiol. 2019. Becker A, Thakur BK, Weiss JM, Kim HS, Peinado H, Lyden D. Extracellular vesicles in cancer: cell-to-cell mediators of metastasis. Cancer Cell. 2016. Pan BT, Teng K, Wu C, Adam M, Johnstone RM. Electron microscopic evidence for externalization of the transferrin receptor in vesicular form in sheep reticulocytes. J Cell Biol [Internet]. 1985 [cited 2012 Oct 31];101:942–8. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2113705&tool=pmcentrez&rendertype=abstract. Kalra H, Drummen GPC, Mathivanan S. Focus on extracellular vesicles: introducing the next small big thing. Int J Mol Sci. 2016;17:170. Raposo G, Stoorvogel W. Extracellular vesicles: exosomes, microvesicles, and friends. J Cell Biol. 2013;200:373–83. Antonyak MA, Li B, Boroughs LK, Johnson JL, Druso JE, Bryant KL, et al. Cancer cell-derived microvesicles induce transformation by transferring tissue transglutaminase and fibronectin to recipient cells. Proc Natl Acad Sci U S A [Internet]. 2011 [cited 2012 Oct 31];108:4852–7. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3064359&tool=pmcentrez&rendertype=abstract. Théry C, Ostrowski M, Segura E. Membrane vesicles as conveyors of immune responses. Nat Rev Immunol [Internet].
2009 [cited 2012 Oct 9];9:581–93. Available from: http://www.ncbi.nlm.nih.gov/pubmed/19498381. Spencer MJ, Croall DE, Tidball JG. Calpains are activated in necrotic fibers from mdx dystrophic mice. J Biol Chem [Internet]. 1995 [cited 2012 Jul 30];270:10909–14. Available from: http://www.ncbi.nlm.nih.gov/pubmed/7738032. Malerba A, Sharp PS, Graham IR, Arechavala-Gomeza V, Foster K, Muntoni F, et al. Chronic systemic therapy with low-dose morpholino oligomers ameliorates the pathology and normalizes locomotor behavior in mdx mice. Mol Ther [Internet]. 2011 [cited 2012 Oct 4];19:345–54. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3034854&tool=pmcentrez&rendertype=abstract. Bencze M, Negroni E, Vallese D, Yacoub-Youssef H, Chaouch S, Wolff A, et al. Proinflammatory macrophages enhance the regenerative capacity of human myoblasts by modifying their kinetics of proliferation and differentiation. Mol Ther [Internet]. 2012 [cited 2012 Oct 18]. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23070116. Bobrie A, Colombo M, Raposo G, Théry C. Exosome secretion: molecular mechanisms and roles in immune responses. Traffic [Internet]. 2011 [cited 2012 Oct 31];12:1659–68. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21645191. Nehlin JO, Just M, Rustan AC, Gaster M. Human myotubes from myoblast cultures undergoing senescence exhibit defects in glucose and lipid metabolism. Biogerontology. 2011. Théry C, Amigorena S, Raposo G, Clayton A. Isolation and characterization of exosomes from cell culture supernatants and biological fluids. Curr Protoc Cell Biol [Internet]. 2006 [cited 2012 Jul 30];Chapter 3:Unit 3.22. Available from: http://www.ncbi.nlm.nih.gov/pubmed/18228490. Raposo G, Nijman HW, Stoorvogel W, Leijendekker R, Harding CV, Melief CJM, et al. B lymphocytes secrete antigen-presenting vesicles. J Exp Med. 1996;183:1161–72. Colombo M, Raposo G, Théry C. Biogenesis, secretion, and intercellular interactions of exosomes and other extracellular vesicles. Annu Rev Cell Dev Biol. 2014;30:255–89. Hessvik NP, Llorente A. Current knowledge on exosome biogenesis and release. Cell Mol Life Sci. 2018;75:193–208. Rider MA, Hurwitz SN, Meckes DG. ExtraPEG: a polyethylene glycol-based method for enrichment of extracellular vesicles. Sci Rep. 2016;6:23978. Théry C, Witwer KW, Aikawa E, Alcaraz MJ, Anderson JD, Andriantsitohaina R, et al. Minimal information for studies of extracellular vesicles 2018 (MISEV2018): a position statement of the International Society for Extracellular Vesicles and update of the MISEV2014 guidelines. J Extracell Vesicles [Internet]. 2018 [cited 2019 Sep 19];7:1535750. Available from: https://www.tandfonline.com/doi/full/10.1080/20013078.2018.1535750. Yu LL, Zhu J, Liu JX, Jiang F, Ni WK, Qu LS, et al. A comparison of traditional and novel methods for the separation of exosomes from human samples. Biomed Res Int. 2018. Bigot A, Duddy WJ, Ouandaogo ZG, Negroni E, Mariot V, Ghimbovschi S, et al. Age-associated methylation suppresses SPRY1, leading to a failure of re-quiescence and loss of the reserve stem cell pool in elderly muscle. Cell Rep [Internet]. 2015;13:1172–82. Available from: http://www.ncbi.nlm.nih.gov/pubmed/26526994. Duguez S, Duddy W, Johnston H, Lainé J, Le Bihan MC, Brown KJ, et al. Dystrophin deficiency leads to disturbance of LAMP1-vesicle-associated protein secretion. Cell Mol Life Sci [Internet]. 2013 [cited 2013 Nov 2];70:2159–74. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23344255. Schröder M, Schäfer R, Friedl P.
Spectrophotometric determination of iodixanol in subcellular fractions of mammalian cells. Anal Biochem. 1997;244:174–6. Welton JL, Brennan P, Gurney M, Webber JP, Spary LK, Carton DG, et al. Proteomics analysis of vesicles isolated from plasma and urine of prostate cancer patients using a multiplex, aptamer-based protein array. J Extracell Vesicles [Internet]. 2016;5:31209. Available from: https://www.tandfonline.com/doi/full/10.3402/jev.v5.31209. Welton JL, Loveless S, Stone T, von Ruhland C, Robertson NP, Clayton A. Cerebrospinal fluid extracellular vesicle enrichment for protein biomarker discovery in neurological disease; multiple sclerosis. J Extracell Vesicles [Internet]. 2017;6:1369805. Available from: https://www.tandfonline.com/doi/full/10.1080/20013078.2017.1369805. Keerthikumar S, Chisanga D, Ariyaratne D, Al Saffar H, Anand S, Zhao K, et al. ExoCarta: a web-based compendium of exosomal cargo. J Mol Biol [Internet]. 2016;428:688–92. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0022283615005422. Simpson RJ, Kalra H, Mathivanan S, et al. J Extracell Vesicles [Internet]. 2012;1:18374. Available from: https://www.tandfonline.com/doi/full/10.3402/jev.v1i0.18374. Mathivanan S, Fahner CJ, Reid GE, Simpson RJ. ExoCarta 2012: database of exosomal proteins, RNA and lipids. Nucleic Acids Res [Internet]. 2012;40:D1241–4. Available from: https://academic.oup.com/nar/article-lookup/doi/10.1093/nar/gkr828. Mathivanan S, Simpson RJ. ExoCarta: a compendium of exosomal proteins and RNA. Proteomics [Internet]. 2009;9:4997–5000. Available from: http://doi.wiley.com/10.1002/pmic.200900351. Maury Y, Côme J, Piskorowski RA, Salah-Mohellibi N, Chevaleyre V, Peschanski M, et al. Combinatorial analysis of developmental cues efficiently converts human pluripotent stem cells into multiple neuronal subtypes. Nat Biotechnol [Internet]. 2014 [cited 2014 Nov 11];33:89–96. Available from: http://www.ncbi.nlm.nih.gov/pubmed/25383599. Bigot A, Duddy WJ, Ouandaogo ZG, Negroni E, Mariot V, Ghimbovschi S, et al. Age-associated methylation suppresses SPRY1, leading to a failure of re-quiescence and loss of the reserve stem cell pool in elderly muscle. Cell Rep. 2015;13:1172–82. Riquelme JA, Takov K, Santiago-Fernández C, Rossello X, Lavandero S, Yellon DM, et al. Increased production of functional small extracellular vesicles in senescent endothelial cells. J Cell Mol Med. 2020. Graves SI, Baker DJ. Implicating endothelial cell senescence to dysfunction in the ageing and diseased brain. Basic Clin Pharmacol Toxicol. 2020. Choi E-J, Kil IS, Cho E-G. Extracellular vesicles derived from senescent fibroblasts attenuate the dermal effect on keratinocyte differentiation. Int J Mol Sci. 2020;21:1022. Patel GK, Khan MA, Zubair H, Srivastava SK, Khushman M, Singh S, et al. Comparative analysis of exosome isolation methods using culture supernatant for optimum yield, purity and downstream applications. Sci Rep. 2019;9:5335. Gudbergsson JM, Johnsen KB, Skov MN, Duroux M. Systematic review of factors influencing extracellular vesicle yield from cell cultures. Cytotechnology. 2016. p. 579–92. Conboy IM, Conboy MJ, Wagers AJ, Girma ER, Weissman IL, Rando TA. Rejuvenation of aged progenitor cells by exposure to a young systemic environment. Nature [Internet]. 2005 [cited 2013 Sep 23];433:760–4.
Available from: http://www.ncbi.nlm.nih.gov/pubmed/15716955. Rando TA, Chang HY. Aging, rejuvenation, and epigenetic reprogramming: resetting the aging clock. Cell [Internet]. 2012 [cited 2013 Nov 7];148:46–57. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3336960&tool=pmcentrez&rendertype=abstract. Forterre A, Jalabert A, Berger E, Baudet M, Chikh K, Errazuriz E, et al. Proteomic analysis of C2C12 myoblast and myotube exosome-like vesicles: a new paradigm for myoblast-myotube cross talk? PLoS One [Internet]. 2014 [cited 2015 Jan 10];9:e84153. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3879278&tool=pmcentrez&rendertype=abstract. Pedersen BK, Febbraio MA. Muscles, exercise and obesity: skeletal muscle as a secretory organ. Nat Rev Endocrinol [Internet]. 2012 [cited 2012 Oct 30];8:457–65. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22473333. Kuang S, Gillespie MA, Rudnicki MA. Niche regulation of muscle satellite cell self-renewal and differentiation. Cell Stem Cell [Internet]. 2008 [cited 2014 Aug 29];2:22–31. Available from: http://www.ncbi.nlm.nih.gov/pubmed/18371418. Barberi L, Scicchitano BM, De Rossi M, Bigot A, Duguez S, Wielgosik A, et al. Age-dependent alteration in muscle regeneration: the critical role of tissue niche. Biogerontology [Internet]. 2013 [cited 2013 Nov 2];14:273–92. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3719007&tool=pmcentrez&rendertype=abstract. Thorley M, Malatras A, Duddy WJ, Le Gall L, Mouly V, Butler Browne G, et al. Changes in communication between muscle stem cells and their environment with aging. J Neuromuscul Dis [Internet]. 2015;2:in press. Available from: http://www.medra.org/servlet/aliasResolver?alias=iospress&doi=10.3233/JND-150097. Vijayakumar UG, Milla V, Cynthia Stafford MY, Bjourson AJ, Duddy W, Duguez SM-R. A systematic review of suggested molecular strata, biomarkers and their tissue sources in ALS. Front Neurol [Internet]. 2019 [cited 2019 May 6];10:400. Available from: https://www.frontiersin.org/articles/10.3389/fneur.2019.00400/abstract. Zeringer E, Barta T, Li M, Vlassov AV. Strategies for isolation of exosomes. Cold Spring Harb Protoc. 2015;2015:319–23. Tian Y, Gong M, Hu Y, Liu H, Zhang W, Zhang M, et al. Quality and efficiency assessment of six extracellular vesicle isolation methods by nano-flow cytometry. J Extracell Vesicles. 2020. Takov K, Yellon DM, Davidson SM. Comparison of small extracellular vesicles isolated from plasma by ultracentrifugation or size-exclusion chromatography: yield, purity and functional potential. J Extracell Vesicles. 2019. Shu S La, Yang Y, Allen CL, Hurley E, Tung KH, Minderman H, et al. Purity and yield of melanoma exosomes are dependent on isolation method. J Extracell Vesicles. 2020. Lobb RJ, Becker M, Wen SW, Wong CSF, Wiegmans AP, Leimgruber A, et al. Optimized exosome isolation protocol for cell culture supernatant and human plasma. J Extracell Vesicles. 2015. Xiao R, Ferry AL, Dupont-Versteegden EE. Cell death-resistance of differentiated myotubes is associated with enhanced anti-apoptotic mechanisms compared to myoblasts. Apoptosis. 2011;16:221–34. Greening DW, Xu R, Ji H, Tauro BJ, Simpson RJ. A protocol for exosome isolation and characterization: evaluation of ultracentrifugation, density-gradient separation, and immunoaffinity capture methods. Methods Mol Biol. 2015. Winston CN, Romero HK, Ellisman M, Nauss S, Julovich DA, Conger T, et al.
Assessing neuronal and astrocyte derived exosomes from individuals with mild traumatic brain injury for markers of neurodegeneration and cytotoxic activity. Front Neurosci. 2019. Freitas D, Balmaña M, Poças J, Campos D, Osório H, Konstantinidi A, et al. Different isolation approaches lead to diverse glycosylated extracellular vesicle populations. J Extracell Vesicles. 2019. Tang YT, Huang YY, Zheng L, Qin SH, Xu XP, An TX, et al. Comparison of isolation methods of exosomes and exosomal RNA from cell culture medium and serum. Int J Mol Med. 2017. Ludwig A-K, De Miroschedji K, Doeppner TR, Börger V, Ruesing J, Rebmann V, et al. Precipitation with polyethylene glycol followed by washing and pelleting by ultracentrifugation enriches extracellular vesicles from tissue culture supernatants in small and large scales. J Extracell Vesicles. 2018;7:1528109. Lehmann BD, Paine MS, Brooks AM, McCubrey JA, Renegar RH, Wang R, et al. Senescence-associated exosome release from human prostate cancer cells. Cancer Res. 2008. Beer L, Zimmermann M, Mitterbauer A, Ellinger A, Gruber F, Narzt MS, et al. Analysis of the secretome of apoptotic peripheral blood mononuclear cells: impact of released proteins and exosomes for tissue regeneration. Sci Rep. 2015.

We wish to thank Prof Pierre Francois Pradat for enabling access to fresh human muscle biopsies and Dr Cecile Martinat for enabling access to human iPSC motor neurons. We wish to thank Dr Kristy Brown for her help with the proteomic analysis. This work was financed by the European Union Regional Development Fund (ERDF) EU Sustainable Competitiveness Programme for N. Ireland, Northern Ireland Public Health Agency (HSC R&D) & Ulster University (PI: A Bjourson). LLG was the recipient of an ArSLA PhD fellowship, GZO's post-doctoral position was financed by INSERM/DGOS, EA was the recipient of a VCRS Ulster University PhD fellowship, and OC was the recipient of a DELL PhD fellowship.

Laura Le Gall, Zamalou Gisele Ouandaogo and Ekene Anakor contributed equally to this work.

Northern Ireland Center for Stratified/Personalised Medicine, Biomedical Sciences Research Institute, Ulster University, Derry~Londonderry, UK: Laura Le Gall, Ekene Anakor, Owen Connolly, William Duddy & Stephanie Duguez. Centre for Research in Myology, INSERM UMRS_974, Sorbonne Université, Paris, France: Zamalou Gisele Ouandaogo, Gillian Butler Browne & Jeanne Laine.

SD conceptualized and supervised the study. LLG, ZGO, EA, and SD performed and analyzed the experiments. OC performed the CD63 exosome pull-down. JL performed the electron microscopy analysis. LLG, GZO, EA, JL, GBB, WD, and SD wrote, discussed, and edited the paper. All authors read and approved the final manuscript. Correspondence to Stephanie Duguez.

The protocol (NCT01984957) was approved by the local Ethical Committee. Written informed consent was obtained from all patients. All co-authors consent to publication.

Le Gall, L., Ouandaogo, Z.G., Anakor, E. et al. Optimized method for extraction of exosomes from human primary muscle cells. Skeletal Muscle 10, 20 (2020). https://doi.org/10.1186/s13395-020-00238-1

Keywords: Extracellular vesicle, Muscle exosome extraction, In vitro, Muscle secretome
Returns to scale

Suppose you have a production function equal to {eq}Q = 10(0.7K^2 + 0.3L^2)^{0.5}{/eq}. Does this function exhibit a. increasing, b. decreasing, or c. constant returns to scale? Explain.

Returns to Scale: Unlike the marginal product of an input, which measures the change in total output that results from increasing that input by one unit, returns to scale measures the change in total output that results from scaling up all inputs by the same factor.

Here is the given production function: {eq}Q = 10(0.7K^2 + 0.3L^2)^{0.5}{/eq}. This can be rewritten as: {eq}Q^2 = 100(0.7K^2 + 0.3L^2){/eq}, which shows that {eq}Q^2{/eq} is homogeneous of degree 2 in K and L, so Q itself is homogeneous of degree 1. Equivalently, scaling both inputs by a factor {eq}t > 0{/eq} gives {eq}Q(tK, tL) = 10(0.7t^2K^2 + 0.3t^2L^2)^{0.5} = 10t(0.7K^2 + 0.3L^2)^{0.5} = t \cdot Q(K, L){/eq}. Output rises in exactly the same proportion as the inputs, so the answer is c: the function exhibits constant returns to scale.
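A quick numerical sanity check of the constant-returns conclusion, as a minimal Python sketch with arbitrary input values:

    def q(k, l):
        """Q = 10 * (0.7*K^2 + 0.3*L^2)^0.5"""
        return 10 * (0.7 * k**2 + 0.3 * l**2) ** 0.5

    k, l, t = 4.0, 9.0, 2.0
    print(q(t * k, t * l))  # output with both inputs doubled: ~119.16
    print(t * q(k, l))      # exactly double the original output -> CRS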
Geomagnetic data from the GOCE satellite mission

I. Michaelis (ORCID: orcid.org/0000-0001-9741-4063), K. Styp-Rekowski, J. Rauberg, C. Stolle & M. Korte

The Gravity field and steady-state Ocean Circulation Explorer (GOCE) is part of ESA's Earth Explorer Program. The satellite carries magnetometers that control the activity of magnetorquers for navigation of the satellite, but they are not dedicated science instruments. However, intrinsic steady states of the instruments can be corrected by alignment and calibration, and artificial perturbations, e.g. from currents, can be removed by characterising them against housekeeping data. The remaining field then shows the natural evolution and variability of the Earth's magnetic field. This article describes the pre-processing of input data as well as the calibration and characterisation steps performed on GOCE magnetic data, using a high-precision magnetic field model as reference. For geomagnetically quiet times, the standard deviation of the residual is below 13 nT, with a median residual of (11.7, 9.6, 10.4) nT for the three magnetic field components (x, y, z). For validation of the calibration and characterisation performance, we selected a geomagnetic storm event in March 2013. GOCE magnetic field data show good agreement with results from a ground magnetic observation network. The GOCE mission overlaps with the dedicated magnetic field satellite mission CHAMP for a short time at the beginning of 2010, but does not overlap with the Swarm mission or any other mission flying at low altitude and carrying high-precision magnetometers. We expect calibrated GOCE magnetic field data to be useful for lithospheric modelling and for filling the gap between the dedicated geomagnetic missions CHAMP and Swarm.

In the last two decades, low Earth orbiting (LEO) satellites have been available for accurate measurement of the geomagnetic field using dedicated instruments, e.g. missions like CHAMP (CHAMP 2019) and Swarm (Olsen et al. 2013). However, there is a temporal gap of about 3 years between these dedicated missions. In addition, single missions can only provide limited coverage in local time at a given time. Enhanced simultaneous local time coverage is provided by multi-mission constellations. To this aim, magnetometer data from missions like CryoSat-2 (Olsen et al. 2020), GRACE (Olsen 2021), and GRACE-FO (Stolle et al. 2021) have been characterised and calibrated and made publicly available. Some of those missions can fill the gap between the high-level missions CHAMP and Swarm from 2010 to 2013, e.g. CryoSat-2 and GRACE; others can fill the gap in magnetic local time (MLT) distribution, such as GRACE-FO. An overview of scientific and platform magnetometer (PlatMag) missions is shown in Fig. 1. Stolle et al. (2021) have shown that large-scale field-aligned currents, as well as equatorial ring currents, can be derived from GRACE-FO. The standard deviation of the residuals of those datasets compared to high-level geomagnetic models like CHAOS-7 (Finlay et al. 2020) has been reduced to values well below 10 nT for geomagnetically quiet times, depending on the mission. This report introduces a calibrated magnetometer data set from the GOCE mission, following a calibration and characterisation procedure similar to that of GRACE-FO (Stolle et al. 2021).
Overview of the two satellite missions dedicated to geomagnetic measurements, CHAMP (blue line) and Swarm (red and green lines), and a selection of missions carrying platform magnetometers, at their respective altitudes. Also shown is the F10.7 solar irradiation index as an indication of solar activity (grey, with mean as black solid line, right axis)

Schematic view of the GOCE satellite. (Credits: ESA)

The GOCE mission has been operated by ESA. The primary objective of GOCE (Floberghagen et al. 2008, 2011; GOCE Flight Control Team 2014) was to obtain precise global and high-resolution models for both the static and the time-variable components of the Earth's gravity field and geoid. GOCE was successfully launched on 17 March 2009 and completed its mission on 11 November 2013. It was flying on a near-circular polar dawn–dusk orbit with an inclination of 96.7° and at a mean altitude of about 262 km (https://www.esa.int/Applications/Observing_the_Earth/FutureEO/GOCE/Facts_and_figures). A sketch of the satellite is shown in Fig. 2, and a summary of the satellite's orbit and body is available at https://www.esa.int/Enabling_Support/Operations/GOCE. The GOCE satellite carried three magnetometers as part of its attitude and orbit control system, mounted side-by-side displaced by 80 mm. The attitude was mainly controlled by ion thrusters to achieve a drag-free flight; in addition, magnetorquers were used. For magnetorquer activation, the magnetic background field at each time and location of the satellite needs to be measured by magnetometers. This article describes the original data, methods, and procedures of data processing, characterisation of disturbances, and calibration of instrument-intrinsic parameters that are necessary to obtain scientifically useful magnetic field data from the GOCE platform magnetometers. We show the performance of the calibration and characterisation procedure by comparison to the CHAOS-7 field model, the illustration of field-aligned currents (FAC), and a comparison of the time series characterising a geomagnetic storm to the commonly used Dst index that is obtained from ground data. The processed magnetometer data described in this article are available at (Michaelis and Korte 2022) for November 01, 2009 to September 30, 2013. The data published with this article are version 0205.

Data sets and data pre-processing

As part of the Drag-free Attitude and Orbit Control System (DFACS), the GOCE satellite carries three active 3-axis fluxgate magnetometers, called MGM. The calibration and characterisation effort is part of Swarm DISC (Swarm DISC 2022). The PlatMag consortium within Swarm DISC decided to call magnetometer instrument reference frames MAG; hence, MGM will further be called MAG. Figure 3 shows the locations of the magnetometers onboard the satellite. The magnetometers are manufactured by Billingsley Aerospace & Defence and are of type TFM100S (Billingsley 2020). The measurement range is ±100 µT, the root mean square noise level of the instrument is 100 pT, and the resolution of the digitisation is 3.05185 nT/bit (Kolkmeier et al. 2008). Hence, the instrument noise is below the digitisation level. The data are sampled at 1/16 Hz. The MAG data have been pre-calibrated, achieving biases of less than 500 nT. Magnetometer calibration further relies on attitude data derived from the Electrostatic Gravity Gradiometer (EGG), which is GOCE's main instrument, and three star cameras (STR) that are mounted on the shaded side of the satellite, shown in Fig. 2.
The strongest magnetic disturbance is expected from the magnetorquers (MTQ), although they are located as far away as possible from the magnetometers; see the overview of instrument locations in Fig. 3. Since measurements of the magnetorquer currents are available, an almost full correction for them can be expected.

Location of instruments at the satellite body. (Credits: ESA)

The whole telemetry of the GOCE satellite, including, e.g. magnetometer data, magnetorquer currents, attitude, solar array currents, battery currents, and magnetometer temperatures, is publicly available at https://earth.esa.int/eogateway/missions/goce, European Space Agency (2009). The telemetry datasets used for this article are listed in Table 1. GOCE L1b and L2 data are provided in zip files that contain ESA's Earth Explorer Format (EEF) files for each L1b product. An overview of the used products with their names, source, unit, and time resolution is given in Table 1. Data stored as telemetry are given in zip files that contain ESA's Earth Explorer header and data in ASCII. Time values are always handled as defined in the EEF. The dataset with the highest cadence and quality is the attitude information, since it relies on the main instrument of the mission. An interpolation of attitude data may add numerical noise. Therefore, it makes sense to use timestamps from the attitude dataset as reference for creating a series of timestamps. Those timestamps of the attitude dataset that are closest to the MAG dataset timestamps are selected. This subset of input data was used to linearly interpolate all other data, that is, position, magnetometer, magnetorquer currents, and other housekeeping (HK) data listed in Table 1. If the interpolation distance is larger than 16 s, a flag is set that indicates a data gap. For each timestep, the predictions of the high-level geomagnetic field model CHAOS-7, including core, crustal and external contributions, have been calculated following Finlay et al. (2020). For the selection of the low-latitude range (\(|QDLAT|<{50}^{\circ }\)), we also calculate the quasi-dipole latitude (QDLAT) and magnetic local time (MLT) (Richmond 1995; Emmert et al. 2010) for each record. For the selection of geomagnetically quiet days, we use the geomagnetic Kp index (\(Kp \le 3\)) (Matzka et al. 2021) and the geomagnetic equatorial Dst index (\(|Dst|\le 30\,\mathrm {nT}\)) (Nose et al. 2015).

Coordinate frames

The Satellite Physical Coordinate Frame (SC_O_p), called SC in the following, is defined in Kolkmeier et al. (2008). The three MAGs are aligned with the principal axes of the satellite. The rotation of a vector in SC to the MAG reference frame is given in Eq. (1):

$$\begin{aligned} \overline{{\rm MAG}_{i}}&=\underline{R_{\rm SC2MAG}}\overline{\text{SC}}, \end{aligned}$$

$$\underline{R_{\text{SC2MAG}}} = \begin{pmatrix} -1& \quad 0& \quad 0 \\ 0& \quad1& \quad0\\ 0& \quad 0& \quad -1 \end{pmatrix}.$$

That means negative \(\mathbf{MAG}_{\mathbf {i,x}}\) is aligned with the flight direction, \(\mathbf {MAG}_{\mathbf {i,z}}\) points towards the Earth, and \(\mathbf {MAG}_{\mathbf {i,y}}\) completes the orthogonal coordinate system. The Gradiometer Reference Frame (GRF) is the coordinate system in which the measurements of GOCE's main instrument, the Electrostatic Gravity Gradiometer (EGG), are given. These are the gravity tensor and the combined EGG and STR attitude of the satellite with respect to the International Celestial Reference Frame (ICRF).
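Returning briefly to the pre-processing described above: the timestamp alignment and gap flagging amount to a nearest-neighbour selection followed by linear interpolation. The following is a minimal sketch of that logic; array names such as t_att, t_mag, t_hk and hk are illustrative, not actual GOCE product fields, and all time arrays are assumed sorted:

```python
# Minimal sketch of the timestamp alignment and 16-s gap flagging
# described above. Names are illustrative, not GOCE product fields.
import numpy as np

def align_and_interpolate(t_att, t_mag, t_hk, hk, max_gap=16.0):
    """Pick the attitude timestamp closest to each MAG timestamp and
    linearly interpolate one housekeeping channel onto those times."""
    # for each MAG timestamp, find the closest attitude timestamp
    idx = np.clip(np.searchsorted(t_att, t_mag), 1, len(t_att) - 1)
    left, right = t_att[idx - 1], t_att[idx]
    t_ref = np.where(t_mag - left < right - t_mag, left, right)

    # linear interpolation of the housekeeping channel onto t_ref
    hk_interp = np.interp(t_ref, t_hk, hk)

    # flag records where the bracketing HK samples are > max_gap apart
    j = np.clip(np.searchsorted(t_hk, t_ref), 1, len(t_hk) - 1)
    gap = t_hk[j] - t_hk[j - 1]
    flag = (gap > max_gap).astype(int)  # non-zero indicates a data gap
    return t_ref, hk_interp, flag
```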
GOCE provides a high-quality attitude product, EGG_IAQ_1i (Frommknecht et al. 2011), which combines data from the Electrostatic Gravity Gradiometer (EGG) and the star cameras. Fixed reference frames for all instruments are expected to be stable with respect to each other. Missing static rotations between reference frames will be corrected by Euler angle estimation during calibration. Scientific evaluation of the data will be done in the Earth-fixed North–East–Centre (NEC) reference frame, which is also the frame for predictions of the CHAOS-7 reference model. The calibration and characterisation procedure has to be done in the same reference frame for measurements and model data. Calibration parameters are instrument-intrinsic and depend on the instrument reference frame. Characterisations of local disturbances are systematic in a local satellite reference frame. This leads to the decision to apply calibration and characterisation in the MAG reference frame. For the rotation of CHAOS-7 predictions, \(\mathbf {B}_{\mathbf {model,NEC}}\), from the NEC to the MAG reference frame, a chain of rotations is needed. The first is the rotation from NEC to the International Terrestrial Reference Frame (ITRF), depending on the latitude and longitude of the satellite location. We use Seeber (2003, page 23) to define a North–East–Zenith reference frame. By changing the sign of the z-direction (3rd row) we get a North–East–Centre reference frame, Eq. (3):

$$\begin{aligned} \underline{R_{\rm ITRF2NEC}}&= \begin{pmatrix} -sin(\Phi )\cdot cos(\Lambda ) &{} -sin(\Phi )\cdot sin(\Lambda ) &{} cos(\Phi ) \\ -sin(\Lambda ) &{} cos(\Lambda ) &{} 0 \\ -cos(\Phi )\cdot cos(\Lambda ) &{} -cos(\Phi )\cdot sin(\Lambda ) &{} -sin(\Phi ) \end{pmatrix}\nonumber \\&\quad \text {with latitude}\, \Phi \,\text { and longitude}\, \Lambda . \end{aligned}$$

The second is a time-dependent rotation from ITRF to ICRF, taking into account Earth's nutation and precession. \(\underline{R_{\rm ITRF2ICRF}}\) is calculated by application of the SOFA library function iauC2t06a (IAU SOFA Board 2019), using Earth rotation parameters derived from the International Earth Rotation and Reference Systems Service (IERS 2020). The rotation from the ICRF to the GRF frame is given by quaternions available in the EGG_GGT_1i product. Since the GRF and SC reference frames are nominally parallel (Kolkmeier et al. 2008), we can use the quaternions given in the EGG_GGT_1i product to derive the rotation from ICRF to SC, \(q_{\rm ICRF2SC}\). Rotations can be combined very stably using quaternion algebra. Hence, we need to convert the direction cosine representations of \(\underline{R_{\rm NEC2ITRF}}\), \(R_{\rm ITRF2ICRF}\) and \(\underline{R_{\rm SC2MAG}}\) to the quaternion representations \(q_{\rm NEC2ITRF}\), \(q_{\rm ITRF2ICRF}\) and \(q_{\rm SC2MAG}\), following (Wertz 1978, page 415). In summary, the complete rotation from the NEC to the MAG frame is given as:

$$\begin{aligned}&\mathbf {q}_{\mathbf {\rm NEC2MAG}}=\mathbf {q}_{\mathbf {NEC2ITRF}} \cdot \mathbf {q}_{\mathbf {ITRF2ICRF}} \cdot \mathbf {q}_{\mathbf {ICRF2SC}} \cdot \mathbf {q_{SC2MAG}}, \end{aligned}$$

$$\begin{aligned}&\mathbf {B}_{\mathbf {NEC}} \xrightarrow {\mathbf {q}_{\mathbf {NEC2ITRF}}} \mathbf {B}_{\mathbf {ITRF}} \xrightarrow {\mathbf {q}_{\mathbf {ITRF2ICRF}}} \mathbf {B}_{\mathbf {ICRF}} \xrightarrow {\mathbf {q}_{\mathbf {ICRF2SC}}} \mathbf {B}_{\mathbf {SC}}\xrightarrow {\mathbf {q}_{\mathbf {SC2MAG}}} \mathbf {B}_{\mathbf {MAG}}.
\end{aligned}$$

CHAOS-7 predictions are finally rotated from NEC to the MAG frame by applying the rotation quaternion of Eq. (4), following (Wertz 1978, page 759):

$$\begin{aligned} \mathbf {B}_{\mathbf {model,MAG}}=\mathbf {q}_{\mathbf {NEC2MAG}}^{\mathbf {-1}} \cdot \mathbf {B}_{\mathbf {model,NEC}} \cdot \mathbf {q}_{\mathbf {NEC2MAG}}. \end{aligned}$$

For the rotation of calibrated and characterised MAG data, Eq. (6) has to be applied in inverse order to \(\mathbf {B}_{\mathbf {MAG}}\).

Pre-processing

The three identical fluxgate magnetometers on the GOCE satellite are mounted side-by-side, perfectly aligned, with a distance of 80 mm. For that reason one would expect them to give the same results at the same times. However, the residuals to CHAOS-7 of the individual components of the different magnetometers show some large steps. We found no correlation with the activity of GOCE instruments or with major events. We had to correct those events by hand before applying the calibration, and call this step block correction in the following. For each component of MAG2 and MAG3 we subtracted the corresponding component of MAG1. We identified the timestamp of the beginning of each block from a higher-resolution version of Fig. 4. The first block has been set as reference for all components of MAG2 and MAG3. For all further blocks, the offsets of MAG2 and MAG3 have been corrected to reach the same mean value as in the first block. Finally, the mean value of all blocks has been removed from MAG2 and MAG3. A table containing the timestamps of each event and the corresponding correction values is given as supplementary material in Additional file 1. After the block correction has been applied, the residuals between the magnetometers look similar, as can be seen in Fig. 4. Since three calibrated magnetometers mounted very close to each other yield no additional relevant scientific output, we decided to combine the three magnetometers into one single instrument by using the mean value, Eq. (7):

$$\begin{aligned} \mathbf {B}_{\mathbf {MAG}}=\frac{\sum _{i=1}^{3}{\mathbf {B}_{\mathbf {MAGi}}}}{3}. \end{aligned}$$

By combining the three instruments, we reduce the noise level of the input data and fill small gaps in single magnetometer records.

Overview of the block correction for the whole mission. Shown are the differences between magnetometers 2 and 1, and 3 and 1, for the x, y, and z components from top to bottom, without block correction (left) and after the block correction has been applied (right)

Table 1 Input data used for calibration and characterisation, including product name, variable name, unit, and temporal resolution
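To make the frame transformations of Eqs. (3)–(6) concrete, the following minimal sketch builds the ITRF-to-NEC rotation matrix of Eq. (3) and applies it to a field vector; the full chain to the MAG frame composes the ITRF→ICRF, ICRF→SC and SC→MAG rotations analogously, as in Eq. (4). Values and names are illustrative:

```python
# Minimal sketch of the ITRF -> NEC rotation of Eq. (3);
# latitude/longitude in radians.
import numpy as np

def r_itrf2nec(lat, lon):
    """Direction cosine matrix rotating an ITRF vector into the local
    North-East-Centre (NEC) frame at geocentric latitude/longitude."""
    sphi, cphi = np.sin(lat), np.cos(lat)
    slam, clam = np.sin(lon), np.cos(lon)
    return np.array([
        [-sphi * clam, -sphi * slam,  cphi],   # North
        [-slam,         clam,         0.0 ],   # East
        [-cphi * clam, -cphi * slam, -sphi],   # Centre (towards Earth)
    ])

# Example: rotate a model field vector from NEC into ITRF. The matrix is
# orthogonal, so its transpose is the inverse rotation.
B_nec = np.array([20000.0, 1000.0, 40000.0])        # nT, illustrative
R = r_itrf2nec(np.radians(45.0), np.radians(10.0))  # 45 deg N, 10 deg E
B_itrf = R.T @ B_nec
```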
Calibration and characterisation will be applied on a subset only, to avoid that natural variations are interpreted as disturbances, but remain part of the data after the calibration procedure. Therefore, we use only geomagnetic quiet times when natural variations should not be measured by the satellite, thus allowing for a post-launch calibration of the satellite system itself. Concretely, we use only data with \(|{\rm QDLAT}|<{50}^{\circ }\), \(Kp \le 3\), \(|{\rm Dst}|\le 30\,\mathrm {nT}\) and \(B\_{\rm Flag}=0\). \(B\_{\rm Flag}\) is a quality flag that gives non-zero values if the data gap for interpolation of input data is larger than 16 s. Since the resolution of the magnetometer data is only 16 s, we decided to use monthly data for the estimation of calibration and characterisation parameters. That avoids rapid fluctuation in estimated parameters, but still gives a long-term trend of parameter evolution with time to cope with system changes and deterioration. Parameters for vector calibration The previously combined magnetometer data \(B_{\rm MAG}\) act as the raw magnetic field vector for calibration, in MAG frame further named \(\mathbf {E}= (E_1,E_2,E_3)^T\) in nT. The calibration estimates the nine instrument-intrinsic parameters scale factors \(\mathbf {s} = (s_1,s_2,s_3)^T\), offsets \(\mathbf {b} = (b_1,b_2,b_3)^T\) and misalignment angles of the coil windings \(\mathbf {u} = (u_1,u_2,u_3)^T\). Additionally, misalignment between static reference frames may occur, e.g. due to slight rotation during mounting of instruments. This misalignment is estimated in a vector of Euler (1-2-3) angles \(\mathbf {e} = (e_1,e_2,e_3)^T\), following Wertz (1978, page 764), or in a direction cosine rotation matrix, \(\underline{R_A}\), which includes the three external parameters. Euler (1-2-3) represents three rotations about the first, second and third axis, in this order. The parameters are used to describe $$\begin{aligned} \mathbf {B}_{\mathbf {cal}}=\underline{R_A} \underline{P}^{-1} \underline{S}^{-1} (\mathbf {E} - \mathbf {b})= \underline{A} (\mathbf {E} - \mathbf {b})=\underline{A} \mathbf {E} - \mathbf {b}_{\mathbf {A}}, \end{aligned}$$ where \(\underline{R_A}\) is the direction cosine matrix representation of the Euler (1-2-3) angles \(\mathbf {e}\), \(\underline{P}^{-1}\) is the misalignment angle lower triangular matrix $$\begin{aligned} \underline{P}^{-1}&= \begin{pmatrix} 1 &{} 0 &{} 0 \\ \frac{sin(u_1)}{cos(u_1)} &{} \frac{1}{cos(u_1)} &{} 0 \\ -\frac{sin(u_1)sin(u_3)+cos(u_1)sin(u_2)}{w cos(u_1)} &{} -\frac{sin(u_3)}{w cos(u_1)} &{} 1/w \end{pmatrix}\nonumber \\ \text {with: }w&=\sqrt{1-sin^2(u_2)-sin^2(u_3)}, \end{aligned}$$ and \(\underline{S}^{-1}\) is the diagonal matrix including the inverse of the scale factor $$\begin{aligned} \underline{S}^{-1}= \begin{pmatrix} 1/s_{1} &{} 0 &{} 0 \\ 0 &{} 1/s_{2} &{} 0 \\ 0 &{} 0 &{} 1/s_{3} \end{pmatrix}. \end{aligned}$$ Equation (8) is valid for fluxgate magnetometers treated as linear instruments. Brauer et al. (1997) showed that Eq. 
(8) needs to be extended for non-linear effects of 2nd (\(\underline{\xi }\)) and 3rd (\(\underline{\nu }\)) order by 2nd (\(\mathbf {E}_{\xi }\)) and 3rd (\(\mathbf {E}_{{\nu }}\)) order data:

$$\begin{aligned} \mathbf {B}_{\mathbf {cal}}= \underline{A} \mathbf {E} - \mathbf {b}_{\mathbf {A}} + \underline{\xi } \mathbf {E}_{{\xi }} + \underline{\nu } \mathbf {E}_{\nu }, \end{aligned}$$

with non-linearity parameters of 2nd order:

$$\begin{aligned} \underline{\xi } =\begin{pmatrix} \xi ^1_{11} &{} \xi ^1_{22} &{} \xi ^1_{33} &{} \xi ^1_{12} &{} \xi ^1_{13} &{} \xi ^1_{23} \\ \xi ^2_{11} &{} \xi ^2_{22} &{} \xi ^2_{33} &{} \xi ^2_{12} &{} \xi ^2_{13} &{} \xi ^2_{23} \\ \xi ^3_{11} &{} \xi ^3_{22} &{} \xi ^3_{33} &{} \xi ^3_{12} &{} \xi ^3_{13} &{} \xi ^3_{23} \\ \end{pmatrix}, \end{aligned}$$

non-linearity parameters of 3rd order:

$$\begin{aligned} \underline{\nu }=\begin{pmatrix} \nu ^1_{111} &{} \nu ^1_{222} &{} \nu ^1_{333} &{} \nu ^1_{112} &{} \nu ^1_{113} &{} \nu ^1_{223} &{} \nu ^1_{122} &{} \nu ^1_{133} &{} \nu ^1_{233} &{} \nu ^1_{123} \\ \nu ^2_{111} &{} \nu ^2_{222} &{} \nu ^2_{333} &{} \nu ^2_{112} &{} \nu ^2_{113} &{} \nu ^2_{223} &{} \nu ^2_{122} &{} \nu ^2_{133} &{} \nu ^2_{233} &{} \nu ^2_{123} \\ \nu ^3_{111} &{} \nu ^3_{222} &{} \nu ^3_{333} &{} \nu ^3_{112} &{} \nu ^3_{113} &{} \nu ^3_{223} &{} \nu ^3_{122} &{} \nu ^3_{133} &{} \nu ^3_{233} &{} \nu ^3_{123} \\ \end{pmatrix}, \end{aligned}$$

and modulated data vectors of 2nd and 3rd order:

$$\begin{aligned} \mathbf {E}_{\xi }&=(E_1^2,\,E_2^2,\,E_3^2,\,E_1E_2,\,E_1E_3,\,E_2E_3)^T,\nonumber \\ \mathbf {E}_{\nu }&=(E_1^3,\,E_2^3,\,E_3^3,\,E_1^2E_2,\,E_1^2E_3,\,E_2^2E_3,\,E_1E_2^2,\,E_1E_3^2,\,E_2E_3^2,\,E_1E_2E_3)^T. \end{aligned}$$

Parameters for characterisation

Characterisation consists of the identification and, if possible, correction of artificial magnetic perturbations contained in the raw magnetic data. By simple correlation analysis, combined with knowledge from former satellite missions like CHAMP, Swarm and GRACE-FO, we identified the magnetorquer currents, \(\mathbf {A}_{\mathbf {MTQ}}\), the magnetometer heater temperatures, \(\mathbf {T}_{\mathbf {MAG}}\), the battery currents, \(\mathbf {A}_{\mathbf {BAT}}\), the solar array panel currents, \(\mathbf {A}_{\mathbf {SA}}\), and a set of housekeeping currents and temperatures, \(\mathbf {A}_{\mathbf {HK}}\), as affecting the GOCE magnetometer data. We also consider an effect from the correlation between the magnetometer temperature and the magnetic field residuals, \(\mathbf {E}_{\mathbf {st}}=\mathbf {E}\cdot (\mathbf {T}_{\mathbf {MAG}}-T_0)\), where \(T_0\) is the monthly median of \(\mathbf {T}_{\mathbf {MAG}}\). The characterisation equation is a combination of all identified disturbances:

$$\begin{aligned} \mathbf {B}_{\mathbf {char}} =\underline{M} \cdot \mathbf {A}_{\mathbf {MTQ}}+\underline{bat} \cdot \mathbf {A}_{\mathbf {BAT}}+\underline{sa} \cdot \mathbf {A}_{\mathbf {SA}}+\underline{hk} \cdot \mathbf {A}_{\mathbf {HK}}+\underline{bt} \cdot (\mathbf {T}_{\mathbf {MAG}}-T_0)+\underline{st} \cdot \mathbf {E}_{\mathbf {st}}, \end{aligned}$$

with magnetorquer current scale factor (\(\underline{M}\)), battery current scale factor (\(\underline{bat}\)), solar array current scale factor (\(\underline{sa}\)), housekeeping data scale factor (\(\underline{hk}\)), temperature dependency of the offsets b (\(\underline{bt}\)) and temperature dependency of the scale factors s (\(\underline{st}\)). Input data used in Eqs. (11) and (15) are listed in Tables 1 and 2, respectively.
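As an illustration of how the linear calibration model of Eqs. (8)–(10) is applied once the parameters are known, consider the following minimal sketch. The parameter values and the Euler (1-2-3) rotation convention are assumptions of this sketch (the paper follows Wertz 1978); it implements only the linear part of the model, without the non-linear terms of Eq. (11):

```python
# Minimal sketch of Eqs. (8)-(10): B_cal = R_A * P^-1 * S^-1 * (E - b).
# Parameter values are placeholders, not the estimated GOCE parameters.
import numpy as np

def euler123_matrix(e1, e2, e3):
    """Rotations about axes 1, 2, 3 (in this order), angles in radians;
    one common passive-rotation convention is assumed here."""
    c, s = np.cos, np.sin
    r1 = np.array([[1, 0, 0], [0, c(e1), s(e1)], [0, -s(e1), c(e1)]])
    r2 = np.array([[c(e2), 0, -s(e2)], [0, 1, 0], [s(e2), 0, c(e2)]])
    r3 = np.array([[c(e3), s(e3), 0], [-s(e3), c(e3), 0], [0, 0, 1]])
    return r3 @ r2 @ r1

def calibrate(E, b, s, u, e):
    """Apply offsets b, scale factors s, misalignments u and Euler
    angles e to a raw field vector E (all per Eqs. (8)-(10))."""
    u1, u2, u3 = u
    w = np.sqrt(1 - np.sin(u2)**2 - np.sin(u3)**2)
    P_inv = np.array([
        [1.0, 0.0, 0.0],
        [np.tan(u1), 1 / np.cos(u1), 0.0],
        [-(np.sin(u1)*np.sin(u3) + np.cos(u1)*np.sin(u2)) / (w*np.cos(u1)),
         -np.sin(u3) / (w*np.cos(u1)), 1.0 / w],
    ])
    S_inv = np.diag(1.0 / np.asarray(s, dtype=float))
    R_A = euler123_matrix(*e)
    return R_A @ P_inv @ S_inv @ (np.asarray(E, dtype=float) - np.asarray(b))
```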
All input parameters and calibrated magnetic observation products are provided in CDF format, in the same format as for GRACE-FO (Michaelis et al. 2021).

Table 2 Estimated calibration and characterisation parameters, including units and dimensionality

An ordinary least squares linear regression has been applied to estimate the parameters \(\mathbf {m}_{\mathbf {cal}}\) and \(\mathbf {m}_{\mathbf {char}}\) that minimise S:

$$\begin{aligned} S&=|(\mathbf {B}_{\mathbf {cal}}(\mathbf {m}_{\mathbf {cal}},\mathbf {E})+\mathbf {B}_{\mathbf {char}}(\mathbf {m}_{\mathbf {char}},\mathbf {d}_{\mathbf {char}})) -\mathbf {B}_{\mathbf {model,MAG}}|^2, \end{aligned}$$

with the calibrated magnetic field vector \(\mathbf {B}_{\mathbf {cal}}\) using the instrument-intrinsic calibration parameters \(\mathbf {m}_{\mathbf {cal}}=(\mathbf {b},\mathbf {s},\mathbf {u},\mathbf {e},\underline{\xi },\underline{\nu })\) that have been applied to the raw magnetic field vector \(\mathbf {E}\), as given in Eq. (11). For the estimation of the characterised magnetic field vector \(\mathbf {B}_{\mathbf {char}}\), the parameters \(\mathbf {m}_{\mathbf {char}}=(\underline{M},\underline{bat},\underline{sa},\underline{bt},\underline{st},\underline{hk})\) describing the impact of the housekeeping data have been applied to the housekeeping data \(\mathbf {d}_{\mathbf {char}}=(\mathbf {A}_{\mathbf {MTQ}},\mathbf {A}_{\mathbf {BAT}},\mathbf {A}_{\mathbf {SA}},\mathbf {A}_{\mathbf {HK}},\mathbf {T}_{\mathbf {MAG}},\mathbf {E}_{\mathbf {st}})\), as given in Eq. (15). \(\mathbf {B}_{\mathbf {model, MAG}}\) denotes the CHAOS-7 magnetic field predictions for the core, crustal and large-scale magnetospheric field, rotated into the instrument MAG frame as described by Eq. (6). From previous satellite missions like GRACE-FO it is known that additional time shifts between instrument measurements may occur. We repeated the calibration and characterisation procedure for a range of time shifts within an interval of ±2 s, in steps of 0.1 s, on the quietest data set, which was in December 2009. The best calibration results (minimum of the absolute values of the residual to CHAOS-7) have been obtained with a shift of 0.4 s for the MAG data.

Results and discussion

In this section, we discuss the final GOCE data set and some potential applications. We assess the residuals to CHAOS-7 predictions of all vector components and compare the lithospheric field measured from the GOCE data to the lithospheric field contribution included in CHAOS-7. Moreover, we calculate auroral field-aligned currents (FAC) and compare magnetospheric ring currents measured by GOCE with ground-based estimations like the Geomagnetic Equatorial Disturbance Storm Time Index (Dst).

Assessment of the final data set

To assess the temporal robustness of the calibration, time series of the calibration parameters are shown in Fig. 5 for offsets, scale factors, non-orthogonalities and Euler angles. Red lines show the average mean absolute deviation of the parameters. The parameters show no long-term trends over the mission duration. Comparison with previously published studies gives results of similar order for the mean absolute deviation of the parameter time series, e.g. for CryoSat-2 (Olsen et al. 2020). However, in detail GOCE shows much higher variations in each of the parameters. That might be caused by the higher air drag at GOCE's low altitude, which is compensated for by near-continuous operation of the drag-free attitude and orbit control system.
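As a minimal illustration of the least squares estimation behind these parameter time series (Eq. (16)), the sketch below fits a heavily simplified model that is linear in the parameters: per axis, an offset, couplings to all three raw field components (scale and misalignment), and linear couplings to a block of housekeeping channels. This is a sketch only; the actual processing estimates the full model of Eqs. (11) and (15), and array names are illustrative:

```python
# Minimal sketch of the monthly ordinary least squares fit of Eq. (16),
# reduced to a linear-in-parameters model. E: (n, 3) raw magnetometer
# data, hk: (n, k) housekeeping channels, B_model: (n, 3) CHAOS-7
# predictions rotated into the MAG frame. Names are illustrative.
import numpy as np

def estimate_parameters(E, hk, B_model):
    n = len(E)
    # design matrix: constant column (offset), raw field components
    # (scale/misalignment couplings) and housekeeping channels
    G = np.column_stack([np.ones(n), E, hk])
    params = []
    for axis in range(3):
        m, *_ = np.linalg.lstsq(G, B_model[:, axis], rcond=None)
        params.append(m)
    return np.array(params)  # shape (3, 4 + k)
```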
Residuals for the calibrated magnetic field vector have been calculated with respect to CHAOS-7 predictions for geomagnetically quiet conditions and low latitudes, i.e. \(|{\rm QDLAT}|<{50}^{\circ }\), \(Kp \le 3\), and \(|{\rm Dst}|\le 30\,\mathrm {nT}\). Table 3 shows the mean and standard deviation of these residuals for the whole mission period, and for the quietest day in the quietest month. The mean values are close to zero, which means that the calibration removed the offsets correctly. For very quiet conditions, Kp < 1, the standard deviation can be reduced to values below 8 nT. The calibration has been applied to monthly data. Results for the standard deviation of the residuals with respect to the CHAOS-7 model are given for each month in Table 4, for calibrated magnetometer data in the MAG and NEC frames as well as for raw data of magnetometer MAG\(_{1}\) as a representative example. The last three columns give the percentage of data used for the specific month, and the mean Kp and Dst values within the data selection used for the calibration. Standard deviations vary strongly from month to month. For the majority of months, the standard deviation is reduced to the level of very quiet conditions. However, some months deviate strongly from the quiet days. For some of those extreme months, a correlation with missing data or higher geomagnetic activity seems to exist. However, we cannot state a general correlation of high residuals with high activity. In general, the values for mean and standard deviation have been significantly reduced by the calibration, to values between 7 and 13 nT, and are similar to the residuals for GRACE-FO given by Stolle et al. (2021) and for CryoSat-2 by Olsen et al. (2020), which varied between 3 nT and 10 nT (GRACE-FO) and 4 nT and 15 nT (CryoSat-2).

Time series of the instrument-intrinsic calibration parameters: offsets (top-left), scale factors (top-right), non-orthogonalities (bottom-left) and Euler angles (bottom-right), with respect to their median values. Red lines indicate the average mean absolute deviation

Table 3 Mean and standard deviation of residuals to CHAOS-7 for GOCE, for geomagnetically quiet times and for a single quiet day, 2009-12-01

Table 4 Standard deviation of residuals to CHAOS-7 for GOCE, for all months in the mission period

The estimation of the impact of the non-intrinsic instrument parameters is shown in Table 5. The impact has been estimated as the residual between using all estimated parameters and using all but one parameter, with that one parameter set to a neutral value. As an example, to estimate the impact \({\varvec{\Delta }} \mathbf {B}_{\mathbf {SA}}\), first all estimated parameters are applied to Eq. (15) to compute \(\mathbf {B}_{\mathbf {char}}\). Then, the same approach is repeated with \(\underline{sa}\) set to zero, yielding \(\mathbf {B}_{\mathbf {char,zero sa}}\). The difference between \(\mathbf {B}_{\mathbf {char}}\) and \(\mathbf {B}_{\mathbf {char,zero sa}}\) is the impact of the parameter \(\underline{sa}\), called \({\varvec{\Delta }} \mathbf {B}_{\mathbf {SA}}\). The results indicate that \(\underline{hk}\) and \(\underline{sa}\) have the largest impact. On other missions, e.g. GRACE-FO (Stolle et al. 2021), an even larger standard deviation of the impact from the solar panels, compared to the other parameters, was found. The influence might be smaller on GOCE due to the design and orbit characteristics of the GOCE satellite.
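The impact estimation just described amounts to differencing two evaluations of Eq. (15), once with all parameters and once with one parameter group zeroed. A minimal sketch (array and key names are illustrative):

```python
# Minimal sketch of the parameter-impact estimation described above.
# params maps each characterisation group (e.g. "sa") to its (3, k)
# coupling matrix; data maps the same key to the (n, k) housekeeping
# block it multiplies in Eq. (15). Names are illustrative.
import numpy as np

def characterise(params, data):
    """Sum of all characterisation terms of Eq. (15), shape (n, 3)."""
    return sum(data[key] @ params[key].T for key in params)

def impact_of(params, data, key):
    """Magnetic impact of one parameter group: Eq. (15) with all
    parameters minus Eq. (15) with this group set to zero."""
    params_zeroed = dict(params)
    params_zeroed[key] = np.zeros_like(params[key])
    return characterise(params, data) - characterise(params_zeroed, data)

# e.g. delta_B_sa = impact_of(params, data, "sa"); the per-component
# standard deviation of delta_B_sa is the kind of value listed in Table 5.
```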
The solar arrays are mounted such that they are always on the bright side in the GOCE dusk–dawn orbit, so that the currents induced by the solar arrays are more or less constant and do not vary much.

Table 5 Magnetic impact of calibration and characterisation, respectively, for each parameter given in Eq. (15) and the non-linear parameters in Eq. (11)

Top panel of a: magnetic residuals to CHAOS-7 (core, crustal and large-scale magnetospheric field). Middle panel of a: magnetic residuals to CHAOS-7 (core and large-scale magnetospheric field). Bottom panel of a: crustal field from the CHAOS-7 model. The columns show the three NEC components North, East and Centre. b shows the distribution of geomagnetic and solar activity indices and magnetic local time for the data selection used in a

Figure 6a provides global maps of the residuals between the processed data and CHAOS-7 predictions for December 2009, with the mean of the residuals summarised in bins of \({5}^{\circ }\) geocentric latitude by \({5}^{\circ }\) geocentric longitude. The three columns represent the \(B_N\), \(B_E\) and \(B_C\) components of the NEC frame, respectively. The first row displays residuals to the core, crustal and large-scale magnetospheric field predictions of CHAOS-7. The second row shows residuals to only the core and large-scale magnetospheric field predictions, i.e. in particular the lithospheric field is now included in the data. The third row shows the crustal field prediction from CHAOS-7. The grey lines indicate \({0}^{\circ }\) and \(\pm {70}^{\circ }\) magnetic latitude (QDLAT). Figure 6b gives the distribution of geomagnetic and solar indices and of magnetic local time for the data set of this month, which was geomagnetically quiet. The auroral electrojet and field-aligned currents at high latitudes produce the largest deviations, as they are measured by the satellite but not included in the CHAOS-7 model. Since the data are collected on a dawn–dusk orbit, no significant low- and mid-latitude ionospheric disturbances are expected, nor significant effects from magnetospheric currents during quiet times. Still, there are systematic deviations that follow the geomagnetic equator in all components; these are already known from GRACE-FO, which carries the same type of magnetometers. However, besides the prominent disturbance at the geomagnetic equator, there are large areas with absolute residuals below 4 nT, as indicated by greyish colours. The comparison of the second and third rows of Fig. 6a also shows that the calibrated GOCE data can reproduce the large-scale crustal anomalies quite well. For example, the Bangui and Kursk anomalies, in central Africa and Russia, respectively, are clearly seen. Still, a systematic artificial field of low amplitude along the geomagnetic equator is visible.

Large-scale field-aligned currents

Field-aligned currents (FAC) are not part of the CHAOS-7 model and should remain in the measured data after calibration and characterisation. Since platform magnetometers have a higher noise level than science magnetometers, we expect only large-scale auroral field-aligned currents to be visible. Figure 7 shows results for FACs derived from GOCE MAG for the whole period of the mission on the Northern (top) and Southern (bottom) hemisphere, selected for the northward (left) and southward (right) z-component of the interplanetary magnetic field (IMF). FACs have been put into bins of \({2}^{\circ }\) using the median as the aggregation function.
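The binning just described is a plain median aggregation on a QDLAT × MLT grid. A minimal sketch with pandas (column names and the MLT bin width are illustrative):

```python
# Minimal sketch of the QDLAT/MLT median binning described above.
# df is assumed to hold one row per FAC estimate with columns
# 'qdlat', 'mlt' and 'fac'; names are illustrative.
import pandas as pd

def bin_fac(df, lat_step=2.0, mlt_step=0.5):
    """Median FAC on a regular QDLAT x MLT grid."""
    lat_bin = (df["qdlat"] // lat_step) * lat_step
    mlt_bin = (df["mlt"] // mlt_step) * mlt_step
    return (df.assign(lat_bin=lat_bin, mlt_bin=mlt_bin)
              .groupby(["lat_bin", "mlt_bin"])["fac"]
              .median()
              .unstack())  # 2-D grid: rows QDLAT bins, columns MLT bins
```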
Region 1 and 2 currents are prominently visible, similar to results from the PlatMag feasibility study for Swarm and GOCE (https://www.esa.int/Enabling_Support/Preparing_for_the_Future/Discovery_and_Preparation/ESA_s_unexpected_fleet_of_space_weather_monitors) and in Lühr et al. (2016).

Quasi-dipole latitude (QDLAT) versus magnetic local time (MLT) large-scale field-aligned currents for the whole mission duration. The left panel shows the northern hemisphere and the right panel the southern hemisphere

The magnetic effect of the magnetospheric ring current during the March 17, 2013 storm

A geomagnetic storm with values of Dst < -130 nT occurred on March 17, 2013 (Fig. 8). The circles represent medians of the residuals of the horizontal component of the magnetic field (\(\sqrt{B_N^2+B_E^2}\)) within ±\({10}^{\circ }\) geomagnetic latitude, projected to \({0}^{\circ }\) geomagnetic latitude, for each low-latitude orbital segment, for ascending (blue) and descending (orange) orbits. The residuals are calculated with respect to the CHAOS-7 core and crustal field predictions. The large-scale magnetospheric field was not subtracted, and signatures from magnetospheric currents (including their induced counterparts in the Earth) remain included in the data. The ascending and descending orbit data generally agree well with each other and with the Dst index, despite the different retrieval techniques for magnetospheric signatures in ground and satellite data. It is known from earlier studies that ground-derived ring current signatures show systematic differences to those derived in space, and that in particular the Dst index does not have the correct magnetospheric baseline (Maus and Lühr 2005; Olsen et al. 2005; Lühr et al. 2017; Pick et al. 2019). The ring current signal obtained from LEO satellites is generally lower than that from the ground, which is also reflected in an offset between the Dst index and the satellite-derived residuals.
With standard deviations of residuals between 7 nT and 13 nT for quiet times, our GOCE results are of similar order to those of CryoSat-2 and GRACE-FO calibrated magnetometer data (Olsen et al. 2020; Stolle et al. 2021). For a mission not dedicated to magnetic field research and not carrying scientific magnetometers, residuals in this order of magnitude are acceptable. The calibrated GOCE data are freely available and may be used for studying different magnetic field sources and the near-Earth space environment. The data generated and analysed in this paper are available at (Michaelis and Korte 2022) ftp://isdcftp.gfz-potsdam.de/platmag/MAGNETIC_FIELD/GOCE/Analytical/v0205/. CHAMP: CHAllenging Minisatellite Payload CHAmp Ørsted SAC-C magnetic field model CDF: Common data format DFACS: Drag-free attitude and orbit control system Dst: Geomagnetic Equatorial Disturbance Storm Time Index ESA: FAC: Field-aligned currents GFZ: Helmholtz Centre Potsdam, German Research Centre for Geosciences GOCE: Gravity field and steady-state Ocean Circulation Explorer GRACE-FO: Gravity Recovery and Climate Experiment Follow-On GRF: Gradiometer Reference Frame HK: ICRF: International Celestial Reference Frame International Geomagnetic Reference Field IGRF-13 ISDC: Information System and Data Center at GFZ ITRF: International terrestrial reference frame L1b: GOCE level 1b data MAG: MLAT: Modified apex latitude MLT: Magnetic local time MTQ: Magnetorquer NEC: North, East, Centre coordinate system PlatMag: Platform magnetometer QDLAT: Quasi-dipole latitude SC: Spacecraft physical reference frame Star cameras (Star TRackers) Billingsley (2020) Billingsley TFM100SH Magnetometer. https://magnetometer.com/products/fluxgate-magnetometers/tfm100s,https://magnetometer.com/wp-content/uploads/magnetometer-comaparison.pdf, https://magnetometer.com/wp-content/uploads/TFM100S-Spec-Sheet-February-2008.pdf Brauer P, Merayo JMG, Nielsen OV, Primdahl F, Petersen JR (1997) Transverse field effect in fluxgate sensors. Sens Actuators A Phys 59:70–74. https://doi.org/10.1016/s0924-4247(97)01416-7 (ISSN 0924-4247) CHAMP: overview of final ME products and format description (2019) CHAMP Satellite Mission. GFZ German Research Centre for Geosciences, Scientific Technical Report STR - Data ; 19/10), (2019). https://doi.org/10.2312/GFZ.b103-19104 Emmert JT, Richmond AD, Drob DP (2010) A computationally compact representation of Magnetic Apex and Quasi-Dipole coordinates with smooth base vectors. J Geophys Res 115:A08322. https://doi.org/10.1029/2010JA015326 European Space Agency (2009) GOCE Level 1 Data Collection. Version 1. https://doi.org/10.5270/esa-8sfucze Finlay C, Kloss C, Olsen NMH, Tøffner-Clausen L, Grayver A, Kuvshinov A (2020) The CHAOS-7 geomagnetic field model and observed changes in the South Atlantic Anomaly. Earth Planets Space 72:156. https://doi.org/10.1186/s40623-020-01252-9 Floberghagen R, Drinkwater M Haagmans R, Kern M (2008) GOCE's Measurements of the Gravity Field and Beyond. ESA Bulletin 133. https://www.esa.int/esapub/bulletin/bulletin133/bul133d_floberghagen.pdf Floberghagen R, Fehringer M, Lamarre D, Muzi d, Frommknecht b, Steiger C, Piñeiro J, and da Costa A (2011) Mission design, operation and exploitation of the Gravity field and steady-state Ocean Circulation Explorer mission. Journal of Geodesy 85(11):749–75. https://doi.org/10.1007/s00190-011-0498-3 Frommknecht B, Lamarre D, Meloni M, Bigazzi A, Floberghagen R (2011) GOCE level 1b data processing. J Geod 85:759–775. 
https://doi.org/10.1007/s00190-011-0497-4 GOCE Flight Control Team (2014) GOCE End-of-mission operations report. Technical report GO-RP-ESC-FS-6268, Issue 1, Revision 0. https://earth.esa.int/documents/10174/85857/2014-GOCE-Flight-Control-Team.pdf IAU SOFA Board (2019) IAU SOFA Software Collection. http://www.iausofa.org IERS (2020) International Earth rotation and Reference systems Service (IERS) Earth Orientation Center, Bulletin B. ftp://hpiers.obspm.fr/eop-pc/bul/bulb_new Kolkmeier A, Präger G, Möller P, Strandberg T, Kempkens K, Stark J, Gessler L, Hienerwadel K (2008) GOCE-DFAC Interface Control Document, Tech. Rep. GO-IC-ASG-0005_12, EADS Astrium Love JJ, Gannon JL (2009) Revised D\(_{st}\) and the epicycles of magnetic disturbance: 1958–2007. Ann Geophys 27(8):3101–3131. https://doi.org/10.5194/angeo-27-3101-2009 Lühr H, Kervalishvili G, Rauberg J, Michaelis I, Stolle C (2016) Advanced ionospheric current estimates by means of the Swarm constellation mission: A selection of representative results. In: Proceedings, (Special publications / European Space Agency; 740), ESA living planet symposium, Prague, Czech Republic 2016. https://doi.org/10.48440/2.3.2022.001 Lühr H, Xiong C, Olsen N, Le G (2017) Near-earth magnetic field effects of large-scale magnetospheric currents. Space Sci Rev 206:521–545. https://doi.org/10.1007/s11214-016-0267-y Matzka J, Bronkalla O, Tornow K, Elger K, Stolle C (2021) Geomagnetic Kp index. V. 1.0. GFZ Data Services. https://doi.org/10.5880/Kp.0001 Maus S, Lühr H (2005) Signature of the quiet-time magnetospheric magnetic field and its electromagnetic induction in the rotating. Earth Geophys J Int 162:755–763. https://doi.org/10.1111/j.1365-246X.2005.02691.x Michaelis I, Stolle C, Rother M (2021) GRACE-FO calibrated and characterized magnetometer data. GFZ Data Services. https://doi.org/10.5880/GFZ.2.3.2021.002 Michaelis I, Korte M (2022) GOCE calibrated and characterised magnetometer data. GFZ Data Services. https://doi.org/10.5880/GFZ.2.3.2022.001 Newell PT, Gjerloev JW (2012) SuperMAG-based partial ring current indices. J Geophys Res 117:A05215. https://doi.org/10.1029/2012JA017586 Nose M, Iyemori T, Sugiura M, Kamei T (2015). Geomagnetic Dst index. https://doi.org/10.17593/14515-74000 Olsen N, Sabaka T, Lowes F (2005) New parameterization of external and induced fields in geomagnetic field modeling, and a candidate model for IGRF 2005. Earth Planets Space 57(12):1141–1149. https://doi.org/10.1186/BF03351897 Olsen N, Friis-Christensen E, Floberghagen R, Alken P, Beggan CD, Chulliat A, Doornbos E, Da Encarnação JT, Hamilton B, Hulot G, Van Den Ijssel J, Kuvshinov A, Lesur V, Lühr H, Macmillan S, Maus S, Noja M, Olsen PEH, Park J, Plank G, Püthe C, Rauberg J, Ritter P, Rother M, Sabaka TJ, Schachtschneider R, Sirol O, Stolle C, Thébault E, Thomson AWP, Tøffner-Clausen L, Velímský J, Vigneron P, Visser PN (2013) The Swarm Satellite Constellation Application and Research Facility (SCARF) and Swarm data products. Earth Planets Space 65:1189–1200. https://doi.org/10.5047/eps.2013.07.001 Olsen N (2021) Magnetometer Data of the GRACE Satellite Duo. Earth Planets Space 73:62. https://doi.org/10.1186/s40623-021-01373-9 Olsen N, Albini G, Bouffard J, Parrinello T, Tøffner-Clausen L (2020) Magnetic observations from CryoSat-2: calibration and processing of satellite platform magnetometer data. Earth Planets Space 72:48. 
Pick L, Korte M, Thomas Y, Krivova N, Wu C-J (2019) Evolution of large-scale magnetic fields from near-Earth space during the last 11 solar cycles. J Geophys Res Space Phys 124:2527–2540. https://doi.org/10.1029/2018JA026185
Richmond A (1995) Ionospheric electrodynamics using magnetic apex coordinates. J Geomagn Geoelectr 47:191–212. https://doi.org/10.5636/jgg.47.191
Seeber G (ed) (2003) Satellite geodesy, 2nd, completely revised and extended edition. Walter de Gruyter, Berlin, New York. ISBN 3-11-017549-5
Stolle C, Michaelis I, Xiong C, Rother M, Usbeck T, Yamazaki Y, Rauberg J, Styp-Rekowski K (2021) Observing Earth's magnetic environment with the GRACE-FO mission. Earth Planets Space 73:51. https://doi.org/10.1186/s40623-021-01364-w
Swarm DISC (2022) Swarm DISC (Swarm Data, Innovation, and Science Cluster). https://earth.esa.int/eogateway/activities/swarm-disc
Wertz JR (ed) (1978) Spacecraft attitude determination and control, vol 73. Springer, Netherlands. ISBN 978-90-277-1204-2. https://doi.org/10.1007/978-94-009-9907-7

Acknowledgements: The European Space Agency (ESA) is gratefully acknowledged for providing the GOCE data. Special thanks to Björn Frommknecht (ESA) for pre-selection of housekeeping data. Kp is provided by GFZ, the Dst and AE indices by the Geomagnetic World Data Centre Kyoto, and F10.7 by the Dominion Radio Astrophysical Observatory and Natural Resources Canada. We thank Martin Rother (GFZ) for fruitful discussions.

Funding: This study has been partly supported by Swarm DISC activities funded by ESA under contract no. 4000109587/13/I-NB. KSR is supported through HEIBRIDS - Helmholtz Einstein International Berlin Research School in Data Science under contract no. HIDSS-0001.

Author information:
Helmholtz Centre Potsdam, GFZ German Research Centre for Geosciences, Telegrafenberg, 14473 Potsdam, Germany: I. Michaelis, K. Styp-Rekowski, J. Rauberg & M. Korte
Technical University of Berlin, Electrical Engineering and Computer Science, Ernst-Reuter-Platz 7, 10587 Berlin, Germany: K. Styp-Rekowski
Leibniz-Institut für Atmosphärenphysik e.V. an der Universität Rostock, Schloßstraße 6, 18225 Kühlungsborn, Germany: C. Stolle

Authors' contributions: IM and CS defined the study. IM pre-processed and calibrated the data. JR derived FACs. CS, MK, IM and KSR analysed and interpreted the results. IM wrote the manuscript. All authors read and approved the final manuscript.

Correspondence to I. Michaelis.

Supplementary information: Pre-processed block correction of magnetometers.

Citation: Michaelis, I., Styp-Rekowski, K., Rauberg, J. et al. Geomagnetic data from the GOCE satellite mission. Earth Planets Space 74, 135 (2022). https://doi.org/10.1186/s40623-022-01691-6
Accepted: 13 August 2022

Keywords: Earth's magnetic field; Geomagnetism; Ionospheric currents; Magnetospheric ring current; Satellite-based magnetometers; Platform magnetometers; DynamicEarth: Earth's Interior, Surface, Ocean, Atmosphere, and Near Space Interactions
N2O3 hybridization

Linus Pauling introduced the hybridization theory of molecules. sp hybridization is also called diagonal hybridization. In sp2 hybridization, one s and two p orbitals of the valence shell take part in hybridization to give three new sp2 hybrid orbitals. Hybridized orbitals make sigma bonds; unhybridized p orbitals make pi bonds.

Note that the N2O hybridization is sp for both nitrogen atoms. Both nitrogen atoms in the N2O3 molecule are sp2 hybridized, and the molecule is therefore planar. Since N is less electronegative than O, nitrogen occupies the central positions, so in N2O3 the two N atoms are bonded to each other rather than the two O atoms; the Lewis structure shows two different types of bonds, single and double. (A test-prep source, the 2016 DAT Destroyer, likewise gives the sulfur hybridization in the sulfite ion as sp3.) The steric number of an atom is found by adding the number of bonded atoms and the number of lone pairs; a minimal code sketch of this counting rule follows.

Nitrogen oxide (N2O3), CAS 10544-73-7. Similar to its formation in the gastric lumen, N2O3 can independently be generated in the phagosomal lumen through the condensation of HNO2.

The word "hybridization" also carries unrelated senses on this page: in biochemistry and drug development, a hybridization assay is a type of ligand binding assay (LBA); in conservation biology, hybridization between introduced and indigenous species can lead to loss of unique genetic resources and precipitate extinction; and N2O is the name of an Erlang web framework, started as the first such framework to use the WebSocket protocol only, an embeddable message protocol loop library for WebSocket, HTTP, MQTT and TCP servers that provides basic features such as process management, a virtual node ring for request processing, sessions, frame encoding, MQ and caching services.
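A minimal sketch of the counting rule just stated (steric number = sigma bonds + lone pairs), in Python; the per-molecule counts are entered by hand and match the examples discussed on this page.

```python
HYBRIDIZATION = {2: "sp", 3: "sp2", 4: "sp3", 5: "sp3d", 6: "sp3d2"}

def hybridization(sigma_bonds: int, lone_pairs: int) -> str:
    """Steric number = sigma bonds + lone pairs -> hybrid orbital set."""
    return HYBRIDIZATION[sigma_bonds + lone_pairs]

# Examples from this page:
print(hybridization(2, 0))  # central N of N2O  -> sp
print(hybridization(2, 1))  # each N of N2O3    -> sp2 (planar)
print(hybridization(2, 2))  # O of H2O          -> sp3
print(hybridization(3, 1))  # N of NH3          -> sp3
print(hybridization(2, 3))  # Xe of XeF2        -> sp3d
print(hybridization(4, 2))  # Xe of XeF4        -> sp3d2
```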
Linear geometry goes with sp hybridization: molecules with a linear geometry, such as CO2, N2O, BeH2, HCN and C2H2, all exhibit sp hybridization on the central atom. Two half-filled sp hybrid orbitals are formed, which are arranged linearly. When combining two p orbitals and one s orbital (sp2 hybridization), the axes along which the two p orbitals point form a plane; which p orbitals those are depends on how you orient your coordinate system. As one forum contributor puts it, there is little to argue about here: hybridisation is a matter of mathematical convenience, not a matter of physics.

In N2H4, each N has two H atoms bonded to it, along with a single bond to the other NH2 end, and one lone pair. Nitrogen usually shows oxidation numbers from -3 to +5, but the oxidation number of nitrogen in its oxides runs only from +1 to +5, because nitrogen's electronegativity is less than oxygen's. A valence exercise (translated from Russian): state the valences in NO (nitrogen 2, oxygen 2), N2O3 (nitrogen 3, oxygen 2), SO3 (sulfur 6, oxygen 2), Na2S (sodium 1, sulfur 2), SF6 (sulfur 6, fluorine 1) and BBr3 (boron 3, bromine 1).

Review questions on the group 15 oxides:
1. Write the acidic, basic and amphoteric oxides of group 15.
2. Identify the acidic and neutral oxides and arrange them in increasing order of acidic character.
3. Why do nitric oxides behave as free radicals?

A classic electrochemistry problem: a strip of nickel metal is placed in a 1 molar solution of Ni(NO3)2 and a strip of silver metal is placed in a 1 molar solution of AgNO3; an electrochemical cell is created when the two solutions are connected by a salt bridge and the two strips are connected by wires to a voltmeter. (i) Write the balanced equation for the overall reaction occurring in the cell and calculate the cell potential. A related gas-phase question: if all the N2 and O2 are consumed, what volume of N2O3, at the same temperature and pressure, is produced?

Research fragments citing N2O3-type motifs or hybridization: "Surface Structure of Pd3Fe(111) and Effects of Oxygen Adsorption", X. …; stripping experiments; Verani et al. are also using σ-donor ligands to decrease the redox potentials of cobalt complexes, such as the phenolate-rich [CoIII(LN2O3)(H2O)] metallo-surfactant, whose modified fluorine tin oxide (FTO) electrodes prepared by the Langmuir-Blodgett technique yielded water oxidation activity in pH 11 sodium borate solution. In molecular biology, the inhibition of DNA hybridization by small metal nanoparticles has been examined in detail; DNA melting point analysis showed that the oligonucleotides adsorb strongly and nonspecifically on the particle surface. A protocol step: pre-hybridize for 2 hours at 50°C in prehybridization solution, cover with a plastic coverslip, and place in a moist chamber. We used ribonucleic acid (RNA) in-situ hybridization to assess TNF-α and IL-6 expression on tissue microarray slides from 78 epithelial ovarian carcinomas (51 serous, 12 endometrioid, 7 clear cell). Microglia, like other tissue macrophages, participate in repair and resolution processes after infection or injury to restore normal tissue homeostasis. Arachidonic acid, for example, is metabolized primarily by the cyclo-oxygenase or 5-lipoxygenase pathways to produce prostaglandins and leukotrienes, which are important mediators of inflammation.

There are many sorting algorithms with a worst case of complexity O(n2); these algorithms differ in their average and best cases.
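To make the sorting remark concrete, here is the textbook O(n²) example, insertion sort, as a short Python sketch: worst case O(n²) on reverse-sorted input, best case O(n) on already-sorted input.

```python
def insertion_sort(a: list) -> list:
    """In-place insertion sort: O(n^2) worst case, O(n) best case."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```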
sp3 hybridization is the mixing of one s orbital and three p orbitals of the same atom, having nearly the same energy, to form four orbitals that are equal in all respects; sp3 hybrid orbitals are thus produced by hybridization of a single s orbital and three p orbitals. The hybridization associated with the central atom of a molecule in which all the bond angles are 109.5° is sp3. The oxygen atom in the H2O molecule is sp3 hybridized. Ammonia is sp3 hybridized as well: due to the presence of the lone pair, the bond angle in NH3 is less than the normal tetrahedral angle; it has been found to be 107°. Similarly, SbH3 has near-tetrahedral bond angles of about 109.5°. (Sb4O6 is tetraantimony hexoxide.) A follow-up question: the hybridization of the central atom will change when (1) CH3- combines with H+, or (2) H3BO3 combines with OH-. In allene (C3H4), one carbon atom has double bonds with each of its two adjacent carbon centres, so the terminal carbons are sp2 while the central carbon is sp. NO2 involves an sp2 type of hybridization: around the nitrogen there are two sigma bonds and one unpaired electron, giving three electron domains (one for the double bond, one for the single bond, and one for the unpaired electron) and hence a trigonal planar arrangement.

When two atoms bond to form a molecule, the electron(s) in the bond are not necessarily shared equally. Typical oxidation numbers: Li +1, Cl -1, Mg +2, N -3, Cu +1 or +2, O -2. Oxidation states and characters of the oxides of nitrogen:

N2O (nitrogen oxide, nitrous oxide): +1, colourless, neutral
NO (nitrogen monoxide, nitric oxide): +2, colourless, neutral
N2O3 (dinitrogen trioxide): +3, blue solid, acidic
NO2 (nitrogen dioxide): +4, brown, acidic
N2O4 (dinitrogen tetroxide): +4, colourless, acidic
N2O5 (dinitrogen pentoxide): +5, colourless, acidic

A reduction half-reaction for hydroxylamine: NH2OH + 2H2O + 2e- → NH4OH + 2OH-. Quick items: the gas that will liquefy with most difficulty is (a) He, (b) CO2, (c) NH3 or (d) SO2; and (translated from Kazakh) in the reaction 4NH3 + 3O2 = 2N2 + 6H2O, by what factor does the reaction rate increase when the oxygen concentration is doubled? (Assuming rate ∝ [O2]3, by a factor of 2^3 = 8.)
Hybridization itself does not explain or attempt to explain any observations or predictions; as discussed again below, it is best regarded as a bookkeeping algorithm. For counting purposes, one charge centre is the equivalent of either a single covalent bond or a lone electron pair. Hybridisation can also be obtained from the formula H = 1/2 (V + X - C + A), where V is the number of valence electrons of the central atom, X the number of monovalent atoms around it, C the positive charge on a cation and A the negative charge on an anion.

In plant breeding, "hybridization" means crossing: wide hybridization covers interspecific hybridization (crosses made between distantly related species) and intergeneric hybridization (crosses made between distantly related genera), while somatic hybridization (protoplast fusion) crosses somatic cells. Hybridization (recombination) is the third … In assay work, a hybridization assay comprises any form of quantifiable hybridization.

A reducing sugar is a simple sugar containing a hemiacetal functional group; the ring-opened form reduces Cu2+ (Benedict's, Fehling's) and Ag+ (Tollens') reagents. Fibronectin in human prostatic cells in vivo and in vitro: expression, distribution, and pathological significance. In a DFT study of HCHO on TiO2, the strong interactions between HCHO molecules and the TiO2 surface are largely attributed to bonding between the hydrogen of HCHO and surface oxygen, deriving mainly from hybridization of the H 1s, O 2p and O 2s states. Ozone chemistry supplies a simple gas-phase example: O3(g) + NO(g) → O2(g) + NO2(g).

Given the following equilibrium constants at 377°C:
1/2 N2(g) + 1/2 O2(g) ⇌ NO(g), K1 = 1 × 10^-17
1/2 N2(g) + O2(g) ⇌ NO2(g), K2 = 1 × 10^-11
N2(g) + 3/2 O2(g) ⇌ N2O3(g), K3 = 2 × 10^-33
N2(g) + 2 O2(g) ⇌ N2O4(g), K4 = 4 × 10^-17
determine the values for the …
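The question's target reactions are truncated above, but the technique is fixed: adding equilibria multiplies their K values, and reversing an equilibrium inverts its K. A hedged Python sketch using two plausible targets built from K1-K4 (the choice of targets is my assumption, not the original question's):

```python
K1, K2, K3, K4 = 1e-17, 1e-11, 2e-33, 4e-17

# NO(g) + NO2(g) <=> N2O3(g): reverse (1), reverse (2), then add (3)
K_N2O3 = K3 / (K1 * K2)
print(f"K(NO + NO2 <=> N2O3) = {K_N2O3:.1e}")   # 2.0e-05

# 2 NO2(g) <=> N2O4(g): reverse (2) twice, then add (4)
K_N2O4 = K4 / K2**2
print(f"K(2 NO2 <=> N2O4)    = {K_N2O4:.1e}")   # 4.0e+05
```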
The hybridization of carbon in methane is sp3. H2O does not have a linear shape because the oxygen atom has two lone pairs of electrons. During the state of excitation, an atom can undergo sp hybridization involving the mixing of 2s and 2p orbitals.

In the molecular-biology sense, hybridization is the process of combining two complementary single-stranded DNA or RNA molecules and allowing them to form a single double-stranded molecule; hybridization is part of many important laboratory techniques, such as the polymerase chain reaction and Southern blotting. The hybridization specificity of a silicon nanowire sensor for the detection of DNA was further evaluated by analyzing fully complementary target DNA and non-complementary target DNA (control group). A typical hybridization mix: y μL DNA (ca. 200 ng); x μL H2O to a final volume of 8 μL; 32 μL hybridization buffer. Introgressive hybridization of divergent species has been important in increasing variation, leading to new morphologies and even new species, but how that happens throughout evolutionary history is not known.

Kinetics note: since the graph of ln[A] versus time is a straight line, the reaction is first order with respect to A (rate = k[A]), and k equals minus the slope of that line; k would remain unchanged by a change in concentration, as it is temperature dependent, not concentration dependent.

Percent composition ("elemental" or "laboratory" analysis) lists the mass of each element present in 100 g of a compound and can be used to find the empirical formula. For Na2SO4 (molar mass 142.05 g/mol): %Na = 45.98 g / 142.05 g × 100% = 32.37%, and %S = 32.07 g / 142.05 g × 100% = 22.58%. Similarly, a 2.76 g sample of fluorite contains 1.42 g of calcium; since fluorite is CaF2, the remaining 1.34 g is fluorine.

A short worksheet (problems 6 and 7 are worked in the sketch that follows):
5) ___ SnO + ___ NF3 → ___ SnF2 + ___ N2O3
Using the equation from problem 2 above, answer the following questions:
6) If I do this reaction with 35 grams of C6H10 and 45 grams of oxygen, how many grams of carbon dioxide will be formed?
7) What is the limiting reagent for problem 6?
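Problem 2 itself is not reproduced here; for the numbers in problem 6 to make sense it must be the combustion 2 C6H10 + 17 O2 → 12 CO2 + 10 H2O, and with that balanced equation assumed, problems 6 and 7 work out as follows (Python):

```python
M = {"C6H10": 6 * 12.011 + 10 * 1.008,   # 82.146 g/mol
     "O2": 2 * 15.999,                   # 31.998 g/mol
     "CO2": 12.011 + 2 * 15.999}         # 44.009 g/mol

# Assumed balanced equation: 2 C6H10 + 17 O2 -> 12 CO2 + 10 H2O
coeff = {"C6H10": 2, "O2": 17, "CO2": 12}

n = {"C6H10": 35.0 / M["C6H10"], "O2": 45.0 / M["O2"]}   # moles available
limiting = min(("C6H10", "O2"), key=lambda s: n[s] / coeff[s])
n_CO2 = n[limiting] * coeff["CO2"] / coeff[limiting]

print(limiting)                      # O2 -> oxygen is the limiting reagent
print(round(n_CO2 * M["CO2"], 1))    # ~43.7 g of CO2
```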
A multiple-choice item contrasts HF with the other hydrogen halides; among its options: HF is the strongest acid; HF molecules have a smaller dipole moment; HF is much less soluble in water; HF molecules tend to form hydrogen bonds (the last of these is what gives HF its anomalous properties).

To find the hybridization for NCl3, we first determine the steric number. Three regions of high electron density mean a trigonal planar arrangement and sp2 hybridization. Whenever you can draw two or more Lewis structures for a molecule, differing only in the locations of the electrons, the actual structure is none of the individual structures but a resonance hybrid of them all. A student question: "In the Workbook on Quiz 2 preparation, number 4 asks what the hybridization of each nitrogen atom in N2 is. I am confused, because I drew the Lewis structure and there is a triple bond between the two N's …" (each N carries one sigma bond and one lone pair, so each is sp). HNO3 hybridization: the nitrogen atom in HNO3 is likewise sp2.

Coordination chemistry: in [Co(CN)6]3-, Co3+ undergoes d2sp3 (inner orbital) hybridization; all electrons are paired and thus it is diamagnetic. In [CoF6]3-, Co3+ instead undergoes sp3d2 hybridization, leaving unpaired electrons; hence it is paramagnetic.

Other uses of the term: "Hybridization of fishes in North America (Family Centrarchidae)", by W. … Lewis suggested that a chemical bond involved the sharing of electrons.

Crystallography fragment: the edge of the unit cell is 408 pm.
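The 408 pm figure above arrives without its problem statement. In the most common version of this exercise the metal is silver, which crystallizes in a face-centred cubic (fcc) lattice; assuming that (both the metal and the fcc lattice are my assumptions, not the source's), the atomic radius and density follow directly:

```python
from math import sqrt

a_pm = 408.0                # fcc unit-cell edge (given)
r_pm = a_pm * sqrt(2) / 4   # atoms touch along the face diagonal
print(round(r_pm, 1))       # ~144.2 pm atomic radius

# Density, assuming silver: M = 107.87 g/mol, Z = 4 atoms per fcc cell
NA, M, Z = 6.022e23, 107.87, 4
a_cm = a_pm * 1e-10                        # 1 pm = 1e-10 cm
print(round(Z * M / (NA * a_cm**3), 2))    # ~10.55 g/cm^3 (matches silver)
```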
The hybridization effect is also related to the substituent of TrR3. In valence bond (VB) theory, atomic orbital hybridization gives the familiar sp, sp2 and sp3 sets, while molecular orbital (MO) theory assumes that no hybridization occurs at the atomic level; even so, in MO calculations there are more cases where such hybrid orbitals appear than cases where they are absent.

Back to small molecules: the central N atom in N2O has two bonding domains and zero lone pairs. In water, the lone pairs repel the two hydrogen atoms, compressing the bond angle to 104.5°. In XeF2, the outer shell of xenon has eight electrons, of which two participate in bond formation. With an expanded valence, SF6 is an exception to the octet rule, and the hybridization of the atomic orbitals on the S atom is sp3d2; generally, a central atom with 6 sigma bonds is d2sp3 (or sp3d2) hybridized and octahedral in shape. Exam item: which option is correct for the [Fe(CN)6]3- complex, (i) d2sp3 hybridisation or (ii) sp3d2 hybridisation? (CN- is a strong-field ligand, so d2sp3.)

Dinitrogen trioxide itself forms upon mixing equal parts of nitric oxide and nitrogen dioxide and cooling the mixture below -21 °C (-6 °F): NO + NO2 ⇌ N2O3. It is a useful reagent in chemical synthesis. Liquid N2O3 is dark blue at low temperatures, but the colour fades and becomes greenish at higher temperatures. (Translated from Italian:) it is an extremely unstable compound with a pungent odour and a blue-green colour, and at room temperature it decomposes rapidly into nitric acid while simultaneously releasing nitrogen dioxide. N-nitroso compounds can also be generated from nitric oxide at neutral pH. An AP-style task: to produce an aqueous solution of HNO2, a student bubbles N2O3(l) into distilled water; (ii) based on the completed diagram, identify the hybridization of the nitrogen atom in HNO2.

Assorted items: large quantities of sulphur dioxide are used in the contact process for the manufacture of sulphuric acid. SnCl2·2H2O is soluble in water, in less than its own weight of water, but it forms an insoluble basic salt with excess water; it is soluble in ethanol. Which transition element has its d-orbitals completely filled: (A) Fe, (B) Cd, (C) W, (D) In, (E) As? (Cd, with a filled 4d10 shell.) GREAT QUESTION!! The azide ion (N3-) is a very curious chemical species.
In sp2 systems, repulsion of the electron pairs produces the 120-degree angle. Hybridization is the mixing of valence atomic orbitals to get equivalent hybridized orbitals having similar characteristics and energy. Lewis described what he called the cubical atom: because a cube has 8 corners, it can represent the outer valence-shell electrons that can be shared to create a bond. NH3 hybridization: sp3.

The structure of N2O: the terminal nitrogen has one sigma bond and one lone pair of electrons, and the middle nitrogen has two sigma bonds, so the hybridisation of both nitrogen atoms is sp; the oxygen atom has sp3 hybridisation, because it has one sigma bond and three lone pairs. In the solid state, N2O5 exists as the ionic nitronium nitrate, [NO2]+[NO3]-; in the molecular form its nitrogen atoms show sp2 hybridization. There are two possible structures for nitrosonium: in the first there is a +1 formal charge on oxygen, whereas in the second there is a +1 formal charge on nitrogen. Complete Lewis structures correctly by adding a combination of lone pairs, multiple bonds and formal charges where necessary; if there is more than one possible structure, draw the structure that minimizes nonzero formal charges and fulfills the octet rule. We'll do the first one by pretty much just drawing it out: so we have the carbon surrounded by three hydrogens, and the nitrogen with two oxygens around it.

A summary of electron domains, geometry and hybridization:
2 domains: linear, sp, 180°
3 domains: trigonal planar, sp2, 120°
4 domains: tetrahedral, sp3, 109.5°

In situ hybridization based on the mechanism of the hybridization chain reaction (HCR) has addressed multi-decade challenges that impeded imaging of mRNA expression in diverse organisms, offering a unique combination of multiplexing, quantitation, sensitivity, resolution and versatility; current methods involve the use of complementary riboprobes.

Empirical-formula determination: we divide the mass percentages through by the atomic masses to approach the empirical formula. For a compound of composition 30.45% N and 69.55% O, this gives 2.17 mol N and 4.35 mol O per 100 g of compound; the 1:2 mole ratio means we get an empirical formula of NO2. In a case where the ratio comes out as 1.5, multiply by 2 to reach whole numbers.
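The same computation as a small Python sketch; note that the 30.45/69.55 split is inferred from the "2.17 mol N" figure and the stated NO2 answer, so treat the inputs as illustrative:

```python
percent = {"N": 30.45, "O": 69.55}        # mass percent (illustrative values)
atomic_mass = {"N": 14.007, "O": 15.999}

moles = {el: p / atomic_mass[el] for el, p in percent.items()}  # per 100 g
smallest = min(moles.values())
ratio = {el: round(n / smallest, 2) for el, n in moles.items()}

print(moles)   # ≈ {'N': 2.17, 'O': 4.35}
print(ratio)   # {'N': 1.0, 'O': 2.0}  -> empirical formula NO2
```

If a subscript had come out as 1.5 instead, multiplying every subscript by 2 would give the whole-number formula, as noted above.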
Notes from a chemical-equation balancer: spaces are irrelevant, for example Cu SO 4 is equal to CuSO4, and all types of parentheses are correct, for example K3[Fe(CN)6]. Its author adds: "This program was created with a lot of help from the book 'Parsing Techniques - A Practical Guide' (IMHO, one of the best computer science books ever written)."

Naming ionic compounds, answer key:
1) Na2CO3: sodium carbonate
2) NaOH: sodium hydroxide
3) MgBr2: magnesium bromide
Further vocabulary (translated from Russian): tin(IV) chloride, SnCl4; cobalt(II) dichromate, CoCr2O7; aluminium oxide; copper(I) oxide.

A molecule can have resonance structures when it has a lone pair or a double bond on the atom next to a double bond. By comparing the electronegativity of the two atoms (see the periodic table for a list of electronegativities), one can determine whether a bond is ionic (one atom takes the electron from the other atom in the bond) or polar covalent (the electron is shared, but spends most of its time near the more electronegative atom).

Research fragments: "Hybridisation Potential of 1',3'-Di-O-methylaltropyranoside Nucleic Acids". In a plasmonics study, T-shaped plasmonic heterodimers consisting of a short and a long gold nanorod are constructed with finite element method simulation. Graphitic carbon units in junction with tri-s-triazine domains were clearly observed, and their in-plane hybridization with carbon nitride formed during copolymerization using melamine.

Finally, a mixture problem: three oxides of nitrogen, N2O, NO2 and N2O3, are mixed in a molar ratio of 3 : 2 : 1. Find the average molar mass of the gaseous mixture.
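This one is plain arithmetic; with molar masses 44.01 g/mol (N2O), 46.01 g/mol (NO2) and 76.01 g/mol (N2O3):

```python
ratio = {"N2O": 3, "NO2": 2, "N2O3": 1}
M = {"N2O": 44.01, "NO2": 46.01, "N2O3": 76.01}   # g/mol

avg = sum(ratio[s] * M[s] for s in ratio) / sum(ratio.values())
print(round(avg, 2))   # 50.01 g/mol
```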
Completing the counting rule from above: the hybridisation of an atom in a compound can be calculated as the number of sigma bonds plus the number of lone pairs on the atom. One single atom of sulfur bonded covalently with two atoms of oxygen (SO2), for instance, carries two sigma bonds and one lone pair, hence sp2.

Parent compounds and their derived ions:
N2O3 → NO2-
N2O5 → NO3-
P2O3 → PO3(3-)
P2O5 → PO4(3-)
H2O → OH-
NH3 → NH4+

Solubility rules: all sodium, potassium, ammonium, and nitrate salts are soluble in water.
N2O3 also appears in a General Chemistry 1 lesson plan (Lesson 11, 120 mins: Chemical Reactions and Chemical Equations; content standard: the learners demonstrate an understanding of the use of chemical formulas to represent chemical reactions).

Why hybridization at all? If there were no hybridization, the two bonds of a molecule such as BeF2 would not be equivalent: one electron in a 2s orbital of Be would bind with one F, while another electron in a 2p orbital would bond with the other F. The least electronegative atom goes in the centre of a Lewis structure; here, that is the carbon ("C"). Any process of chemical decay of metals due to the action of the surrounding medium is called corrosion.

Quick hydrocarbon items: ethane, ethene and ethyne are all similar in that they are (1) hydrocarbons. Which compound is a hydrocarbon: (1) CH3I, (2) CH3OCH3, (3) CH3COOH or (4) CH3CH3? (CH3CH3.)

On the question "hybridization of SF3?": this would probably give SF3 as an sp2d2-hybridized radical, with the three fluorines in a trigonal plane around the central sulfur, a lone pair on one side of the plane, and the radical electron on the other. For the xenon fluorides, answer options offered include A) sp3d2, sp3d2; B) sp3d, sp3d2; D) sp3, sp3d; the correct pairing is sp3d2 for XeF4 and sp3d for XeF2.

A cited study: "Dinuclear Nickel(II) Complex of a N2O3-Donor Schiff Base Derived from Acetylacetone and 1,3-Diamino-2-hydroxypropane", Q. … Wei, Inorg. …
In sp hybridization, the 2s orbital and one of the 2p orbitals hybridize to form two sp orbitals, each consisting of 50% s and 50% p character; the front lobes face away from each other and form a straight line, leaving a 180° angle between the two orbitals. This kind of picture is used to explain the fact that the four bonds in methane are equivalent, and it also rationalizes the trend in angles: when the hybridisation state of a carbon atom changes from sp3 to sp2 to sp, the interorbital angle opens from 109.5° to 120° to 180°.

What is the hybridization of each nitrogen and carbon atom in urea? (In the usual textbook solution, drawn from the Lewis structure of urea, the carbon is sp2 and the nitrogen atoms are sp3.) SO2 bond angle: approximately 119°. Sulfur difluoride, SF2, is a bent molecule: the sulfur carries two bonding pairs and two lone pairs, so it is sp3 with an angle squeezed below the tetrahedral value.

Organic note: deprotection of the benzylidene protecting group under mild conditions using AcOH:H2O (3:1) at 45 °C for 12 h afforded 15a-d in 50%-94% yield.
A forum question: "How do I find the hybridization of oxygen and nitrogen in $\ce{N2O}$ and finally determine its structure? Research effort: I watched some lectures on hybridization, but all of them included cases in which only the central atom had hybrid orbitals." (The N2O analysis earlier on this page answers it: both nitrogens are sp, and the terminal oxygen carries three lone pairs.) A worked application of the H = 1/2 (V + X - C + A) formula: (a) for the ion PH4+, phosphorus has 5 electrons in its bonding level (2 electrons in the 3s orbital and 3 electrons in the 3p orbital), so H = 1/2 (5 + 4 - 1) = 4, i.e. sp3.

Colony hybridisation: a method for the isolation of cloned DNAs that contain a specific gene (Grunstein, M. & Hogness, D., Proceedings of the National Academy of Sciences 72, 3961-5).

Dioxygen difluoride has an open-book structure very similar to that of hydrogen peroxide, and its oxygen has an unusual oxidation state of +1 because it is bonded to fluorine, the most electronegative element. The energies, degrees of hybridization, populations of the lone pairs of oxygen, energies of their interaction with the anti-bonding orbital of the rings, and the electron density distributions and E(2) energies have been calculated by NBO analysis using a DFT method, to predict clear evidence of stabilization originating from hyper-conjugation.

To repeat the caveat from above: hybridization is not a physical phenomenon; rather, it is an algorithm that accurately predicts the structures of a large number of compounds. Common assignments: sp2 (boron trichloride, BCl3; ethylene, C2H4). A stray medical line from the same scrape: residual air persists in the renal collecting system following percutaneous nephrolithotomy.
Nitrogen sesquioxide is a toxic and corrosive compound. The hybridization of the central atom in O3 is sp2. In an analytical exercise, a 1.0 mL aliquot of the original solution was mixed with 20.0 mL of 50 ppm quinine solution before dilution to 100 mL. Direct visualization of Propionibacterium acnes in prostate tissue has been achieved by a multicolor fluorescent in situ hybridization assay.
CommonCrawl
Does genetic differentiation underlie behavioral divergence in response to migration barriers in sticklebacks? A common garden experiment

Part of a collection: Using behavioral ecology to explore adaptive responses to anthropogenic change

A. Ramesh, M. M. Domingues, E. J. Stamhuis, T. G. G. Groothuis, F. J. Weissing & M. Nicolaus. Behavioral Ecology and Sociobiology volume 75, Article number: 161 (2021)

Water management measures in the 1970s in the Netherlands have produced a large number of "resident" populations of three-spined sticklebacks that are no longer able to migrate to the sea. This may be viewed as a replicated field experiment, allowing us to study how the resident populations are coping with human-induced barriers to migration. We have previously shown that residents are smaller, bolder, more exploratory, more active, and more aggressive and exhibited lower shoaling and lower migratory tendencies compared to their ancestral "migrant" counterparts. However, it is not clear if these differences in wild-caught residents and migrants reflect genetic differentiation, rather than different developmental conditions. To investigate this, we raised offspring of four crosses (migrant ♂ × migrant ♀, resident ♂ × resident ♀, migrant ♂ × resident ♀, resident ♂ × migrant ♀) under similar controlled conditions and tested for differences in morphology and behavior as adults. We found that lab-raised resident sticklebacks exhibited lower shoaling and migratory tendencies as compared to lab-raised migrants, retaining the differences in their wild-caught parents. This indicates genetic differentiation of these traits. For all other traits, the lab-raised sticklebacks of the various crosses did not differ significantly, suggesting that the earlier-found contrast between wild-caught fish reflects differences in their environment. Our study shows that barriers to migration can lead to rapid differentiation in behavioral tendencies over contemporary timescales (~ 50 generations) and that part of these differences reflects genetic differentiation.

Significance statement
Many organisms face changes to their habitats due to human activities. Much research is therefore dedicated to the question of whether and how organisms are able to adapt to novel conditions. We address this question in three-spined sticklebacks, where water management measures cut off some populations, prohibiting their seasonal migration to the North Sea. In a previous study, we showed that wild-caught "resident" fish exhibited markedly different behavior than migrants. To disentangle whether these differences reflect genetic differentiation or differences in the conditions under which the wild-caught fish grew up, we conducted crosses, raising the F1 offspring under identical conditions. Like their wild-caught parents, the F1 of resident × resident crosses exhibited lower migratory and shoaling tendencies than the F1 of migrant × migrant crosses, while the F1 of hybrid crosses were intermediate. This suggests that ~ 50 years of isolation are sufficient to induce behaviorally relevant genetic differentiation.

Habitat fragmentation resulting from human activities is considered to be a major threat for many animal populations (Foley et al. 2005; Fischer and Lindenmayer 2007).
Habitat fragmentation is characterized by a reduction in habitat size, habitat loss, and loss of habitat connectivity (Fahrig 2003). This poses a threat to animal populations, especially for migratory species which rely on connectivity between functional habitats for reproduction and survival (Legrand et al. 2017). Migratory species would thus need to respond via adaptive changes in life history and behavior to thrive in disconnected patches (Bohlin et al. 2001; Kraabøl et al. 2009; Junge et al. 2014). Therefore, understanding the underlying mechanisms of these responses is crucial as they directly affect the future adaptive potential and evolutionary trajectories of populations (Kawecki and Ebert 2004; Wang and Bradburd 2014) as well as conservation measures (Stockwell et al. 2003). Individuals need to maintain a match between their phenotypes and the environment to enhance their local performance, thereby allowing populations to subsist or grow in an altered environment. Depending on the underlying mechanism involved, such adaptive responses may occur more or less rapidly and may influence population genetic structure (Hedrick et al. 1976; Hedrick 2006; Nicolaus and Edelaar 2018). For example, phenotypic adjustment may result from natural selection favoring some phenotypes over others, potentially leading to population genetic differentiation across multiple generations when phenotypic variation has a genetic basis (Kawecki and Ebert 2004). Non-exclusively, individuals may match their phenotype to local conditions through plasticity, be it reversible plasticity (or phenotypic flexibility sensu Piersma and Drent 2003), developmental plasticity, or transgenerational plasticity (through parental and epigenetic effects). Plasticity, defined as the ability of a genotype to exhibit different phenotypes in response to the environment (Via et al. 1995; Pigliucci 2005), can thus provide a rapid mechanism to respond to environmental changes (Ghalambor et al. 2007). Importantly, selection may favor genotypes with varying levels of plasticity (Scheiner 1993; Nussey et al. 2007), implying that the mentioned mechanisms are intertwined (Edelaar et al. 2017) and that observed population divergence could reflect genetic differentiation and/or differences in the environments under which individuals grow up. In migratory species, migrants would have to exhibit phenotypic plasticity or bet-hedging strategies, as they are exposed to different environmental conditions (Botero et al. 2015). In the case where migrants are no longer able to migrate (forced "residents"), we expect selection to act on either the traits themselves or on the degree of plasticity. In this study, we focus on behavior as it is the primary way through which animals interact with their environment and respond to changes (Wong and Candolin 2015). Behavior is often considered highly flexible and hence less prone to genetic divergence in response to environmental changes. However, plastic responses could diverge genetically more rapidly than fixed traits (van Gestel and Weissing 2018). In addition, "animal personality" research indicates that behaviors are highly structured, forming correlations over time (consistency) and across contexts (syndromes) (Réale et al. 2007; Stamps and Groothuis 2010; Wolf and Weissing 2012). Furthermore, individual differences within populations are often repeatable (Bakker 1986; Réale et al. 2007) and, to some extent, heritable (Bakker 1986; Dingemanse et al. 2009; Dochtermann et al. 2014).
As a consequence, personality variation may retard or accelerate rates of microevolution and population divergences (Wagner and Altenberg 1996; Wolf and Weissing 2012; Dochtermann and Dingemanse 2013; van Gestel and Weissing 2018). Here, we aim to study whether genetic differentiation underlies the rapid behavioral differentiation following habitat fragmentation. We capitalize on an unintended field experiment in the north of the Netherlands, where the construction of pumping stations in the 1970s has led to the forced residency of replicate populations of anadromous three-spined sticklebacks (Gasterosteus aculeatus). A previous study in this system has revealed extensive phenotypic differentiation (morphology and behavior) between the ancestral "migrant" and its derived "resident" populations (Ramesh et al. 2021). Compared to migrants, wild-caught residents are smaller, more active and aggressive, more exploratory, and bolder and showed reduced shoaling and migratory tendencies (Ramesh et al. 2021; see also Bakker 1994). These differences parallel the behavioral divergence reported between freshwater and marine populations of sticklebacks over ~ 12,000 years (Di-Poi et al. 2014). However, it remains to be determined if similar behaviorally relevant genetic differentiation has evolved in our system over much shorter time scales (~ 50 years). This knowledge is important because conservation efforts are underway to reconnect the waterways, and therefore, we need to better understand the current state of fish populations in order to predict the eco-evolutionary consequences of barrier removal. We conducted a common garden experiment to test whether genetic differentiation underlies the observed divergence in morphology and behavior. We raised F1 juveniles from four types of crosses (migrant parents (MM), resident parents (RR), hybrids with a migrant mother (RM), and hybrids with a resident mother (MR); Fig. 1a) under similar laboratory conditions and quantified variation in activity, exploration, shoaling, boldness, and migratory tendencies among these crosses. We expect that (1) if the behavioral differentiation is genetic, individuals of MM crosses will differ significantly from RR crosses (similar to their wild-caught parents, Fig. 1b); (2) if the behavioral differences between wild-caught residents and migrants are induced by differences in their environments, there will be no differences between the "common garden" crosses (Fig. 1b); and (3) if parental effects are involved, we will see asymmetric changes in the reciprocal hybrid crosses (Fig. 1b). Specifically, if behavioral variation is strongly influenced by maternal effects, the hybrids resulting from the MR cross will have a similar score as the RR cross and the hybrids resulting from the RM cross will have a similar score as the MM cross (Fig. 1b). A similar trend can be expected in the case of paternal effects, but we eliminated that possibility to a large extent by raising juveniles without paternal care (Giesing et al. 2011; McGhee and Bell 2014; Heckwolf et al. 2018). a Schematic of breeding design. We obtained four F1 crosses—migrant male × migrant female (MM), resident male × resident female (RR), migrant male × resident female (MR), and resident male × migrant female (RM). 
b Expectations of mean behavioral scores (e.g., shoaling) if the underlying basis for behavioral differentiation in wild-caught parents is due to genetic differentiation, environmental experiences during development, or maternal effects (letters of migrant and resident female in the maternal effects prediction plot are colored according to the origin for ease of interpretation of patterns in hybrids, when they are under the control of maternal effects. The expected mean value of hybrids would correspond to the migrant or resident status of the female) Study populations The waterways in the Netherlands consist of rivers and canals that are open to the sea and of land-locked smaller ditches (< 1-m deep) located inside polders. We caught incoming migrants at two sea locks ("TER" (53°18′7.24″, 7°2′17.11″) and "NSTZ" (53°13′54.49″, 7°12′30.99″)), whereas residents were caught in two land-locked polders ("LL-A" (53°17′56.14″, 7°2′1.28″) and "LL-B" (53°17′16.52″, 7°2′26.46″)) (Ramesh et al. 2021). Sticklebacks were caught over a period of 4 weeks between March and April in 2019. All individuals were transported to the laboratory within 2 h of capture in aerated bags (5–6 fish/3-L bag). They were housed outdoors separated by their origin in groups of five fish in 50-L aerated tanks filled with freshwater, exposed to the natural daylight cycles and temperatures. They were fed ad libitum with brine shrimps and blood worms (3F Frozen Fish Food company). Males were separated once they reached breeding colors, and females were checked daily for signs of gravidity. Lab-bred F1 juveniles Lab-bred F1 juveniles of resident, migrant, and hybrid sticklebacks arose from a partial factorial breeding design (Fig. 1a) using three resident males, three resident females, three migrant males, and three migrant females (six migrants from "NSTZ," five residents from "LL-A," and one resident female from "LL-B"). Each family consisted of all combinations of crosses between a male and female migrant and male and female resident, leading to F1 offspring of different crosses: pure migrant (MM) or resident (RR) and hybrids with migrant father and resident mother (MR) and vice versa (RM). From the offspring pool, a total of 40 fish were used per cross for the experiment, with each cross containing at least five fish from each family. For obtaining F1 juveniles, we followed a split-clutch in vitro fertilization protocol, where eggs of ripe females were stripped, then weighed and split into two halves for artificial insemination with sperm extracted from freshly euthanized migrant and resident fathers, respectively (Barber and Arnott 2000). All offspring were raised without paternal care to prevent undesired long-lasting effects of the father on offspring behavior (McGhee and Bell 2014). The larvae hatched 5 to 7 days after fertilization and started maintaining buoyancy and independent feeding one week after hatching. The fish larvae were fed a mixture of frozen cyclops, freshly hatched Artemia nauplii, and zebrafish diet (GEMMA Micro 75, Skretting, Tooele, Utah) daily. The densities never exceeded 40 fish larvae in 5 liter "home-tanks" (30 × 16 × 18 cm (L × W × H)). Once fish reached ~ 2 cm, ten random individuals from the same family were assigned to separate home tanks. After this, the individuals were fed ad libitum with brine shrimps and blood worms (3F Frozen Fish Food company), and tanks were connected to the same water system at 16 °C.
The photoperiod was set at 16:8 (L:D), mimicking summer conditions during juvenile growth. When the fish reached a length of ~ 4 cm, they received a unique identification (see below). We induced autumn conditions when the fish were ~ 12–13 months old, characterized by a 12:12 (L:D) photoperiod and temperatures lowered to 13–14 °C. All fish were in non-breeding conditions and kept in autumn conditions during the period of experimentation. Experimentation started when fish were ~ 15–16 months old. Individual identification When the juveniles reached 4 cm length (~ 12 months), we used clipped spines or injection of an 8-mm Passive Integrated Transponder (PIT tag; Trovan, Ltd., Santa Barbara, California) for unique individual identification. We used PIT tag injection only for half of the tested fish (20 fish × 4 crosses = 80 fish), while the rest were tagged using a combination of dorsal and pelvic spine clipping (20 fish × 4 crosses = 80 fish). This was because PIT tag retention was low in these fish (~ 15% loss in the first week after tagging), and we did not retag the fish to prevent excess handling. PIT tags were injected into the abdominal cavity under anesthesia, following the standard protocol (Cousin et al. 2012). During tagging/clipping, we also measured weight and standard length (the length from the tip of the snout to the base of the tail) as a proxy for size. Lateral plates were not very clearly visible in juvenile fish and hence were not measured. After individual tagging, we mixed juveniles from different families to be housed together in groups of ten in their home tanks, while keeping each group restricted to a single cross (MM, RR, MR or RM). Large-scale movement tendencies in mesocosm (migratory tendencies) For the subset of PIT tagged fish, movement assays were performed in semi-natural mesocosms before subjecting them to the lab-based tests. The mesocosm system consisted of five outdoor ponds of diameter 1.6 m connected by four pipes of length ~ 1.5 m and diameter 11 cm, filled with water from a nearby freshwater ditch, with a linear flow through the system similar to flows typically experienced in the canals and ditches (flow speed < 0.7 cm/s) (Fig. 2a). This was done to create a cue for migration-like movement. All connecting tubes were fitted with circular PIT antennas around the entrance and exit of each pond to record fish movement between ponds. The sticklebacks were tested in pond experiments after at least 1 week of recovery from tagging. A group of ten fish of one cross (MM, RR, MR, or RM) was introduced in the first pond and acclimatized for 5 h in the first morning, after which the connection to the rest of the ponds was opened. We then recorded the movement of fish as the number of crossings between ponds for the next 16 h (~ 4 pm–8 am). We attempted to have 20 tagged fish/cross and tested them in groups of ten each, making two groups/cross. However, due to tag loss, we ended up with < 20 fish/cross. Instead of changing group size, which could have an effect on behavior, we decided to spread the final number of tagged fish between two groups and supplement the remaining with untagged fish from the same cross to make up to ten. In total, two groups per cross were tested, in randomized order, making eight groups with 56 fish (NMM = 12, NMR = 17, NRM = 15, NRR = 12). In groups with less than ten tagged individuals, untagged fish from the same cross were added to maintain constant group size.
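As a purely hypothetical illustration (this is not the authors' pipeline; the antenna layout, column names, and toy data below are assumptions), the number of pond crossings per fish could be derived from raw PIT-antenna reads by mapping each antenna to the pond it faces and counting pond-to-pond transitions:

library(dplyr)

## assumed layout: antennas 1-8 sit at the ends of the four pipes;
## antenna i faces this pond
antenna_to_pond <- c(1, 2, 2, 3, 3, 4, 4, 5)

## toy detection log: one fish swims pond 1 -> 2 -> 3 and back to pond 2
reads <- data.frame(
  tag     = "fish_A1",
  time    = as.POSIXct("2020-10-01 16:00", tz = "UTC") + (0:5) * 60,
  antenna = c(1, 2, 3, 4, 4, 3)
)

crossings <- reads %>%
  arrange(tag, time) %>%
  mutate(pond = antenna_to_pond[antenna]) %>%  ## pond at which the fish was read
  group_by(tag) %>%
  summarise(n_crossings = sum(pond != lag(pond), na.rm = TRUE))
crossings  ## n_crossings = 3 here: pond 1 -> 2, 2 -> 3, 3 -> 2

In the real setup the paired antennas at each pipe end would additionally give the direction of movement; the sketch only counts crossings, matching the response variable analyzed below.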
Schematic of behavioral assays. a Migration tendencies were tested in a linear mesocosm setup of five connected ponds with groups of 10 fish. There is water flow (rate < 0.7 cm/s). PIT antennas are present at both ends of the corridors connecting the ponds. b Lab assays were performed in the following order: activity (day 1), exploration (day 1), shoaling (day 2), boldness (day 3) Lab behavior assays Three days before testing, fish were selected randomly and acclimatized in visually separated, isolated tanks identical to their home tanks, at an ambient temperature of 19 °C. We attempted to test 40 fish/cross, but some fish were lost due to mortality. Hence, in total, 154 fish (NMM = 40, NMR = 39, NRM = 35, NRR = 40) were randomly selected for testing and split into four batches. One round of testing consisted of four batches of approximately ten fish of each cross and lasted 1 week, during which we assayed activity, exploration, shoaling, and boldness in that order (Fig. 2b). Overall, roughly 40 fish were tested each week. The interval between the first and the second rounds of testing of each individual was thus at least 4 weeks. Fish were returned to their home tanks between the testing rounds. The sample sizes for the second round were lower (N = 151) due to mortality between the two rounds (NMM = 39, NMR = 38, NRM = 34, NRR = 40). All lab assays were filmed from the top using a Raspberry Pi camera (Raspberry Pi NoIR Camera Board V2 – 8MP, Raspberry Pi Foundation, UK) in tanks placed in illuminated wooden boxes to prevent external disturbance. Behavioral assays were conducted in fixed order as below, and videos were analyzed using EthovisionXT (Noldus Information Technology company). In all tests, observers were blind with respect to the cross to which the test fish belonged, and further bias was reduced by analyzing the videos using automated video tracking techniques. Activity (day 1) Activity of the fish was measured as the total distance the fish swam in a tank identical to its home tank during a total of 20 min (with 5 min for acclimatization). Exploration (day 1) Just after activity was recorded, the fish was confined to one corner of the tank using a sheet partition, and the setup in the tank was changed. Five stone pillars extending above the water's surface were added in a specific position, forcing the fish to move around them. After 5 min, the sheet was removed remotely without opening the box, and the fish was recorded in this novel arena for 20 min. The total distance travelled by the fish in this novel environment was used as a proxy for the exploratory tendency of the fish, as it correlates highly with space use (Ramesh et al. 2021). Shoaling (day 2) For the shoaling assay, a larger tank (60 × 30 × 30 cm) was filled with water up to 10-cm height. The tank was divided into three compartments: the central testing arena where the focal fish was released and two end compartments containing the stimulus shoal (N = 10 unfamiliar conspecifics of mixed crosses), and the distractor fish (N = 2 unfamiliar conspecifics) (adapted from Wark et al. 2011). The positions of the distractor and shoal compartments were switched to prevent biases, and new distractor and shoal fish were used, every seven tests. At the start of the test, the focal fish was allowed to acclimatize for 5 min in the central arena without viewing the end compartments, which were covered with opaque barriers.
Then, the opaque barriers were lifted remotely from outside the box, and the response of the focal fish was recorded for the next 20 min. The water was refreshed after testing seven fish in the arena. In total, we had four groups of shoal fish and five pairs of distractor fish, which were randomly used to avoid biases. The proportion of time the focal fish spent within one-fish distance (6 cm) from the side containing the stimulus shoal was used as a proxy for shoaling. Boldness (day 3) In the boldness tests, we measured the responses of the focal fish toward the visual cue of a European perch (Perca fluviatilis) (model with soft body, Kozak and Boughman 2012) and olfactory predation cues (50 mL of water from freshly dead sticklebacks mixed with 50 mL of water containing live perch scent, Sanogo et al. 2011). The focal fish was moved from its home tank into a bigger, novel tank (60 × 30 × 30 cm) with three compartments filled with 10 cm of water. The predator model was randomly presented in one of the end compartments, while the focal fish was acclimatized in the other end compartment (Kozak and Boughman 2012). After 5 min of acclimatization, the fish was released remotely into the arena with a view of the predator model, and the assay lasted for 20 min. We changed the side of the predator compartment systematically in order to avoid biases. Further, the water was refreshed and new predatory olfactory cues were added after testing seven fish in the arena. The proportion of time the focal fish spent within one-fish distance (6 cm) from the predator compartment was taken as a proxy for boldness. Variation in size and behaviors (activity, exploration, shoaling, and boldness) was analyzed using linear mixed models (LMM) in which repeat (first vs. second round) and cross identity (MM, MR, RM or RR) were included as fixed factors. We also included the interactive effects (cross × round) to test for cross-specific habituation effects. Individual identity (Fish ID), mother identity (Mother ID), and father identity (Father ID) were included as random effects. For shoaling behavior, we added identity of the test shoal (Shoal ID) as an additional random effect. For migratory tendencies, only one round of tests was performed and we fitted a Poisson generalized linear mixed model (GLMM) with log-link function, with number of pond crosses as the response variable and cross identity as a fixed factor. As random effects, we included mother identity (Mother ID) and father identity (Father ID), and further, to prevent overestimation of predictive power caused by overdispersion, we added observation-level random effects (OLRE) (Harrison 2014). All LMMs/GLMMs were constructed in R v. 3.6.1 (R Core Team 2019) using the "lmer" function of the "lme4" package, package version 1.1-27.1 (Bates et al. 2015). The statistical significance of fixed effects was assessed based on the 95% confidence interval (CI): an effect was considered significant when its 95% CI did not include zero. In addition, Tukey's HSD post hoc test was performed using the functions "emmeans" and "pairs" to obtain pairwise comparisons, using the package "emmeans," package version 1.6.1 (Lenth 2020).
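To make the model structure concrete, here is a minimal sketch in R of the kind of lme4/emmeans calls described above, together with the rptR-based repeatability decomposition described in the next paragraph. This is not the authors' actual script: the data frames (d, d_pond) and all column names are assumed placeholders, and the Poisson model uses lme4's glmer function.

library(lme4)
library(emmeans)
library(rptR)

## LMM for a lab-based behavior (here shoaling): cross, round and their
## interaction as fixed effects; fish, mother, father and test-shoal
## identities as random intercepts
m_shoal <- lmer(shoaling ~ cross * round + (1 | fish_id) + (1 | mother_id) +
                  (1 | father_id) + (1 | shoal_id), data = d)
confint(m_shoal, method = "Wald")   ## 95% CIs for the fixed effects
pairs(emmeans(m_shoal, ~ cross))    ## Tukey-style pairwise contrasts

## Poisson GLMM for pond crossings (one row per fish), with an
## observation-level random effect (obs) absorbing overdispersion
d_pond$obs <- factor(seq_len(nrow(d_pond)))
m_move <- glmer(crossings ~ cross + (1 | mother_id) + (1 | father_id) +
                  (1 | obs), family = poisson(link = "log"), data = d_pond)

## variance decomposition into fish/mother/father components and the
## adjusted repeatabilities (controlling for cross x round), 1000 bootstraps
rpt(shoaling ~ cross * round + (1 | fish_id) + (1 | mother_id) +
      (1 | father_id), grname = c("fish_id", "mother_id", "father_id"),
    data = d, datatype = "Gaussian", nboot = 1000)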
LMMs were used to decompose the phenotypic variance of behaviors into between-individual (VFish ID), between-mother (VMother ID), between-father (VFather ID), and within-individual (VResidual) variances that we subsequently used to calculate repeatabilities, i.e., the proportion of total phenotypic variation (Vp) attributable to differences between individuals (RFish ID), between mothers (RMother ID), and between fathers (RFather ID):

$$R_{\mathrm{Fish\ ID}} = V_{\mathrm{Fish\ ID}}/V_p, \qquad R_{\mathrm{Mother\ ID}} = V_{\mathrm{Mother\ ID}}/V_p, \qquad R_{\mathrm{Father\ ID}} = V_{\mathrm{Father\ ID}}/V_p,$$

with $V_p = V_{\mathrm{Fish\ ID}} + V_{\mathrm{Mother\ ID}} + V_{\mathrm{Father\ ID}} + V_{\mathrm{Residual}}$.

Raw (without fixed effects) and adjusted (after accounting for the fixed effects cross × round) repeatabilities and their confidence intervals were calculated using the "rpt" function with 1000 bootstraps in "rptR," package version 0.9.22 (Stoffel et al. 2017). Our prime goal was to test if RR and MM crosses that were raised under similar conditions exhibited similar behavioral differences as observed in their wild-caught populations of origin and if these differences were consistent over time. We found that RR crosses were consistently less active than MM crosses in the two rounds (Fig. 3a; Table 1; overall effect of crosses on activity: χ2 = 17.35, df = 3, p < 0.01, Supp. Table 1). We further found that shoaling and migratory tendencies varied significantly and consistently between RR and MM crosses in the same direction, with RR crosses exhibiting lower shoaling and migratory tendencies than MM crosses (Fig. 3c, e; Table 1; overall effect of crosses on shoaling: χ2 = 17.91, df = 3, p < 0.01, on migratory tendency: χ2 = 14.37, df = 3, p < 0.01). MM but not RR crosses shoaled more than expected by chance (score > 0.5) (Table 1). RR and MM crosses did not differ consistently in levels of exploration and boldness (Fig. 3b, d; Table 1, Supp. Table 1). For boldness, the RR cross differed from the MM cross only in round 2 (Fig. 3d; significant effects of round and round × cross RR, Table 1, Supp. Table 1), implying that the observed difference was not consistent over time. Crosses did not differ in body size (Fig. 3f, Table 1, Supp. Table 1).

Mean scores and standard errors for behaviors and size of F1 fish of different crosses. a "Activity," total distance travelled in meters (m); b "exploration," total distance travelled in a novel arena in m; c "shoaling," proportion of time spent near shoal compartment; d "boldness," proportion of time spent near predator. For lab-based behaviors, the mean behavioral scores for the two repeats are represented separately (sample sizes round 1, NMM = 40, NMR = 39, NRM = 35, NRR = 40; round 2, NMM = 39, NMR = 38, NRM = 34, NRR = 40); e "migratory tendency," total number of pond crosses (NMM = 12, NMR = 17, NRM = 15, NRR = 12); f "size," standard length in mm (NMM = 40, NMR = 39, NRM = 35, NRR = 40)

Table 1 Effect of type of cross (migrant MM, resident RR, hybrid RM and MR) on behavior and morphology of common garden raised three-spined sticklebacks. For lab-based behaviors, the additive and/or interactive effects of rounds are included.
Summaries of linear mixed models on traits are presented with estimates of fixed effects (β), with their 95% confidence intervals (CI) and variance (σ2) due to random effects with corresponding standard deviation (SD). Significant fixed effects compared to the reference factor are denoted in bold. Sample size (N) represents number of observations We did not find evidence for parental effects. For all traits investigated, we did not observe a clear directional asymmetry between the reciprocal hybrid crosses or trends in the distribution of individual behavior (Fig. 3, Supp. Fig. 2). Overall, only a small fraction of the variance in behaviors was attributable to differences between fathers and mothers (between 0 and 0.18, Table 2). In contrast, individual identity explained a significant part of the behavioral variation across the two rounds of measurement (adjusted Rind = 0.31 to 0.38; raw Rind = 0.14 to 0.43) (Supp. Fig. 1, Table 2), i.e., individual behavior is consistent (to a certain extent), despite potential effects of habituation or sensitization to handling (Fig. 3, Table 1). Table 2 Repeatabilities of lab-based behaviors. Raw repeatabilities and adjusted repeatabilities after controlling for cross ID are given for individual ID, father ID, and mother ID along with their 95% confidence intervals (CI) We aimed to study whether genetic differentiation underlies the behavioral differentiation following habitat fragmentation in sticklebacks. Using a common garden experiment, we showed that the differences between residents and migrants in shoaling and migration tendency (and to some extent also activity) have a genetic basis. In contrast, there were no clear patterns regarding differences in other behaviors or size between crosses. The earlier observed differences in these traits between wild-caught residents and migrants might therefore reflect differences in the respective developmental environments of the two ecotypes of fish. We discuss below the likely causes of divergence in our system and compare the patterns to those observed in post-glacial divergence of marine and freshwater sticklebacks. Then, we discuss the eco-evolutionary implications of our findings in the light of conservation plans for our study area. Our common garden experiment revealed that the divergence in at least two of the five behavioral traits studied has a genetic basis. This corroborates a previous study on sticklebacks showing that the expression of heritable variation, i.e., the fraction of phenotypic variance owing to additive effects of genes (Lynch and Walsh 1998), substantially varied depending on the personality trait considered and the evolutionary history of the populations (Dingemanse et al. 2009). An interesting future avenue will be to quantify population-specific trait heritabilities and the relative contribution of genetic and non-genetic sources of variation in those behaviors. Furthermore, it remains to be tested if the genetic differences we uncovered reflect local adaptation as opposed to other processes such as genetic drift or founder effects. Shoaling and migration tendencies are crucial for the ancestral migratory fish. Their migratory lifestyle involves group schooling tendencies and potentially higher shoaling tendencies due to increased predator pressure owing to the "openness" of habitats in the sea. In residents, shoaling tendencies may be less strongly selected for, leading to the pattern of random association with the shoal that we have recovered in our experiments (Fig. 3c).
Alternatively, lower shoaling tendencies may be selected for due to increased competition, for instance, in winter, when resources are scarce, leading to a trade-off between intra-specific aggression and competition (Lacasse and Aubin-Horth 2014). Studies on marine-freshwater stickleback pairs have also revealed potential genetic underpinnings of shoaling via the eda gene (freshwater sticklebacks shoaled less and schooled less efficiently than migrants; Wark et al. 2011; Di-Poi et al. 2014; Archambeault et al. 2020) and of migratory tendencies via genetic divergence in thyroxine response mechanisms (Kitano et al. 2010). A next step will be to test whether the genetic differentiation of shoaling and migratory tendencies reflects local adaptation, using either a genomic approach to detect signatures of adaptive divergence (using, e.g., a whole genome and/or a candidate gene (eda allele) approach) or a transplant experiment where we would raise crosses in different environmental conditions (marine vs freshwater) to infer fitness. We expected similar differentiation in other traits, as they were found to be different between wild-caught migrants and residents over two study years (Ramesh et al. 2021). For instance, studies have shown moderately heritable and additive genetic components in behaviors such as exploration and boldness in sticklebacks (Dingemanse et al. 2009). However, in our experiment, body size and behaviors such as exploration and boldness did not show differences between crosses. For body size, responses may be plastically adjusted to the prevailing ecological conditions, as seen in previous studies (e.g., predation pressure; Frommen et al. 2011; niche specialization, Day and McPhail 1996; Wund et al. 2008). Similar to body size, behaviors such as exploration and boldness may also be environmentally determined. Alternatively, these behaviors could also be state-dependent (state being size or mass in this case), owing to differences in resource availability during growth of migrants and residents (Luttbeg and Sih 2010; Wolf and Weissing 2010). It also remains possible that the differences in behavior in wild migrants are due to plastic responses of migrants in freshwater vs sea conditions, which have not been tested here. In our current study, we found little evidence for maternal effects as maternal contribution to trait variation was small and not significant (19% for activity, 3% for exploration, and 3% for shoaling tendencies) and we did not find clear systematic differences between the reciprocal hybrid crosses (RM and MR). However, we raised the juveniles in the absence of paternal care. Hence, it remains possible that the behavioral differences observed between wild-caught migrants and residents (Ramesh et al. 2021) are related to differences in paternal care. This is an interesting avenue warranting further investigation because there is evidence for parental programming through maternal effects and paternal care in sticklebacks (Giesing et al. 2011; McGhee et al. 2012, 2015; McGhee and Bell 2014; Mommer and Bell 2014; Stein and Bell 2014). Our studies revealing genetic differentiation between ancestral migrant and resident populations in behaviors related to migration and shoaling are timely and have important consequences for conservation efforts. Water authorities are currently implementing conservation measures which aim at restoring river connectivity via barrier removal or the construction of fishways.
Reconnecting migratory and genetically differentiated land-locked populations can be viewed as a large-scale eco-evolutionary experiment that raises exciting questions: Will migratory and resident sticklebacks intermix and introgress in sympatry (Ravinet et al. 2021)? Will hybrids be selected against? Or will we see incomplete gene flow and partial migration in these populations (Berner et al. 2011; Ingram et al. 2015; Hanson et al. 2016; Lackey and Boughman 2017)? From our studies, residents and hybrids show lowered migratory and shoaling tendencies. This could potentially drive divergent selection and lead to the genetic differentiation of sympatric populations with partial migration upon reconnection. Divergence may also be maintained or enhanced by size-assortative mating of migrants and residents, as size differences at maturity have been detected in the wild (Ramesh et al. 2021), or by phenotype-dependent microhabitat choice (Maciejewski et al. 2020; Dean et al. 2021). Irrespective of the mechanisms involved in the observed phenotypic differentiation between migrants and residents, whether the migrant-resident ecotype divergence will persist in the absence of migration barriers needs to be investigated. Overall, using a common garden experiment, we found evidence for genetic differentiation in shoaling, migratory tendencies, and potentially activity. These results suggest that residents may have locally adapted to their novel environmental conditions in our system. Pressing questions that follow from this finding are whether our results can be generalized to other freshwater and migratory fish species that have undergone isolation and how conservation plans may be affected (Tuomainen and Candolin 2011; Franssen et al. 2013). Indeed, conservation methods should not simply aim at restoring the ecosystem to its original state, because this may lead to unwanted consequences (Stockwell et al. 2003). For example, reversal of responses to restorations may not be possible if newly adapted populations or species lack genetic variation, leading to a rapid population decline after conservation measures are in place (Lahti et al. 2009; Mable 2019). Alternatively, newly adapted populations or species may, in fact, harbor newly selected invasive phenotypes such as novel foraging tactics and increased aggression and boldness, leading to unwanted expansions with unpredictable effects on other species and communities (Holway and Suarez 1999; Sol et al. 2002). Hence, conservation efforts should take an informed approach, assessing the current state of the system and the evolutionary changes undergone by the species assemblages they target. The final processed data used for the figures and analyses of this study are made available as supplementary material. Archambeault SL, Bärtschi LR, Merminod AD, Peichel CL (2020) Adaptation via pleiotropy and linkage: association mapping reveals a complex genetic architecture within the stickleback Eda locus. Evol Lett 4:282–301 Bakker TCM (1986) Aggressiveness in sticklebacks (Gasterosteus aculeatus L.): a behaviour-genetic study. Behaviour 98:1–144 Bakker TCM (1994) Evolution of aggressive behaviour in the threespine stickleback. In: Bell MA, Foster SA (eds) The Evolutionary Biology of the Threespine Stickleback. Oxford University Press, Oxford, pp 345–380 Barber I, Arnott SA (2000) Split-clutch IVF: A technique to examine indirect fitness consequences of mate preferences in sticklebacks.
Behaviour 137:1129–1140 Bates D, Mächler M, Bolker BM, Walker SC (2015) Fitting linear mixed-effects models using lme4. J Stat Softw 67:1–48 Berner D, Kaeuffer R, Grandchamp AC, Raeymaekers JAM, Räsänen K, Hendry AP (2011) Quantitative genetic inheritance of morphological divergence in a lake-stream stickleback ecotype pair: implications for reproductive isolation. J Evol Biol 24:1975–1983 Bohlin T, Pettersson J, Degerman E (2001) Population density of migratory and resident brown trout (Salmo trutta) in relation to altitude: evidence for a migration cost. J Anim Ecol 70:112–121 Botero CA, Weissing FJ, Wright J, Rubenstein DR (2015) Evolutionary tipping points in the capacity to adapt to environmental change. Proc Natl Acad Sci USA 112:184–189 Cousin X, Daouk T, Péan S, Lyphout L, Schwartz M, Bégout M (2012) Electronic individual identification of zebrafish using radio frequency identification (RFID) microtags. J Exp Biol 215:2729–2734 Day T, McPhail JD (1996) The effect of behavioural and morphological plasticity on foraging efficiency in the threespine stickleback (Gasterosteus sp.). Oecologia 108:380–388 Dean LL, Dunstan HR, Reddish A, MacColl ADC (2021) Courtship behavior, nesting microhabitat, and assortative mating in sympatric stickleback species pairs. Ecol Evol 11:1741–1755 Dingemanse N, Van der Plas F, Wright J, Réale D, Schrama M, Roff DA, Van der Zee E, Barber I (2009) Individual experience and evolutionary history of predation affect expression of heritable variation in fish personality and morphology. Proc R Soc Lond B 276:1285–1293 Di-Poi C, Lacasse J, Rogers SM, Aubin-Horth N (2014) Extensive behavioural divergence following colonisation of the freshwater environment in threespine sticklebacks. PLoS One 9:e98980 Dochtermann NA, Dingemanse NJ (2013) Behavioral syndromes as evolutionary constraints. Behav Ecol 24:806–811 Dochtermann NA, Schwab T, Sih A (2014) The contribution of additive genetic variation to personality variation: Heritability of personality. Proc R Soc B 282:20142201 Edelaar P, Jovani R, Gomez-Mestre I (2017) Should I change or should I go? Phenotypic plasticity and matching habitat choice in the adaptation to environmental heterogeneity. Am Nat 190:506–520 Fahrig L (2003) Effects of habitat fragmentation on biodiversity. Annu Rev Ecol Evol Syst 34:487–515 Fischer J, Lindenmayer DB (2007) Landscape modification and habitat fragmentation: a synthesis. Glob Ecol Biogeogr 16:265–280 Foley JA, DeFries R, Asner GP et al (2005) Global consequences of land use. Science 309:570–574 Franssen NR, Harris J, Clark SR, Schaefer JF, Stewart LK (2013) Shared and unique morphological responses of stream fishes to anthropogenic habitat alteration. Proc R Soc B 280:20122715 Frommen JG, Herder F, Engqvist L, Mehlis M, Bakker TCM, Schwarzer J, Thünken T (2011) Costly plastic morphological responses to predator specific odour cues in three-spined sticklebacks (Gasterosteus aculeatus). Evol Ecol 25:641–656 Ghalambor CK, McKay JK, Carroll SP, Reznick DN (2007) Adaptive versus non-adaptive phenotypic plasticity and the potential for contemporary adaptation in new environments. Funct Ecol 21:394–407 Giesing ER, Suski CD, Warner RE, Bell AM (2011) Female sticklebacks transfer information via eggs: Effects of maternal experience with predators on offspring. Proc R Soc Lond B 278:1753–1759 Hanson D, Moore JS, Taylor EB, Barrett RDH, Hendry AP (2016) Assessing reproductive isolation using a contact zone between parapatric lake-stream stickleback ecotypes.
J Evol Biol 29:2491–2501 Harrison XA (2014) Using observation-level random effects to model overdispersion in count data in ecology and evolution. PeerJ 2:e616 Heckwolf MJ, Meyer BS, Döring T, Eizaguirre C, Reusch TBH (2018) Transgenerational plasticity and selection shape the adaptive potential of sticklebacks to salinity change. Evol Appl 11:1873–1885 Hedrick PW (2006) Genetic polymorphism in heterogeneous environments: the age of genomics. Annu Rev Ecol Evol Syst 37:67–93 Hedrick PW, Ginevan ME, Ewing EP (1976) Genetic polymorphism in heterogeneous environments. Annu Rev Ecol Syst 7:1–32 Holway DA, Suarez AV (1999) Animal behavior: an essential component of invasion biology. Trends Ecol Evol 14:328–330 Ingram T, Jiang Y, Rangel R, Bolnick DI (2015) Widespread positive but weak assortative mating by diet within stickleback populations. Ecol Evol 5:3352–3363 Junge C, Museth J, Hindar K, Kraabøl M, Vøllestad LA (2014) Assessing the consequences of habitat fragmentation for two migratory salmonid fishes. Aquat Conserv Mar Freshw Ecosyst 24:297–311 Kawecki TJ, Ebert D (2004) Conceptual issues in local adaptation. Ecol Lett 7:1225–1241 Kitano J, Lema SC, Luckenbach JA, Mori S, Kawagishi Y, Kusakabe M, Swanson P, Peichel CL (2010) Adaptive divergence in the thyroid hormone signaling pathway in the stickleback radiation. Curr Biol 20:2124–2130 Kozak GM, Boughman JW (2012) Plastic responses to parents and predators lead to divergent shoaling behaviour in sticklebacks. J Evol Biol 25:759–769 Kraabøl M, Johnsen SI, Museth J, Sandlund OT (2009) Conserving iteroparous fish stocks in regulated rivers: the need for a broader perspective! Fish Manag Ecol 16:337–340 Lacasse J, Aubin-Horth N (2014) Population-dependent conflict between individual sociability and aggressiveness. Anim Behav 87:53–57 Lackey ACR, Boughman JW (2017) Evolution of reproductive isolation in stickleback fish. Evolution 71:357–372 Lahti DC, Johnson NA, Ajie BC, Otto SP, Hendry AP, Blumstein DT, Coss RG, Donohue K, Foster SA (2009) Relaxed selection in the wild. Trends Ecol Evol 24:487–496 Legrand D, Cote J, Fronhofer EA, Holt RD, Ronce O, Schtickzelle N, Travis JMJ, Clobert J (2017) Eco-evolutionary dynamics in fragmented landscapes. Ecography 40:9–25 Lenth R (2020) Emmeans: estimated marginal means, aka least-squares means. https://CRAN.R-project.org/package=emmeans. Accessed 1 Apr 2021 Luttbeg B, Sih A (2010) Risk, resources and state-dependent adaptive behavioural syndromes. Philos Trans R Soc B 365:3977–3990 Lynch M, Walsh B (1998) Genetics and Analysis of Quantitative Traits. Sinauer Associates, Sunderland, MA Mable BK (2019) Conservation of adaptive potential and functional diversity: integrating old and new approaches. Conserv Genet 20:89–100 Maciejewski MF, Jiang C, Stuart YE, Bolnick DI (2020) Microhabitat contributes to microgeographic divergence in threespine stickleback. Evolution 74:749–763 McGhee KE, Bell AM (2014) Paternal care in a fish: Epigenetics and fitness enhancing effects on offspring anxiety. Proc R Soc B 281:20141146 McGhee KE, Pintor LM, Suhr EL, Bell AM (2012) Maternal exposure to predation risk decreases offspring antipredator behaviour and survival in threespined stickleback. Funct Ecol 26:932–940 McGhee KE, Feng S, Leasure S, Bell AM (2015) A female's past experience with predators affects male courtship and the care her offspring will receive from their father.
Proc R Soc B 282:20151840 Mommer BC, Bell AM (2014) Maternal experience with predation risk influences genome-wide embryonic gene expression in threespined sticklebacks (Gasterosteus aculeatus). PLoS One 9:e98564 Nicolaus M, Edelaar P (2018) Comparing the consequences of natural selection, adaptive phenotypic plasticity, and matching habitat choice for phenotype–environment matching, population genetic structure, and reproductive isolation in meta-populations. Ecol Evol 8:3815–3827 Nussey DH, Wilson AJ, Brommer JE (2007) The evolutionary ecology of individual phenotypic plasticity in wild populations. J Evol Biol 20:831–844 Piersma T, Drent J (2003) Phenotypic flexibility and the evolution of organismal design. Trends Ecol Evol 18:228–233 Pigliucci M (2005) Evolution of phenotypic plasticity: Where are we going now? Trends Ecol Evol 20:481–486 Ramesh A, Groothuis TGG, Weissing FJ, Nicolaus M (2021) Habitat fragmentation induces rapid divergence of migratory and isolated sticklebacks. Behav Ecol XX(XX):1–11. https://doi.org/10.1093/beheco/arab121 Ravinet M, Kume M, Ishikawa A, Kitano J (2021) Patterns of genomic divergence and introgression between Japanese stickleback species with overlapping breeding habitats. J Evol Biol 34:114–127 Réale D, Reader SM, Sol D, McDougall PT, Dingemanse NJ (2007) Integrating animal temperament within ecology and evolution. Biol Rev 82:291–318 Sanogo YO, Hankison S, Band M, Obregon A, Bell AM (2011) Brain transcriptomic response of threespine sticklebacks to cues of a predator. Brain Behav Evol 77:270–285 Scheiner SM (1993) Plasticity as a selectable trait: reply to Via. Am Nat 142:371–373 Sol D, Timmermans S, Lefebvre L (2002) Behavioural flexibility and invasion success in birds. Anim Behav 63:495–502 Stamps J, Groothuis TGG (2010) The development of animal personality: relevance, concepts and perspectives. Biol Rev 85:301–325 Stein LR, Bell AM (2014) Paternal programming in sticklebacks. Anim Behav 95:165–171 Stockwell CA, Hendry AP, Kinnison MT (2003) Contemporary evolution meets conservation biology. Trends Ecol Evol 18:94–101 Stoffel MA, Nakagawa S, Schielzeth H (2017) rptR: repeatability estimation and variance decomposition by generalized linear mixed-effects models. Methods Ecol Evol 8:1639–1644 Tuomainen U, Candolin U (2011) Behavioural responses to human-induced environmental change. Biol Rev 86:640–657 van Gestel J, Weissing FJ (2018) Is plasticity caused by single genes? Nature 555:E19–E20 Via S, Gomulkiewicz R, De Jong G, Scheiner SM, Schlichting CD, Van Tienderen PH (1995) Adaptive phenotypic plasticity: consensus and controversy. Trends Ecol Evol 10:212–217 Wagner GP, Altenberg L (1996) Perspective: complex adaptations and the evolution of evolvability. Evolution 50:967–976 Wang IJ, Bradburd GS (2014) Isolation by environment. Mol Ecol 23:5649–5662 Wark AR, Greenwood AK, Taylor EM, Yoshida K, Peichel CL (2011) Heritable differences in schooling behavior among threespine stickleback populations revealed by a novel assay. PLoS One 6:e18316 Wolf M, Weissing FJ (2010) An explanatory framework for adaptive personality differences. Philos Trans R Soc B 365:3959–3968 Wolf M, Weissing FJ (2012) Animal personalities: Consequences for ecology and evolution. Trends Ecol Evol 27:452–461 Wong BBM, Candolin U (2015) Behavioral responses to changing environments.
Behav Ecol 26:665–673 Wund MA, Baker JA, Clancy B, Golub JL, Foster SA (2008) A test of the "flexible stem" model of evolution: ancestral plasticity, genetic accommodation, and morphological divergence in the threespine stickleback radiation. Am Nat 172:449–462 We thank Dennis de Worst and Willem Diederik for help with fish care and advice on experimental design. We thank Peter Paul Schollema at the Water Authorities Hunze en Aa's and Jeroen Huisman at Van Hall Larenstein University of Applied Sciences for help with acquiring samples of sticklebacks. We thank Jakob Gismann, Jana Riederer, Raphaël Scherrer, Kevin Kort, and Albertas Janulevicius for their useful comments on the manuscript. We also thank the two reviewers, who provided excellent comments and suggestions that improved the manuscript. This work was supported by a PhD fellowship of the Adaptive Life program of the University of Groningen to AR, by funding from the European Research Council to FJW (ERC Advanced Grant No. 789240) and from the Netherlands Organization for Scientific Research to FJW and MN (NWO-ALW; ALWOP.668). This work was also supported by grants from the Gratama Foundation to AR (2020GR040), the Dr. J.L. Dobberke Foundation (KNAWWF/3391/1911), and the Waddenfonds - Ruim Baan voor Vissen 2 (01755849/WF-2019/200914). Theoretical Research in Evolutionary Life Sciences group, Groningen Institute for Evolutionary Life Sciences, University of Groningen, Nijenborgh 7, 9747 AG, Groningen, Netherlands A. Ramesh & F. J. Weissing Evolutionary Genetics, Development & Behaviour group, Groningen Institute for Evolutionary Life Sciences, University of Groningen, Nijenborgh 7, 9747 AG, Groningen, Netherlands A. Ramesh & T. G. G. Groothuis Conservation Ecology group, Groningen Institute for Evolutionary Life Sciences, University of Groningen, Nijenborgh 7, 9747 AG, Groningen, Netherlands A. Ramesh, M. M. Domingues & M. Nicolaus Department of Oceans Ecosystems, Energy and Sustainability Research Institute Groningen, University of Groningen, Nijenborgh 7, 9747 AG, Groningen, Netherlands E. J. Stamhuis Correspondence to A. Ramesh. Sampling of wild animals and handling methods were done following a fishing permit from Rijksdienst voor Ondernemend Nederland (the Netherlands) and an angling permit from the Hengelsportfederatie Groningen-Drenthe. Animal housing and behavioral tests adhered to the project permit from the Centrale Commissie Dierproeven (the Netherlands) under the license number AVD1050020174084. All methods were carried out under the applicable international, national, and institutional guidelines for the use of animals (Art. 9, Wet op de Dierproeven & European directive 2010/63/EU). All authors have given their consent for publication. The authors declare no competing interests. This article is a contribution to the Topical Collection Using behavioral ecology to explore adaptive responses to anthropogenic change – Guest Editors: Jan Lindström, Constantino Macias Garcia, Caitlin Gabor Communicated by J. Lindström. Below is the link to the electronic supplementary material.
Supplementary file 1 (DOCX 606 KB); Supplementary file 2 (XLSX 77 KB). This article is published open access under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). Ramesh, A., Domingues, M.M., Stamhuis, E.J. et al. Does genetic differentiation underlie behavioral divergence in response to migration barriers in sticklebacks? A common garden experiment. Behav Ecol Sociobiol 75, 161 (2021). https://doi.org/10.1007/s00265-021-03097-y Keywords: Gasterosteus aculeatus; Behavioral differentiation; Habitat fragmentation; Anthropogenic changes
Korea (23) The Korean Fiber Society (22) The Korean Society for Biotechnology and Bioengineering (22) The Korean Society of Clean Technology (21) The Korean Society of Engineering Geology (20) The Korean Society of Limnology (20) Korean Geosynthetics Society (18) The Korean Earth Science Society (17) The Korean Society for Marine Environment and Energy (17) The Korean Society of Wood Science & Technology (17) Korean Society of Forest Science (16) The Korean Society for Energy (16) The Korean Society of Water and Wastewater (16) The Korean Society of Animal Environmental Science & Technology (15) Korean Institute of Navigation and Port Research (14) Korea Concrete Institute (13) The Korean Institute of Chemical Engineers (13) The Korean Society of Agricultural Engineers (13) The Korean Society of Safety (13) The Plant Resources Society of Korea (13) Korea Environmental Preservation Association (12) Korea Technical Association of the Pulp and Paper Industry (12) The Korea Association of Crystal Growth (12) The Korean Society of Dyers and Finishers (12) Korea Environmental Engineers Federation (11) Society of Cosmetic Scientists of Korea (11) The Korean Society of Applied Science and Technology (11) Korea Industrial Health Association (10) Korea Medicine Herbal (10) The Korean Institute of Surface Engineering (10) Korean Industrial Hygiene Association (9) Korean Recycled Construction Resources Institute (9) The East Asian Society of Dietary Life (9) Korean Association of Organic Agriculture (8) The Pharmaceutical Society of Korea (8) Korea Cement Association (7) Korea Environment Institute (7) Korea Institute of Ocean Science & Technology (7) Korean Nuclear Society (7) Korean Society for New and Renewable Energy (7) Korean Society of Agricultural and Forest Meteorology (7) Korean Society of Ecology and Infrastructure Engineering (7) Korean Society of Road Engineers (7) The Korean Society of Mushroom Science (7) The Korean Society of Mycology (7) The Polymer Society of Korea (7) Korea Institute for Structural Maintenance Inspection (6) Korea Packaging Association INC. 
(6) Korean Society of Food and Cookery Science (6) The Korean Society for Railway (6) The Korean Society of Fish Pathology (6) The Korean Society of Grassland and Forage Science (6) Korean Journal of Environmental Agriculture (181) Economic and Environmental Geology (136) Journal of Korean Society of Environmental Engineers (132) Journal of Soil and Groundwater Environment (132) Journal of Environmental Science International (119) Proceedings of the Korean Society of Soil and Groundwater Environment Conference (112) Korean Journal of Soil Science and Fertilizer (107) Proceedings of the Korean Environmental Sciences Society Conference (107) Proceedings of the Korea Air Pollution Research Association Conference (104) Journal of the Korea Organic Resources Recycling Association (72) Journal of Food Hygiene and Safety (70) Analytical Science and Technology (67) Journal of Environmental Health Sciences (64) Proceedings of the KSEEG Conference (59) Resources Recycling (55) Journal of the Korean GEO-environmental Society (53) Journal of the Mineralogical Society of Korea (47) Korean Journal of Fisheries and Aquatic Sciences (42) Korean Journal of Environmental Biology (41) Applied Chemistry for Engineering (40) Journal of environmental and Sanitary engineering (38) Journal of Korea Soil Environment Society (37) Journal of Life Science (36) Journal of Wetlands Research (36) The Korean Journal of Ecology (31) Journal of Environmental Impact Assessment (29) Journal of Korean Society on Water Environment (29) Proceedings of the Korean Society of Fisheries Technology Conference (29) Journal of Preventive Medicine and Public Health (26) Korean Journal of Microbiology (26) Journal of the Korean Chemical Society (23) The Korean Journal of Malacology (23) The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY (23) Environmental Analysis Health and Toxicology (22) Clean Technology (21) Microbiology and Biotechnology Letters (21) Proceedings of the Zoological Society Korea Conference (21) Journal of the Korean Ceramic Society (20) Korean Journal of Ecology and Environment (20) The Journal of Engineering Geology (20) Journal of the Korean Geosynthetics Society (18) Journal of Korean Society for Atmospheric Environment (17) Journal of the Korean Geotechnical Society (17) Journal of the Korean Society for Marine Environment & Energy (17) Journal of the Korean Society of Groundwater Environment (17) Journal of the Korean Wood Science and Technology (17) KSBB Journal (17) Journal of Korean Society of Forest Science (16) Journal of Korean Society of Water and Wastewater (16) Proceedings of the Korean Geotechical Society Conference (16) Journal of Animal Environmental Science (15) Journal of the Korean earth science society (15) Proceedings of the Korea Society of Environmental Toocicology Conference (15) Textile Science and Engineering (15) Korean Chemical Engineering Research (13) Korean Journal of Food Preservation (13) Proceedings of the Korean Institute of Resources Recycling Conference (13) Proceedings of the Membrane Society of Korea Conference (13) Bulletin of Korea Environmental Preservation Association (12) Journal of the Korean Crystal Growth and Crystal Technology (12) Environmental engineer (11) Journal of the Korean Applied Science and Technology (11) Journal of the Society of Cosmetic Scientists of Korea (11) Proceedings of the Korean Society of Dyers and Finishers Conference (11) Journal of Energy Engineering (10) Membrane Journal (10) Proceedings of the Korea Technical Association of the Pulp and 
Search results for the keyword 중금속 (heavy metals):
Studies on the Electrochemical Behavior of Heavy Lanthanide Ions and the Synthesis, Characterization of Heavy Metal Chelate Complexes (II). Synthesis and Characterization of Eight-Coordinate Tungsten(IV) and Cerium(IV) Chelate Complexes
Kang, Sam Woo; Chang, Choo Wan; Suh, Moo Yul; Lee, Doo Youn; Choi, Won Jong
Analytical Science and Technology
An attempt was made to prepare two series of tetrakis eight-coordinate tungsten(IV) and cerium(IV) complexes containing the 5,7-dichloro-8-quinolinol (N: ${\pi}$-acceptor atom, O: ${\pi}$-donor atom) ligand. A tetrakis eight-coordinate tungsten(IV) complex of the 2-mercaptopyrimidine (N: ${\pi}$-acceptor atom, S: ${\pi}$-donor atom) ligand has also been prepared. In addition, a new series of mixed-ligand eight-coordinate tungsten(IV) complexes containing the bidentate ligands 5,7-dichloro-8-quinolinol and 2-mercaptopyrimidine have been prepared, isolated by TLC and characterized. The MLCT absorption bands of the $W(dcq)_4$, $W(dcq)_3(mpd)_1$, $W(dcq)_2(mpd)_2$, $W(dcq)_1(mpd)_3$ and $W(mpd)_4$ complexes appeared on the low-energy side at 710 nm, 680 nm, 625 nm, 581 nm, and 571 nm (${\varepsilon}_{max}\gtrsim 10^4$), respectively. The characteristic absorption wavelength of $Ce(dcq)_4$ appeared at 520 nm (${\varepsilon}_{max}\gtrsim 10^4$). The chemical shift values of the protons at the coordinating positions, from $^1H$-NMR, were: $W(dcq)_4$ [$H_2$: 8.9 ppm]; $W(dcq)_3(mpd)_1$ [$H_2$: 9.3, $H_6$: 9.2 ppm]; $W(dcq)_2(mpd)_2$ [$H_2$: 9.7, $H_6$: 8.95 ppm]; $W(dcq)_1(mpd)_3$ [$H_2$: 9.8, $H_6$: 9.4 ppm]; $W(mpd)_4$ [$H_6$: 8.8 ppm]; $Ce(dcq)_4$ [$H_2$: 9.3 ppm]. The inertness of the mixed-ligand eight-coordinate tungsten(IV) complexes was investigated by UV-Vis spectroscopy in dimethylsulfoxide at $90^{\circ}C$. The inertness of the $W(dcq)_n(mpd)_{4-n}$ complexes showed the following order: $W(dcq)_3(mpd)_1$ ($k_{obs.}=3.8{\times}10^{-6}$) > $W(mpd)_4$ ($k_{obs.}=6.0{\times}10^{-6}$) > $W(dcq)_4$ ($k_{obs.}=6.4{\times}10^{-6}$) > $W(dcq)_2(mpd)_2$ ($k_{obs.}=7.0{\times}10^{-6}$) > $W(dcq)_1(mpd)_3$ ($k_{obs.}=1.7{\times}10^{-5}$), corresponding to inertness for 16 days, 10 days, 9 days, 8 days, and 4 days, respectively. $W(mpd)_4$ is very inert, with $k_{obs.}=3.6{\times}10^{-6}$ (16 days) in xylene at $90^{\circ}C$ and $k_{obs.}=6.0{\times}10^{-6}$ (10 days) in DMSO at $90^{\circ}C$.
Processing of Intermediate Product (Krill Paste) Derived from Krill
LEE Eung-Ho; CHA Yong-Jun; OH Kwang-Soo; KOO Jae-Keun
Korean Journal of Fisheries and Aquatic Sciences
As part of an investigation into using the Antarctic krill, Euphausia superba, more effectively as a food source, the processing conditions, utilizations and storage stability of krill paste (an intermediate krill product) were examined, and the chemical composition of the krill paste was analyzed. Frozen raw krill was chopped, agitated with water ($25\%$ of the minced krill), and then centrifuged to separate the liquid fraction from the residue. This liquid fraction was heated at $98^{\circ}C$ for 20 min to coagulate the krill proteins, and it was filtered to separate the protein fraction. Krill paste was prepared by grinding the protein fraction and adding $0.2\%$ polyphosphate and $0.3\%$ sodium erythorbate to enhance its functional properties and quality stability. The krill paste was packed in a carton box and then stored at $-30^{\circ}C$. The chemical composition of the krill paste was as follows: moisture $78\%$, crude protein $12.9\%$, crude lipid $5.9\%$; the contents of hazardous elements (Hg 0.001 ppm, Cd 1.15 ppm, Zn 9.1 ppm, Pb 0.63 ppm and Cu 11.38 ppm) were safe for food. The amino acid composition of the krill paste showed relatively high amounts of taurine, glutamic acid, aspartic acid, leucine, lysine and arginine, which occupied $55\%$ of the total amino acids, while taurine, lysine, glycine, arginine and proline occupied $65\%$ of the total free amino acids. The fatty acid composition consisted of $32.4\%$ saturated fatty acids, $29.6\%$ monoenoic acids and $38.0\%$ polyenoic acids, and the major fatty acids of the product were eicosapentaenoic acid ($17.8\%$), oleic acid ($16.9\%$), palmitic acid ($15.3\%$), myristic acid ($8.7\%$) and docosahexaenoic acid ($8.4\%$). In a fish sausage processing test, as an example of krill paste use, Alaska pollack fish meat paste could be substituted with krill paste up to $30\%$ without any significant defect in the taste and texture of the fish sausage, and the color of the fish sausage could be maintained by the color of the krill paste. Judging from the results of chemical and microbial experiments during frozen storage, the quality of the krill paste could be preserved in good condition for 100 days at $-30^{\circ}C$.
Improvement of Certification Criteria based on Analysis of On-site Investigation of Good Agricultural Practices (GAP) for Ginseng
Yoon, Deok-Hoon; Nam, Ki-Woong; Oh, Soh-Young; Kim, Ga-Bin
Journal of Food Hygiene and Safety
Ginseng has a unique production system that is different from those used for other crops. It is subject to the Ginseng Industry Act, requires a long-term cultivation period of 4-6 years, involves complicated cultivation characteristics whereby ginseng is not produced in a single location, and many ginseng farmers engage in mixed farming. Therefore, to bring the production of ginseng in line with GAP standards, it is necessary to better understand the on-site practices of ginseng farmers according to established control points, and to provide a proper action plan for improving efficiency. Among ginseng farmers in Korea who applied for GAP certification, 77.6% obtained it, which is lower than the 94.1% of farmers who obtained certification for other products.
13.7% of the applicants were judged to be unsuitable during document review due to their use of unregistered pesticides and soil heavy metals. Another 8.7% of applicants failed to obtain certification due to inadequate management results. This is a considerably higher rate of failure than the 5.3% nonconformity in document inspection and 0.6% nonconformity in on-site inspection, which suggests that it is relatively more difficult to obtain GAP certification for ginseng farming than for other crops. Ginseng farmers were given an average of 2.65 points on the 10 essential control points among the total of 72 control points, slightly lower than the 2.81 points obtained for other crops. In particular, ginseng farmers were given an average of 1.96 points in the evaluation of compliance with the safe-use standards for pesticides, much lower than the average of 2.95 points for other crops. Therefore, it is necessary to train ginseng farmers to comply with the safe use of pesticides. On the other essential control points, ginseng farmers were rated at an average of 2.33 points, lower than the 2.58 points given for other crops. Several other areas of compliance in which ginseng farmers also rated low in comparison to other crops were found. These included record keeping for over 1 year, records of pesticide use, pesticide storage, post-harvest storage management, hand washing before and after work, hygiene related to work clothing, training of workers in safety and hygiene, and a written plan of hazard management. Also, among the total of 72 control points, there are 12 control points (10 required, 2 recommended) that do not apply to ginseng. Therefore, it is considered inappropriate to conduct an effective evaluation of the ginseng production process based on the existing certification standards. In conclusion, differentiated certification standards are needed to expand GAP certification for ginseng farmers, and it is also necessary to develop programs that can be implemented in a more systematic and field-oriented manner to provide the farmers with proper GAP management education. https://doi.org/10.13103/JFHS.2019.34.1.40
Limno-Biological Investigation of Lake Ok-Jeong
SONG Hyung-Ho
A limnological study on the physico-chemical properties and biological characteristics of Lake Ok-Jeong was made from May 1980 to August 1981. For the planktonic organisms in the lake, the species composition, seasonal change and diurnal vertical distribution, based on monthly plankton samples, were investigated in conjunction with the physico-chemical properties of the body of water. Analysis of temperature revealed that there were three distinctive periods in terms of vertical mixing of the water column. During the winter season (November-March) the water column was completely mixed, and no temperature gradient was observed. In February the temperature of the whole column, from the surface to the bottom, was $3.5^{\circ}C$, the minimum value. With seasonal warming in spring, the surface water formed thermoclines at a depth of 0-10 m from April to June. In summer (July-October) the surface mixing layer deepened to form a strong thermocline at a depth of 15-25 m. At this time the surface water reached up to $28.2^{\circ}C$ in August, accompanied by a significant increase in the temperature of the bottom layer.
The maximum bottom temperature was $15^{\circ}C$, which occurred in September, showing that this lake maintains significant turbulence through the hypolimnial layer. As autumn cooling proceeded, the summer stratification was destroyed from the end of October, resulting in vertical mixing. In the surface layer, seasonal changes of pH were within the range from 6.8 in January to 9.0 in August. The highest value, observed in August, was mainly due to the photosynthetic activity of the phytoplankton. In the surface layer, DO was always saturated throughout the year. Particularly in winter (January-April) the surface water was oversaturated (max. 15.2 ppm in March). Vertical variation of DO was not remarkable, and the bottom water was fairly well oxygenated. Transparency was closely related to the phytoplankton bloom. The highest value (4.6 m) was recorded in February, when primary production was low. During summer, transparency decreased, and the lowest value (0.9 m) was recorded in August, mainly due to the dense blooming of Anabaena spiroides var. crassa in the surface layer. The amounts of inorganic matter (Ca, Mg, Fe) reveal that Lake Ok-Jeong is classified as a soft-water lake. The amounts of Cl, $NO_3-N$ and COD in 1981 were slightly higher than those in 1980. Heavy metals (Zn, Cu, Pb, Cd and Hg) were not detectable throughout the study period. During the study period 107 species of planktonic organisms representing 72 genera were identified. They include 12 species of Cyanophyta, 19 species of Bacillariophyta, 23 species of Chlorophyta, 14 species of Protozoa, 29 species of Rotifera, 4 species of Cladocera and 6 species of Copepoda. Bimodal blooming of phytoplankton was observed. A large bloom ($1,504\times10^3\;cells/l$ in October) was observed from July to October; a small bloom was present ($236\times10^3\;cells/l$ in February) from January to April. The dominant phytoplankton species included Melosira granulata, Anabaena spiroides, Asterionella gracillima and Microcystis aeruginosa, which were classified into three seasonal groups: a summer group, a winter group and a whole-year group. The summer group includes Melosira granulata and Anabaena spiroides; the winter group includes Asterionella gracillima and Synedra acus, S. ulna; the whole-year group includes Microcystis aeruginosa and Ankistrodesmus falcatus. It is noted that M. granulata tends to aggregate in the bottom layer from January to August. The dominant zooplankters were Thermocyclops taihokuensis, Difflugia corona, Bosmina longirostris, Bosminopsis deitersi, Keratella quadrata and Asplanchna priodonta. A single peak of zooplankton growth was observed, with the maximum zooplankton occurrence in July. Diurnal vertical migration was revealed by Microcystis aeruginosa, M. incerta, Anabaena spiroides, Melosira granulata, and Bosmina longirostris. Of these, M. granulata descends to the bottom and forms aggregations after sunset. B. longirostris shows fairly typical nocturnal migration: they ascend to the surface after sunset and disperse through the whole water column during the night. Forty-one species of fish representing 31 genera were collected. Of these, 13 species including Pseudoperilampus uyekii and Coreoleuciscus splendidus were indigenous species of Korean inland waters.
The indicator species for water quality determination include Microcystis aeruginosa, Melosira granulata, Asterionella gracillima, Brachionus calyciflorus, Filinia longiseta, Conochiloides natans, Asplanchna priodonta, Difflugia corona, Eudorina elegans, Ceratium hirundinella, Bosmina longirostris, Bosminopsis deitersi, Heliodiaptomus kikuchii and Thermocyclops taihokuensis. These species are known as indicator groups commonly found in eutrophic lakes. Based on these planktonic indicators, Lake Ok-Jeong can be classified as a eutrophic lake.
Physico-Chemical Properties of Aggregate By-Products as Artificial Soil Materials
Yang, Su-Chan; Jung, Yeong-Sang; Kim, Dong-Wook; Shim, Gyu-Seop
Korean Journal of Soil Science and Fertilizer
Physical and chemical properties of aggregate by-products, including sludge and crushed dust samples collected from 21 private companies throughout the country, were analyzed to evaluate the possible use of the by-products as artificial soil materials for plantation. The pH of the materials ranged from 8.0 to 11.0. The organic matter content was $2.85g\;kg^{-1}$, and the total nitrogen content and available phosphate content were as low as 0.7 percent and $12.98mg\;kg^{-1}$, respectively. Exchangeable $Ca^{2+}$, $Mg^{2+}$, $K^+$, and $Na^+$ were 2.29, 0.47, 0.02 and $0.05cmol\;kg^{-1}$, respectively. Heavy metal contents were lower than the limits regulated by the environmental law of Korea. Textural analysis showed that most of the materials were silt loam with low water holding capacity, ranging from 0.67 to 7.41 percent, and low hydraulic conductivity, ranging from 0.4 to $2.8m\;s^{-1}$. Mineralogical analysis showed that the aggregate by-product materials were mostly composed of silicate, alumina and ferric oxides, except for calcium oxide dominant materials derived from limestones. The primary minerals were quartz, feldspars and dolomites derived from granite and granitic gneiss materials. Some samples derived from limestone material showed calcite and graphite together with the above minerals. According to these results, it can be concluded that the materials could be used as artificial soil materials for plantation after proper improvement of their physico-chemical properties and fertility.
Physical interpretation of Parseval's theorem

I have read that Parseval's theorem, relating the norm of a function $f$ and the norm of its Fourier transform $g(k)$:
\begin{equation} \int |f(x)|^2 dx=\int|g(k)|^2 dk \end{equation}
has the simple physical interpretation of "conservation of energy". I just don't see this, so can you suggest a way to think about it? (quark1245)
Tags: energy, waves, fourier-transform

Comment (Ron Maimon): It's not quite right: the conservation of energy assumes each Fourier mode is oscillating separately, so that the energy is either a sum over modes or a sum over positions, and this is a consequence of Parseval's theorem. Proving Parseval's theorem is best done using the abstract idea that the integral is the "length" of the function considered as a vector, and the length doesn't depend on your choice of orthonormal basis.

Answer (Vijay Murthy): Think of a vector $\mathbf{V}$. As seen in a coordinate system $S$ with basis vectors $\hat{e}_i$, it can be written
$$\mathbf{V} = \sum_i V_i \hat{e}_i$$
where $V_i$ are the components of $\mathbf{V}$ in $S$. As seen from another coordinate system $S'$ with basis vectors $\hat{e}_i'$, it has a representation
$$\mathbf{V} = \sum_i V_i' \hat{e}_i'.$$
Obviously the length of the vector is independent of the coordinate system used to represent it. In other words, we must have
$$\sum_i V_i^2 = \sum_i (V_i')^2.$$
Proceeding with this analogy, a function $f(x)$ has a position-space representation in the $\delta$-function basis as
$$f(x) = \int f(x') \delta(x-x') dx'$$
where the "component" of $f(x)$ along the "basis vector" $\delta(x-x')$ is $f(x')$ and we sum (integrate, since $x$ is a continuous variable) over all the possible "axes". One can look at the same function in the Fourier-space representation as
$$f(x) = \int g(k) e^{-i k x} dk$$
where $e^{-ikx}$ are the "basis vectors" and $g(k)$ are the "components" of $f(x)$ along these basis vectors. You would then agree that
$$ \int |f(x)|^2 dx = \int |g(k)|^2 dk.$$
So Parseval's theorem is just a restatement of the invariance of the length of a "vector", independent of the representation used.
If $|f(x)|^2$ is proportional to the energy, then Parseval's theorem is a statement of the conservation of energy as seen in the real-space domain or the Fourier-space domain.
If $f(x)$ is a quantum-mechanical wavefunction, $|f(x)|^2$ is proportional to the probability density. Parseval's theorem is then a statement of the conservation of probability as seen in the position-space representation or the momentum-space representation.
See also Parseval's identity.

Comment (kleingordon): The question asked for an understanding of Parseval's theorem as a statement of conservation of energy, not just a proof of the theorem.

Answer (anuragsn7): In short: the energy in the time domain is equal to the energy in the frequency domain. EDIT: Think of $f(x)$ as a signal; then Parseval's theorem says that the total energy of this signal over time equals the energy of this signal over frequencies.
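The discrete analogue of the theorem is easy to verify numerically. Below is a quick sketch of my own (not from the thread) using NumPy; note that numpy.fft.fft is unnormalized, so the $1/N$ factor appears on the frequency side, and the array f is just arbitrary test data:

import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(1024)               # arbitrary real "signal" samples
g = np.fft.fft(f)                           # unnormalized discrete Fourier transform
energy_x = np.sum(np.abs(f) ** 2)           # "energy" summed over positions
energy_k = np.sum(np.abs(g) ** 2) / f.size  # summed over modes; 1/N from the FFT convention
print(energy_x, energy_k)                   # the two agree to machine precision

This is exactly the "length of a vector in two orthonormal bases" picture from the first answer, with the DFT matrix (suitably normalized) playing the role of the change of basis.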
Why does the correlation length diverge at the critical point?

I want to ask about the behavior near the critical point. Let me take the example of a ferromagnet. At $T < T_c$, all spins are aligned in the same direction, so it is in the ordered state; it looks scale invariant to me, and its correlation length is effectively infinite. At $T > T_c$, the spins are aligned randomly, so it is the disordered state. However, in my understanding, we say the system is scale invariant and its correlation length diverges only at the critical point. What is wrong in my understanding? Furthermore, could you explain an intuitive reason why the correlation length should diverge at the critical point?
Tags: statistical-mechanics, phase-transition, ising-model, critical-phenomena, scale-invariance

Comment (Yvan Velenik): See also this related question.

Answer (Andrei): It is not the correlation length of the system that you should look at, but the correlation of the fluctuations. If T >> Tc the spins are randomly oriented and the length scale of fluctuations is very small. As you get closer to Tc, the fluctuations become more correlated, and the length scale increases toward infinity. Similarly for the ferromagnet at temperatures much less than Tc, all spins are aligned. The fluctuations at 0 < T << Tc have short correlation lengths. As you heat the system, it is still mostly ordered, but the number of spins pointing in the opposite direction increases, and so does the correlation length of these fluctuations.

Answer: I think your trouble is that a correlation length $\xi$ is not to be interpreted as a correlation in the statistical sense, e.g.
$$\frac{\langle (s(x)-\langle s(x)\rangle)(s(y)-\langle s(y)\rangle)\rangle}{\sqrt{\langle (s(x)-\langle s(x)\rangle)^2\rangle\,\langle (s(y)-\langle s(y)\rangle)^2\rangle}},$$
but is rather defined via
$$\langle (s(x)-\langle s(x)\rangle)(s(y)-\langle s(y)\rangle)\rangle = e^{-|x-y|/\xi}$$
(see for example https://physics.stackexchange.com/q/59690). Assume that, at zero temperature, all spins are "frozen" and perfectly aligned and hence perfectly correlated (in the statistical sense). However, since $s(x)=\langle s(x)\rangle$ and $s(y)=\langle s(y)\rangle$ in this case, it follows that $\xi=0$.
As far as the second part of the question is concerned: critical points are phase transitions that correspond to fixed points of the renormalization group flow. What this means is that the process of consecutively dividing the spin lattice into blocks, integrating them out and constructing a new Hamiltonian between those blocks has reached a fixed point: the form of the Hamiltonian no longer changes, only its parameters (couplings) get re-adjusted with any further block-spin operation. This in turn means that the system has lost its scale and has become scale free. So if I were to take two pictures of the material, one of size one inch and the other of size one micro-inch, you could not tell me which one is which. The only way to describe this mathematically is by assuming a power law, which yields $\xi(T) \sim (T-T_c)^{-\nu}$, where $T_c$ is the critical temperature and $\nu$ is a critical exponent that is not necessarily an integer. Hence the correlation length diverges at the critical point.
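A case where this divergence can be seen exactly (my illustration, not from the thread; units with $k_B = J = 1$): for the one-dimensional Ising model the two-point function is $\langle s_i s_{i+r}\rangle = \tanh(\beta J)^r$, so $\xi = -1/\ln\tanh(\beta J)$, and the "critical point" of the 1D model sits at $T=0$:

import numpy as np

# Exact 1D Ising correlation length: <s_i s_{i+r}> = tanh(1/T)**r,
# hence xi(T) = -1 / ln(tanh(1/T)); it blows up as T -> 0.
for T in (2.0, 1.0, 0.5, 0.25, 0.1):
    xi = -1.0 / np.log(np.tanh(1.0 / T))
    print(f"T = {T:4.2f}   xi = {xi:14.4f}")

The output grows without bound as T decreases, which is the divergence of $\xi$ at a critical point in its simplest setting (here exponential in $1/T$ rather than a power law, since the 1D transition sits at zero temperature).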
Given the "programs as proofs" isomorphism, how do we know that the program isn't lying?

I've been studying constructive type theory (CTT) and one of the things that I'm not clear on is the proof part: proving the correctness of a program in the form of a proof that is nothing but the program itself (the Curry-Howard correspondence). Most examples that I've seen in books (e.g., Type Theory and Functional Programming by Thompson, and TaPL) show the "proofs" as $\lambda$-abstractions and applications on terms (i.e., literally $a, b, e...$). The proofs seem to mostly rely on the type signatures of the functions, under the assumption that the function does what it claims. Not much is discussed about the "how" of the function's correctness. For example, when writing a real program (e.g., in Haskell and other [pure] functional languages), the function under consideration could do any arbitrary computation and return a term of the correct type for the proof to go through (statically speaking). So how do we know that the program is computationally doing the "right" thing (dynamically speaking) and not just faking it to get past the proof system? From what I understand, here's how things should go (and probably are, but I'm not sure if I'm right), crudely speaking:

1. Given a program's specification in something like predicate logic, we "convert" it into an equivalent "typed representation".
2. Using backward inference we substitute the "functions" with their appropriate values, which themselves could be other functions (i.e., we replace the functions with their computation rules, but I'm thinking more along the lines of replacing the function with its body, from a programming point of view, for the sake of argument; assuming that they're "returning" the correct type, this seems like a believable substitution).
3. We continue doing #2 above till we hit primitive operations (again, crudely speaking) which we can trivially prove (or if not, maybe the proof is "simple" enough).
4. Once we've hit all the "axioms" (or trivial proofs) along all the branches of backward inferences, we stop.

Is my understanding/intuition of how the proof of correctness in CTT works correct? It looks like it won't be possible for the program to "cheat" this, or can it? And secondly, is this what proof assistants like Coq help you prove/analyze (at a high level)? (PhD)
Tags: type-theory, correctness-proof, intuition, proof-assistants, curry-howard

Comment (Raphael): The proofs I have done with Isabelle/HOL were based on (pseudo) syntax and formal semantics of the language we expressed our algorithms in. There is no "lying" or "faking it"; the meaning of an algorithm is (and has to be) precisely defined -- then (and only then) you can prove facts.

Answer (Toxaris): Quoting the question: "proving the correctness of a program in the form of a proof that's nothing but the program itself." This is not quite how the Curry-Howard correspondence works. First one has to show that the language of choice actually corresponds to some consistent logic. Different languages correspond to different logics, and many languages correspond to inconsistent logics which are not good for proving things. Haskell doesn't correspond to a consistent logic, because in Haskell we can write nonterminating programs, which would correspond to infinite proofs. But infinite proofs are invalid, so not every Haskell program corresponds to a valid proof. Now what does it mean that a language corresponds to a logic?
It means that:
- a type in the language corresponds to a formula in the logic,
- a program in the language corresponds to a description of a proof in the logic,
- a well-typed program in the language corresponds to a description of a valid proof in the logic,
- a program of a specific type in the language corresponds to a proof of a specific formula in the logic,
- a value in the language corresponds to truth in the logic,
- evaluation of programs to values corresponds to soundness of the proving rules,
- reification of values to programs corresponds to completeness of the proving rules.

No need to understand all aspects of the correspondence at once, but it is good to keep in mind that the correspondence relates not just programs and proofs, but many (all?) aspects of the languages and the logic. For the question here, only the first four aspects are relevant. A minor issue is that only well-typed programs correspond to valid proofs. That's why type signatures of functions are so relevant to the Curry-Howard correspondence. The key issue is that a well-typed program doesn't correspond to a valid proof of its own correctness, but to a valid proof of whatever formula corresponds to the program's type. For example, let us consider the following two programs (using Haskell syntax, but assuming a version of Haskell that corresponds to a consistent logic):

f :: a -> a -> a
f x y = x

g :: a -> a -> a
g x y = y

The functions f and g are well-typed programs of the same type, so they correspond to valid proofs of the same formula. The formula is "forall propositions p, p implies that p implies p". But clearly, f and g are different programs. And of course, they correspond to different proofs of the same formula. The program f corresponds to the proof "We know that p implies p, so also p implies (whatever implies p)". And the program g corresponds to the proof "We know that p implies p, so also whatever implies (p implies p)." Usually, with programming languages, we care about the difference between programs like f and g, and with logics, we don't care about the difference between the two proofs that correspond to f and g. But when thinking about the Curry-Howard correspondence, it is important not to forget that there can be multiple different valid proofs of the same formula.

The question asks: "So how do we know that the program is computationally doing the 'right' thing (dynamically speaking) and not just faking it to get past the proof system?" We don't. We only know that the program proves the formula we have encoded in the program's type. So how can we use the Curry-Howard correspondence to prove a program correct? We have to encode a statement about program correctness into the type of the program. This requires a very expressive type system, of course, which is exactly what the languages inside tools like Coq or Agda provide.

Comment (LSpice): Shouldn't the proof corresponding to the program g be "we know that p implies p, so also (whatever implies p) implies p" (where the parentheses have been moved from surrounding "p implies p" to surrounding "whatever implies p")?

Answer (Luke Mathieson): In some sense it doesn't matter what the function does, as long as it takes the correct types and produces something of the correct type. The trick is that when you start talking about the Curry-Howard correspondence, the types are much more precise and specific than what we'd normally deal with day-to-day.
Moreover the Curry-Howard correspondence says that we can take a proof (in a variety of different logics) and produce a program by a "simple" substitution, and we can take a program (in something essentially lambda-calculus-esque) and produce a proof about the types. So it doesn't quite say that it proves what the program does, it just proves a statement about the types. So a function that takes a natural number and produces a natural number is a proof that given a natural number, we can produce a natural number. It gives us nothing in particular (without extra work) about what number it actually produces. When you start using more precise types, then things start to get interesting, and you get something like the process you outline, and, give or take particulars, how things like Coq work. To give an example from Coq, we can define a function:

Definition zero (n : nat) : nat := 0.

So all we get is zero, but if we look at the type of zero, we get nat -> nat. Not particularly interesting (but correct, which is what the type checker gives us). We can however prove something about it:

Lemma zero_always : forall n, zero n = 0.
intros.
destruct n.
reflexivity.
reflexivity.
Qed.

This is Coq's syntax for proving things, which gets translated into a program:

zero_always = fun n : nat =>
  match n as n0 return (zero n0 = 0) with
  | 0 => eq_refl
  | S _ => eq_refl
  end

So the proof is a function that destructs the nat type, and applies some other thing which says that things that look identical are the same. Now the type of zero_always is

forall n : nat, zero n = 0

which is a function from nats (which we will give the label n) to things of type zero n = 0. So you can see that the types we need to say something useful about a program are a bit unusual compared to the run-of-the-mill datatypes (but not really different in this context). Going back a step, the eq_refl is actually a constructor of a type eq. eq has only the one constructor, which is eq_refl : x = x where x : A, A : Type, and hence eq : A -> Prop. Unravelling that, equality is a function that takes a single element of some type (for every type A) and produces a Prop (which is a proposition type, really just a Type, but Coq uses it for convenience for other reasons) that says that element is equal to itself. So our zero_always proof applies a function that just applies another function, and because the very carefully chosen types match, we're happy (as long as we trust the type checker ;) ).
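The same gap shows up in everyday typed languages. Here is a small sketch of my own (Python with type hints; the function names first and second are illustrative, not from the answers above): both functions satisfy the same signature, so a conventional checker such as mypy accepts both equally, and nothing at the type level distinguishes their behavior, which is precisely the point the answers make about f and g.

def first(x: int, y: int) -> int:
    # Analogous to f above: returns its first argument.
    return x

def second(x: int, y: int) -> int:
    # Same type, different program: returns its second argument.
    return y

Only a richer type, e.g. "returns a value equal to its first argument", could tell them apart, and expressing such types is exactly what dependently typed systems like Coq or Agda add.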
Using Diagrams to Build and Extend Student Understanding
By Jenna Laib and Kristin Gray

Take a moment to think about the value of each expression below.

$\frac{1}{4}\times \frac{1}{3}$
$\frac{1}{4}\times \frac{2}{3}$
$\frac{2}{4}\times \frac{2}{3}$
$\frac{3}{4}\times \frac{2}{3}$

What do you notice? How would you explain the things you notice?

If you are like us, or the students in Ms. Stark's grade 5 classroom, you may have noticed many things: things such as each expression having the same denominator, or the way in which the values increase as the problems progress. When students notice these things, we often ask, "Why is that happening?", but it can be challenging to explain why beyond the procedure one followed. But fifth graders are brilliant.

Grade 5 teacher Francesca Stark wrote an enthusiastic email after doing this number talk from IM K-5 Math with her students. "We were interrupted by the fire drill," she wrote. "But students were looking at the patterns of the products: $\frac{1}{12}$, $\frac{2}{12}$, $\frac{4}{12}$, $\frac{6}{12}$." They wondered why an increase of $\frac{1}{4}$ in the first factor of the last three problems made the product go up by $\frac{2}{12}$, but an increase of $\frac{1}{3}$ in the second factor of the first and second problems only made the product increase by $\frac{1}{12}$. She was thinking about ways to support students in discovering why this was happening.

This Number Talk comes after a series of lessons focusing on conceptual experiences with multiplying fractions. In prior lessons, students found a fraction of a fraction, for example: "there was $\frac{1}{4}$ of a pan of cornbread remaining, and Priya ate $\frac{1}{5}$ of it," and used diagrams to solve and represent their thinking. Students developed their understanding as they noticed that the denominators multiply to give the number of small rectangles in a unit square and the numerators multiply to give the number of shaded pieces. These ideas were all built with a consistent representation: the area diagram. So when the fifth graders wanted to explore the why behind what they observed in the products of the number talk, they had a common visual language to use.

Students started by representing the first problem in the string, $\frac{1}{4}\times \frac{1}{3}$. Every student adeptly represented the product with a rectangle partitioned into fourths and thirds, shading in a single small rectangle ($\frac{1}{12}$ of the whole) to represent the product. Next, students were asked to represent the other three problems. Some students decided to show each problem on a separate diagram, which allowed us to look for the differences between the four representations. Other students chose to show all four problems on a single diagram, using labels or colors to indicate the change from problem to problem.

This student decided to show her work on a single diagram. She modeled the first problem, $\frac{1}{4}\times \frac{1}{3}=\frac{1}{12}$, in green, and added on the additional $\frac{1}{12}$ piece in blue to show how the diagram would need to change to show the second problem, $\frac{1}{4}\times \frac{2}{3}$. "Then, to change it to show $\frac{2}{4}\times \frac{2}{3}$, you're only adding a fourth, but now each fourth has two pieces colored in, not one." When pressed to explain further, she added, "I guess because it went up to $\frac{2}{3}$, so it's 2 pieces out of 3 colored in every time."

"For thirds, I shade another column. For fourths, I shade another row," this student explained. "After the second problem, there's two in a row, so that's $\frac{2}{12}$."
"I did the same thing," another student said. "I kept adding rows with two across. If there were only 1 across it would be adding $\frac{1}{12}$ , but there's 2 for the $\frac{2}{3}$ , so it's $\frac{2}{12}$ ." The IM K–5 Math curriculum purposefully makes use of mathematical representations as a way for students to develop an understanding of concepts and procedures. In this particular case, the area diagram supported students in their reasoning and communication about how the change in factors impacted the products as they multiplied fractions. Students took ownership of this mathematical representation to push their thinking in new directions. All of these students are still wrestling with the underlying mathematics, and figuring out how to communicate ideas clearly. The diagram supports this work. It gives them a starting place, and allows them to leverage the diagram as a way to think about the mathematics, in addition to a way to do the computation. Choose one of the Number Talks to try in your classroom. When you are finished, record the patterns students notice in the problems and responses. Ask them why those patterns are happening and capture the ways in which they support their reasoning. What representations do they use and how do they show their brilliance? Jenna Laib Jenna Laib is currently a math specialist at the Driscoll School (K–8) in Brookline, MA. Students inspire her. She also enjoys working with pre-service and in-service teachers to develop content knowledge, pedagogical strategies, and infinite curiosity for mathematical thinking and learning. She blogs www.jennalaib.wordpress.com as a way to process her experiences in the classroom. Jenna is the 2016 recipient of the Harry S. Levitan Prize for Educational Leadership from Brandeis University. Twitter: @jennalaib Kristin Gray Elementary Curriculum Lead at Illustrative Mathematics | Website Kristin Gray is a National Board Certified 21-year veteran teacher of grades 5, 7, and 8, is currently the Elementary Curriculum lead at Illustrative Mathematics and writer of IM professional learning content. She has served as a curriculum writer on the IM 6–8 Math curriculum and Teaching Channel Laureate. Kristin has developed and facilitated mathematics professional learning at district, state, and national levels and presents annually at both the NCSM and NCTM conference. As a teacher, colleague, presenter, and learner, Kristin continuously shares the value of curiosity around student thinking in her planning and instruction. To reflect on her experiences, she blogs and connects with educators on Twitter, @MathMinds. Kristin has a B.S. in elementary education with a concentration in mathematics from the University of Delaware, a M.Ed. in applied technology in education from Wilmington University and is the 2014 Presidential Awardee for Excellence in Mathematics and Science Teaching.
XQCD 2019: The 17th International Conference on QCD in Extreme Conditions
24 Jun 2019, 08:15 to 26 Jun 2019, 17:45 (Japan)
Rooms 122 and 134, Bunkyo School Building, Tokyo Campus, University of Tsukuba, 3-29-1 Otsuka, Bunkyo-ku, Tokyo 112-0012, Japan

The 17th International Conference on QCD in Extreme Conditions (XQCD 2019) will be held in Tokyo from 24 to 26 June 2019. XQCD is a series of international workshop-style conferences, held annually, which aims to cover recent advances in the theory and phenomenology of QCD under extreme conditions of temperature and/or baryon density, together with related topics:
- QCD at finite temperature and density
- QCD in external fields
- Phase diagram of strongly interacting matter
- Properties of Quark-Gluon Plasma
- Heavy ion collision phenomenology
- Neutron stars
More information is available on the conference homepage.

Hosted by the Center for Computational Sciences, University of Tsukuba, and the Tomonaga Center for the History of the Universe, University of Tsukuba.

Participants: Adam Bzdak, Aiichi Iwazaki, Akihiro Shibata, Akio Tomiya, Aleksandr Nikolaev, Alexander Lehmann, Alexander Maximilian Eller, Alexander Rothkopf, Andre Veiga Giannini, Andrei Alexandru, Anh Quang Pham, Arata Yamamoto, Atsushi Baba, Atsushi Nakamura, Chiho Nonaka, Chinatsu Watanabe, Daiki Suenaga, Di-Lun Yang, Enrico Rinaldi, Etsuko Itou, Francesco Di Renzo, Frithjof Karsch, Gergely Fejos, Goro Ishiki, Hayato Aoi, Hidefumi Matsuda, Hideo Matsufuru, Hidetoshi Taya, Hiroki Hoshina, Hiromasa Watanabe, Hiroshi Ohno, Hirotsugu Fujii, Jens Oluf Andersen, Jinfeng Liao, Jun Nishimura, Kazunori Itakura, Kazuya Mameda, Kazuyuki Kanaya, Kei Suzuki, Kei-Ichi Kondo, Kentaro Nishimura, Kevin Zambello, Koichi Hattori, Koichi Murase, Manuel Scherzer, Masakiyo Kitazawa, Masaru Hongo, Masayasu Hasegawa, Matias Säppi, Muneto Nitta, Noriyuki Sogabe, Olaf Kaczmarek, Oliver Bär, P. Thomas Jahn, Ryo Sakai, Sebastian Schmalzbauer, Semeon Valgushev, Shanjin Wu, Shigehiro Yasui, Shinji Ejiri, Shogo Nishino, Shoichi Sasaki, Shoichiro Tsutsui, Shota Imaki, Simran Singh, Takahiro Miura, Takehiro Azuma, Takeru Yokota, Teiji Kunihiro, Tetsufumi Hirano, Ting-Wai Chiu, Tomoya Hayata, Toru Kojo, Toshihiro Nonaka, Tyler Gorda, Urs Wenger, Yasushi Nara, Yi Yin, Yoshimasa Hidaka, Yoshinobu Kuramashi, Yoshio Kikukawa, Yui Hayashi, Yuji Hirono, Yusuke Namekawa, Yusuke Taniguchi, Yuto Mori, Zebin Qiu, Zhandos Moldabekov

Opening (134)

Session 1 (134). Convener: Prof. Frithjof Karsch (Universitaet Bielefeld)

Current status and perspectives of complex Langevin calculations in finite density QCD (45m)
Monte Carlo studies of finite density QCD are difficult due to the notorious sign problem. As a promising approach that can avoid this problem, the complex Langevin method has been attracting attention. In particular, a practical criterion for correct convergence has been proposed, and it is found to be satisfied in certain parameter regions of finite density QCD. In this talk I will summarize our results obtained so far and discuss what one can do using this method.
Speaker: Prof. Jun Nishimura (KEK)

Complex Langevin applied to chiral random matrix model in T-mu plane (25m)
We examine the $T-\mu$ phase diagram of the chiral random matrix model in the $T-\mu$ plane by checking the correctness condition, i.e., the tail behavior of the ensemble distribution, while varying the matrix size and other model parameters.
Speaker: Dr Hirotsugu Fujii (U Tokyo)

Complex Langevin Simulations: Reliability and applications to full QCD at non-zero density (25m)
M. Scherzer, E. Seiler, D. Sexty and I.-O. Stamatescu
The complex Langevin equation is a well-defined method providing a general instrument for ab initio, approximation-free studies of realistic lattice models even for complex actions. The latter include full QCD at finite density, and the CLE appears to be the only method presently applied in this context. The complexification of the variable space required by a complex action introduces, however, special conditions to be satisfied in order to ensure correct convergence. Analyzing these conditions led to the development of procedures and criteria which allow one to control the simulations and define physical reliability regions. We here develop one essential condition, directly related to the correctness proof, into a general criterion applicable on-line also to QCD, and discuss its relation to other criteria. We also present full QCD CLE results for the transition from the confined to the deconfined phase for $0 \le \mu / T_c( \mu=0) \le 5$ and for observables vs $\mu$ at high temperature.
Speaker: Mr Manuel Scherzer (Institute for theoretical Physics, Heidelberg University)
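To make the method concrete for readers new to it, here is a minimal one-variable sketch of my own (a hypothetical toy, not the QCD or random-matrix setups of these talks): for the complex "action" $S(z) = \sigma z^2/2$ with $\mathrm{Re}\,\sigma > 0$, the complexified Langevin evolution with real noise reproduces the exact result $\langle z^2 \rangle = 1/\sigma$.

import numpy as np

# Toy complex Langevin: S(z) = 0.5*sigma*z^2, sigma complex with Re(sigma) > 0.
# Discretized Langevin step with real Gaussian noise:
#   z -> z - sigma*z*dt + sqrt(2*dt)*eta,  eta ~ N(0, 1)
# Exact "path-integral" expectation: <z^2> = 1/sigma.
rng = np.random.default_rng(42)
sigma = 1.0 + 1.0j
dt, ntherm, nsteps = 0.01, 10_000, 500_000
z = 0.0 + 0.0j
acc = 0.0 + 0.0j
for n in range(ntherm + nsteps):
    z = z - sigma * z * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
    if n >= ntherm:
        acc += z * z
print("CL estimate:", acc / nsteps, "  exact:", 1.0 / sigma)  # approx 0.5 - 0.5i

In this Gaussian case the drift is linear and the correctness criteria discussed in the talks are satisfied; the hard questions arise when the drift develops singularities, as it does for the finite-density Dirac operator.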
Coffee break (30m, 122)

Convener: Prof. Muneto Nitta (Keio University)

The interpolation approach to dense QCD and neutron-star phenomenology (45m)
Neutron stars (NSs) contain the densest observable matter in the universe. Within their cores lies QCD matter compressed to multiple times the density of common nuclei. Unfortunately, this matter is too dense to be studied by first-principles nuclear-physics calculations, and not dense enough to be studied using first-principles perturbative-QCD calculations. In this talk, I will detail a model-independent approach to bridge this unknown region of the equation of state (EOS) of NS matter. By using interpolating functions to parametrize our ignorance of the EOS between the extremes of nuclear and quark matter, and by demanding that a few robust astrophysical constraints hold for each interpolated EOS, we are able to place bounds on the allowed region in pressure and energy density where the EOS of NS matter must lie. Furthermore, we are also beginning to be able to draw conclusions about the physical properties of this matter, and to address such questions as whether NSs are dense enough to contain quark matter in their cores.
Speaker: Dr Tyler Gorda (University of Virginia)

Neutron star equations of state and quark-hadron continuity (25m)
The properties of dense QCD matter are delineated through the construction of equations of state which should be consistent with QCD calculations in the low and high density limits, nuclear laboratory experiments, and the neutron star observations. These constraints, together with the causality condition on the sound velocity, are used to develop the picture of hadron-quark continuity in which hadronic matter continuously transforms into quark matter (modulo small 1st-order phase transitions). The resultant unified equation of state at zero temperature and $\beta$-equilibrium, which we call Quark-Hadron-Crossover (QHC18 and QHC19), is consistent with the measured properties of neutron stars and in addition gives us microscopic insights into the properties of dense QCD matter.
Speaker: Prof. Toru Kojo (Central China Normal University)
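For orientation (a standard textbook relation, not part of the abstracts): the mass-radius relations discussed in this session follow from integrating the Tolman-Oppenheimer-Volkoff equations for a chosen EOS $p(\varepsilon)$, in units $G = c = 1$:
$$\frac{dp}{dr} = -\frac{(\varepsilon + p)\,(m + 4\pi r^3 p)}{r\,(r - 2m)}, \qquad \frac{dm}{dr} = 4\pi r^2 \varepsilon,$$
integrated from the center out to the radius $R$ where $p(R) = 0$, with total mass $M = m(R)$; scanning over central pressures traces out the MR curve for that EOS.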
Rotating neutron star in strong magnetic fields and the MR relations (25m)
Neutron stars are highly magnetized rotating compact stars. In 2010, a neutron star named PSR J1614-2230 was found to have a mass of about twice the solar mass ($1.97\pm0.04\,M_{\odot}$). In 2013, a neutron star named PSR J0348+0432 with a mass of $2.01\pm0.04\,M_{\odot}$ was observed. Such massive neutron stars give strong constraints on the equation of state (EoS) of neutron star matter. In this study, we calculate the radius of a neutron star using a perturbative prescription for various EoSs, and obtain relations between the radius and the total mass as increased by magnetic fields. We also calculate the radius of the neutron star as a function of its total mass as increased by rotation. As for the EoS, we use RMF theory. Moreover, we calculate mass-radius (MR) relations with both rotation and magnetic fields using 5 hadronic EoSs. We find that the MR relations with both rotation and magnetic fields reach a total mass of over twice the solar mass for all 5 hadronic EoSs.
Speaker: Ms Chinatsu Watanabe (Saitama University)

Lunch break (2h)
IAC Meeting (1h, 134)

Convener: Prof. Toru Kojo (Central China Normal University)

The Kondo effect in dense QCD (25m)
We discuss the Kondo effect occurring in dense QCD [1,2]. Based on a renormalization-group analysis, we show that effective coupling strengths between ungapped and gapped quarks in the two-flavor color superconducting (2SC) phase are renormalized by logarithmic quantum corrections, which drives the system into a strongly coupled regime [2]. This is a characteristic behavior observed in the Kondo effect, which has been known to occur in the presence of impurity scatterings via non-Abelian interactions. We propose a novel Kondo effect emerging without doped impurities, but with the gapped quasiexcitations and the residual SU(2) color subgroup intrinsic to the 2SC phase, which we call the 2SC Kondo effect [2]. The Kondo effect is a consequence of the dimensional reduction near the Fermi surface, and an analogous dimensional reduction in a strong magnetic field is also known to induce similar phenomena [3,4].
[1] Hattori, Itakura, Ozaki, Yasui, "QCD Kondo effect: quark matter with heavy-flavor impurities," Phys. Rev. D 92 (2015) 065003.
[2] Hattori, Huang, Pisarski, "Emergent QCD Kondo effect in two-flavor color superconducting phase," arXiv:1903.10953 [hep-ph].
[3] Gusynin, Miransky, Shovkovy, "Dimensional reduction and dynamical chiral symmetry breaking by a magnetic field in (3+1)-dimensions," Phys. Lett. B 349 (1995) 477-483.
[4] Ozaki, Itakura, Kuramoto, "Magnetically induced QCD Kondo effect," Phys. Rev. D 94 (2016) 074013.
Speaker: Dr Koichi Hattori (Yukawa Institute, Kyoto University)

Non-Abelian vortices in dense QCD: quark-hadron continuity and non-Abelian statistics (25m)
Quark-hadron continuity was proposed as a crossover between hadronic matter and quark matter without a phase transition, based on the matching of the symmetry and excitations in both phases. In the limit of a light strange-quark mass, it connects hyperon matter and the color-flavor-locked phase exhibiting color superconductivity. Here, we argue that three hadronic superfluid vortices must combine with three non-Abelian vortices with different colors, with the total color magnetic fluxes canceled out, through a junction called a colorful boojum. We prove this based on the Aharonov-Bohm phases of quarks around vortices. We then discuss non-Abelian statistics of non-Abelian vortices based on the Bogoliubov-de Gennes equation and a possible application to the above continuity.
Speaker: Prof. Muneto Nitta (Keio University)
Non-Abelian vortices in dense QCD: quark-hadron continuity and non-Abelian statistics 25m Quark-hadron continuity was proposed as a crossover between hadronic matter and quark matter without a phase transition, based on the matching of the symmetry and excitations in both phases. In the limit of a light strange-quark mass, it connects hyperon matter and the color-flavor-locked phase exhibiting color superconductivity. Here, we argue that three hadronic superfluid vortices must combine with three non-Abelian vortices of different colors, with the total color magnetic fluxes canceled out through a junction called a colorful boojum. We prove this based on the Aharonov-Bohm phases of quarks around vortices. We then discuss the non-Abelian statistics of non-Abelian vortices based on the Bogoliubov-de Gennes equation and a possible application to the above continuity. Speaker: Prof. Muneto Nitta (Keio University) Quark-hadron continuity beyond Ginzburg-Landau paradigm 25m Quark-hadron continuity [1] is a scenario in which hadronic matter is continuously connected to a color superconductor, without phase transitions, as the baryon chemical potential increases. This scenario is based on Landau's classification of phases, since the two phases have the same symmetry breaking pattern. We address the question of whether this continuity holds for quantum phases of matter, which requires a treatment beyond the Ginzburg-Landau description [2,3]. To examine the topological nature of the color superconductor, we derive a dual effective theory for the U(1) Nambu-Goldstone (NG) bosons and vortices of the color-flavor-locked phase, and discuss the fate of emergent higher-form symmetries. The theory has the form of a topological BF theory coupled to NG bosons, and fractional statistics of test quarks and vortices arises as a result of an emergent Z3 two-form symmetry. We find that this symmetry is not spontaneously broken, indicating that quark-hadron continuity is still a consistent scenario. [1] T. Schafer and F. Wilczek, Phys.Rev.Lett. 82 (1999) 3956-3959. [2] Y. Hirono, Y. Tanizaki, Phys. Rev. Lett., in press [arXiv:1811.10608] [3] Y. Hirono, Y. Tanizaki, [arXiv:1904.08570] Speaker: Dr Yuji Hirono (Asia Pacific Center for Theoretical Physics) Convener: Prof. Kei-Ichi Kondo (Chiba University) Partial deconfinement 45m We discuss a "partially deconfined phase" in SU(N) gauge theories. This phase lies in between the confined and deconfined phases and is defined such that an SU(M) subgroup of SU(N) (M < N) is deconfined while the rest of the degrees of freedom are confined. We investigate some examples and find that in all of them the transition from the partially deconfined phase to the completely deconfined phase has the same structure as the Gross-Witten-Wadia transition. We also discuss an interesting relation between partial deconfinement and black holes in string theory. When the partially deconfined phase is unstable, it corresponds to the phase with a small Schwarzschild black hole in string theory through the gauge/gravity duality. Speaker: Prof. Goro Ishiki (University of Tsukuba) Poster session 122 Anomalous Casimir effect in axion electrodynamics 20m The Casimir effect is relevant for QCD physics in many contexts, such as a possible origin of the dark energy, an extra pressure in the hadron bag model, etc. In this talk we delve into the Casimir effect in (3+1)-dimensional Maxwell-Chern-Simons (MCS) theory, aka axion electrodynamics. It is known that two bodies with reflection symmetry always have an attractive Casimir force, but this ``no-go theorem'' has been challenged recently. We demonstrate that a spatially inhomogeneous topological $\theta$ angle induces a repulsive Casimir force. This is a detectable effect in topological insulators, for which axion electrodynamics is the effective theory. Speaker: Mr Zebin Qiu (University of Tokyo) Applicability of the complex Langevin method to QCD at finite density 20m The complex Langevin method (CLM) is a promising approach to overcome the sign problem. Here we examine its applicability to QCD at finite density on a $24^3 \times 12$ lattice with four-flavor staggered fermions around the deconfinement phase transition line in the $(T-\mu)$-plane. While the CLM actually works at quite large values of $\mu$, it fails in the confined phase, which appears at lower $\mu$.
This is due to the singular-drift problem, as one can understand through a generalized version of the Banks-Casher relation, which relates the Dirac zero modes to the chiral condensate. This problem was avoided in our previous work in the confined phase because of the chosen small spatial volume. Speaker: Dr Shoichiro Tsutsui (RIKEN)
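As a minimal illustration of the complex Langevin update underlying both CLE talks above, consider a single variable with complex action S(x) = sigma x^2/2, whose exact expectation value <x^2> = 1/sigma is known. The variable is complexified, the drift -dS/dz is followed, and real Gaussian noise is added. This sketch contains none of the gauge-field machinery or singular-drift diagnostics of the talks; sigma and the step sizes are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(1)
sigma = 1.0 + 1.0j        # complex action S(x) = sigma x^2 / 2
dt, nsteps = 5e-3, 4000   # Langevin step size and number of updates

z = np.zeros(50_000, dtype=complex)   # ensemble of complexified walkers
for _ in range(nsteps):
    # drift term -dS/dz = -sigma z, with real Gaussian noise of variance 2 dt
    z += -sigma * z * dt + np.sqrt(2.0 * dt) * rng.standard_normal(z.size)

print("CL estimate <x^2> =", np.mean(z**2))
print("exact     1/sigma =", 1.0 / sigma)

For this Gaussian toy with Re(sigma) > 0, the complexified process converges to the correct complex result; the failure mode discussed in the abstract arises when the drift develops singularities, there connected to near-zero modes of the Dirac operator.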
Bulk quantities in nuclear collisions from Color Glass Condensate and hybrid hydrodynamic simulations 20m Starting from a running-coupling improved $k_T$-factorized formula of the Color Glass Condensate (CGC) framework, we calculate bulk observables in several heavy-ion collision systems. This is done in two ways: first the particle distribution is calculated directly as implied by the CGC model, and then it is compared to the case where it is instead used as the initial condition for a hybrid hydrodynamic simulation. In this way, it is possible to assess the effects of hydrodynamic and hadronic evolution by quantifying how much they change the results from a pure initial-state approach and, therefore, to what extent initial condition models can be directly compared to experimental data. We find that although entropy production in subsequent hydrodynamic evolution can increase multiplicity by as much as 50%, the centrality, energy, and system size dependence of charged hadron multiplicity is only affected at the $\sim$5% level (disregarding a single overall - energy and system size independent - normalization) when compared to the pure initial-state case. The parameter-free prediction for these dependencies then gives reasonable agreement with experimental data whether or not hydrodynamic evolution is included. On the other hand, our results are not compatible with the hypothesis that hydrodynamic evolution is present in large systems, but not in small systems like p-Pb. Moreover, we find that hydrodynamic evolution significantly changes the distribution of momentum, so that observables such as the mean transverse momentum are very different from the initial particle production, and much closer to measured data. Lastly, we point out that the onset of a hydrodynamic phase in heavy-ion collisions, along with viscous effects, could perhaps be further investigated by studying the centrality dependence of the ratio of the mean $p_T$ across different collision systems with similar collision energy. Speaker: Dr Andre Veiga Giannini (Akita International University and University of Sao Paulo) Catalytic effects of QCD monopoles on the phase transitions 20m The existence of monopoles has been theoretically predicted since P. A. M. Dirac introduced the magnetic monopole into quantum mechanics. Moreover, a large number of experiments to observe monopoles have been conducted. However, monopoles have not been detected yet. The purpose of this research is to find a clue for observing, by experiment, QCD monopoles, which are closely related to color confinement. To find the evidence, we add the classical fields of a monopole and an anti-monopole to the QCD vacuum and calculate, from that vacuum, the Dirac operator of the overlap fermion, which preserves exact chiral symmetry in lattice gauge theory. We then estimate the catalytic effects of the additional monopole and anti-monopole on observables by numerical calculations. In our study at low temperature, we have shown that, as the magnetic charges of the additional monopole and anti-monopole are varied, the value of the chiral condensate (defined with a minus sign) decreases, the pion decay constant increases, and the masses of the light quarks and the mesons become heavy. Finally, we have found that the decay width of the pion becomes wider and its lifetime becomes shorter than the experimental values. These are the catalytic effects of monopoles in QCD (arXiv:1807.04808). In this research, we add the monopole and anti-monopole to finite-temperature configurations and investigate the catalytic effects of QCD monopoles on the quark confinement-deconfinement phase transition and on chiral symmetry breaking and its restoration. We find that the additional monopole and anti-monopole increase the temperature of the quark confinement-deconfinement phase transition and, moreover, that the restoration of chiral symmetry does not occur, as the magnetic charges of the additional monopole and anti-monopole are varied. In this talk, we present our preliminary results on the catalytic effects of QCD monopoles at finite temperature. Speaker: Dr Masayasu Hasegawa (Joint Institute for Nuclear Research) Center symmetry and the sign problem of finite density lattice gauge theory 20m We study the phase transition of quantum chromodynamics (QCD) at finite temperature and density by focusing on the probability distribution function of the quark density. The phase transition of QCD is expected to change its properties as the density changes, and the probability distribution function gives important information for understanding the nature of the phase transition. The numerical simulation of QCD at high density suffers from the serious sign problem. In this study, we consider the center symmetry, which is important for understanding the phase transition of lattice gauge theory, and propose a method to avoid the sign problem using this symmetry. In this way, we aim to establish a method to calculate probability distribution functions of physical quantities, such as the quark density, by numerical simulation. Speaker: Dr Shinji Ejiri (Niigata University) Chiral kinetic theory in curved spacetime 20m Many-body systems with chiral fermions exhibit anomalous transport phenomena originating from quantum anomalies. Based on quantum field theory, we derive the kinetic theory for chiral fermions interacting with an external electromagnetic field and a background curved geometry. The resultant framework respects covariance under U(1) gauge, local Lorentz, and diffeomorphic transformations. It is particularly useful for studying gravitational or non-inertial effects in chiral systems. As the first application, we study the chiral dynamics in a rotating frame and clarify the roles of the Coriolis force and spin-vorticity coupling in generating the chiral vortical effect (CVE). We also show that the CVE is an intrinsic phenomenon of a rotating chiral fluid, and thus independent of the observer's frame. Speaker: Dr Kazuya Mameda (RIKEN) Chiral soliton lattice in dense matter under rotation 20m We study anomaly-induced effects on dense QCD matter under rotation. We show that the chiral perturbation theory under rotation has a topological term that accounts for the chiral vortical effect.
We find that, due to the presence of this new term, the ground state of QCD under rotation is the chiral soliton lattice (CSL) of the neutral pion or the η' meson. This state is a periodic array of topological solitons which spontaneously breaks parity and continuous translational symmetries. In particular, the CSL of the η' meson is energetically more favorable than the QCD vacuum and than that of the neutral pion when the baryon chemical potential is much larger than the isospin chemical potential. Speaker: Mr Kentaro Nishimura (Keio University) Complex poles and spectral function of the Landau gauge gluon propagator: effects of quark flavors 20m The analytic structures of propagators carry kinematic information and are important for understanding color confinement; in particular, the existence of complex poles is a signal of confinement for the corresponding particle. We derive general relationships between the number of complex poles of a propagator and the spectral function, under some assumptions on the asymptotic behavior of the propagator. We apply this relation to the massive Yang-Mills model, an effective model of the Landau gauge Yang-Mills theory, to show that the gluon propagator in this model has two complex poles. We consider the flavor effects on the analytic structure of the gluon propagator in the massive Yang-Mills model with quarks, and also discuss effects of finite temperature and chemical potential, towards understanding the QCD phases in relation to the analytic structures of the propagators in Landau gauge QCD. Speaker: Ms Yui Hayashi (Chiba University) Confinement/deconfinement phase transition for quarks in the higher representation in view of dual superconductivity 20m The dual superconductor picture is one of the most promising scenarios for quark confinement. We have proposed a new formulation of Yang-Mills theory on the lattice in which the so-called restricted field, obtained from the gauge-covariant decomposition, plays the dominant role in quark confinement. This framework improves the Abelian projection in a gauge-independent manner. For quarks in the fundamental representation, we have demonstrated some numerical evidence for the dual superconductivity. However, it is known that the expected behavior of the Wilson loop in higher representations cannot be reproduced if the restricted part of the Wilson loop is extracted by adopting the Abelian projection or the field decomposition naively, in the same way as in the fundamental representation. Recently, by virtue of the non-Abelian Stokes theorem for the Wilson loop operator, we have proposed suitable operators, constructed from the restricted field in the fundamental representation alone, which reproduce the correct behavior of the original Wilson loop in higher representations. We have further demonstrated by numerical simulation that the proposed operators reproduce well the expected behavior of the original Wilson loop average, which overcomes the problem that occurs when naively applying the Abelian projection to the Wilson loop operator in higher representations. In this talk, we focus on the confinement and deconfinement phase transition for quarks in higher representations at finite temperature in view of the dual superconductivity.
By using our new formulation of lattice Yang-Mills theory and numerical simulations on the lattice, we extract the dominant mode for confinement by decomposing the Yang-Mills field, and we investigate the Polyakov loop average and the static quark potential for both the Yang-Mills field and the decomposed restricted field, in both the confinement and deconfinement phases at finite temperature. Speaker: Dr Akihiro Shibata (Computing Research Center, KEK) Dyon in pure SU(2) Yang-Mills theory with a gauge-invariant mass toward confinement-deconfinement phase transition 20m The KvBLL instantons (calorons) are extensively used to understand the confinement-deconfinement phase transition in Yang-Mills theory at finite temperature. The KvBLL instanton is a topological soliton solution of the self-dual equation of SU(2) Yang-Mills theory on $S^1\times R^3$ space with instanton charge, which consists of BPS dyons carrying both electric and magnetic charges, with non-trivial holonomy at spatial infinity. Recently, we have found a novel dyon solution as a non-BPS solution of the (non-self-dual) field equations of a gauge-scalar model with a radially fixed scalar field in the adjoint representation. This dyon solution of the gauge-scalar model is identified with a topological field configuration of the Yang-Mills theory with a gauge-invariant gluon mass term, without a scalar field, which is regarded as the low-energy effective model of the Yang-Mills theory with a mass gap. This follows from the gauge-independent Higgs mechanism, which does not rely on the spontaneous breaking of gauge symmetry. Our dyon has a non-vanishing asymptotic value corresponding to the non-trivial holonomy at spatial infinity, making it comparable with the KvBLL dyon. Thus we can propose another scenario for reproducing the confinement-deconfinement phase transition in Yang-Mills theory at finite temperature, based on our dyon solution. In this poster we show the existence of such dyons and discuss their characteristic properties, especially the asymptotic holonomy. Speaker: Dr Shogo Nishino (Chiba University) Exploring the QCD phase diagram via reweighting from isospin chemical potential 20m We investigate the QCD phase diagram close to the isospin chemical potential axis. Numerical simulations directly along this axis are not hindered by the sign problem, and pion condensation can be observed at high enough values of the isospin chemical potential. The possibility of a crossover transition from this BEC phase to a BCS phase is investigated. We study how the BEC phase boundary evolves in the baryon and strangeness chemical potential directions via reweighting in the quark chemical potentials and discuss our results. Furthermore, we develop an alternative method to approach nonzero baryon chemical potentials. This involves simulations including auxiliary quarks of an extended isospin doublet and decoupling them by increasing their mass, again via reweighting. Speaker: Mr Sebastian Schmalzbauer (ITP, Goethe University)
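The reweighting used in the talk above can be demonstrated on a toy ensemble: sample with one action S_0 and recover expectation values of a shifted action S_1 through <O>_1 = <O w>_0 / <w>_0 with w = exp(S_0 - S_1). Below, a Gaussian action is shifted by a linear term h x, so the exact reweighted mean is -h; the parameter h is an arbitrary illustration, and the shrinking effective sample size printed at the end is the practical limitation that any reweighting in the chemical potentials faces.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)   # samples of exp(-S_0) with S_0 = x^2/2

h = 1.0                            # target action S_1 = x^2/2 + h x
w = np.exp(-h * x)                 # reweighting factor exp(S_0 - S_1)

mean_rw = np.sum(x * w) / np.sum(w)   # <x>_1 = <x w>_0 / <w>_0
print(f"reweighted <x> = {mean_rw:+.4f}   (exact: {-h:+.4f})")
print(f"effective sample size: {np.sum(w)**2 / np.sum(w**2):,.0f} of {x.size:,}")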
Extracting equation of state from neutron star observation using machine learning 20m First-principles evaluation of the dense-matter equation of state is one of the longstanding problems in QCD. Owing to the advances in neutron star observations in the last decade, it is now possible to evaluate the equation of state from the observational data. As this circumvents the problems inherent in the theory, it may put significant constraints on the theory. Here we discuss a novel machine learning method to deduce the equation of state from a set of mass-radius observational data, which is an alternative to the Bayesian analysis, based on a different principle. Using test data (mock observational data), we confirm that the equation of state is correctly reconstructed and that this method works well. We then use state-of-the-art observational data of masses and radii measured from neutron star X-ray radiation as input, and estimate the equation of state. We confirm that the speed of sound calculated from the equation of state surpasses the conformal limit ($c_s^2 = c^2/3$), as expected earlier. Our results are consistent with extrapolations from conventional nuclear models and with the experimental bound on the tidal deformability inferred from the gravitational wave observation. [1] Y. Fujimoto, K. Fukushima, K. Murase, Phys. Rev. D 98, 023019 (2018). [2] Y. Fujimoto, K. Fukushima, K. Murase, arXiv:1903.03400 [nucl-th]. Speaker: Mr Yuki Fujimoto (The University of Tokyo) Ginzburg-Landau theory for neutron $^3P_2$ superfluidity in neutron stars 20m Neutron $^3P_2$ superfluidity is one of the interesting phases inside neutron stars. In this presentation, we discuss its properties based on the Ginzburg-Landau theory derived from the tensor-type interaction between two neutrons. We show the strong magnetic effect relevant to magnetars, the boundary effect near the surface of neutron stars, and some related topological properties. Speaker: Dr Shigehiro Yasui (Keio University) Gluon propagator in two-color dense QCD: Massive Yang-Mills approach 20m We study the Landau gauge gluon propagators in dense two-color QCD at finite quark chemical potential. In order to take into account the non-perturbative effects in the infrared regime, we use the massive Yang-Mills theory, which has successfully described the gluon and ghost propagators measured on the lattice in the Landau gauge within the one-loop approximation. We couple quarks to this theory and compute the one-loop polarization effects. Dense matter in two-color QCD should possess a diquark condensate, which is a color singlet, and hence neither electric nor magnetic screening effects appear at scales below the diquark gap. This infrared behavior explains the lattice results, which show the insensitivity of the screening masses to the quark density. Speaker: Dr Daiki Suenaga (Central China Normal University) Heavy quark spectral and transport properties from lattice QCD 20m We will present recent results on thermal modifications of heavy quark spectral functions based on continuum-extrapolated correlation functions in a pure SU(3) plasma and discuss constraints on the heavy quark diffusion coefficients. Using the gradient-flow technique for the color-electric field correlator in quenched as well as full QCD, we will discuss the effects of dynamical fermions on the heavy quark momentum diffusion coefficient and provide first estimates of the thermal quarkonium mass shift in the thermal medium. Speaker: Dr Olaf Kaczmarek (University of Bielefeld) Instability toward the chiral inhomogeneous phase with the functional renormalization 20m In this talk, we present our functional-renormalization-group (FRG) study of the collective excitations around the chiral phase transition line. In particular, we intensively investigate the sigma-mesonic and pionic collective modes around the QCD critical point (CP) by calculating the spectral functions with the FRG.
Such an FRG study gives a beyond-mean-field picture of the collective modes, since it incorporates the large fluctuations involved in the second-order phase transition. We find that a one-particle excitation showing a tachyonic instability appears in the sigma-mesonic channel as the quark chemical potential approaches that of the QCD CP from the hadronic phase at fixed temperature. Such an unstable mode has finite momentum, which suggests an instability associated with the transition to a chiral inhomogeneous phase. We give an explanation for the origin of this phenomenon: the level repulsion between the one- and two-particle modes in the sigma-mesonic channel causes the instability of the one-particle mode. Since there is no such instability in the pionic channel, our result suggests that a real kink crystal occurs. Speaker: Dr Takeru Yokota (High Energy Accelerator Research Organization) Lattice field theory with torsion 20m We formulate lattice field theory with a dislocation. The dislocation realizes spacetime torsion in the continuum limit. As the first application, we perform a numerical computation to analyze the generation of the current induced by the screw dislocation, which we call the "chiral torsional effect". Speaker: Mr Shota Imaki (The University of Tokyo) Linked cluster expansion method for the SU(3) spin models 20m An $SU(3)$ spin model is often used in the literature as a first step to deal with QCD at finite chemical potential. It approximates full lattice QCD in the strong-coupling and large-fermion-mass limit. We describe a series expansion method called the Linked Cluster Expansion (LCE), and how to apply it to the spin model. The results are series in several couplings, which we analyze by generalized Padé approximants, called Partial Differential Approximants (PDAs). This method allows the complex multi-critical behavior of quark matter to be investigated. We compare our results with those from complex Langevin and flux-representation approaches. We also indicate a couple of open problems. Speaker: Mr Anh Quang Pham (Goethe University Frankfurt) Logarithms in perturbation theory -- NNNLO pressure of cold and dense QCD 20m I will present results on computing the pressure of cold and dense QCD matter to high loop orders in perturbation theory. Such high-order computations are made possible by resumming contributions from the soft degrees of freedom. In particular, I will cover the computation of the nonanalytic logarithmic terms appearing at NNNLO for $T=0$: both the leading logarithm, based on a paper from 2018 (Phys.Rev.Lett. 121 (2018) no.20, 202701), as well as a work-in-progress computation for obtaining the subleading logarithmic term, which gets distinct contributions from both the resummed soft sector and the hard sector. Speaker: Mr Matias Sappi (University of Helsinki) Measuring chiral susceptibility using gradient flow 20m We study the chiral susceptibility in $N_f = 2+1$ full QCD. In lattice gauge theory with Wilson fermions, chiral symmetry is explicitly broken. Therefore, a non-trivial additive correction is needed to renormalize the chiral susceptibility. To avoid this problem, we use the gradient flow method, which makes it possible to define a correctly renormalized chiral susceptibility without additive renormalization even when Wilson fermions are used. We measure not only the disconnected diagram but also the connected diagram for the chiral susceptibility.
This measurement is performed in finite-temperature full QCD with $N_f=2+1$ Wilson fermions, for a temperature range of 178-348 MeV. Speaker: Mr Atsushi Baba (University of Tsukuba) Nonrelativistic Nambu-Goldstone modes of generalized global symmetries and new dynamic critical phenomena in QCD 20m We study the effects of dynamical electromagnetic fields on the second-order chiral phase transition of QCD under a background magnetic field. We show that the interaction between the photon and the neutral pion through the quantum anomaly gives rise to a type-B Nambu-Goldstone (NG) mode associated with the spontaneous breaking of generalized global symmetries. Furthermore, we find that this novel NG mode leads to a new dynamic universality class beyond the conventional Hohenberg-Halperin classification. We also discuss a possible realization of this new dynamic universality class in 3-dimensional Dirac semimetals. Speaker: Mr Noriyuki Sogabe (Keio University) On the multiple thimbles decomposition for the Thirring model 20m Lefschetz thimble regularization is an elegant way to overcome the sign problem. By integrating over thimbles, where the imaginary part of the action stays constant and can be factored out, the sign problem disappears and observables of interest may be computed by Monte Carlo simulations. Still, many examples are known so far where the correct results can only be recovered by taking into account multiple thimbles; therefore one is left with the difficult task of collecting their contributions. The Thirring model is one such example: this theory has a rich thimble structure, and it has been shown that one cannot reproduce the results of the full theory from the dominant thimble alone. Using the model as a test bench for the calculation techniques we have developed in Parma, we report preliminary results on reproducing the complete results from multiple-thimble simulations. Speaker: Mr Kevin Zambello (University of Parma and INFN, Gruppo Collegato di Parma) Order of the color superconducting phase transition 20m We investigate, via the functional renormalization group, the order of the color superconducting phase transition. We calculate the flow of the gauge coupling in a 3d Ginzburg-Landau theory and investigate whether it supports the existence of infrared stable fixed points. Speaker: Dr Gergely Fejos (Keio University) We argue for the existence of a "partially deconfined phase" in some SU(N) gauge theories, which lies in between the confined and deconfined phases. We characterize this phase in terms of the Polyakov line phases and study examples of theories in which the partially deconfined phase exists. We find that this phase is closely related to the Gross-Witten-Wadia phase transition. The partially deconfined phase is conjectured to be the counterpart of the small black hole phase in the context of the gauge/string duality. We also discuss possible applications in this context. Speaker: Mr Hiromasa Watanabe (University of Tsukuba) Phase transitions in matrix models 20m We discuss new results in the BFSS matrix model and its bosonic variant. Speaker: Dr Enrico Rinaldi (Riken) QCD Topology to High Temperatures via Improved Reweighting 20m At high temperatures, the topological susceptibility of QCD becomes relevant for the properties of axion dark matter.
However, the strong suppression of non-zero topological sectors causes ordinary sampling techniques to fail, since fluctuations of the topological charge can only be measured reliably if enough tunneling events between sectors occur. We present an improvement of a technique that we recently developed to circumvent this problem, based on a combination of gradient flow and reweighting techniques, and quote first results for the topological susceptibility in pure SU(3) Yang-Mills theory up to $7~T_\mathrm c$. Speaker: Mr P. Thomas Jahn (Technische Universität Darmstadt) Quark mass generation by monopole condensation 20m We show that monopole-quark interactions break flavor chiral SU(2) symmetry as well as chiral U(1) symmetry. The interactions induce quark masses when the monopoles condense, even in the chiral limit (vanishing current quark masses). The masses are estimated to be approximately 20 MeV. Thus, the pions are not massless even in the chiral limit. Furthermore, the presence of the interactions implies that chiral symmetry breaking and quark confinement arise simultaneously. Because fluctuations of color electric fields are large in dense quark matter, they expel the monopole condensation. Then, the deconfinement of quarks and the restoration of chiral symmetry arise simultaneously. Speaker: Prof. Aiichi Iwazaki (Nishogakusha University) Quarkonium suppression in streaming quark-gluon plasma 20m Quarkonium suppression in quark-gluon plasma has been investigated since the original work by Matsui and Satz [1]. This topic remains relevant due to the need for quark-gluon plasma diagnostics. In fact, both quarkonium suppression in quark-gluon plasma and recombination during hadronisation remain key open questions [2]. The bound state of quarkonium is theoretically well investigated in the case of an equilibrium quark-gluon plasma [3]. However, the experimentally produced quark-gluon plasma is strongly out of equilibrium. Therefore, in this work we present results for quarkonium suppression in streaming quark-gluon plasmas. For this purpose we use the concept of dynamical screening, based on the dielectric function of a collisional quark-gluon plasma. [1] T. Matsui and H. Satz, Physics Letters 178, 416 (1986) [2] Jurgen Schukraft, Nuclear Physics A 967, 1 (2017) [3] R. Rapp and X. Du, Nuclear Physics A 967, 216 (2017) Speaker: Dr Zhandos Moldabekov (Al Farabi Kazakh National University) Relation between chirality imbalance and fermion pair-production under the parallel electromagnetic field 20m There has recently been increasing interest in the study of the chirality imbalance $n_5$, the difference between the numbers of right- and left-handed fermions. The chirality imbalance is expected to arise from the axial anomaly and plays a key role in understanding anomalous transport phenomena in hot/dense quark matter or in Dirac/Weyl semimetals under a magnetic field. One of the interesting transport phenomena in the presence of a chirality imbalance is the chiral magnetic effect (CME), the appearance of an electric current in the direction of the external magnetic field. However, the electric field also gives a crucial contribution to the emergence of the chirality imbalance, in addition to the magnetic field. In previous work, we studied the chirality imbalance and the CME using the analytical solution of the Dirac equation in constant magnetic and Sauter-type pulsed electric fields, and found that the time dependence of the gauge field is essentially important for the production of $n_5$.
Here, we extend our study to a general time-dependent electric field, and discuss the relation between $n_5$ and fermion pair-production from the vacuum. In this talk, we study the time evolution of the chirality imbalance and the electric current under a spatially uniform, parallel electromagnetic field in the vacuum of a massive fermion. For the time dependence, we assume a constant magnetic field, but do not impose any specific form on the electric field beyond the boundary conditions $E(t \to \pm \infty) \to 0$. We solve the Dirac equation and calculate vacuum expectation values of the currents with a gauge-invariant regularization. In particular, we find that $n_5$ and the CME at $t \to \infty$ are solely determined by the probability distributions of the fermions created non-perturbatively by the electric field. As a result, the asymptotic forms of $n_5$ and the CME consist of a constant part and an oscillating part, independent of the details of the intermediate time dependence of the gauge potential. The non-zero constant term is proportional to a relativistic velocity and the momentum distribution of the created particles, and is understood as a classical analogue of the electric current. We discuss in detail how the chirality imbalance arises and the roles of the electromagnetic fields. Speaker: Mr Hayato Aoi (Tokyo University of Science) Shear viscosity of classical fields in Yang-Mills theory 20m The matter created in the initial stage of relativistic heavy ion collisions is described well by classical Yang-Mills (CYM) fields. It has been shown that the dynamics of the CYM fields play a significant role in the realization of local thermal equilibrium. In this work, we expect that the CYM fields themselves have hydrodynamical properties, such as transport coefficients, in equilibrium. We discuss the shear viscosity of the classical fields in the CYM theory using the Green-Kubo formula. We show that the time correlation function of the energy-momentum tensor in equilibrium exhibits a monotonic decay of exponential form, and that the shear viscosity can be well evaluated from the contribution of this exponential decay. Speaker: Mr Hidefumi Matsuda (Graduate school of science, Kyoto university)
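The Green-Kubo formula invoked above extracts a transport coefficient from the time integral of an equilibrium correlation function, schematically eta ~ integral_0^infinity dt <T_xy(t) T_xy(0)>. As a stand-in for classical Yang-Mills data, the sketch below applies the same recipe to a synthetic Ornstein-Uhlenbeck time series, whose exponential correlator (the same form as reported in the abstract) makes the exact answer sigma^2/gamma; all parameters are illustrative.

import numpy as np

rng = np.random.default_rng(2)
gamma, sigma, dt, n = 0.5, 1.0, 0.05, 500_000

# Synthetic "current": Ornstein-Uhlenbeck, <X(t)X(0)> = sigma^2 exp(-gamma t).
a = np.exp(-gamma * dt)
kick = sigma * np.sqrt(1.0 - a * a) * rng.standard_normal(n)
x = np.empty(n)
x[0] = kick[0]
for i in range(1, n):
    x[i] = a * x[i - 1] + kick[i]

# Green-Kubo: integrate the measured autocorrelation up to ~20 decay times.
kmax = int(20.0 / (gamma * dt))
corr = np.array([np.mean(x[: n - k] * x[k:]) for k in range(kmax)])
integral = dt * (0.5 * corr[0] + corr[1:].sum())   # trapezoid; the tail is ~0
print(f"Green-Kubo integral = {integral:.3f}  (exact sigma^2/gamma = {sigma**2 / gamma:.3f})")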
String confinement in 2-form lattice gauge theory 20m We study the confinement between vortex strings in a 2-form gauge theory by using lattice Monte Carlo simulation. We calculate the string-antistring potential from the surface operator of the 2-form gauge field in the abelian 2-form lattice gauge theory, which is dual to the abelian Higgs model in the continuum limit. The linear confining potential appears in the confinement phase and disappears in the deconfinement phase. The phase diagram of the theory is also studied. Speaker: Dr Tomoya Hayata (RIKEN) The Schwinger mechanism with perturbative electric fields 20m I discuss spontaneous particle production from the vacuum (the Schwinger mechanism) in the presence of a strong slow electric field superimposed with a fast weak electric field. I analytically/numerically show that a QED analog of the Franz-Keldysh effect occurs, which significantly modifies the spectrum of the produced particles. I also show that a non-trivial spin dependence appears in the production, even without magnetic fields, due to the intrinsic spin-orbit coupling in the Dirac equation, if the weak electric field is transverse with respect to the strong electric field. Implications for experiments/observations (e.g. heavy ion collisions, lasers) and relations to the dynamically assisted Schwinger mechanism are also discussed. [1] H. Taya, "Franz-Keldysh effect in strong-field QED," PRD 99, 056006 (2019) [2] X.-G. Huang, M. Matsuo, H. Taya, "Spontaneous generation of spin current from the vacuum by strong electric fields," arXiv:1904.07593 [3] X.-G. Huang, H. Taya, "Spin-dependent dynamically assisted Schwinger mechanism," arXiv:1904.08200 Speaker: Dr Hidetoshi Taya (Fudan University) The sign problem and the Lefschetz thimbles in two dimensional Hubbard model 20m In the talk we discuss the sign problem and the possibility to alleviate it with the help of methods related to Lefschetz thimbles in the space of complexified field variables. In particular, we consider the two-dimensional Hubbard model at finite density. We analyze the model on the square lattice, combining a semi-analytical study of saddle points and thimbles on a small lattice with results of test Monte Carlo simulations. We investigate different representations of the path integral and find a particular representation which supposedly leads to the presence of a single dominating thimble even for large lattices. Finally, we derive a novel non-Gaussian representation of the four-fermion interaction term, which also exhibits a decreased number of Lefschetz thimbles. Speaker: Dr Semeon Valgushev (Brookhaven National Laboratory) Thermal Quarkonium Mass Shift from Euclidean Correlators 20m Brambilla et al. have derived an effective description of quarkonium with two parameters: a momentum diffusion term, which has been widely explored within the community, and a real self-energy term. We derive a relation between the self-energy term and Euclidean electric field correlators along a Polyakov line, which can be studied directly on the lattice without the need for analytic continuation. We also discuss the problems in determining the correlator within the scope of the quenched QCD approximation. Speaker: Mr Alexander Maximilian Eller (TU Darmstadt) Thermodynamic properties of QGP at the physical point with the gradient flow method 20m We study thermodynamic properties of 2+1 flavor QCD on the lattice applying the method of Makino and Suzuki based on the gradient flow, using a nonperturbatively $O(a)$-improved Wilson quark action and the renormalization-group-improved Iwasaki gauge action. I report on results for the energy-momentum tensor and the chiral condensate obtained so far from our on-going simulations at the physical point. Speaker: Prof. Kazuyuki Kanaya (Tomonaga Center for the History of the Universe, University of Tsukuba) Topological susceptibility of two-color QCD at low temperature and high density 20m We study the chemical potential $(\mu)$ dependence of the topological susceptibility in two-color two-flavor QCD. We find that at temperature $T \sim T_c/2$, where $T_c$ denotes the critical temperature at zero chemical potential, the topological susceptibility is almost constant until $\mu/m_{PS}=1.6$, while at $T \sim T_c$ it decreases significantly from the $\mu=0$ value in the high-$\mu$ regime. In this work, we perform the simulation for $\mu/T \le 16$, which covers even the low temperature and high chemical potential regime. In this regime, we introduce a diquark source term, characterized by $j$, into the action. We also show our results for the phase diagram in the low temperature regime $(T \sim T_c/2)$, which is obtained after taking the $j \to 0$ limit of physical observables.
Speaker: Dr Etsuko Itou (Keio University) Universal scaling of conserved charge in the stochastic diffusion dynamics 20m In this work, we explore the Kibble-Zurek scaling of the conserved charge, using stochastic diffusion dynamics. After determining the characteristic scales $\tau_{\tiny KZ}$ and $l_{\tiny KZ}$ and properly rescaling the traditional correlation function and cumulant, we construct universal functions for both the two-point correlation function $C(y_1-y_2;\tau)$ and the second-order cumulant $K(\Delta y,\tau)$ of the conserved charge in the critical regime, which are insensitive to the initial temperature and to a parameter in the mapping between the 3D Ising model and the hot QCD system near the critical point. Speaker: Mr Shanjin Wu (Peking University) Session5 134 Convener: Dr Kazunori Itakura (KEK) Studying the QCD phase diagram in RHIC-BES at STAR 45m Exploring the QCD phase structure is one of the ultimate goals of high-energy heavy-ion collision experiments. At BNL-RHIC, the Beam Energy Scan (BES-I) program was carried out from 2010 to 2014, and many data sets have been collected by the STAR experiment at various collision energies from $\sqrt{s_{NN}} =$ 200 GeV down to 7.7 GeV in Au+Au collisions. In order to reduce the uncertainties in the energy region of interest (7.7 $< \sqrt{s_{NN}} <$ 19.6 GeV), the BES-II program is scheduled for 2019-2021. In this talk, we present the BES-I results on hadron spectra, directed flow and higher-order cumulants of conserved charges, which cover the physics of freeze-out, the first-order phase transition, and the search for the QCD critical point, respectively. The current status of BES-II and the future prospects for the fixed-target program will also be discussed. Speaker: Dr Toshihiro Nonaka (Central China Normal University) Correlations and probability distributions in high-energy nuclear collisions 45m I will discuss the present efforts to probe the QCD phase diagram with fluctuations and multi-particle correlations. Speaker: Dr Adam Bzdak (AGH University of Science and Technology) Convener: Prof. Yasushi Nara (Akita International University) The QCD critical point hunt: new dynamic framework and first simulation results 45m The on-going heavy-ion collision experiments at RHIC are scanning the baryon-rich regime of the QCD phase diagram with an unprecedented precision that could potentially discover the QCD critical point, the landmark point in the phase diagram. On the theory front, conventional hydrodynamic modeling would not be sufficient for the critical point hunt. Instead, I will present a novel theoretical framework, namely "hydro+", which couples critical fluctuations to the bulk evolution of the "fireball" created in heavy-ion collisions. I will show the first results of numerical simulations of "hydro+" and, if time permits, discuss the interesting connection of "hydro+" to other approaches to fluctuating hydrodynamics. Speaker: Dr Yi Yin (MIT) Hydrodynamic fluctuations and fluctuation theorem in heavy-ion collisions 25m Recently, the effects of hydrodynamic fluctuations, i.e., the thermal fluctuations of relativistic hydrodynamics, on flow observables in high-energy nuclear collisions have been analyzed in event-by-event simulations with dynamical models. The statistics of the hydrodynamic fluctuations is usually determined by the fluctuation-dissipation theorem obtained in global equilibrium.
However, in expanding systems such as the matter created in these experiments, the fluctuation-dissipation theorem is non-trivial. The fluctuation theorem is a more general theorem that describes the probability distribution of the entropy production and is applicable to any non-equilibrium system. We discuss the fluctuation-dissipation relation in expanding systems and its relation to the fluctuation theorem by performing numerical simulations of non-linear relativistic fluctuating hydrodynamics assuming Bjorken flow. Speaker: Dr Koichi Murase (Sophia University)
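The fluctuation-dissipation theorem whose non-equilibrium fate is discussed above can be checked in its simplest static form with an overdamped Langevin oscillator: noise of strength 2 gamma T balances the friction so that <x^2> = T/k in equilibrium. The sketch below verifies this balance in arbitrary units; in an expanding (e.g. Bjorken) background it is precisely this balance that must be re-examined, which is the point of the talk.

import numpy as np

rng = np.random.default_rng(3)
T, k, gamma = 1.5, 2.0, 1.0            # temperature, spring constant, friction
dt, nsteps, ntherm = 5e-3, 4000, 1000

x = np.zeros(10_000)                    # ensemble of independent oscillators
acc = []
for step in range(nsteps):
    # gamma dx/dt = -k x + xi with <xi(t) xi(t')> = 2 gamma T delta(t - t')
    x += -(k / gamma) * x * dt + np.sqrt(2.0 * T * dt / gamma) * rng.standard_normal(x.size)
    if step >= ntherm:
        acc.append(np.mean(x * x))

print(f"measured <x^2>     = {np.mean(acc):.4f}")
print(f"FDT prediction T/k = {T / k:.4f}")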
Formulating relativistic hydrodynamics with spin polarization 25m Recently, there has been significant experimental progress in observing and controlling spin-dependent bulk quantities in broad areas of physics, such as relativistic heavy-ion collisions and spintronics. Although hydrodynamics is one of the most powerful theoretical frameworks to describe such macroscopic bulk quantities, its extension to a spinful fluid has not been fully developed, especially for relativistic systems. In this study, we formulate dissipative relativistic hydrodynamics with a dynamical spin degree of freedom based on the phenomenological entropy-current analysis [1]. With the help of the first and second laws of local thermodynamics, we constrain the possible constitutive relations for a relativistic spinful fluid. In addition, we perform a linear-mode analysis on top of global thermal equilibrium, and clarify that the spin density gives a non-hydrodynamic diffusive mode with a finite lifetime. This diffusive behavior is a consequence of the mutual convertibility between spin and orbital angular momentum. [1] K. Hattori, M. Hongo, X.-G. Huang, M. Matsuo, H. Taya, "Fate of spin polarization in a relativistic fluid: An entropy-current analysis," arXiv:1901.06615 Speaker: Dr Masaru Hongo (Keio University) Convener: Prof. Tetsufumi Hirano (Sophia University) Novel transport phenomena with chirality, vorticity and magnetic field 45m By colliding heavy ions at high energies, physicists are able to "break up" nuclear particles like protons and neutrons and create a hot "subatomic soup", the quark-gluon plasma (QGP). In recent years, there has been significant interest and progress on the spin degrees of freedom in the QGP fluid. In particular, novel transport phenomena arise from the nontrivial interplay of quark spin and chirality with the extremely strong vorticity and magnetic field in heavy ion collisions. In this talk, a number of fascinating examples will be briefly surveyed. The first is the global polarization of particle spin from fluid rotation, demonstrating "fluid spintronics" on the subatomic scale. The second is the anomalous transport phenomenon known as the Chiral Magnetic Effect (CME), which has been enthusiastically studied not only in these "subatomic swirls" but also in Dirac and Weyl semimetals as well as in atomic, astrophysical and cosmological systems. The talk will also give a more detailed discussion of the ongoing efforts to search for the CME in heavy ion collisions. The pertinent progress in phenomenological modeling with the recently developed Anomalous-Viscous Fluid Dynamics (AVFD) framework will also be presented. We end this talk with an outlook on the potential opportunity of discovery in the isobaric collision experiment. Speaker: Prof. Jinfeng Liao (INDIANA UNIVERSITY) Axial kinetic theory and spin transport for massive fermions 25m In relativistic heavy ion collisions (HIC), not only a strong magnetic field but also strong vorticity can be generated. Recent observations of the polarization of Lambda hyperons have triggered intensive studies of vorticity-induced polarization and spin dynamics in relativistic fluids. However, more recent studies suggest that the spin polarization could possibly be driven by non-equilibrium effects. It is thus desirable to construct a quantum transport theory for investigating the non-equilibrium dynamics of spin polarization for massive fermions. Based on the Wigner-function approach, we derive an axial kinetic theory (AKT) for massive fermions as modified Boltzmann equations involving quantum corrections associated with spin and the chiral anomaly, which can be applied to track the phase-space evolution of both vector/axial charges and spin polarization in weakly coupled systems. Since the spin of massive fermions is a dynamical degree of freedom, the AKT involves one scalar and one axial-vector kinetic equation with side-jump effects pertinent to the spin-orbit interaction. In the massless limit, the AKT also reproduces the chiral kinetic theory as a well-established quantum kinetic theory for Weyl fermions and manifests the spin enslavement in such a limit. The AKT could have various applications in different physical systems, including the spin transport of strange quarks or of Lambda hyperons in HIC. Speaker: Dr Di-Lun Yang (Keio University) Relativistic quantum molecular dynamics approach for heavy-ion collisions at high baryon density region 25m A new N-body non-equilibrium transport model based on relativistic quantum molecular dynamics (RQMD) is developed for simulations of high energy heavy ion collisions in the high baryon density region. In this approach, hadrons interact via the sigma-omega fields in the mean-field approximation as well as through hard two-body scatterings which produce strings and hadronic resonances in the JAM transport code. We compare results on the beam energy dependence of the directed and elliptic flows with the E895, NA49 and STAR data. The relativistic mean-field theory predicts a density isomer state, i.e., a strong first-order phase transition from nucleon matter to resonance matter. We investigate the effects of such a strong first-order phase transition to delta matter on the directed flow. Our dynamical approach can also be applied to event-by-event fluctuations. We also discuss the effects of the delta-matter transition on the net-proton cumulant ratios. Speaker: Prof. Yasushi Nara (Akita International University) Convener: Dr Olaf Kaczmarek (University of Bielefeld) In-medium heavy quark potential from lattice QCD and the generalized Gauss-law 25m In this talk I report on recent progress in the determination of the complex heavy quark potential from lattice QCD simulations [1] and show how its temperature dependence can be captured in an analytic parametrization based on an improved generalized Gauss-law model [2]. Prospects for in-medium quarkonium phenomenology are discussed. [1] P. Petreczky, A.R., J. Weber, NPA982 (2019) 735 and in progress [2] D. Lafferty, A.R., in preparation Speaker: Dr Alexander Rothkopf (University of Stavanger) Real-Time-Evolution of Heavy-Quarkonium Bound States 25m Elucidating the production process of heavy quark bound states is a central goal in heavy-ion collisions [1].
Two central questions exist: Do bound states of heavy quarks form in the early time evolution of the glasma? If so, in which time regime can that happen? An answer requires the development of a non-perturbative treatment of the real-time dynamics of heavy quarkonia. To answer these questions we have developed a novel real-time formulation [2] of lattice NRQCD [3,4] to order $1/(aM_q)^2$, where we employ a classical statistical simulation for the early-time dynamics of the gauge fields [5]. Here we present results from a simulation of heavy quarkonium dynamics in the glasma. By computing the time evolution of spectral functions of heavy quarkonium channels, we expect to identify the emergence of bound states and their formation time in the evolving glasma. [1] G. Aarts et al., Eur. Phys. J. A 53 no.5, 93 (2017) [2] A. L., A. Rothkopf (in preparation) [3] G.P. Lepage et al., Phys.Rev. D 46, 4052 (1992) [4] M. Berwein, N. Brambilla, S. Hwang, A. Vairo, TUM-EFT 74/15, 56 pp (2018) [5] K. Boguslavski, A. Kurkela, T. Lappi, J. Peuron, Phys.Rev. D98 no.1, 014006 (2018) Speaker: Mr Alexander Lehmann (Heidelberg University and University of Stavanger) Quantum dissipation of quarkonium in quark-gluon plasma: Lindblad equation approach 25m In heavy ion collision experiments, quark-gluon plasma (QGP) is expected to be produced, and its physical properties have been discussed. The survival probability of a quarkonium is sensitive to the Debye screening of color charges in the QGP. The dynamics of quarkonia can be described by a master equation for the density matrix in the open quantum system approach. In this approach, the effect of quantum dissipation can be discussed. In contrast, this effect cannot be described in a simple "in-medium" Schroedinger equation approach. In our study, we derive the Lindblad master equation for the relative motion of a quarkonium in the QGP and solve it numerically. From this, we analyze how the quantum dissipation and the center-of-mass motion of the quarkonium affect the relative motion of the heavy quark-antiquark pair. Finally, we present the phenomenological implications of the quarkonium dynamical evolution by solving the master equation in a Bjorken-expanding QGP where the temperature decreases with time. Speaker: Mr Takahiro Miura (Osaka University)
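The Lindblad master equation mentioned above has the generic form drho/dt = -i[H, rho] + sum_k (L_k rho L_k^dag - {L_k^dag L_k, rho}/2), which preserves the trace and positivity of the density matrix. The sketch below integrates it for a two-level toy system with a single decay operator, checking the expected exponential relaxation and trace preservation; the quarkonium problem of the talk replaces these with the relative-motion Hamiltonian and operators determined by the medium, so all parameters here are illustrative.

import numpy as np

omega, g = 1.0, 0.3                      # level splitting and decay rate (toy values)
dt, nsteps = 1e-3, 20_000

H = 0.5 * omega * np.diag([1.0, -1.0]).astype(complex)
L = np.sqrt(g) * np.array([[0.0, 0.0], [1.0, 0.0]], dtype=complex)  # lowering operator

def rhs(rho):
    comm = H @ rho - rho @ H
    LdL = L.conj().T @ L
    diss = L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return -1j * comm + diss

rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)   # start in the excited state
for _ in range(nsteps):                                    # RK4 time stepping
    k1 = rhs(rho)
    k2 = rhs(rho + 0.5 * dt * k1)
    k3 = rhs(rho + 0.5 * dt * k2)
    k4 = rhs(rho + dt * k3)
    rho += (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

t = nsteps * dt
print(f"excited-state population: {rho[0, 0].real:.4f}  (exact exp(-g t) = {np.exp(-g * t):.4f})")
print(f"trace of rho:             {np.trace(rho).real:.6f}")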
Banquet 2h Kanobi Meikei-kan Convener: Prof. Jens Oluf Andersen (Norwegian University of Science and Technology) Lattice with external fields and rotation 45m Extreme environments, such as strong electromagnetic fields and fast rotation, induce interesting quantum phenomena. They are relevant both for high energy experiments and for condensed matter experiments. From a theoretical point of view, we need reliable computational frameworks to investigate them. I would like to review such endeavors in lattice gauge theory. Speaker: Dr Arata Yamamoto (University of Tokyo) Conductivity of quark-gluon plasma in the presence of external magnetic field 25m We examine the electric conductivity of the quark-gluon plasma in the presence of an external magnetic field B within the lattice QCD formulation, for a few temperatures in the deconfinement phase. Ensembles are generated with dynamical staggered 2+1 quarks at physical quark masses. First we measure the electromagnetic current-current Euclidean correlators along and perpendicular to the magnetic field, then extract the conductivity via analytic continuation within the Backus-Gilbert method. We find that $\sigma_\parallel$ grows with the magnetic field while $\sigma_\perp$ decreases. Thus we observe the Chiral Magnetic Effect in the quark-gluon plasma. Speaker: Dr Aleksandr Nikolaev (Swansea University) The order of phase transition in three flavor QCD with background magnetic field in crossover regime 25m We investigate the order of the phase transition in three flavor QCD with a background U(1) magnetic field, using the standard staggered action with the plaquette gauge action. We perform simulations for three volumes $N_\sigma=8,16,24$ with fixed mass $ma=0.030$ and temporal extent $N_\tau=4$, a setup which is expected to show a crossover for vanishing magnetic field. We apply the same physical magnitude of the magnetic field, $\sqrt{eB}=0.9$, for each volume. We measure the chiral condensates and the Polyakov loop and calculate their susceptibility and Binder cumulant. We find that, for non-zero magnetic field, the transition changes from a crossover to a first-order-like transition with hysteresis in the Monte Carlo history. Speaker: Dr Akio Tomiya (RIKEN-BNL)
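The Binder cumulant used above, U = 1 - <m^4>/(3 <m^2>^2), separates a crossover from a first-order transition: a single-peaked (Gaussian) histogram of the order parameter gives U -> 0, while a two-peaked histogram gives U -> 2/3. The sketch below evaluates it on synthetic samples of both kinds; the distributions are mock data, not the lattice measurements of the talk.

import numpy as np

rng = np.random.default_rng(5)

def binder(m):
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)

single = rng.standard_normal(100_000)                      # crossover-like histogram
double = rng.choice([-1.0, 1.0], 100_000) + 0.1 * rng.standard_normal(100_000)

print(f"single-peak samples: U = {binder(single):+.3f}  (Gaussian limit: 0)")
print(f"double-peak samples: U = {binder(double):+.3f}  (two-delta limit: 2/3)")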
Session 10 134 Convener: Prof. Urs Wenger (University of Bern) First study of $N_f=2+1+1$ lattice QCD with physical domain-wall quarks at finite temperatures (Cancelled) 25m Using a GPU cluster consisting of 25 units of Nvidia DGX-1 (each unit having 8 V100 GPUs interconnected by NVLink), the TWQCD collaboration has generated the first gauge ensembles of $N_f=2+1+1$ lattice QCD with physical domain-wall quarks on the $ L_x^3 \times L_t = 64^3 \times (6,8,10,12,16) $ lattices, in the temperature range $ T \simeq 200 - 550 $ MeV. The lattice spacing $a \sim 0.064 $~fm ($ La > 4 $ fm, and $ M_\pi La > 3 $) and the physical bare quark masses ($m_{u/d} a = 0.00125$, $m_s a = 0.04$, $m_c a = 0.55$) are determined with the zero temperature gauge ensemble resulting from the simulation of $N_f=2+1+1$ QCD on the $ 64^4 $ lattice. In this talk, I will outline the HMC simulation and the generation of the gauge ensembles. Moreover, I will present the first physical results (e.g., topological susceptibility) extracted from these $N_f = 2+1+1$ gauge ensembles with physical domain-wall quarks at finite temperatures. Speaker: Prof. Ting-Wai Chiu (National Taiwan University) The chiral phase transition in (2+1)-flavor QCD 25m We present recent results on the pseudo-critical and critical behavior in (2+1)-flavor QCD close to the chiral phase transition temperature. Speaker: Prof. Frithjof Karsch (Universitaet Bielefeld) QCD energy-momentum tensor using gradient flow 25m We study the energy-momentum tensor in QCD with $N_f$ = 2+1 dynamical quarks. In order to tame the violation of translational invariance on the lattice, we use the gradient flow method as a non-perturbative renormalization scheme. We adopt two values for the up and down quark mass. One is the physical mass, with which we measure the one-point function of the energy-momentum tensor and derive the equation of state in QCD. The other is a rather heavy mass of about $m_{ud}\simeq59$ MeV with $m_{\pi}/m_{\rho}\simeq0.63$. Using the latter gauge configurations we measure correlation functions of the energy-momentum tensor, from which we extract some transport coefficients. We also measure the chiral condensate and topological charge and study their temperature dependence. Speaker: Dr Yusuke Taniguchi (University of Tsukuba) Pion condensation - ChPT versus lattice 25m In this talk I will discuss pion condensation and the phase diagram at finite temperature and isospin density. I will present results for the quark-meson model and chiral perturbation theory. The results for the phase diagram, pressure and equation of state are compared with recent lattice results. I will also present results for pion stars, which consist of a Bose condensate of pions electromagnetically neutralized by leptons. Speaker: Prof. Jens Oluf Andersen (Norwegian University of Science and Technology) Convener: Prof. Hirotsugu Fujii (U Tokyo) A more powerful thimble approach to lattice field theories 25m Lefschetz thimble regularisation of (lattice) field theories was put forward as a possible solution to the sign problem. Despite being elegant and conceptually simple, it has many subtleties. Two major ones have to do with the most relevant issues: how can one efficiently implement importance sampling on thimbles? how many thimbles should we take into account? As for the first question, for a few years we have been working on algorithms in which one takes into account (complete) steepest ascent paths. We discuss improvements we devised, in particular with respect to the flow equation (which in this approach is the main building block). In the original formulation of thimble regularisation, a single-thimble dominance hypothesis was put forward: in the thermodynamic limit, universality arguments could support a scenario in which the dominant thimble (associated with the global minimum of the action) captures the physical content of the field theory. By now we know many counterexamples, and we have been pursuing multi-thimble simulations ourselves. Still, a single-thimble regularisation would be the real breakthrough. We report on ongoing work aiming at a substantial reduction of the number of thimbles to be taken into account (possibly being left with one single thimble). Speaker: Prof. Francesco Di Renzo (University of Parma & INFN) Improved algorithms for generalized thimble method 25m Questions about quantum field theories at non-zero chemical potential and/or real-time correlators are often impossible to investigate numerically due to the sign problem. A possible solution to this problem is to deform the integration domain for the path integral into the complex plane. Sampling configurations on these manifolds is challenging. In this talk I will discuss some of these problems, present solutions we have found and the directions we are currently pursuing. Speaker: Prof. Andrei Alexandru (The George Washington University) The sign problem in low dimensional QCD studied by using the path optimization 25m The sign problem, a serious obstacle to performing Monte Carlo simulations of QCD at finite chemical potential, is caused by the oscillation of the Boltzmann factor. To avoid this problem, we have proposed the path optimization method. In this method, we optimize the integration path in the complex plane to decrease the cancellations in the integral. In this talk, we explain the application of this method to gauge theory, and discuss the sign problem of low dimensional QCD. Speaker: Mr Yuto Mori (Kyoto University)
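A one-variable caricature of the contour deformations behind the three sign-problem talks above: for S(x) = x^2/2 - i s x, the average residual phase on the real line is exp(-s^2/2), i.e. exponentially small, while on the shifted contour x -> y + i s, which passes through the saddle point and coincides here with the single Lefschetz thimble, the action becomes real and the phase fluctuations vanish identically. The parameter s is an arbitrary illustration of the severity of the problem.

import numpy as np

rng = np.random.default_rng(4)
s = 2.0
y = rng.standard_normal(100_000)   # samples of |e^{-S}| = e^{-x^2/2} on the real line

avg_phase = np.mean(np.exp(1j * s * y))   # residual phase <e^{i s x}>
print(f"<phase> on real line      : {avg_phase.real:+.4f}  "
      f"(exact exp(-s^2/2) = {np.exp(-0.5 * s * s):.4f})")

# On the shifted contour x = y + i s one finds S = y^2/2 + s^2/2, which is real:
# every configuration carries phase exactly 1, so no cancellations occur.
print("<phase> on shifted contour: +1.0000 by construction")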
They can be used to expose the vacuum structure of the theory and the origin of the sign problem at finite fermion number density. We construct observables which can be used to calculate the ground state energy and the spectrum of the theory. Finally, we discuss the relation of the canonical formulation to the fermion loop and fermion bag formulations and comment on possible solutions to the fermion sign problem. Speaker: Prof. Urs Wenger (University of Bern) Tensor network study of two dimensional complex $\phi^{4}$ theory at finite density 25m We study the complex $\phi^{4}$ theory at finite chemical potential. Lattice studies could shed light on nontrivial effects such as the Silver Blaze phenomenon; however, on account of the finite chemical potential, Monte Carlo simulations suffer from a sign problem. In this study, to overcome the problem, the tensor renormalization group approach is employed, and we give some numerical results on these phenomena in the finite-density system. Speaker: Dr Ryo Sakai (Kanazawa University) Schwinger-Keldysh formalism for Lattice Gauge Theories 25m It is important to compute transport coefficients in QCD at finite temperature and density. When the imaginary-time formalism of Lattice QCD is used, the spectral functions have to be reconstructed by supplementing certain Ansätze for the correlation functions on the lattice. On the other hand, real-time Green's functions can be obtained directly in the Schwinger-Keldysh (SK) formalism. However, the SK formalism has not so far been constructed non-perturbatively for QCD. In this work we formulate the SK formalism for Lattice QCD by constructing the transfer matrix in the direction of real time for gauge link fields and Wilson fermions. We examine the spectral functions and other real-time Green's functions in the weak gauge-coupling limit. We also obtain the Kubo formulae in this framework as a summation of the real-time Green's functions on the closed time path. Speaker: Mr Hiroki Hoshina (The Univ. of Tokyo, Komaba) Closing
A model for screen utility to predict the future of printed solar cell metallization Sebastian Tepner1, Linda Ney1, Marius Singler1, Ralf Preu1, Maximilian Pospischil1 & Florian Clement1 Scientific Reports volume 11, Article number: 4352 (2021) Fine line screen printing for solar cell metallization is one of the most critical steps in the entire production chain of solar cells, facing the challenge of providing a conductive grid with a minimum amount of resource consumption at an ever-increasing demand for higher production speeds. The continuous effort of the industrial and scientific community has led to tremendous progress over the last 20 years, demonstrating an average reduction rate for the finger width of approximately 7 µm per year, with the latest highlight of achieving widths of 19 µm. However, further reductions will become a major challenge because commonly used metal pastes are not able to penetrate arbitrarily small screen opening structures. Therefore, this study introduces the novel dimensionless screen utility index SUI, which quantifies the expected printability of any 2-dimensional screen architecture in reference to a given paste. Further, we present a full theoretical derivation of the SUI, a correlation to experimental results and an in-depth simulation over a broad range of screen manufacturing parameters. The analysis of the SUI predicts the point at which commonly used wire materials will fail to provide sufficient meshes for future solar cell metallization tasks. Therefore, novel wire materials (e.g. the use of carbon nanotubes) with very high ultimate tensile strengths are discussed and suggested in order to fulfill the SUI requirements for printing contact fingers with widths below 10 µm. We further analyze economic aspects of design choices for screen angles by presenting an analytical solution for the calculation of mesh cutting losses in industrial screen production. Finally, we combine all aspects by presenting a generalized approach for designing a 2-dimensional screen architecture which fulfills the task of printing at a desired finger width. Today's metallization of silicon solar cells is still dominated by flatbed screen printing1, mainly because of its reliable and cost-effective production capabilities. Within the last two decades, the scientific community has made tremendous progress in reducing the finger width from approx. 100 µm in 20062 to only 26 µm on cell level, published by Tepner et al. in 20193. In the same year, we were able to reduce the printed finger width down to approx. 20 µm at a record aspect ratio of approx. 0.95 on a test layout, as presented in Fig. 1 on the left4,5. This trend of approx. 7 µm per year was mainly driven by independent paste and screen optimization. Thibert et al. presented a comprehensive study on how the rheological behavior of Ag-paste influences the screen printing performance. Pospischil et al. further related common rheological parameters of metal pastes to the printed finger geometry, showing the dominant impact of the yield stress on the aspect ratio6,7,8. Furthermore, Xu et al. showed, for specially designed paste formulations, how wall slip at the emulsion surface can significantly improve the paste transfer. Tepner et al. expanded on this idea by generalizing the method for the analysis of slip phenomena in screen printing. Further, they demonstrated an improved wall slip behavior of commercially available metal pastes3,9.
Besides the influence of the paste rheology, the screen architecture itself plays a crucial role when an optimized paste-screen interaction is desired10. Figure 1 on the right illustrates how the final 2-dimensional architecture of a screen is the result of combining a mesh, defined by the mesh count MC and the wire diameter d, with the applied emulsion, defined by the nominal screen opening width wn and the screen angle φ. Previous works have correlated the mesh parameters and the screen opening width with the quality of the finger geometry4,11,12. Ney et al. introduced a screen simulation approach that is able to predict the exact size, shape and location of all individual opened areas within a screen opening based on the presented four parameters13,14. Tepner et al. later correlated this simulation approach to experimental data4. Furthermore, they expanded this simulation approach to determine how the number of wire crossings within the screen opening depends on the presented four screen parameters. Based on these results, they were able to derive specific screen angles to create knotless configurations (zero wire crossings), suggesting an increased screen lifetime and improved printability at the same time15. On the other hand, White et al. and Taroni et al. presented a comprehensive mathematical model for the mechanics of the screen printing process which could be used for further optimization attempts. However, to date, the application of these models is quite challenging, missing clear and easy-to-use design goals for screen production in a volatile PV market. On the left, a fine line Ag-electrode (finger) for Si-solar cell metallization is shown16. On the right, we present a SEM image of a screen opening channel, defined by the nominal screen opening width wn, the channel length l, the wire diameter d, the mesh opening d0 and the angle between emulsion edge and mesh wires φ. The crossover area of two wires is defined as a so-called knot, and the black areas represent the individual opened areas of the channel13. For this reason, we will present a generalized theory for the design of a 2-dimensional screen architecture by deriving a dimensionless parameter which describes the impact of the screen utility during the printing process. In the following section, we will first summarize the theoretical background for screen design and then expand on this background by deriving the screen utility index SUI. Furthermore, we will show experimental verification of the presented simulation approach by comparing the simulated area of individual openings to microscope images of different screen architectures. Finally, we present a comprehensive simulation of the dependency of the SUI on all 2-dimensional screen parameters. These data will allow the industry to improve its decision-making process for novel screen configurations without requiring complex mathematical modeling of the process mechanics. The goal of the presented approach is to further improve the metallization of Si-solar cells in mass production in terms of increased cell efficiency and reduced production cost. In particular, reducing the silver consumption per cell through an improved fine-line screen printing process is crucial in view of the predicted silver production crisis, in which the demand of the PV industry for silver will exceed worldwide silver production by the year 203017.
Theoretical background This section briefly summarizes the state-of-the-art parameters used to describe the overall quality of a screen in terms of its printability and lifetime in a production environment. The four introduced parameters (mesh count MC, wire diameter d, screen opening width wn and the screen angle φ) define the resulting 2-dimensional screen opening channel. Further, they can be used to derive more specific measures of the screen quality. The normalized open area OA% is the most established parameter to characterize the geometrical architecture. It is defined by the ratio of the opened area to the overall area of one mesh unit and can be calculated by Eq. (1)18. $$ \mathrm{OA}_{\%} = \frac{d_0^2}{(d + d_0)^2}. \qquad (1) $$ It shall be noted that this parameter describes the average value across an infinitely long screen opening channel. We are not aware of an analytical solution which describes the local deviation of OA% across the length of the screen opening channel and its dependency on the screen angle. As discussed in the introduction section, we have previously published an approach which is able to simulate this local deviation across the screen opening σOA13. The wire-to-wire distance d0 is calculated by Eq. (2) with the mesh count MC, describing the number of wires per unit length, and the wire diameter d18. $$ d_0 = \frac{1}{MC} - d. \qquad (2) $$ One important parameter to ensure sufficient printability over the maximal possible screen life cycle is the screen tension γscreen. Depending on the mesh count MC and the wire diameter d, the maximal possible screen tension γscreen_max can be given by Eq. (3)18. $$ \gamma_{\mathrm{screen\_max}} = \sigma_{\mathrm{uts\_wire\_mat}} \cdot MC \cdot \frac{\pi d^{2}}{4}. \qquad (3) $$ The ultimate tensile strength of a single mesh wire σuts_wire_mat is a material parameter describing the minimal stress necessary to break the material while it is being stretched. The state-of-the-art material for mesh wires in the PV industry is stainless steel with an ultimate tensile strength σuts_wire_mat ≈ 800 N/mm²19. Recently, the use of tungsten alloy wires emerged in order to produce meshes with wire diameters below 15 µm4,5,14. Horvath et al. analyzed the impact of a reduced screen tension γscreen on the screen lifetime, making it one of the most important parameters for the industry20. Further publications deal with 3-dimensional parameters, e.g. the emulsion height EOM and the calendering of the mesh, showing an additional influence on the printing performance6,11,12,21,22,23. However, the presented generalization of the screen performance relies only on a 2-dimensional analysis, allowing the reader to compare any screen configurations with the same set of 3-dimensional parameters. Simulation approach In this study, a full simulation of all individual opened areas across a screen opening channel wn has been carried out. The mathematical background of this simulation approach is well described in the literature4,5,13,15. The area heavily depends on the screen manufacturing parameters, e.g. the screen opening width wn, the screen angle φ, the mesh count MC and the wire diameter d, leading to the full parameter sweep presented in Table 1. In total, all possible combinations of 68,600 different mesh configurations, 450,000 different screen angles φ and 3400 different screen opening widths wn are simulated. For all screen opening channels, a length of l = 156 mm is assumed. Furthermore, the position on the screen orthogonal to the screen opening is averaged over 100 screen openings.
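The relations of Eqs. (1)–(3) are simple enough to evaluate directly. The following minimal Python sketch (not part of the original study; the 480/11 mesh and the stainless-steel strength of approx. 800 N/mm² are merely example values taken from the text) computes the wire-to-wire distance, the normalized open area and the maximal screen tension in consistent units:

```python
import numpy as np

def mesh_parameters(mc_per_inch, d_um):
    """Eqs. (1)-(2): wire-to-wire distance d0 and normalized open area OA%."""
    pitch_um = 25400.0 / mc_per_inch           # one mesh unit, 1/MC, in um
    d0_um = pitch_um - d_um                    # Eq. (2)
    oa = d0_um**2 / (d0_um + d_um)**2          # Eq. (1)
    return d0_um, oa

def max_screen_tension_N_per_cm(mc_per_inch, d_um, sigma_uts_N_per_mm2):
    """Eq. (3): maximal screen tension of a mesh for a given wire material."""
    mc_per_mm = mc_per_inch / 25.4             # wires per mm
    area_mm2 = np.pi * (d_um / 1000.0)**2 / 4  # wire cross-section in mm^2
    gamma_N_per_mm = sigma_uts_N_per_mm2 * mc_per_mm * area_mm2
    return gamma_N_per_mm * 10.0               # N/mm -> N/cm

# Example: a 480/11 mesh woven from stainless steel (sigma_uts ~ 800 N/mm^2)
d0, oa = mesh_parameters(480, 11.0)
gmax = max_screen_tension_N_per_cm(480, 11.0, 800.0)
print(f"d0 = {d0:.1f} um, OA = {100 * oa:.1f} %, gamma_max = {gmax:.1f} N/cm")
```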
Table 1 Variation of parameters for the simulation of the area of individual opened areas across a screen opening. The length of all simulated channels was set to the industry standard of 156 mm (2020)3. The position on the screen orthogonal to the screen opening is averaged over 100 screen openings. Experimental verification of the simulation approach The area of individual openings of two commercially available screens with a screen angle of φ = 22.5° is evaluated by microscopy. The first screen has a mesh count of MC = 360 1/inch, a wire diameter of d = 16 µm and a screen opening width of wn = 40 µm. The second screen has a mesh count of MC = 380 1/inch, a wire diameter of d = 14 µm and a screen opening width of wn = 30 µm. For both screens, a 1 mm segment in the center between busbars of the 25th, 50th and 75th screen opening of the layout is investigated by measuring the area of all individual openings and determining their shapes. Afterwards, the exact screen opening segments are simulated to obtain the simulated areas of individual openings, which are then compared to the experimental data. Furthermore, we presented an additional verification of the simulation approach by comparing the screen angle dependency of individual opened areas, demonstrating a completely accurate simulation of their size13. Later, we expanded the simulation approach by simulating the number of different classes of wire crossings across the screen opening channel and comparing them to microscope images, proving the precise prediction capability of the presented screen simulation approach15. Introduction of the screen utility index In order to generalize the screen opening pattern, a relationship between the screen manufacturing parameters (e.g. screen opening width wn, mesh count MC, wire diameter d and the screen angle φ) and the resulting opening pattern needs to be derived. Figure 2 presents a way to deconstruct the screen opening channel, resulting in a single dimensionless parameter which describes the general utility of a screen: the screen utility index SUI, describing the relationship between the area of individual openings and the area of individual mesh bridges, weighted by the number of mesh units per screen opening channel. The SUI is constructed in three parts. First, the average area of an individual opening defines how much fluid is able to transfer through the screen opening channel wn at a given pressure. An analytical solution for the dependency of this area AInd.Opening on all described screen manufacturing parameters is unknown and therefore requires the presented simulation approach. The average size of all openings is directly linked to the printability. Second, in order to quantify the impact of the underlying mesh on the screen performance, the area of a single mesh bridge Abridge is defined by Eq. (4). Using a fine mesh (high mesh count MC or small wire-to-wire distance d0) will increase the screen lifetime due to an increased wire intersection coverage15. Furthermore, the screen tension is increased, improving the screen snap-off mechanics to minimize spreading effects15,18. On the other hand, the wire diameter d should be minimized because the paste transfer is strongly limited by a blocking cylindrical object. Whitney et al.
analyzed the force–velocity relationship of a rigid cylinder moving through a highly non-Newtonian fluid, showing that commonly used metal pastes with a flow index n ≪ 1 require forces more than one order of magnitude higher than Newtonian flows at equal velocities and geometric conditions24,25. This scenario applies directly to the screen printing process during screen snap-off and indirectly to the flooding phase3. Combining both statements for the wire-to-wire distance d0 and the wire diameter d leads to the conclusion that the area of a mesh bridge Abridge should be minimized. $$ A_{\mathrm{bridge}} = d_{0} \cdot d. \qquad (4) $$ Third, the dependency of the number of contributing mesh units within a screen opening channel on the screen angle φ is presented in Eq. (5). Due to this relationship, the ratio between the average area of individual openings AInd.Opening and the area of single mesh bridges Abridge must be multiplied by the corresponding factor 1/cos(φ) in order to account for the decreased, angle-dependent number of mesh units per screen opening channel at nonzero angles. Each mesh bridge contributes to the expected stability of the emulsion edge during printing because it acts as a micro foundation. $$ \frac{n_{\mathrm{mesh\;units},\,\varphi}}{n_{\mathrm{mesh\;units},\,\varphi=0^{\circ}}} = \frac{1}{\cos\varphi}. \qquad (5) $$ Finally, the definition of the dimensionless screen utility index SUI is given in Eq. (6) by combining the three presented statements. Any screen configuration, defined by its 2-dimensional geometric parameters, has one specific screen utility index SUI. However, there is an infinite number of theoretical screen configurations which result in the exact same value of the SUI. $$ \mathrm{SUI} = \frac{\overline{A_{\mathrm{Ind.Opening}}}}{\cos\varphi \cdot d \cdot d_{0}}. \qquad (6) $$ Following this statement gives rise to a classical optimization problem: what value of the SUI is good enough to ensure printability with respect to a fixed reference fluid or paste? In order to answer this question, we must analyze the special case where SUI = 1 applies. In that case, the average size of an individual opening is equal to the area of a blocking wire bridge, weighted by the number of mesh units across a screen opening channel. This relationship puts the impact of the mesh into context with the resulting screen opening channel, finding a balance between a fine mesh, optimized for high screen tension as well as screen lifetime, and the task of providing sufficient paste transfer at the desired screen opening width wn. In the regime where SUI < 1 applies, the chosen underlying mesh is too coarse for the desired wn, resulting in a significant limitation of the fluid transfer. The relationship between the different screen parameters and the resulting SUI is nonlinear because the numerator itself depends on all screen parameters. For example, increasing the mesh count MC (decreasing the wire-to-wire distance d0) would, through the denominator alone, increase the SUI. However, the average area of individual openings decreases at the same time, resulting overall in a highly nonlinear reduction of the SUI. On the other hand, in the regime where SUI > 1 applies, the underlying mesh is fine enough to create a sufficient screen opening pattern and therefore does not limit the paste transfer more than the angled screen opening channel width wn would have done anyway.
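To make the definition concrete, the sketch below evaluates Eq. (6) for a given mesh and screen angle. Note that the mean individual opening area cannot be written in closed form; in the study it comes from the screen pattern simulation, so it is treated here as an input, and the numeric value in the example is a placeholder rather than a simulated result:

```python
import numpy as np

def screen_utility_index(mean_opening_area_um2, mc_per_inch, d_um, phi_deg):
    """Eq. (6): dimensionless screen utility index SUI.

    mean_opening_area_um2 is the average area of the individual openings of
    one screen opening channel; it has to be obtained from the screen
    pattern simulation and is therefore an input here.
    """
    d0_um = 25400.0 / mc_per_inch - d_um       # Eq. (2)
    bridge_um2 = d_um * d0_um                  # Eq. (4): area of one mesh bridge
    phi = np.deg2rad(phi_deg)                  # Eq. (5): 1/cos(phi) weighting
    return mean_opening_area_um2 / (np.cos(phi) * bridge_um2)

# Placeholder opening area of 600 um^2 for a 480/11 mesh at 22.5 degrees:
sui = screen_utility_index(600.0, mc_per_inch=480, d_um=11.0, phi_deg=22.5)
print(f"SUI = {sui:.2f} ->", "not mesh-limited" if sui > 1.0 else "mesh-limited")
```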
It must be noted that the special case of SUI = 1 mainly applies to a homogeneous fluid which is either particle-free or contains a particle size distribution in which the majority of particles are small compared to the individual opening size. Therefore, we suggest in Eq. (7) a threshold for the ratio between the size of the majority of particles (e.g. particles with a diameter smaller than d99%, assuming a normal distribution) and the average individual opened area. $$ \frac{1}{4}\pi d_{99}^{2} \ll \overline{A_{\mathrm{Ind.Opening}}}. \qquad (7) $$ If the d99% value of a highly filled suspension (e.g. metal pastes for solar cell metallization) becomes too large, certain small openings included in the numerator of Eq. (6) are not able to contribute anything to the overall paste transfer due to immediate clogging by individual particles or agglomerates. As soon as this effect can no longer be neglected, the threshold of the minimal required SUI value for sufficient printability (SUImin) becomes a function of the clogging probability itself. At this point, we suggest an empirical value of SUImin for commonly used high-temperature Ag-paste for PERC front-side metallization in the following experimental section. In order to further model the correlation between SUImin and the clogging probability of individual opened areas within a screen opening channel, an experimental method to measure clogging events during the screen printing process needs to be developed. At present, the mechanics of the screen printing process prevent an easily accessible method for direct measurements. Predictability of the simulation approach In Fig. 3, we present the experimental verification of the simulation approach by comparing simulated values for the area of individual openings to measurements of the corresponding area in microscope images. As described in section "Experimental verification of the simulation approach", two different commercially available screens with a screen angle φ = 22.5° are used. The first screen has a mesh count MC = 360 1/inch with a wire diameter d = 16 µm (wn = 40 µm) and the second screen is made out of a mesh using MC = 380 1/inch with a wire diameter d = 14 µm (wn = 30 µm). The presented deviation between measured and simulated sizes for the exact same individual opening is not caused by the simulation approach itself, but rather by resolution limitations of the microscopy. Furthermore, manufacturing tolerances of wn, d, φ and further deviations due to the mesh calendering contribute to the deviation. However, the overall predictability of the presented simulation approach for the area of individual openings is verified and shall be used to obtain values for the presented dimensionless screen utility index SUI, because the area of individual openings is the only parameter within Eq. (6) which must be simulated by this model. Experimental verification of the simulated area of individual openings. For this verification, 27 microscope images per screen of the screen opening channels were taken and analyzed regarding the area of individual openings. Two screens with different mesh types were chosen. The first mesh has a mesh count MC = 380 1/inch and a wire diameter d = 14 µm (open symbols) with wn = 30 µm, and the second mesh has a mesh count MC = 360 1/inch and a wire diameter d = 16 µm with wn = 40 µm (closed symbols). The screen angle of both screens is φ = 22.5°.
The deviation of the measurements is caused by manufacturing tolerances of wn, MC, d and φ, the mesh calendering and the limited resolution of the microscope images. Correlation of the SUI with screen printing experiments In Fig. 4, we present experimental data from our previous publication4, demonstrating the impact of the SUI on screen-printed metallization in the form of an increased lateral finger resistance RL. For this experiment, the paste sample, the emulsion height EOM, the degree of calendering of the mesh and all printing parameters have been kept constant. In "Results and discussion", we have discussed the theoretical meaning of SUI = 1, highlighting the change when the mesh starts to contribute significantly to the limitation of further paste transfer. The presented data support this critical point where SUI = 1 applies and further show that even approaching SUI = 1 has consequences in terms of a significant increase of the lateral finger resistance RL and thus reduced cell efficiency and non-optimal silver consumption. In order to understand this, we elaborate on the underlying optimization problem of solar cell metallization. The shading of the active cell area by the metallization grid is determined by the cell layout (e.g. number of busbars), the interconnection concept, the finger geometry (mainly the width) and the number of fingers. An increase in shading losses directly results in a significant reduction of the short-circuit current density Jsc and subsequently of the solar cell efficiency. As these shading losses of the grid should be minimized, one must also consider the series resistance contribution of the grid26. Here, the lateral finger resistance as well as the contact resistance at the metal–semiconductor contact play a crucial role. The latter is mainly determined by the paste formulation, the configuration of the firing process as well as the actual properties of the solar cell precursor. The lateral finger resistance for a given paste, on the other hand, is predominantly determined by the geometry of the printed structure and therefore strongly correlates with the printing results. The finger resistance increases whenever the cross-sectional area of a contact finger is locally reduced along its length due to insufficient printing. In our previous publication, we calculated the maximal tolerable lateral finger resistance for different interconnection concepts when a maximal finger series resistance contribution of rs = 0.1 Ω cm2 is assumed27, showing that there is a hard limit for the maximal tolerable lateral finger resistance per given cell layout and interconnection concept. On top of that, the overall goal for the metallization process always remains to minimize the silver consumption while meeting the described performance requirements. Coming back to the screen utility index SUI, we must consider the correlation between the SUI and the experimental data for the lateral finger resistance for each paste separately. As discussed in section "Introduction of the Screen Utility Index", if one uses a highly filled suspension for which Eq. (7) does not remain true, the margin for the minimal SUI shifts towards higher levels because the effective average area of individual openings is reduced due to clogging by single particles or agglomerates. Based on the presented data in Fig. 4, we suggest for high-temperature Ag-pastes (HT-Ag), used for PERC front-side metallization, a minimal threshold of SUImin (HT-Ag) = 1.25.
Furthermore, for low-temperature Ag-paste (e.g. for the metallization of HJT solar cells) we predict a suitable margin of SUImin (LT-Ag) > 1.6, and for Al-paste for the rear-side metallization of bifacial PERC cells we predict a necessary margin of SUImin (HT-Al) > 1.9. However, to date, there is no specific evidence for the last two predictions. Furthermore, we would like to point out that small deviations of screen configurations due to manufacturing tolerances might cause a deviation of the SUI, influencing the expected printing result further, negatively or positively. In future studies, these deviations should be experimentally investigated in order to directly link manufacturing tolerances to a potential reduction in printability. Correlation of the average finger resistance with the SUI value. The predicted change of printability at SUI = 1 is supported by experimental data. For values where SUI < 1 applies, the underlying mesh has an over-proportional negative influence on the printing performance. On the other hand, for values where SUI > 1 applies, the mesh does not limit the paste transfer more than the natural limitation of the screen opening channel wn. Different screens are plotted for 24 µm, 21 µm, 18 µm and 15 µm screen openings. The data are taken from our previous publication4. Values for the 380/0.014/22.5° screen are taken from3. In Fig. 5, we present accumulated data from successful screen printing experiments at Fraunhofer ISE (Freiburg, Germany) for Si-solar cell metallization over the last ten years, demonstrating the evolution of the SUI in a research environment. A variety of different mesh counts MC and wire diameters d has been used to ensure printing through an ever-decreasing screen opening width wn. However, without realizing it at the time, the SUI has been reduced over the years, indicating that mesh manufacturers were not able to keep the development of finer meshes up with the reduction rate of the screen opening width dwn/dt. Nevertheless, the absolute value of the SUI over the years was still suitable for mass production, because not even the SUI = 1.25 limit was passed. This offers a potential explanation why the evolution of published results for the printed finger width over the last 15 years was achieved at an outstanding reduction rate of more than 7 µm per year28. Further paste development was enough to drive this evolution, as SUI values during that time span were far beyond SUI > 1.25, revealing that the screen was never the limiting factor when it comes to printability. In Fig. 5, the gap between the constant blue line and the actual evolution of the SUI gives a qualitative measure of this contribution of the paste development. Those improvements of the paste printability were able to compensate the (at the time) hidden reduction of the SUI. On the other hand, the red line shows the theoretical evolution of the SUI if no mesh improvements had been achieved since 2010 at all. The gap between this trend and the actual evolution gives a qualitative measure of the mesh development. Especially in recent years, the paste development has had an increasing impact on the further reduction of the achieved finger width. History of screen printing experiments at Fraunhofer ISE using screens with the shown screen utility indices SUI. The progression towards smaller SUI values shows the natural evolution of the fine line screen printing process for the metallization of Si-solar cells. The nominal screen opening width was continuously reduced over the years.
The development of finer mesh patterns was not able to keep up with this trend, resulting in an average SUI reduction of approx. 0.05 points per year. In 2019, Fraunhofer ISE challenged the screen printing process with an intense reduction of screen opening structures down to 15 µm. Furthermore, we would like to highlight the fact that the overwhelming industry standard for the screen angle, φ = 22.5°, has dominated even the research activities, to the point that almost no data for different screen angles φ are available29,30,31,32,33. Only in recent years have 0° knotless screens and 30° angled screens been investigated4,34. Figure 5 further reveals that in recent years we have challenged the screen printing process to the point where usual screen architectures fail completely. In 2019, significant reductions of the screen opening width wn from the initial 27 µm towards a novel test pattern with screen openings ranging from 24 µm to only 15 µm cut the resulting SUI almost in half. This result highlights how mesh development is a critical step of the overall screen development. Therefore, screen manufacturers should not decrease the screen opening width wn as quickly as possible; it requires close communication with mesh manufacturers beforehand. Simulation of the SUI Optimization of the mesh count MC and wire diameter d Figure 6 shows the SUI dependency on both mesh parameters for a screen opening width of wn = 20 µm with a screen angle φ = 22.5°. As discussed in section "Introduction of the Screen Utility Index", the numerator of the SUI depends on the wire-to-wire distance d0 and the wire diameter d itself, resulting in the presented nonlinear relationship between the SUI and the mesh parameters. This result gives rise to a classical optimization problem because a mesh with a very low mesh count MC and a small wire diameter d will maximize the SUI, but at the same time minimizes the screen stability due to Eq. (3). In order to highlight this circumstance, we have added red curves for constant SUI values as well as curves for the maximal possible screen tension γscreen_max = 20 N/cm for stainless steel and tungsten alloy wires. The intersection of a constant SUI line (e.g. SUImin = 1.25) with the curve for constant screen tension gives the minimal requirement for the mesh in terms of the minimal mesh count MC and the maximal wire diameter d at which the SUI > 1.25 threshold is fulfilled. If an intersection point between the constant SUI line and the maximal possible screen tension curve for a given wire material does not exist, the desired configuration is physically impossible. In such a case, a new wire material with an increased ultimate tensile strength σuts_wire_mat needs to be developed. This approach reveals the threshold at which a given wire material with an ultimate tensile strength σuts_wire_mat is able to fulfill the requirements for a screen with a certain screen tension γscreen (as long as γscreen < γscreen_max remains true) and the desired value for the SUI which fulfills SUI > SUImin. The SUI is simulated for 68,600 different mesh configurations with a screen opening width of wn = 20 µm at a screen angle of φ = 22.5°. In red, constant SUI lines are presented, highlighting the minimal threshold at SUI = 1. Further, the SUI = 1.25 line is shown, indicating the minimal barrier for a sufficient printability of commonly used Ag-pastes.
The black and grey lines indicate a constant screen tension for tungsten alloy and stainless steel wires at γscreen = 20 N/cm. The intersection point of the SUI = 1.25 line with the screen tension function defines the optimal mesh choice. In Fig. 7, we present a suggestion for future wire materials by plotting Eq. (3) for a broad range of mesh counts MC and wire diameters d at a screen tension γscreen = 20 N/cm. Furthermore, we add constant SUI lines which highlight the need for the development of novel wire materials, because commercially available wire materials like stainless steel and tungsten alloys already show very limited capabilities for a further reduction of the screen opening width wn. For example, there exist different types of glass fiber with high ultimate tensile strengths which could be a suitable option for woven mesh wires35. Usually, the glass of such fibers is amorphous, providing a homogeneous structure along and across the fiber; however, the production of these fibers at diameters below 10 µm is challenging, because even small scratches on the surface dramatically influence the mechanical properties36. On the other hand, there exist fibers made out of carbon. They are widely used in industry to produce strong and ultra-light components for a broad range of applications. Kumar et al. tested single carbon fibers with diameters down to 7 µm, reporting ultimate tensile strengths of up to 3200 MPa37. Arshad et al. produced carbon fibers with an electrospinning approach, with diameters below 0.5 µm and an ultimate tensile strength in the range of 4500 MPa38. Finally, if we further examine the thought experiment of using the finest possible "wire" with the maximal obtainable ultimate tensile strength, we eventually arrive at Iijima and Ichihashi, who published the discovery of carbon nanotubes in 199339. Takakura et al. measured for the first time the ultimate tensile strengths of individual structure-defined, single-walled carbon nanotubes, with values ranging from 20 GPa to over 50 GPa40. Furthermore, Zhang et al. were able to produce over 50 cm long carbon nanotubes by a floating chemical vapor deposition process in 201341, making an industrial application for mesh production potentially a matter of years rather than multiple decades. The ultimate tensile strength of one of these potential wire or fiber materials needs to be higher than the minimum requirement for the desired SUI. For example, if a screen with a nominal screen opening channel of only wn = 5 µm is to be manufactured, the underlying mesh cannot be made out of conventional wires. New technologies for the mass production of very thin and strong wires or fibers (e.g. carbon nanotubes) have to be developed in order to prevent an upcoming dead end of ultra-fine line metallization of Si-solar cells. The simulation of the ultimate tensile strength σuts_wire of individual wires. The mesh parameters are varied between 100–1500 1/inch for the mesh count MC and 1–50 µm for the wire diameter d. A fixed screen tension of 20 N/cm was used. Furthermore, constant lines for certain materials are shown and discussed (e.g. stainless steel, tungsten, glass fibers, carbon fibers, carbon nanotubes). Finally, constant SUI lines are given for different desired screen opening channels wn.
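The material requirement discussed here can be read directly from Eq. (3) by solving it for the wire strength. The short sketch below does this for a fixed tension of 20 N/cm; the specific (MC, d) pairs are illustrative assumptions, not configurations taken from Fig. 7:

```python
import numpy as np

def required_uts_N_per_mm2(gamma_N_per_cm, mc_per_inch, d_um):
    """Eq. (3) solved for sigma_uts: minimal wire ultimate tensile strength
    so that a mesh (MC, d) can hold the screen tension gamma."""
    gamma_N_per_mm = gamma_N_per_cm / 10.0
    mc_per_mm = mc_per_inch / 25.4
    area_mm2 = np.pi * (d_um / 1000.0)**2 / 4
    return gamma_N_per_mm / (mc_per_mm * area_mm2)

# How strong would the wire have to be for ever finer meshes at 20 N/cm?
for mc, d in [(480, 11.0), (650, 8.0), (1000, 5.0)]:
    sigma = required_uts_N_per_mm2(20.0, mc, d)
    print(f"MC = {mc}/inch, d = {d:4.1f} um -> sigma_uts >= {sigma:.0f} N/mm^2")
```

Under these assumptions, even the first configuration already exceeds the approx. 800 N/mm² of stainless steel, which is the trend the figure illustrates.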
Optimization of the screen angle φ and screen opening width wn In Fig. 8 on the left, a full simulation of the SUI for screen angles between 0°–45° and screen opening channel widths wn between 5 and 40 µm is shown, revealing a nonlinear relationship between the SUI and the screen angle φ. Red lines highlight constant SUI curves, including the SUI = 1.25 margin discussed earlier. The common industry standard of φ = 22.5° results in one of the worst configurations if a reduction of wn is desired. The screen manufacturer should switch to reducing or increasing the screen angle to avoid the regime where the SUI shows the strongest reduction with further reducing wn. However, increasing the screen angle will also increase the total area of mesh required per screen due to increased cutting losses during production. These cutting losses contribute significantly to the overall production costs and should be minimized. The industry produces meshes on weaving machines, creating a "mesh carpet" on a roll with a width of usually 1 m. Afterwards, single sheets of mesh are cut out of this mesh roll. On the left, the screen utility index is simulated for all possible screen angle–screen opening width combinations with 0° < φ < 45° and 5 µm < wn < 40 µm. The underlying mesh is kept constant with MC = 480 1/inch and d = 11 µm. The constant SUI lines for SUI = 1, 1.25, 1.5 are highlighted in red, indicating the nonlinear dependency on the screen angle φ. For low screen angles, the nonlinear characteristic vanishes, whereas for high angles close to 45° the dependency becomes the dominant influencing factor. On the right, the screen utility index for the 480/0.011 mesh is highlighted with the inclusion of extraordinary screen angles φ. In our previous publication, we derived those angles by utilizing the Farey sequence, revealing highly repetitive opening patterns15. If those angles are compared to the average angle dependency of the SUI, a significant increase can be observed. In Fig. 9, we present a model to quantify these cutting losses by calculating how many individual square sheets of mesh with side length ls can be cut out of a mesh roll at the screen angle φ. In order to do that, the mesh needs to be clamped on a clamping table with a length of lct. In Eq. (8), we present, to our knowledge for the first time, an analytical solution for the absolute cutting loss per angled sector (see Fig. 9). Further, in Eq. (9) we present a solution for the relative loss of the entire mesh roll based on the roll dimensions, the sheet side length ls, the length of the clamping table lct and the screen angle φ. $$ A_{loss\_sector} = \tan(\varphi) \cdot l_{s}^{2} + \left( \mathrm{mod}_{l_{s}}\!\left( \frac{w_{r} - \sin(\varphi) \cdot l_{s}}{\cos(\varphi) \cdot l_{s}} \right) \cdot l_{s} \right) \qquad (8) $$ $$ A_{loss\_roll\%} = \frac{A_{loss\_sector}}{w_{r}} \cdot \left( \frac{\cos(\varphi)}{w_{r}} - \frac{\sin(\varphi)}{l_{ct}} \right) + \tan(\varphi) \cdot \frac{w_{r}}{l_{ct}}. \qquad (9) $$ On the left, a schematic illustration of the cutting losses of mesh during screen production is presented. Depending on the screen angle φ, the number of sheets with a side length ls which fit into the parallelogram-shaped sector is shown. The cutting losses are defined by two triangles of the same size and a remaining rectangle (defined by the residual of sheets fitting into the sector). On the right, a calculation of Eq. (9) for the relative cutting losses of an entire mesh roll is shown for different roll widths wr and screen angles φ. The length of the clamping table was set to lct = 5 m.
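Eqs. (8) and (9) can be evaluated numerically. In the sketch below, the mod-term of Eq. (8) is read as the fractional number of sheets left over in a sector, converted to an area with ls²; this reading is an assumption made for dimensional consistency, and the sheet size of 0.35 m is likewise an assumed example value:

```python
import numpy as np

def cutting_loss_fraction(phi_deg, w_r_m, l_s_m, l_ct_m):
    """Relative mesh cutting loss of a roll, Eqs. (8)-(9).

    phi_deg: screen angle, w_r_m: roll width, l_s_m: sheet side length,
    l_ct_m: clamping table length (all lengths in meters).
    """
    phi = np.deg2rad(phi_deg)
    # Number of sheets fitting into one angled sector, and its remainder
    n_sheets = (w_r_m - np.sin(phi) * l_s_m) / (np.cos(phi) * l_s_m)
    frac = n_sheets - np.floor(n_sheets)
    # Eq. (8): two triangles plus the residual rectangle of the sector
    a_loss_sector = np.tan(phi) * l_s_m**2 + frac * l_s_m**2
    # Eq. (9): relative loss of the entire roll
    return (a_loss_sector / w_r_m) * (np.cos(phi) / w_r_m
                                      - np.sin(phi) / l_ct_m) \
           + np.tan(phi) * w_r_m / l_ct_m

# Loss vs. screen angle for a 1 m roll and a 5 m clamping table
for phi in [0.0, 5.0, 22.5, 30.0, 45.0]:
    loss = cutting_loss_fraction(phi, w_r_m=1.0, l_s_m=0.35, l_ct_m=5.0)
    print(f"phi = {phi:4.1f} deg -> relative cutting loss = {100 * loss:.1f} %")
```

Under these assumptions the loss grows monotonically with the angle, which matches the qualitative statement above that larger screen angles require more mesh per screen.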
On the right of Fig. 9, a full calculation of Eq. (9) is shown for different roll widths wr between 0.8–2.6 m and all screen angles between 0°–45°. At the commonly used roll width of wr = 1 m, the angle dependency of the cutting losses is close to its maximum, suggesting that the screen angle φ should be reduced as much as possible when economic design goals are considered. However, as discussed in the following section, this creates other serious disadvantages and therefore causes a significant optimization problem for the screen angle φ. We suggest that the industry develop and build bigger weaving machines in order to increase the scalability of the mesh production. In section "Conclusion", we will come back to this optimization problem by presenting a comprehensive approach to design any 2-dimensional screen architecture. As mentioned, reducing the screen angle will increase the SUI and therefore the printability; however, it will also significantly increase local deviations of the opening area across the screen opening, as discussed by Ney et al.13. This phenomenon might increase the probability of local finger interruptions, especially for screen angles below φ < 10°. This circumstance shows the complexity of choosing the right screen angle for a given screen opening width wn. Tepner et al. analyzed specific screen angles which show highly repetitive patterns of opening structures, showing that, e.g., for φ = 26.565° a knotless screen pattern with a repeating pattern every 2 mesh units occurs. This type of knotless screen completely prevents the negative influence of a strong deviation of the opening rate OA% across the screen opening channel. In Fig. 8 on the right, we have specifically simulated the SUI for those screen angles which show outstanding repeating patterns, using the same screen opening channel wn = 20 µm and a mesh with a mesh count MC = 480 1/inch and a wire diameter d = 11 µm. It becomes clear that a conventional sweep of the screen angle φ, even with an increment of Δφ = 10^-4°, is not fine enough to explore the full complexity of the angle-dependent screen opening pattern. This phenomenon has been discussed in our previous publication in more detail15. Optimization approach for future screen design In the previous sections, we have discussed how the SUI depends on all 2-dimensional screen parameters. Now we are able to derive a clear approach for designing a screen architecture which optimizes the compromise between the expected printability and the screen lifetime provided by a strong mesh. In Fig. 10, a flow chart is presented which starts by defining the desired goal for a printed finger width wf. Afterwards, the spreading offset of a printed finger at the desired printing speed needs to be estimated or analyzed by rheological investigation. Usually, the printed finger width wf is significantly larger than the screen opening width wn. For example, if an average printed finger width of wf = 20 µm at industrial printing speeds is desired, one must take spreading in the range of 5 µm into account. After wn is known, the threshold for the minimal SUImin for the desired paste has to be defined. As discussed in section "Correlation of the SUI with screen printing experiments", the data support SUImin = 1.25 for commercial high-temperature Ag-paste. For practical reasons, the next step should be the choice of the smallest available wire diameter. This decision might be influenced by technical and/or economic reasons.
Now, the optimal screen tension is defined by estimation or experience. For the mass production of Si-solar cells, a minimum of 20 N/cm should be used3,14,42. This value is further used to obtain the optimal mesh count MC by Eq. (3), defining a fixed ratio between MC and d. If this ratio does not fulfill the discussed SUImin requirement, the designer can check whether a screen angle is available which pushes the SUI over the SUImin requirement. This decision might be further affected by economic reasons due to the angle-dependent cutting losses of mesh, as discussed in section "Simulation of the SUI" and quantified by Eq. (9). If no screen angle exists which fulfills SUI > SUImin, the chosen wire material does not offer a solution for the desired printed finger width wf and screen tension γscreen. In such a case, the designer has to search for new wire materials or reevaluate the initial technological and economic decision for the smallest available wire diameter. Design approach for the definition of the 2-dimensional screen architecture. In order to reach a desired printed finger width wf, a series of design choices has to be made. First, the screen opening width wn needs to be derived by rheological investigation of the paste at the desired printing speed and determination of the expected spreading offset. Furthermore, the margin for the SUI (e.g. SUImin > 1.25), the smallest available wire diameter for mesh production and the desired screen tension γscreen are defined. After calculating the mesh count MC by Eq. (3), the SUImin requirement is checked. Depending on the result, the screen angle φ is chosen by minimizing the cutting losses defined by Eq. (9). If no configuration is available which fulfills all requirements, new wire materials need to be developed.
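For illustration, the first steps of this design flow can be condensed into a short computational sketch. This is a simplification rather than the authors' tool: the spreading offset is taken as a given input instead of a rheological measurement, and the final SUI check still requires the screen pattern simulation described above:

```python
import numpy as np

def design_screen(w_f_um, spread_um, d_um, gamma_N_per_cm,
                  sigma_uts_N_per_mm2=800.0):
    """Condensed design flow of Fig. 10 (simplified sketch).

    Returns the target screen opening width wn and the minimal mesh count
    MC that holds the desired tension with the chosen wire material, or
    None if the resulting mesh is physically impossible (wires touching).
    """
    w_n_um = w_f_um - spread_um                          # subtract spreading offset
    area_mm2 = np.pi * (d_um / 1000.0)**2 / 4
    # Eq. (3) solved for MC at the desired screen tension:
    mc_per_mm = (gamma_N_per_cm / 10.0) / (sigma_uts_N_per_mm2 * area_mm2)
    mc_per_inch = mc_per_mm * 25.4
    d0_um = 25400.0 / mc_per_inch - d_um
    if d0_um <= 0:                                       # wires would overlap
        return None
    return w_n_um, mc_per_inch                           # SUI check follows separately

# Target: 20 um fingers, 5 um spreading, 11 um stainless wire, 20 N/cm
print(design_screen(w_f_um=20.0, spread_um=5.0, d_um=11.0, gamma_N_per_cm=20.0))
```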
Conclusion Fine line screen printing for solar cell metallization is facing the increasingly difficult challenge of further decreasing the printed finger width in order to increase the cell efficiency and reduce the silver consumption per cell. In this study, we present a step-by-step approach for designing future screen architectures by introducing the novel dimensionless screen utility index SUI. This parameter gives a quantitative indication of the expected printability of any 2-dimensional screen architecture. Further, a full theoretical derivation of the SUI, a correlation to experimental results and an in-depth simulation over a broad range of screen manufacturing parameters are given, revealing a nonlinear relationship to all 2-dimensional screen parameters. This analysis is extended by modeling the angle-dependent mesh cutting losses in the mass production of screens, giving rise to a classical optimization problem. Finally, we present a prediction for the future of mesh production by simulating the printability of screens made out of novel wire materials (e.g. carbon nanotubes) with very high ultimate tensile strengths in order to fulfill the SUI requirements for printing contact fingers with widths below 10 µm. 1. International technology roadmap for photovoltaic, 11th edition, 2019 Results. ITRPV (2020). 2. Hilali, M. M. et al. Effect of Ag particle size in thick-film Ag paste on the electrical and physical properties of screen printed contacts and silicon solar cells. J. Electrochem. Soc. 153, A5. https://doi.org/10.1149/1.2126579 (2006). 3. Tepner, S. et al. Improving wall slip behavior of silver pastes on screen emulsions for fine line screen printing. Solar Energy Mater. Solar Cells 200, 109969. https://doi.org/10.1016/j.solmat.2019.109969 (2019). 4. Tepner, S. et al. Screen pattern simulation for an improved front-side Ag-electrode metallization of Si-solar cells. Prog. Photovolt. Res. Appl. 28, 1054–1062. https://doi.org/10.1002/pip.3313 (2020). 5. Tepner, S., Ney, L., Linse, M., Lorenz, A. & Pospischil, M. Advances in screen printed metallization for Si-solar cells - towards ultra-fine line contact fingers below 20 μm. 29th International PV Science and Engineering Conference, Xi'an, China. https://doi.org/10.13140/RG.2.2.33088.69126 (2019). 6. Thibert, S. et al. Influence of silver paste rheology and screen parameters on the front side metallization of silicon solar cell. Mater. Sci. Semicond. Process. 27, 790–799. https://doi.org/10.1016/j.mssp.2014.08.023 (2014). 7. Pospischil, M. A Parallel Dispensing System for an Improved Front Surface Metallization of Silicon Solar Cells (Fraunhofer Verlag, Munich, 2017). 8. Pospischil, M. et al. Correlations between finger geometry and dispensing paste rheology. 27th European Photovoltaic Solar Energy Conference and Exhibition, 1773–1776. https://doi.org/10.4229/27THEUPVSEC2012-2CV.5.51 (2012). 9. Xu, C., Fies, M. & Willenbacher, N. Impact of wall slip on screen printing of front-side silver pastes for silicon solar cells. IEEE J. Photovolt. 7, 129–135. https://doi.org/10.1109/JPHOTOV.2016.2626147 (2017). 10. Tepner, S. et al. The link between Ag-paste rheology and screen printed metallization of Si solar cells. Adv. Mater. Technol. https://doi.org/10.1002/admt.202000654 (2020). 11. Fortu, Z. & Song, L. X. Identifying screen properties to optimise solar cell efficiency and cost. Energy Procedia 33, 84–90. https://doi.org/10.1016/j.egypro.2013.05.043 (2013). 12. Tavares, R., Dobie, A., Buzby, D. & Zhang, W. Optimal screen mesh, emulsion chemistry, and emulsion thickness for fine-line front metallization pastes on crystalline silicon solar cells. 26th European Photovoltaic Solar Energy Conference and Exhibition, 2040–2043. https://doi.org/10.4229/26THEUPVSEC2011-2CV.2.23 (2011). 13. Ney, L. et al. Optimization of fine line screen printing using in-depth screen mesh analysis. AIP Conf. Proc. https://doi.org/10.1063/1.5125871 (2019). 14. Clement, F. et al. "Project FINALE" - screen and screen printing process development for ultra-fine-line contacts below 20 µm finger width. 36th EU PVSEC Conference Proceedings, 259–262. https://doi.org/10.4229/EUPVSEC20192019-2DO.5.1 (2019). 15. Tepner, S. et al. Studying knotless screen patterns for fine line screen printing of Si-solar cells. IEEE J. Photovolt. 10, 319–325. https://doi.org/10.1109/JPHOTOV.2019.2959939 (2020). 16. Fraunhofer ISE. Press Release: Innovative Fine-Line Screen Printing Metallization Reduces Silver Consumption for Solar Cell Contacts. https://www.ise.fraunhofer.de/en/press-media/press-releases/2019/innovative-fine-line-screen-printing-metallization-reduces-silver-consumption-for-solar-cell-contacts.html (2019). 17. Haegel, N. M. et al. Terawatt-scale photovoltaics: Transform global energy. Science (New York, N.Y.) 364, 836–838. https://doi.org/10.1126/science.aaw1845 (2019). 18. Hahne, P. Innovative Drucktechnologien. Siebdruck - Tampondruck; Photolithographie, InkJet, BubbleJet, Digitaldruck, LFP, Drop-On-Demand, Non-Impact-Verfahren, Dickfilm, Heißprägen, Offsetdruck, Flexodruck, Fodel-Verfahren, Driographie (Verl. Der Siebdruck, Lübeck, 2001). 19. Rasmussen, K. J. Full-range stress–strain curves for stainless steel alloys. J. Constr. Steel Res. 59, 47–61. https://doi.org/10.1016/S0143-974X(02)00018-4 (2003).
20. Horvath, E., Harsanyi, G., Henap, G. & Torok, A. Mechanical modelling and life cycle optimisation of screen printing. J. Theor. Appl. Mech. 50, 1025–1036 (2012). 21. Aoki, M. et al. 30 μm fine-line printing for solar cells. IEEE 39th Photovoltaic Specialists Conference, 2162–2166. https://doi.org/10.1109/PVSC.2013.6744903 (2013). 22. Erath, D. et al. Advanced screen printing technique for high definition front side metallization of crystalline silicon solar cells. Solar Energy Mater. Solar Cells 94, 57–61. https://doi.org/10.1016/j.solmat.2009.05.018 (2010). 23. Itoh, U. et al. in 38th IEEE Photovoltaic Specialists Conference (PVSC), 2012, 3–8 June 2012, Austin Convention Center, Austin, Texas (IEEE, Piscataway, NJ, 2012), 2167–2170. 24. Whitney, M. J. & Rodin, G. J. Force–velocity relationships for rigid bodies translating through unbounded shear-thinning power-law fluids. Int. J. Non-Linear Mech. 36, 947–953. https://doi.org/10.1016/S0020-7462(00)00059-7 (2001). 25. Chhabra, R. P., Soares, A. A. & Ferreira, J. M. Steady non-Newtonian flow past a circular cylinder: A numerical study. Acta Mech. 172, 1–16. https://doi.org/10.1007/s00707-004-0154-6 (2004). 26. Mette, A. New Concepts for Front Side Metallization of Industrial Silicon Solar Cells 1st edn. (Verl. Dr. Hut, München, 2007). 27. Tepner, S. & Lorenz, A. Printing technology and its impact on the PV industry - a comprehensive review. Prog. Photovolt. Res. Appl. (submitted for publication). 28. Lorenz, A. et al. Screen printed thick film metallization of silicon solar cells - recent developments and future perspectives. 35th European Photovoltaic Solar Energy Conference and Exhibition, 819–824. https://doi.org/10.4229/35THEUPVSEC20182018-2DV.3.65 (2018). 29. Winter, M. R. et al. Screen printing to achieve highly textured Bi4Ti3O12. J. Am. Ceram. Soc. https://doi.org/10.1111/j.1551-2916.2010.03694.x (2010). 30. Olaisen, B. R. et al. Hot-melt screen-printing of front contacts on crystalline silicon solar cells. Conference Record of the Thirty-First IEEE Photovoltaic Specialists Conference, 1084–1087. https://doi.org/10.1109/PVSC.2005.1488323 (2005). 31. Schwanke, D., Pohlner, J., Wonisch, A., Kraft, T. & Geng, J. Enhancement of fine line print resolution due to coating of screen fabrics. J. Microelectron. Electron. Packag. 6, 13–19. https://doi.org/10.4071/1551-4897-6.1.13 (2009). 32. Narakathu, B. B. et al. Rapid prototyping of a flexible microfluidic sensing system using inkjet and screen printing processes. IEEE Sens. https://doi.org/10.1109/ICSENS.2015.7370340 (2015). 33. Beaucarne, G., Schubert, G., Tous, L. & Hoornstra, J. Summary of the 8th workshop on metallization and interconnection for crystalline silicon solar cells. AIP Conf. Proc. https://doi.org/10.1063/1.5125866 (2019). 34. Chen, S.-Y. et al. Industrially PERC solar cells with integrated front-side optimization. Conference Record of the IEEE Photovoltaic Specialists Conference, 980–982. https://doi.org/10.1109/PVSC.2018.8547926 (2013). 35. Wallenberger, F. T. & Bingham, P. A. Fiberglass and Glass Technology: Energy-Friendly Compositions and Applications (Springer, New York, 2009). 36. Gupta, V. B. & Kothari, V. K. Manufactured Fibre Technology (Springer Netherlands, Dordrecht, 2012). 37. Ilankeeran, P. K., Mohite, P. M. & Kamle, S. Axial tensile testing of single fibres. MME 02, 151–156. https://doi.org/10.4236/mme.2012.24020 (2012). 38. Arshad, S. N., Naraghi, M. & Chasiotis, I. Strong carbon nanofibers from electrospun polyacrylonitrile. Carbon 49, 1710–1719.
https://doi.org/10.1016/j.carbon.2010.12.056 (2011). 39. Iijima, S. & Ichihashi, T. Single-shell carbon nanotubes of 1-nm diameter. Nature 363, 603–605. https://doi.org/10.1038/363603a0 (1993). 40. Takakura, A. et al. Strength of carbon nanotubes depends on their chemical structures. Nat. Commun. 10, 3040. https://doi.org/10.1038/s41467-019-10959-7 (2019). 41. Zhang, R. et al. Growth of half-meter long carbon nanotubes based on Schulz–Flory distribution. ACS Nano 7, 6156–6161. https://doi.org/10.1021/nn401995z (2013). 42. Lai, J.-H. et al. in 38th IEEE Photovoltaic Specialists Conference (PVSC), 2012, 3–8 June 2012, Austin Convention Center, Austin, Texas (IEEE, Piscataway, NJ, 2012), 2192–2195. Open Access funding enabled and organized by Projekt DEAL. Fraunhofer Institute for Solar Energy Systems ISE, Heidenhofstraße 2, 79110, Freiburg, Germany: Sebastian Tepner, Linda Ney, Marius Singler, Ralf Preu, Maximilian Pospischil & Florian Clement. Author contributions: S.T. conducted the literature review and developed the idea and concept of the study, defined and theoretically analyzed the screen utility index, conducted the data analysis and interpretation of the simulation results, and wrote the manuscript and prepared all figures for publication. L.N. performed the screen simulation and assisted with the data analysis and interpretation of results. M.S. performed the measurements of screen openings by microscopy for the verification of the simulation and assisted with the modeling of mesh cutting losses during screen production. R.P., M.P. and F.C. supervised the research through all phases and provided proofreading and feedback on the manuscript prior to submission. Correspondence to Sebastian Tepner. Tepner, S., Ney, L., Singler, M. et al. A model for screen utility to predict the future of printed solar cell metallization. Sci Rep 11, 4352 (2021). https://doi.org/10.1038/s41598-021-83275-0
Selected articles from the 17th Asia Pacific Bioinformatics Conference (APBC 2019): bioinformatics Identification of trans-eQTLs using mediation analysis with multiple mediators Nayang Shan1,2, Zuoheng Wang3 & Lin Hou1,2,4 BMC Bioinformatics volume 20, Article number: 126 (2019) Background: Mapping expression quantitative trait loci (eQTLs) has provided insight into gene regulation. Compared to cis-eQTLs, the regulatory mechanisms of trans-eQTLs are less known. Previous studies suggest that trans-eQTLs may regulate expression of remote genes by altering the expression of nearby genes. Trans-association has been studied in mediation analysis with a single mediator. However, prior applications with one mediator are prone to model misspecification due to correlations between genes. Motivated by the observation that trans-eQTLs are more likely to associate with more than one cis-gene than randomly selected SNPs in the GTEx dataset, we developed a computational method to identify trans-eQTLs that are mediated by multiple mediators. Results: We proposed two hypothesis tests for testing the total mediation effect (TME) and the component-wise mediation effects (CME), respectively. We demonstrated in simulation studies that the type I error rates were controlled in both tests despite model misspecification. The TME test was more powerful than the CME test when the two mediation effects are in the same direction, while the CME test was more powerful than the TME test when the two mediation effects are in opposite directions. Multiple-mediator analysis had increased power to detect mediated trans-eQTLs, especially in large samples. In the HapMap3 data, we identified 11 mediated trans-eQTLs that were not detected by the single-mediator analysis in the combined samples of African populations. Moreover, the mediated trans-eQTLs in the HapMap3 samples are more likely to be trait-associated SNPs. In terms of computation, although there is no limit on the number of mediators in our model, the analysis takes more time as additional mediators are added. In the analysis of the HapMap3 samples, we included at most 5 cis-gene mediators. The majority of the trios we considered have one or two mediators. Conclusions: Trans-eQTLs are more likely to associate with multiple cis-genes than randomly selected SNPs. Mediation analysis with multiple mediators improves the power of identification of mediated trans-eQTLs, especially in large samples. Background Expression quantitative trait loci (eQTLs) are genetic variants that influence expression levels of mRNA transcripts. Cis-eQTLs commonly refer to genetic variations that act on local genes (Fig. 1a), and trans-eQTLs are those that act on distant genes and genes residing on different chromosomes (Fig. 1b). Identification of eQTLs can help advance our understanding of the genetics and regulatory mechanisms of gene expression in various organisms [1]. Consistent findings suggest that many genes are regulated by nearby single nucleotide polymorphisms (SNPs), and the identified cis-eQTLs are typically close to transcription start sites. In contrast to cis-eQTLs, trans-eQTL identification is much more challenging because a greater number of SNP-gene pairs are tested for trans-association. In order to achieve the same power, the analysis of trans-eQTLs requires a much larger sample size and/or effect size than the cis-eQTL analysis. However, trans-eQTLs tend to have weaker effects than cis-eQTLs [2].
Several methods have been developed to improve trans-eQTL detection, such as reducing the multiple-testing burden based on pairwise partial correlations from the gene expression data to increase power [3], and constructing or selecting variables to control for unmeasured confounders that may lead to spurious associations [4,5,6].

Fig. 1: Graphical representation of eQTLs. (a) cis-eQTL, (b) trans-eQTL, (c) mediated trans-eQTL with a single cis-mediator, and (d) mediated trans-eQTL with multiple cis-mediators.

Moreover, the biological mechanisms underlying trans-eQTLs are less understood. Previous studies have shown that trans-eQTLs are more likely to be cis-eQTLs than randomly selected SNPs in the human genome [2, 7], suggesting that trans-eQTLs may regulate expression of remote genes by altering the expression of nearby genes. Recently, mediation analysis has become a popular tool to explore trans-association mediated by cis-regulators [2, 6, 8]. These studies used a mediation test assuming a single mediator (Fig. 1c). However, gene expression levels are not independent due to complex regulatory mechanisms. Correlation between genes may violate the assumptions that are required to identify mediation effects if other cis-genes also affect the trans-gene under study.

Mediation analysis with multiple mediators has been applied in genomics [9,10,11], epigenetics [12], and epidemiological studies [13]. Mediation with two mediators was used in [9, 10, 13], and mediation with high-dimensional mediators was implemented in [11, 12]. In this paper, we showed that the assumptions in the multivariate extension of mediation analysis are more likely to be satisfied than those in the single-mediator model (Additional file 1). We also found that trans-eQTLs are more likely to associate with more than one cis-gene than randomly selected SNPs in various tissues from the GTEx database. We therefore developed a computational method to identify trans-eQTLs that are mediated by multiple mediators (Fig. 1d). In simulation studies, we demonstrated that the multiple-mediator approach increases the statistical power of identification of mediated trans-eQTLs, and that the improvement is more pronounced at large sample sizes. We applied the method to the HapMap3 dataset and identified 11 mediated trans-eQTLs that were not detected by the single-mediator analysis in the combined samples of African populations. Lastly, we illustrated that mediated trans-eQTLs are more likely to be trait-associated SNPs in genome-wide association studies (GWAS). These findings advance our knowledge of gene regulation.

Methods

Genotype and gene expression data were retrieved from six HapMap3 populations: LWK (Luhya in Webuye, Kenya), MKK (Maasai in Kinyawa, Kenya), YRI (Yoruba in Ibadan, Nigeria), CEU (Utah residents with Northern and Western European ancestry from the CEPH collection), CHB (Han Chinese in Beijing, China), and JPT (Japanese in Tokyo, Japan) [14]. There are 83, 135, 107, 107, 79, and 81 individuals in each population, respectively. The greater genetic diversity in African populations (LWK, MKK, YRI) tends to increase the power of eQTL detection [15]. Therefore, we performed analyses in the three populations separately and in the combined samples. Because the sample sizes of CHB and JPT are below 100, we combined the two populations into one sample of Asian populations. Processed expression data profiled on the Illumina Human-6 v2 Expression BeadChip array for the HapMap3 samples were downloaded from ArrayExpress (accession numbers E-MTAB-264 and E-MTAB-198).
We also downloaded the V6p release from the GTEx database, which provides a complete list of cis- and trans-eQTLs identified in the GTEx study [16].

Genotype data processing

In the quality control step, a series of filters were applied to remove samples and SNPs with poor quality in each population. We removed samples with a call rate less than 0.97; next, we retained autosomal SNPs with a missing rate less than 0.08 and minor allele frequency (MAF) greater than 0.10; finally, SNPs that failed the Hardy-Weinberg test (p-value < 10^-5) were removed. We then converted the SNP coordinates according to the human reference genome hg38. In addition, we found that some SNPs were in complete linkage disequilibrium (LD) with each other or mapped to identical genome positions. In such cases, we randomly selected one SNP to be included in the analysis. The numbers of individuals and SNPs before and after quality control are listed in Table 1. In total, 740,158 SNPs were retained in the combined samples of African populations and 540,684 SNPs in the combined samples of Asian populations.

Table 1: The number of individuals and SNPs before and after quality control in the HapMap3 data.

Gene expression data processing

There are 21,800 probes in the microarray gene expression data of the HapMap3 samples. Among them, 20,439 probes were mapped to the reference genome. We then removed probes that were mapped to multiple genes or to non-autosomes, resulting in 19,832 probes corresponding to 19,643 unique genes. We further removed probes with low variance or low intensity and performed quantile normalization to reduce inter-individual variation [17]. Mediation analysis was applied to the probe-level data, mainly because multiple probes in a gene represent different isoforms of the gene, and merging them may lose information. More importantly, probes mapped to the same gene were only weakly correlated in the HapMap3 data.

Population stratification and confounders in gene expression data

In the single-population analyses (LWK, MKK, YRI, CEU), we adopted the strategy of [18] to correct for population admixture in LWK and MKK. We used the EIGENSTRAT program [19] to select the top 10 principal components (PCs) generated from the SNP genotype data as covariates. In the combined samples of African populations and Asian populations, 20 PCs from the genotype data were included in the analysis. To adjust for batch effects and unmeasured confounders in the gene expression datasets, we used the probabilistic estimation of expression residuals (PEER) method [20]. Following the GTEx analysis [21], the number of factors for PEER was determined by the sample size: we included 15 factors for datasets with less than 150 samples, 30 factors for datasets with sample size between 150 and 250, and 35 factors for datasets with more than 250 samples. Gender was also included as a covariate in all analyses.

eQTL analysis

We conducted genome-wide eQTL analysis using the R package Matrix eQTL [22]. SNPs and probes within 1 Mb were tested for cis-association. All inter-chromosomal SNP-probe pairs, as well as intra-chromosomal SNP-probe pairs that are more than 1 Mb apart, were tested for trans-association.

Enrichment analysis

The motivation of our work was based on the observation that many trans-eQTLs are also identified as cis-eQTLs and that they are often associated with more than one cis-gene in the GTEx database.
In order to test whether association with multiple cis-genes is over-represented in trans-eQTLs, we compared the proportion of trans-eQTLs that are associated with more than one cis-gene with that in the human genome. We considered the trans-eQTLs reported in the GTEx V6p dataset and those identified in the HapMap3 dataset. Permutation tests were used to assess significance. To elaborate, for the trans-eQTLs reported in the GTEx V6p dataset, we randomly sampled the same number of SNPs with matched MAF from the 1000 Genomes Project [23] and calculated the proportion of SNPs that are associated with multiple cis-genes. The empirical p-value was obtained by resampling 1000 times. The same test procedure was applied to the trans-eQTLs identified in the HapMap3 dataset. To understand the role of mediated trans-eQTLs in disease association, we performed Fisher's exact test to assess the enrichment of trait-associated SNPs in the trans-eQTLs identified by our method. The trait-associated SNPs were obtained from the NHGRI GWAS catalog [24].

Mediation analysis

To identify trans-eQTLs that are mediated by one or more cis-genes, we first selected candidate trios, composed of a SNP, one or multiple cis-genes, and a trans-gene. The trios were selected based on the following criteria. First, trans SNP-gene pairs were selected if their p-value was less than 10^-6; this cutoff was chosen to reduce the multiple-testing burden [6]. Second, cis-genes that are associated with the SNPs identified in the first step at a genome-wide false discovery rate (FDR) less than 0.05 were selected as candidate cis-mediators. In all the analyses described below, we assume the gene expression data have been normalized and transformed so that the expression values approximately follow a normal distribution. In mediation analysis with a single mediator, we followed the test procedure in [25]; the bootstrap p-value was used to assess significance when testing the single mediation effect (SME).

In mediation analysis with multiple mediators, we considered the following model. For the ith subject, let \( Y_i \) be the expression level of a trans-gene, \( X_i \) be the SNP genotype coded by the number of minor alleles, \( \boldsymbol{M}_i = (M_{i1}, \cdots, M_{ip})^T \) be the expression levels of the p cis-genes, and \( \boldsymbol{C}_i = (C_{i1}, \cdots, C_{iq})^T \) be the q covariates. The mediation model is stated below:

(1) $$ {Y}_i={\beta}_0+{X}_i{\beta}_X+{\boldsymbol{M}}_i^T{\boldsymbol{\beta}}_M+{\boldsymbol{C}}_i^T{\boldsymbol{\beta}}_C+{\varepsilon}_{Y_i} $$ $$ {M}_{ij}={\alpha}_{0j}+{X}_i{\alpha}_{Xj}+{\boldsymbol{C}}_i^T{\boldsymbol{\alpha}}_{Cj}+{\varepsilon}_{M_{ij}} $$

where \( {\boldsymbol{\beta}}_M={\left({\beta}_{M_1},\cdots, {\beta}_{M_p}\right)}^T \) is the effect of the p cis-genes on the trans-gene adjusting for the SNP and covariates, and \( {\boldsymbol{\alpha}}_X = (\alpha_{X1}, \cdots, \alpha_{Xp})^T \) is the effect of the SNP on the p cis-genes adjusting for covariates. \( {\varepsilon}_{Y_i} \) and \( {\varepsilon}_{M_{ij}} \) are measurement errors on gene expression. Here we assume \( {\varepsilon}_{Y_i}\sim N\left(0,{\sigma}^2\right) \), \( {\boldsymbol{\varepsilon}}_{M_i}={\left({\varepsilon}_{M_{i1}},\cdots, {\varepsilon}_{M_{ip}}\right)}^T\sim {N}_p\left(\mathbf{0},\boldsymbol{\Sigma} \right) \), and that \( {\varepsilon}_{Y_i} \) and \( {\varepsilon}_{M_{ij}} \) are independent, but we allow dependence among cis-genes, i.e., the off-diagonal elements in the covariance matrix \( \boldsymbol{\Sigma} \) can be non-zero [11].
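As a rough illustration of how model (1) can be fitted in practice, the following is a minimal sketch that estimates the two regressions by ordinary least squares and combines the coefficients into the mediation effect estimates defined just below (the total effect and the component-wise products). All function and variable names are illustrative; the authors' actual implementation may differ.

```python
import numpy as np

def mediation_effects(X, M, Y, C):
    """OLS estimates of the mediation effects for one (SNP, cis-genes, trans-gene) trio.

    X: (n,) genotype; M: (n, p) cis-gene expression; Y: (n,) trans-gene
    expression; C: (n, q) covariates. Returns (Delta, delta): the total
    effect alpha_X^T beta_M and the component-wise products alpha_Xj * beta_Mj.
    """
    n, p = M.shape
    ones = np.ones((n, 1))
    # Outcome model: Y ~ 1 + X + M + C (coefficients on M give beta_M)
    design_y = np.column_stack([ones, X, M, C])
    coef_y = np.linalg.lstsq(design_y, Y, rcond=None)[0]
    beta_M = coef_y[2:2 + p]
    # Mediator models: M_j ~ 1 + X + C for each j (coefficient on X gives alpha_Xj)
    design_m = np.column_stack([ones, X, C])
    coef_m = np.linalg.lstsq(design_m, M, rcond=None)[0]
    alpha_X = coef_m[1]
    delta = alpha_X * beta_M         # component-wise mediation effects
    Delta = float(alpha_X @ beta_M)  # total mediation effect
    return Delta, delta
```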
Denote the total mediation effect (TME) as \( \Delta ={\boldsymbol{\alpha}}_X^T{\boldsymbol{\beta}}_M \) and the component-wise mediation effects (CME) as \( \boldsymbol{\delta} = (\delta_1, \cdots, \delta_p)^T \), where \( {\delta}_j={\alpha}_{Xj}{\beta}_{M_j} \) [11]. In the following, we focus on the hypothesis tests of TME and CME:

(2) $$ {H}_0:\Delta =0 $$

(3) $$ {H}_0:\boldsymbol{\delta} =\mathbf{0} $$

where (2) consists of a broader class of nulls than (3). For example, when the \( \delta_j \)'s are nonzero in different directions and sum to 0, the TME is zero while the CME is not. Thus, the CME test is of particular interest in the presence of the cancellation effect, which is evident in the HapMap3 dataset. That is, if a SNP has a positive mediation effect through one cis-gene and a negative mediation effect through another cis-gene, the CME test can be more powerful than the TME test, as demonstrated in the simulation studies. Conventional multivariate tests for CME, such as the likelihood ratio test, have limited power when there is a large number of mediators [26]. In our problem, we were less concerned, because there are 1 or 2 cis-mediators in the majority of the trios (see results in Additional files 2, 3, 4, 5, 6 and 7). We used the bootstrap method to assess significance. For comparison, we also tested the SME for each mediator in the trios that have multiple mediators, and the mediation effect was considered to be significant if at least one of the SME tests was significant.

Simulation setup

We conducted simulation studies to evaluate the impact of model misspecification on type I error and statistical power. In detail, we considered three types of model misspecification: Scenario I, the true model has only one mediator while the analysis includes the true mediator and another irrelevant variable as the mediators; Scenario II, the true model has two mediators and the mediation effects are in the same direction; Scenario III, the true model has two mediators and the mediation effects are in opposite directions. In all three scenarios, the performance of the TME, CME, and SME tests is evaluated and compared. We considered sample sizes of 100 and 300 to mimic the sample sizes in the HapMap3 single-population analysis and combined analysis.

Scenario I: The MAF of the SNP is set to 0.3. For the cis-regulatory effect in model (1), \( \alpha_{X1} \) varies from 0.2 to 1, \( \alpha_{X2} \) is fixed at 0.6, and \( {\alpha}_{01}={\alpha}_{02}={\beta}_0=0.5,{\beta}_{M_2}=0,{\beta}_X=0.3 \). We assume an exchangeable covariance structure for \( \boldsymbol{\varepsilon}_M \) with the variance being 1 and the correlation coefficient being 0.2, and \( \varepsilon_Y \) follows the standard normal distribution. We set \( {\beta}_{M_1}=0 \) in the type I error experiments and \( {\beta}_{M_1}=0.1 \) in the power evaluation. The parameters are chosen to mimic the effects estimated in the HapMap3 dataset.

Scenario II: We set \( {\beta}_{M_2}=0 \) and 0.1 to evaluate the type I error and the power, respectively. The other parameters are set the same as in Scenario I.

Scenario III: We set \( {\beta}_{M_2}=0 \) and −0.1 to evaluate the type I error and the power, respectively. The other parameters are set the same as in Scenario I.
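To make the bootstrap assessment described above concrete, here is a minimal sketch of one possible percentile-style bootstrap for the TME and CME tests, reusing the mediation_effects function from the previous sketch. The resampling scheme, replicate count, and the Bonferroni rule for combining component-wise p-values are illustrative assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np

def bootstrap_test(X, M, Y, C, n_boot=1000, seed=0):
    """Bootstrap p-values for the TME and CME null hypotheses.

    Illustrative scheme: individuals are resampled with replacement,
    the bootstrap distributions are centered, and two-sided p-values
    are computed; the CME test rejects if any component is significant
    (Bonferroni-corrected across the p mediators).
    """
    rng = np.random.default_rng(seed)
    n, p = M.shape
    Delta_obs, delta_obs = mediation_effects(X, M, Y, C)
    Delta_bs = np.empty(n_boot)
    delta_bs = np.empty((n_boot, p))
    for b in range(n_boot):
        i = rng.integers(0, n, size=n)
        Delta_bs[b], delta_bs[b] = mediation_effects(X[i], M[i], Y[i], C[i])
    p_tme = np.mean(np.abs(Delta_bs - Delta_bs.mean()) >= abs(Delta_obs))
    p_each = np.mean(np.abs(delta_bs - delta_bs.mean(axis=0))
                     >= np.abs(delta_obs), axis=0)
    p_cme = min(1.0, float(p_each.min()) * p)
    return p_tme, p_cme
```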
Results

Trans-eQTLs are more likely to associate with multiple cis-genes

Previous studies showed that trans-eQTLs are more likely to associate with cis-genes [2, 7], which lays the foundation for the employment of mediation analysis in trans-eQTL studies. To justify multiple mediators, we hypothesized that trans-eQTLs tend to associate with more than one cis-gene, and validated this hypothesis in the GTEx dataset. In 14 out of the 22 tissues available in the GTEx database, trans-eQTLs were found to be significantly associated with two or more cis-genes, and the sample sizes of these tissues are all greater than 100 (Table 2). In the remaining 8 tissues, the sample size is less than 100 in 4 tissues, and no more than 3 trans-associations were observed in 5 tissues. Consistent with the GTEx dataset, we also observed an enrichment of multiple cis-genes in trans-eQTLs in MKK, YRI, CEU, and the combined samples of African populations and Asian populations in the HapMap3 dataset (Table 3). The only exception is the LWK population, possibly because the power of identifying cis- and trans-eQTLs is limited at a sample size of 83. Thus, multiple mediators are prevalent among trans-eQTLs. In the following sections, we develop and evaluate statistical tests to identify trans-eQTLs in a multiple-mediator setup, and then apply the method to the HapMap3 dataset.

Table 2: Enrichment results in different tissues in the GTEx database.

Table 3: Enrichment results in the HapMap3 data.

Simulation studies

In simulations, we studied the effect of model misspecification in three scenarios of trans-eQTL identification (see Simulation setup in Methods) from two perspectives, the type I error and the statistical power, and we compared the three tests: TME, CME, and SME. In Scenario I, the type I error rates did not differ significantly from the nominal level of 0.05 even though a second mediator was falsely included in the analysis (Fig. 2a), and the results were consistent across all three tests we considered. In terms of power, as expected, the SME test (the true model) achieved the highest power, while the TME and CME tests had reduced power due to falsely including a cis-gene that does not mediate the trans-association (Fig. 3a). However, the power difference between SME and TME quickly diminishes as the mediated effect of the true mediator increases. In the mediated trans-eQTL problem, we pre-select trios for the mediation test, and there is no guarantee that false mediators are excluded at this step. However, as shown in the simulations, the type I error was under control, at the expense of some power loss.

Fig. 2: Empirical type I error of the TME, CME, and SME tests, based on 1,000 simulation replicates, α = 0.05. (a) type I error in Scenario I, and (b) type I error in Scenario II/III. The two horizontal grey dashed lines are the 95% confidence interval (0.0365-0.0635).

Fig. 3: Empirical power of the TME, CME, and SME tests, based on 1,000 simulation replicates, α = 0.05. (a) power in Scenario I, (b) power in Scenario II, and (c) power in Scenario III.

In Scenarios II and III, the null hypotheses are identical for each test, so we present the type I error results in one graph (Fig. 2b). The type I error was under control whether one or both mediators were included in the model. In terms of power, the TME test was more powerful than the CME test in Scenario II, where the two mediation effects are in the same direction. In contrast, the TME test was less powerful than the CME test in Scenario III, where the two mediation effects are in opposite directions. The SME test lost power due to leaving out one of the two cis-mediators, and its power fell between that of TME and CME (Fig. 3b, c).
It is noteworthy that for the SME test, we actually performed two separate tests, one for each of the mediators, and the rejection of either one leads to the final rejection. The results were consistent with [11], where a similar directionality effect was reported. The power difference between the three tests increases as the sample size increases. When the mediation effects were in different directions, the power of the TME test declined until \( \alpha_{X1} \) reached 0.6, where the two mediation effects cancelled. After that, the power of the TME test rose with the value of \( \alpha_{X1} \), but remained inferior to that of CME and SME (Fig. 3c). In summary, the single-mediator model loses power when multiple mediators are present, and the optimal choice of hypothesis test depends on the unknown directionality of the mediation pathways.

Identification of mediated trans-eQTLs: application to the HapMap3 dataset

We applied the mediation tests to LWK, MKK, YRI, CEU, and the combined samples of African populations and Asian populations in the HapMap3 dataset. In each population, mediated trans-eQTLs with p-values less than 0.05 are shown in Table 4 (more details in Additional files 2, 3, 4, 5, 6 and 7). The three tests gave similar results in the single-population analyses, perhaps due to the small sample sizes. In the combined samples of African populations, 291 (24.3%) trans-eQTLs were associated with two or more cis-genes. Among the 248 trans-eQTLs associated with two cis-genes, 70 trios were identified by both the TME and CME tests, 13 trios in which the estimated mediation effects were in the same direction were identified by the TME test but not the CME test, and 17 trios in which the estimated mediation effects were in opposite directions were identified by the CME test but not the TME test. All 89 trios detected by the SME test were also identified by either the TME or the CME test. In total, we identified 11 mediated trans-eQTLs that were not detected by the single-mediator analysis (Table 5). In the Asian populations, 254 (26.1%) trans-eQTLs were associated with two or more cis-genes. Among the 195 trans-eQTLs associated with two cis-genes, 33 trios were identified by both the TME and CME tests, 12 trios in which the estimated mediation effects were in the same direction were identified by the TME test but not the CME test, 2 trios were identified by the CME test but not the TME test, and 4 trios in which the estimated mediation effects were in opposite directions were identified by the CME test but not the TME test. All 45 trios detected by the SME test were also identified by either the TME or the CME test. There are 6 mediated trans-eQTLs that were not detected by the single-mediator analysis. Similar results were obtained when the trans-eQTL p-value threshold was set at 10^-7 (data not shown).

Table 4: Mediated trans-eQTLs with p-value < 0.05 in the HapMap3 data.

Table 5: Mediated trans-eQTLs that were detected by the multiple-mediator analysis but not by the single-mediator analysis in the combined samples of African populations.

Replication of trans-eQTLs and mediated trans-eQTLs

We examined the replication of trans-eQTLs across LWK, MKK, YRI, and the combined samples of African populations. When the FDR was controlled at 0.1, the trans-eQTLs identified in LWK, MKK, YRI, and the combined samples had a large overlap (Additional file 8).
All 7 trans-eQTLs identified in LWK were also identified in another population or in the combined samples. Among the 46 trans-eQTLs identified in MKK, 23 were also identified in another population or the combined samples, and among the 51 trans-eQTLs identified in YRI, 30 were also identified in another population or the combined samples. Additionally, we compared our results with those from a previous study in which the FDR of trans-eQTLs was set at 0.05 [2]. There were 2, 5, 5, and 20 trans-eQTLs identified by our method in LWK, MKK, YRI, and the combined samples, respectively, that were previously reported (more details in Additional file 9). The relatively low rates of replication with the previous study may be explained by genetic and environmental differences between populations [2]. Next, we evaluated the replication of mediated trans-eQTLs across populations (Additional file 10). Seven of the 13 mediated trans-eQTLs identified in LWK, 13 of the 61 identified in MKK, and 10 of the 59 identified in YRI were also identified in another population or the combined samples. For those trans-eQTLs that show inconsistent mediation across populations, the inconsistency may be due to different gene regulatory mechanisms between populations [27]. Lastly, we observed that trait-associated SNPs are enriched in the mediated trans-eQTLs identified in the combined samples of African populations and Asian populations (Table 6).

Table 6: Enrichment results of trait-associated SNPs in the mediated trans-eQTLs identified in the HapMap3 data.

Examples of mediated trans-eQTLs

The identified trans-eQTLs that are mediated by multiple mediators may bring new biological insight into gene regulation. For example, the RPL34 gene on chromosome 4 was found to be trans-associated with 5 SNPs on chromosome 6 through the mediation of HLA-DRB5 and HLA-DRB1 (Table 5). RPL34 was previously reported to be trans-associated with the SNP rs2395185 in human monocytes [28], and the association was found to be unique to TLR4 activation, which plays a key role in innate immunity [29]. However, the biological mechanism underlying this trans-association is unknown. Our study identified the mediated trans-association of RPL34 and the SNP rs2239804, which is in LD with rs2395185 (r^2 = 0.53), suggesting a mediating pathway for the previously reported trans-regulation of RPL34 (Fig. 4). The SNP rs2395185 and the two cis-mediators, HLA-DRB5 and HLA-DRB1, were reported to be associated with susceptibility to ulcerative colitis [30], and the two HLA genes were also identified in a rheumatoid arthritis GWAS [31]. The dysfunction of innate immunity is critically important in the pathogenesis of ulcerative colitis [32] and rheumatoid arthritis [33]. Thus, the identified mediated trans-eQTLs not only suggest a biological mechanism for the trans-association of rs2239804 and RPL34, but also suggest a role of the mediated pathway in the disease etiology of ulcerative colitis and rheumatoid arthritis.

Fig. 4: Mediation diagram of the trans-association between rs2239804 and RPL34.

Discussion

eQTL studies have shed enormous light on gene regulatory mechanisms. Significant progress has been made in integrating eQTL information with genome-wide association signals to explain SNP-phenotype associations and prioritize genes and variants for functional studies [34,35,36].
Ongoing efforts such as GTEx and the HapMap Project have greatly expanded current knowledge of eQTLs. However, the identification and interpretation of trans-eQTLs remain a challenging yet important topic. In this work, we developed a computational method to identify trans-eQTLs that are mediated by multiple mediators, and demonstrated its superiority over the single-mediator test in mediation analysis. Previous studies considered the identification of cis-transcripts that mediate the effects of trans-eQTLs on distant genes in a single-mediator setting [2, 6, 8], and may therefore be subject to potential model misspecification. One innovative aspect of our work is to employ multiple-mediator analysis to identify mediated trans-eQTLs. We observed that associations of trans-eQTLs with more than one cis-gene are prevalent in the GTEx and HapMap3 datasets. Thus, mediation analysis allowing for multiple mediators is less sensitive to model misspecification and, as a result, improves the statistical power of the tests. Applied to the HapMap3 data, our approach allowing for multiple mediators identified 11 mediated trans-eQTLs that were not detected in the single-mediator analysis.

There are several caveats in our work. First, unmeasured confounders may not be fully accounted for in the mediation analysis due to the biological complexity of gene regulatory networks. The influence of potential confounders has been evaluated in the single-mediator setting [37]; sensitivity analysis for mediation with multiple mediators will be investigated in future studies. Second, we cannot make causal claims based on the detected mediation effects, because the observed mediations simply explain trans-associations rather than establish causal relationships. Third, the selection of cis-gene mediators is completely data-driven in the current study. It would be of great interest to integrate knowledge of gene networks into the mediation framework.

Conclusions

We implemented a multiple-mediator analysis approach to identify mediated trans-eQTLs. In simulation studies, we illustrated that our method improves the statistical power of identification of mediated trans-eQTLs compared to the single-mediator analysis. Furthermore, we identified 11 mediated trans-eQTLs that were not detected by the single-mediator analysis in the HapMap3 data.

Abbreviations

CEU: Utah residents with Northern and Western European ancestry from the CEPH collection; CHB: Han Chinese in Beijing, China; CME: Component-wise mediation effect; eQTL: Expression quantitative trait locus; JPT: Japanese in Tokyo, Japan; LD: Linkage disequilibrium; LWK: Luhya in Webuye, Kenya; MKK: Maasai in Kinyawa, Kenya; PC: Principal component; PEER: Probabilistic estimation of expression residuals; SME: Single mediation effect; TME: Total mediation effect; YRI: Yoruba in Ibadan, Nigeria

References

1. Veyrieras JB, Kudaravalli S, Kim SY, Dermitzakis ET, Gilad Y, Stephens M, et al. High-resolution mapping of expression-QTLs yields insight into human gene regulation. PLoS Genet. 2008;4:e1000214.
2. Pierce BL, Tong L, Chen LS, Rahaman R, Argos M, Jasmine F, et al. Mediation analysis demonstrates that trans-eQTLs are often explained by cis-mediation: a genome-wide analysis among 1,800 South Asians. PLoS Genet. 2014;10:e1004818.
3. Weiser M, Mukherjee S, Furey TS. Novel distal eQTL analysis demonstrates effect of population genetic architecture on detecting and interpreting associations. Genetics. 2014;198:879–93.
4. Stegle O, Parts L, Durbin R, Winn J. A Bayesian framework to account for complex non-genetic factors in gene expression levels greatly increases power in eQTL studies. PLoS Comput Biol. 2010;6:e1000770.
5. Rakitsch B, Stegle O. Modelling local gene networks increases power to detect trans-acting genetic effects on gene expression. Genome Biol. 2016;17:33.
6. Yang F, Wang J, GTEx Consortium, Pierce BL, Chen LS. Identifying cis-mediators for trans-eQTLs across many human tissues using genomic mediation analysis. Genome Res. 2017;27:1859–71.
7. Westra HJ, Peters MJ, Esko T, Yaghootkar H, Schurmann C, Kettunen J, et al. Systematic identification of trans eQTLs as putative drivers of known disease associations. Nat Genet. 2013;45:1238–43.
8. Yao C, Joehanes R, Johnson AD, Huan T, Liu C, Freedman JE, et al. Dynamic role of trans regulation of gene expression in relation to complex traits. Am J Hum Genet. 2017;100:571–80.
9. Zhao SD, Cai TT, Li H. More powerful genetic association testing via a new statistical framework for integrative genomics. Biometrics. 2014;70:881–90.
10. Huang YT. Integrative modeling of multi-platform genomic data under the framework of mediation analysis. Stat Med. 2015;34:162–78.
11. Huang YT, Pan WC. Hypothesis test of mediation effect in causal mediation model with high-dimensional continuous mediators. Biometrics. 2016;72:402–13.
12. Zhang H, Zheng Y, Zhang Z, Gao T, Joyce B, Yoon G, et al. Estimating and testing high-dimensional mediation effects in epigenetic studies. Bioinformatics. 2016;32:3150–4.
13. Huang YT, Yang HI. Causal mediation analysis of survival outcome with multiple mediators. Epidemiology. 2017;28:370–8.
14. The International HapMap3 Consortium. Integrating common and rare genetic variation in diverse human populations. Nature. 2010;467:52–8.
15. Brynedal B, Choi J, Raj T, Bjornson R, Stranger BE, Neale BM, et al. Large-scale trans-eQTLs affect hundreds of transcripts and mediate patterns of transcriptional co-regulation. Am J Hum Genet. 2017;100:581–91.
16. The GTEx Consortium. Genetic effects on gene expression across human tissues. Nature. 2017;550:204–13.
17. Gentleman RC, Carey VJ, Bates DM, Bolstad B, Dettling M, Dudoit S, et al. Bioconductor: open software development for computational biology and bioinformatics. Genome Biol. 2004;5:R80.
18. Stranger BE, Montgomery SB, Dimas AS, Parts L, Stegle O, Ingle CE, et al. Patterns of cis regulatory variation in diverse human populations. PLoS Genet. 2012;8:e1002639.
19. Price AL, Patterson NJ, Plenge RM, Weinblatt ME, Shadick NA, Reich D. Principal components analysis corrects for stratification in genome-wide association studies. Nat Genet. 2006;38:904–9.
20. Stegle O, Parts L, Piipari M, Winn J, Durbin R. Using probabilistic estimation of expression residuals (PEER) to obtain increased power and interpretability of gene expression analyses. Nat Protoc. 2012;7:500–7.
21. The GTEx Consortium. The Genotype-Tissue Expression (GTEx) pilot analysis: multitissue gene regulation in humans. Science. 2015;348:648–60.
22. Shabalin AA. Matrix eQTL: ultra fast eQTL analysis via large matrix operations. Bioinformatics. 2012;28:1353–8.
23. The 1000 Genomes Project Consortium. An integrated map of genetic variation from 1,092 human genomes. Nature. 2012;491:56–65.
24. Welter D, MacArthur J, Morales J, Burdett T, Hall P, Junkins H, et al. The NHGRI GWAS Catalog, a curated resource of SNP-trait associations. Nucleic Acids Res. 2014;42:D1001–6.
25. MacKinnon DP, Lockwood CM, Williams J. Confidence limits for the indirect effect: distribution of the product and resampling methods. Multivariate Behav Res. 2004;39:99–128.
26. Huang YT, VanderWeele TJ, Lin X. Joint analysis of SNP and gene expression data in genetic association studies of complex diseases. Ann Appl Stat. 2014;8:352–76.
27. Pai AA, Pritchard JK, Gilad Y. The genetic and mechanistic basis for variation in gene regulation. PLoS Genet. 2015;11:e1004857.
28. Kim S, Becker J, Bechheim M, Kaiser V, Noursadeghi M, Fricker N, et al. Characterizing the genetic basis of innate immune response in TLR4-activated human monocytes. Nat Commun. 2014;5:5236.
29. Beutler BA. TLRs and innate immunity. Blood. 2009;113:1399–407.
30. Silverberg MS, Cho JH, Rioux JD, McGovern DP, Wu J, Annese V, et al. Ulcerative colitis loci on chromosomes 1p36 and 12q15 identified by genome-wide association study. Nat Genet. 2009;41:216–20.
31. Eyre S, Bowes J, Diogo D, Lee A, Barton A, Martin P, et al. High-density genetic mapping identifies new susceptibility loci for rheumatoid arthritis. Nat Genet. 2012;44:1336–40.
32. Geremia A, Biancheri P, Allan P, Corazza GR, Di Sabatino A. Innate and adaptive immunity in inflammatory bowel disease. Autoimmun Rev. 2014;13:3–10.
33. Gierut A, Perlman H, Pope RM. Innate immunity and rheumatoid arthritis. Rheum Dis Clin North Am. 2010;36:271–96.
34. Montgomery SB, Dermitzakis ET. From expression QTLs to personalized transcriptomics. Nat Rev Genet. 2011;12:277–82.
35. Gusev A, Ko A, Shi H, Bhatia G, Chung W, Penninx BW, et al. Integrative approaches for large-scale transcriptome-wide association studies. Nat Genet. 2016;48:245–52.
36. Wu M, Lin Z, Ma S, Chen T, Jiang R, Wong WH. Simultaneous inference of phenotype-associated genes and relevant tissues from GWAS data via Bayesian integration of multiple tissue-specific gene networks. J Mol Cell Biol. 2017;9:436–52.
37. Imai K, Keele L, Yamamoto T. Identification, inference and sensitivity analysis for causal mediation effects. Stat Sci. 2010;25:51–71.

Acknowledgements

We thank two reviewers for providing thoughtful and constructive comments to improve the manuscript.

Funding

This study was supported by the National Natural Science Foundation of China grant No. 11601259 (LH) and the National Institutes of Health grant K01AA023321 (ZW). The publication of this article was sponsored by the National Natural Science Foundation of China (Grant No. 11601259).

Availability of data and materials

The gene expression data analyzed in this study are available at ArrayExpress E-MTAB-264, https://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-264/ and E-MTAB-198, https://www.ebi.ac.uk/arrayexpress/experiments/E-MTAB-198/. The genotype data analyzed in this study are available at HapMap phase III, ftp://ftp.ncbi.nlm.nih.gov/hapmap/genotypes/2009-01_phaseIII/plink_format/.

About this supplement

This article has been published as part of BMC Bioinformatics Volume 20 Supplement 3, 2019: Selected articles from the 17th Asia Pacific Bioinformatics Conference (APBC 2019): bioinformatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-20-supplement-3.

Author information

Center for Statistical Science, Tsinghua University, Beijing, 100084, China (Nayang Shan & Lin Hou); Department of Industrial Engineering, Tsinghua University, Beijing, 100084, China; Department of Biostatistics, Yale School of Public Health, New Haven, CT, 06510, USA (Zuoheng Wang); MOE Key Laboratory of Bioinformatics, School of Life Sciences, Tsinghua University, Beijing, 100084, China (Lin Hou).

Authors' contributions

LH conceived the project, LH and ZW designed the project, and NS performed the analysis. All authors interpreted the data and wrote the manuscript. All authors have read and approved the final version of the manuscript.
Correspondence to Zuoheng Wang or Lin Hou.

Supplementary information

Additional file 1: Assumptions in mediation analysis. This document explains that the assumptions in the multivariate extension of mediation analysis are more likely to be satisfied than those in the single-mediator model. (DOCX 14 kb)
Additional file 2: Mediated trans-eQTLs in LWK. (XLS 64 kb)
Additional file 3: Mediated trans-eQTLs in MKK. (XLS 73 kb)
Additional file 4: Mediated trans-eQTLs in YRI. (XLS 71 kb)
Additional file 5: Mediated trans-eQTLs in the combined samples of African populations. (XLS 106 kb)
Additional file 6: Mediated trans-eQTLs in CEU. (XLS 94 kb)
Additional file 7: Mediated trans-eQTLs in the combined samples of Asian populations. (XLS 91 kb)
Additional file 8: Venn diagram of trans-eQTLs in African populations. (PNG 81 kb)
Additional file 9: Overlap of trans-eQTLs with a previous study [2]. (XLS 52 kb)
Additional file 10: Venn diagram of mediated trans-eQTLs in African populations. (PNG 82 kb)

Cite this article

Shan, N., Wang, Z. & Hou, L. Identification of trans-eQTLs using mediation analysis with multiple mediators. BMC Bioinformatics 20, 126 (2019). https://doi.org/10.1186/s12859-019-2651-6

Keywords: Trans-eQTL; Multiple mediators
Semantic Relation Classification——via Convolution Neural Network

From statwiki

== Presented by ==

Rui Gong, Xinqi Ling, Di Ma, Xuetong Wang

== Introduction ==

One of the emerging trends of natural language technologies is their use for the humanities and sciences (Gábor et al., 2018). SemEval 2018 Task 7 mainly addresses the problem of extracting and classifying relations between two entities in the same sentence into 6 potential relation types: USAGE, RESULT, MODEL-FEATURE, PART_WHOLE, TOPIC, and COMPARE. The data come from 350 scientific paper abstracts, with 1228 and 1248 annotated sentences for the two subtasks, respectively. For each instance, an example sentence was chosen together with its left and right neighboring sentences, as well as an indicator showing whether the relation is reversed, and then a prediction is made. Three models were used for the prediction: linear classifiers, Long Short-Term Memory (LSTM), and Convolutional Neural Network (CNN).

== Algorithm ==

[[File:CNN.png|800px]]

This is the architecture of the CNN. We first transform a sentence via feature embeddings, turning each sentence into continuous word embeddings:

[[File:WordPosition.png]]

and word position embeddings:

[[File:Position.png]]

After featurizing all words in the sentence, a sentence of length [math] N [/math] can be expressed as a vector $$e=[e_{1},e_{2},\ldots,e_{N}]$$ where each entry represents a token of the word. To apply the convolutional neural network, each subset of features $$e_{i:i+j}=[e_{i},e_{i+1},\ldots,e_{i+j}]$$ is given to a weight matrix [math] W\in\mathbb{R}^{(d^{w}+2d^{wp})\times k}[/math] to produce a new feature, defined as $$c_{i}=\tanh(W\cdot e_{i:i+k-1}+bias)$$ This process is applied to all subsets of features of length [math] k [/math], starting from the first one. A mapped feature vector $$c=[c_{1},c_{2},\ldots,c_{N-k+1}]$$ is thereby produced. A max-pooling operation is then applied, picking [math] \hat{c}=\max\{c\} [/math]. With different weight filters, different mapped feature vectors can be obtained. Finally, the original sentence [math] e [/math] can be converted into a new representation [math] r_{x} [/math] of fixed length. For example, if there are 5 filters, then 5 features ([math] \hat{c} [/math]) are picked to create [math] r_{x} [/math] for each [math] x [/math]. Then the score vector $$s(x)=W^{classes}r_{x}$$ is obtained, which represents the score for each class; [math] x [/math]'s entities' relation is classified as the class with the highest score. The [math] W^{classes} [/math] here is the model being trained.

To improve the performance, "negative sampling" was used. Given a training data point [math] x [/math] with correct class [math] y [/math], let [math] I=Y\setminus\{y\} [/math] represent the incorrect labels for [math] x [/math]. The training pushes the score of the correct class above a positive margin [math] m^{+} [/math] and the largest score among the incorrect classes below [math] -m^{-} [/math], so the loss function is $$L=\log(1+e^{\gamma(m^{+}-s(x)_{y})})+\log(1+e^{\gamma(m^{-}+\max_{y'\in I}s(x)_{y'})})$$ with margins [math] m^{+} [/math], [math] m^{-} [/math] and penalty scale factor [math] \gamma [/math]. The whole training is based on the ACL Anthology corpus, which contains 25,938 papers with 136,772,370 tokens in total; 49,600 of the tokens are unique.
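The ranking loss above can be written down directly. The following is a small sketch in Python/NumPy; the margin and scale values are illustrative defaults, not the hyperparameters used in the paper.

```python
import numpy as np

def ranking_loss(scores, y, gamma=2.0, m_pos=2.5, m_neg=0.5):
    """Pairwise ranking loss for one example.

    scores: vector s(x) of class scores; y: index of the correct class.
    Pushes the correct score above m_pos and the best incorrect score
    below -m_neg. gamma, m_pos and m_neg are illustrative values.
    """
    s_correct = scores[y]
    s_incorrect = np.max(np.delete(scores, y))  # highest-scoring wrong class
    return (np.log1p(np.exp(gamma * (m_pos - s_correct)))
            + np.log1p(np.exp(gamma * (m_neg + s_incorrect))))

# Example: three relation classes, correct class is 0
print(ranking_loss(np.array([1.8, 0.4, -0.7]), y=0))
```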
For the word embeddings, given a vocabulary [math] V [/math], an embedding matrix is built in which each word is looked up by its position in the vocabulary. This matrix is trainable and is initialized from pre-trained embedding vectors.

For the word position embeddings, certain input words called 'entities' are marked, and they are the key for the machine to determine the sentence's relation. Given two entities, their relative positions in the sentence are used to build the embeddings. Two vectors are produced: one keeps track of each word's position relative to the first entity (the entity itself is recorded as 0, the preceding word as -1, the following word as 1, and so on), and the same procedure is applied for the second entity (see the sketch at the end of this page). Finally, the two vectors are concatenated to form the position embedding.

After the embeddings, the model transforms the embedded sentence into a fixed-size representation of the whole sentence via the convolution layer. Finally, after max pooling reduces the dimension of the layer outputs, a score for each relation class is obtained via a linear transformation.

== Conclusion ==

Linear classifiers, a sequential random forest, LSTM and CNN models were tested, with several variations applied to each model. Among all variations, the vanilla CNN with negative sampling and ACL embeddings performed significantly better than all the others. Attention-based pooling, up-sampling and data augmentation were also tested, but they barely improved performance.
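As a concrete illustration of the relative-position encoding described above, the following sketch computes the two position-index vectors for a toy sentence; the indices would then be looked up in a trainable position-embedding table and concatenated. All names are illustrative.

```python
def relative_positions(tokens, e1_idx, e2_idx):
    """Relative position of every token with respect to the two entities.

    The entity itself maps to 0, the word before it to -1, the word
    after it to +1, and so on. Returns one vector per entity.
    """
    pos1 = [i - e1_idx for i in range(len(tokens))]
    pos2 = [i - e2_idx for i in range(len(tokens))]
    return pos1, pos2

# Example: entities "convolution" (index 1) and "relations" (index 4)
tokens = ["the", "convolution", "model", "classifies", "relations"]
print(relative_positions(tokens, 1, 4))
# ([-1, 0, 1, 2, 3], [-4, -3, -2, -1, 0])
```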
Exploiting scale-invariance: a top layer targeted inverse model for hyperspectral images of wounds

Asgeir Bjorgan* and Lise Lyngsnes Randeberg

Department of Electronic Systems, NTNU Norwegian University of Science and Technology, Trondheim, Norway

*Corresponding author: [email protected]

Asgeir Bjorgan https://orcid.org/0000-0003-4200-3827
Lise Lyngsnes Randeberg https://orcid.org/0000-0003-2608-3759

Asgeir Bjorgan and Lise Lyngsnes Randeberg, "Exploiting scale-invariance: a top layer targeted inverse model for hyperspectral images of wounds," Biomed. Opt. Express 11, 5070-5091 (2020)

Original Manuscript: June 11, 2020

Abstract: Detection of re-epithelialization in wound healing is important, but challenging. Hyperspectral imaging can be used for non-destructive characterization, but efficient techniques are needed to extract and interpret the information. An inverse photon transport model suitable for characterization of re-epithelialization is validated and explored in this study. It exploits scale-invariance to enable fitting of the epidermal skin layer only. Monte Carlo simulations indicate that the fitted layer transmittance and reflectance spectra are unique, and that there exists an infinite number of coupled parameter solutions. The method is used to explain the optical behavior of, and detect re-epithelialization in, an in vitro wound model.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In vitro wound models are useful for investigating wound healing in a controlled laboratory setting [1–5]. However, it is challenging to monitor the actual healing in such models without destructive histology analysis. Hyperspectral imaging is a technique providing a non-destructive, objective method for characterizing such tissues optically. Hyperspectral imaging has a spectral and spatial resolution that has been shown to be useful for biomedical applications like wound imaging [6–14], burn wound imaging [12,15], cancer diagnostics [6,16,17] and surgical guidance [6,18]. Statistical processing techniques are often used to handle the large amounts of data [8,12,13,15,19–25]. The rich spectral content further enables the use of inverse photon transport modeling [26–32]. Such modeling techniques can be used to interpret the data and relate spectral changes to changes in skin properties like blood content and blood oxygenation through constrained fitting of optical properties. This indirect way of estimating optical properties is considered to be ill-defined [6,33], since different media can have similar reflectance spectra [33,34], and since absorption and scattering properties are fitted to a single reflectance measurement [35]. A priori knowledge of the expected shapes of the absorption and scattering spectra restrains the problem somewhat [35], but there is still a basic ambiguity resulting from a scale-invariance of the reflectance with respect to the absorption and scattering spectra [33,34]. Other possibilities for restraining such models can be a valuable road of study in order to obtain unique and robust estimates.
The tissue model used in this study was based on an in vitro wound model setup developed by Jansson et al. [36] and Kratz [37]. Here, samples with wounds are prepared from ex vivo human tissue and placed in a growth medium, which causes the wounds to re-epithelialize. Characterization of the re-epithelialized layer is of special interest, e.g. the presence, maturity and thickness of the layer. Photon transport modeling techniques could be used to extract this information from the reflectance spectra.

Photon transport modeling of in vitro wounds can be challenging. The spectral characteristics of the dermal part of the wound models are less well defined than in normally circulated tissue due to the lack of blood absorption, and the spectra are significantly influenced by the spectral properties of the growth medium [38]. However, in wounds, regions both with and without the upper tissue layer are present. The spatial resolution of hyperspectral imaging makes multiple reflectance spectra of each tissue type available, as every pixel contains data with high spectral resolution. This could potentially be used to restrain the model and target the epidermal layer specifically.

The main idea of the inverse modeling technique presented in this paper is that the basic scale-invariant limitations of reflectance modeling can be exploited to enable consideration of the upper layers without having to completely model the lower layers. The availability of reflectance spectra from both wound and re-epithelialized tissue then enables quantification of skin properties in re-epithelialized layers, without the need to consider the dermal layers.

The basic method requires knowledge of an appropriate wound spectrum. The high number of available wound spectra in a hyperspectral image makes selection of one unique spectrum somewhat prohibitive. However, the same availability of multiple spectra with high spectral resolution makes it possible to use dimensionality reduction methods to remove redundant information and represent the spectral information in a low-dimensional space [38]. Principal component analysis (PCA) is such a method, which can decompose a dataset in terms of orthonormal components (loadings) that linearly transform each observation into new variance-maximizing coordinates (scores) [39]. This technique has been used for pre-processing [40–42] and for investigation of spectra in a low-dimensional space [6,19,38]. The method is used in the current paper to reduce a discrete wound spectrum choice to a continuous choice of PCA scores that can be back-transformed to a wound-like spectrum using the inverse transform. This enables the wound spectrum selection to be an efficiently evaluated part of the model optimization.

The main goal of this study is to validate a proof of concept of the presented basic inverse modeling technique, including the suggested PCA extension, and to explore the limitations and possibilities of this approach. A simulation study is carried out using Monte Carlo simulations in MCML [43], which represents a gold standard for photon transport simulations. The simulation study is used to investigate and verify the assumptions that enable use of the inverse modeling method, and the uniqueness and accuracy of the fitted skin parameters are explored. With the findings of the simulation study on uniqueness and parameter accuracy in place, the method is finally applied to experimental data.
Hyperspectral images of an in vitro wound model sample are used as an example for the application of the technique. The inverse modeling method and its basic assumptions are given in section 2.1, along with the PCA-based modification appropriate for wound imaging. The simulation study used to verify these assumptions and investigate the accuracy and uniqueness of the inverse modeled parameters is outlined in section 2.2. Information on the hyperspectral wound dataset used to demonstrate an application of the method is given in section 2.3. Finally, results and discussion are given in section 3.

The method represents a partial modeling approach which combines data-driven results with photon transport modeling. Model fitting is mainly constrained to the upper layer, alleviating some of the inherently undetermined nature of the inverse photon transport modeling approach. The example given here is wound healing, but the method is generic, and can be used to investigate any top layer given that reflectance spectra are available with and without modifications to, or removal of, the top layers. Examples include characterization of burn wounds and estimation of epidermal skin thickness in pre-term newborns.

2. Methods

2.1 Inverse modeling setup

2.1.1 Fundamental assumptions

Two main assumptions form the basis of the method:

1. The reflectance of a one-layer model can be written as a function of $\mu _a/\mu _s'$ (scale-invariance).
2. A multi-layer model and a one-layer representation yield identical reflectance values when a top layer is added.

The method considers cases where reflectance spectra are available from two regions, with and without a top layer. Given the first assumption, the reflectance from the region without the top layer is represented using a one-layer model by estimating its corresponding $\mu _a/\mu _s'$ ratio, without having to consider the actual forms of $\mu_a$ and $\mu_s'$. From the second assumption, these properties can then be used in the deeper layer of a two-layer model to represent the reflectance from the region with the top layer. The skin properties of the top layer can then be fitted without considering the deeper layers, if the optical properties of the top layer are known. Insertion of one-layer properties from the region without the top layer into the two-layer model ensures that the boundary conditions towards the top layer are correct. The basic model steps are shown in Fig. 1.

Fig. 1. Basic inverse model geometry. A one-layer model is fitted to the reflectance $R_0(\lambda )$ from a region with the top layer missing. Due to scale-invariance, any absorption coefficient $\mu _a$ and scattering coefficient $\mu _s$ obeying the required ratio are viable solutions. A top layer can then be fitted to the model by reusing the optical properties in the deeper layer of a two-layer model.

2.1.2 Photon transport model

A diffusion model with an isotropic source function [44] is used as the photon transport model. The main advantage of this model is its simple, analytic expression for the reflectance, enabling fast evaluation during optimization. In addition, it can yield $\mu _a/\mu _s'$ directly from the reflectance without iteration.

2.1.2.1 Theory

Photon transport in biological tissue can be modeled by the radiative transfer equation (RTE) [45].
Assuming an almost isotropic light distribution and isotropic source functions, the time-independent RTE in a one-dimensional geometry is simplified to [44,45]

(1) $$\mu_a \phi(z) - D \frac{d^2}{dz^2} \phi(z) = q(z),$$

where the diffusion constant is $D = \frac {1}{3(\mu _s' + \mu _a)}$ and $\phi$ is the fluence rate. A multi-layer medium is assumed, with $d_i$ describing the depth of layer interface $i$. In the following, $\mu_{tr,i} = \mu_{a,i} + \mu_{s,i}'$ denotes the transport attenuation coefficient and $\delta_i = \sqrt{D_i/\mu_{a,i}}$ the optical penetration depth of layer $i$. With the source function in a layer $i$ given as [44]

(2) $$q_i(z) = \mu_{s,i}' \exp[-\mu_{tr,i} (z - d_{i-1})] \prod_{j=1}^{i-1} \exp[-\mu_{tr,j} (d_j - d_{j-1})],$$

the solution for the fluence rate in layer $i$ is given as [44]

$$\begin{aligned}\phi_i(z) &= \frac{\delta_i^2 \mu_{s,i}'}{D_i (1 - \mu_{tr,i}^2 \delta_i^2)} \exp[-\mu_{tr,i} (z-d_{i-1})] \prod_{j=1}^{i-1} \exp[-\mu_{tr,j}(d_j - d_{j-1})] \quad &(3)\\ &\quad + A_{i1} \exp(-z/\delta_i) + A_{i2} \exp(z/\delta_i). \quad &(4)\end{aligned}$$

The boundary condition at the air-tissue interface is taken as $j(z=0) = A\phi (z=0)$, where the property $j$ is the photon flux. The boundary condition essentially relates the irradiance propagating back into the tissue to the irradiance propagating out of the tissue by an effective reflection coefficient [44]. A refraction index of $n = 1.4$ yields $A = 0.17$ [44]. It is further required that $\lim _{z \to \infty } \phi (z) = 0$. All constants $A_{ij}$ can then be determined by using continuity of $j(z)$ and $\phi (z)$ between each layer, together with the boundary conditions above [44]. The diffuse reflectance $R_d$ is found by [44]

(5) $$R_d = j(z=0),$$

where the last expression is obtained by considering the irradiance transmitted into the air. The analytic solution for a two-layer model can be found in Svaasand et al. [44]. The one-layer solution is trivial to obtain.

2.1.2.2 Offset correction

A correction constant is applied to the diffusion model reflectance. Comparing the diffusion model with a Monte Carlo simulation using the same boundary conditions and optical properties shows that the output reflectance from the diffusion model has an offset [46,47]. The assumption of an almost isotropic light distribution in the diffusion model leads to a less forward-directed photon flux close to the surface, as compared to other source functions such as the Delta-Eddington source function [47], and this yields a higher output reflectance contributing to the observed offset. The isotropic source function is chosen for simplicity and convenience, and it is considered out of scope of this proof of principle to minimize this well known and systematic offset. It is however acknowledged that it introduces systematic errors in the estimated parameters. The diffusion model and two-layer Monte Carlo spectra from model A and model C1 (to be described in section 2.2.1) were compared with equal input parameters. An offset correction of 0.036 was found to minimize the average root mean squared error (RMSE) among all spectra. The offset correction is demonstrated in Fig. 2. On average, the RMSE between the model C1 Monte Carlo spectra in section 2.2.1 and corresponding diffusion model spectra was 0.037 (standard deviation 0.003) before and 0.005 (standard deviation 0.001) after offset correction.

Fig. 2. Demonstration of empirical offset correction of the diffusion model reflectance. Model C1 in section 2.2.1 was used for the demonstrated examples, with the lowest possible parameter choices used for the upper reflectance spectrum, and the highest possible parameter choices for the lower reflectance spectrum.
The RMSE between the Monte Carlo and diffusion model spectra was reduced from 0.036 to 0.006 for the upper spectrum, and from 0.043 to 0.007 for the lower spectrum. The Monte Carlo simulations were run with 1 000 000 photons per wavelength.

2.1.3 Inverse modeling method for skin

The main application of the method considers human skin with and without epidermis present, such as in wounds. Construction of a two-layer model from a basis reflectance was outlined in section 2.1.1. The properties of epidermis are then fitted to the reflectance from the region with the top layer. Python was used for development of the technique.

Melanin is assumed to be the main absorber in epidermis in the visible range [45,48,49]. Minor absorbers include carotene, lipids, cell nuclei and filamentous proteins [50], and are often modeled using a bulk background absorption [31,44,51,52]. A small amount of blood is sometimes included in the epidermis to account for the non-planar geometry of the papillary dermis [44,51,52]. For simplicity, neither of these is included in the current model. The epidermis is assumed to consist of a single layer with melanin as the absorber as shown in Eq. (6), a scattering corresponding to Eq. (8), and a defined layer thickness.

The objective function to be minimized was chosen to be the RMSE between the measured reflectance and the simulated reflectance, relative to the simulated reflectance. The function minimize from the Python package scipy.optimize was used to minimize the objective function, using L-BFGS-B (limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm with box constraints) as the optimization method. The parameters were bounded, and they were rescaled to values between 0 and 1 in order to make them comparable. The fitted epidermal skin parameters are listed in Table 1.

Table 1. Parameters to be fitted during optimization, along with their lower and upper bounds and the scaling factor applied before they are input into the optimization method.

2.1.4 Modifications to the technique for application to wounds

A modification of the inverse model was used for hyperspectral images of wounds, as the appropriate basis spectrum for a given re-epithelialized tissue sample is not known a priori. It was desired to let selection of the wound spectrum be fitted during optimization rather than iterating through the possible choices. A PCA transform with three components is applied to spectra from the wound region. This reduces all possible wound spectra down to a combination of three score parameters and corresponding PCA loading vectors. Three additional parameters were input during the optimization. These were used as PCA scores and inverse transformed to construct an artificial wound basis spectrum onto which an epidermis was placed. This allowed the inverse model to represent a wound spectrum that could be fitted during optimization. The PCA scores were fitted along with the rest of the parameters, and were bounded within the min/max range of the scores as transformed from the original wound spectra. Reconstructed wound basis spectra were constrained to non-negative values, effectively changing the optimization method to SLSQP (sequential least squares programming).
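As a concrete illustration of the fitting procedure in sections 2.1.3 and 2.1.4, here is a minimal sketch of the bounded L-BFGS-B optimization of rescaled parameters. The forward model, parameter count and start values are placeholders for the components described above, not the authors' actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_top_layer(r_measured, forward_model, n_params=3):
    """Fit rescaled top-layer parameters by minimizing the relative RMSE.

    forward_model: callable mapping rescaled parameters in [0, 1] to a
    simulated reflectance spectrum (e.g. a two-layer diffusion model with
    the deep-layer mu_a/mu_s' fixed from the wound spectrum). Placeholder
    for the actual model; parameter count and start values are illustrative.
    """
    def objective(params):
        r_sim = forward_model(params)
        return np.sqrt(np.mean(((r_measured - r_sim) / r_sim) ** 2))

    x0 = np.full(n_params, 0.5)        # start mid-range
    bounds = [(0.0, 1.0)] * n_params   # parameters rescaled to [0, 1]
    result = minimize(objective, x0, method="L-BFGS-B", bounds=bounds)
    return result.x, result.fun
```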
2.2 Simulation study 2.2.1 Simulation setup GPU-MCML [53,54] was used to simulate reflectance spectra from a skin-like geometry. An NVIDIA GeForce GTX 670 was used for GPU parallelization. Wavelengths from $\lambda = 400$ to 850 nm were modeled with 3 nm discretization. 1 000 000 photons were used in each simulation. The total run time for a full spectrum was between 12 and 22 seconds. All simulations were run with pencil beams incident on the skin model, and the tissue was assumed to have a refractive index of 1.4. The homogeneity of the model makes the total integrated reflectance equivalent to the reflectance from the same model illuminated with an infinitely broad beam. A layer thickness of 1 meter was used in order to emulate a semi-infinite layer. 2.2.1.1 Absorption properties The absorption in the top layer was modeled using an absorption model for melanin [55] (6)$$\mu_{\textrm{a,e}} = \mu_{\textrm{a,m,694}}(\lambda / 694)^{-3.46},$$ where $\mu _{\textrm {a,m,694}}$ is a parameter associated with the mean melanin content of the layer. Moderately dark skin corresponds to an absorption in the range 500-900 m$^{-1}$, while fair skin corresponds to an absorption in the range 200-300 m$^{-1}$ [44]. Assumed low and high values for melanin absorption and epidermal thickness are listed in Table 2. For the dermal layers, the absorption is modeled as (7)$$\mu_\textrm{a,d} = \mu_\textrm{oxy}(\lambda) c_\textrm{oxy} + \mu_\textrm{deoxy}(\lambda) c_\textrm{deoxy},$$ where $\mu _{\textrm {\{oxy, deoxy\}}}(\lambda )$ are the absorption spectra for oxygenated and deoxygenated blood, respectively. Assumed low and high values are listed in Table 2. These cover blood volume fractions from 2% to 10%, and oxygenations from approximately 20% to 80%. The inverse model avoids consideration of this layer, and the main goal is to model a layer with absorption and scattering magnitudes representative of human skin. Table 2. Low and high values for parameters varied in the Monte Carlo simulations. 2.2.1.2 Scattering properties For scattering, the following model is used: (8)$$\mu_s' = \mu_\textrm{s,Mie,500}' (\lambda / 500)^{-b_\textrm{Mie}} + \mu_\textrm{s,Ray,500}' (\lambda / 500)^{-4}.$$ The parameters $\mu _{\textrm {s,Mie,500}}'$ and $b_{\textrm {Mie}}$ describe Mie scattering. Example values for ex vivo human skin are 1800 m$^{-1}$ and 0.22, respectively [56]. The parameter $\mu _{\textrm {s,Ray,500}}'$ describes Rayleigh scattering; an example ex vivo value is 1700 m$^{-1}$ [56]. Low and high values are listed in Table 2, and are assumed to represent the expected magnitudes in human skin. The parameter $b_{\textrm {Mie}}$ is kept constant at 0.22 [56] across all simulations. This parameter is expected to vary in real tissue [57]; however, variations in the scattering coefficients were assumed to produce a similar change in the wavelength dependency, and could thus test the recovery of $b_{\textrm {Mie}}$. Further, the reflectance was found to be less sensitive to changes in epidermal scattering, and full parameter recovery of $b_{\textrm {Mie}}$ was not expected. For simplicity, the parameter was therefore not varied in the simulations.
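For reference, the optical property models in Eqs. (6)-(8) translate directly into code. The following is a minimal sketch with wavelengths in nm and coefficients in m$^{-1}$; the function names are illustrative, and the tabulated blood absorption spectra are assumed to be available:

```python
import numpy as np

def mu_a_epidermis(wl, mu_a_m_694):
    # Melanin absorption in the top layer, Eq. (6).
    return mu_a_m_694 * (wl / 694.0) ** (-3.46)

def mu_a_dermis(c_oxy, c_deoxy, mu_oxy, mu_deoxy):
    # Dermal absorption, Eq. (7); mu_oxy and mu_deoxy are tabulated
    # absorption spectra of oxygenated and deoxygenated blood sampled
    # at the simulated wavelengths.
    return mu_oxy * c_oxy + mu_deoxy * c_deoxy

def mu_s_reduced(wl, mu_s_mie_500, b_mie, mu_s_ray_500):
    # Reduced scattering, Eq. (8): Mie plus Rayleigh contributions.
    return (mu_s_mie_500 * (wl / 500.0) ** (-b_mie)
            + mu_s_ray_500 * (wl / 500.0) ** (-4.0))

wl = np.arange(400.0, 853.0, 3.0)  # 400-850 nm with 3 nm discretization
mu_s = mu_s_reduced(wl, 1800.0, 0.22, 1700.0)  # ex vivo example values [56]
```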
2.2.1.3 Model geometries Geometries used in this paper are shown in Fig. 3: a one-layer geometry, a multi-layer geometry and a two-layer geometry. The following simulations were run: • Model A (one-layer model): All combinations of the deeper layer parameters in Table 2 were used, yielding in total 16 varieties. • Model B (multi-layer model): Deeper layer parameters in Table 2 were randomly picked in each of the layers in a three-layer model. Layer thicknesses were set to 150 µm. The simulation was run with and without an additional top layer with thickness 100 µm, melanin content 300 m$^{-1}$ and the low scattering parameters in Table 2. • Model C1 (systematic two-layer model): The lowest deeper layer parameter values in Table 2 were used for the deeper layer. The thickness of the top layer was varied between 100, 150, 200, 250 and 300 µm, the melanin content between 150, 300 and 700 m$^{-1}$, and all combinations of the scattering values listed in Table 2 were used. This yielded in total 60 varieties, or 12 spectra per layer thickness step. • Model C2 (random two-layer model): The properties of the deeper layer were randomly selected among the deeper layer parameters listed in Table 2. Parameters corresponding to the lowest scattering values were selected for the top layer, while the melanin content in Eq. (6) and the thickness were randomly selected from uniform probability distributions bounded by the lower and upper values in Table 2. 50 model varieties were sampled. Fig. 3. Model geometries used in the Monte Carlo simulations. 2.2.2 Verification of modeling assumptions Two main assumptions for the technique to be applicable were given in section 2.1.1. The model A simulations (one-layer models) above yield, over the various wavelengths, a large range of possible combinations of $\mu _a$ (65 to 6714 m$^{-1}$) and $\mu _s'$ (1517 to 5237 m$^{-1}$). The assumption that the reflectance can be written as a function of $\mu _a/\mu _s'$ is checked by plotting the output reflectance values as a function of $\mu _a/\mu _s'$ across all simulations. The model B simulations (multi-layer models) yield the reflectance from a complex multi-layer model with and without a top layer. An empirical Monte Carlo model was constructed for looking up $\mu _a/\mu _s'$ given a reflectance value, using a Savitzky-Golay fit and an interpolating natural cubic spline; a sketch is given below. This model was used to find the $\mu _a/\mu _s'$ ratio for a one-layer model from the complex multi-layer reflectance. A $\mu _s'$ was assumed ($\mu _s' = 1000~\textrm{m}^{-1}$), and a $\mu _a$ was calculated from the ratio. The assumption that a multi-layer model with an extra top layer is indistinguishable from a fitted one-layer model with the same top layer is then checked by comparing the fitted one-layer model with the extra top layer to the original multi-layer model with the same top layer.
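A minimal sketch of such an empirical lookup model, assuming the reflectances and $\mu _a/\mu _s'$ ratios from the model A simulations are available as NumPy arrays (the array names are hypothetical, and the reflectance values are assumed strictly increasing after sorting):

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import CubicSpline

# R, ratio: simulated reflectances and the corresponding mu_a/mu_s'
# values from the one-layer (model A) simulations.
order = np.argsort(R)
R_sorted, ratio_sorted = R[order], ratio[order]

# Smooth the Monte Carlo noise with a Savitzky-Golay fit before
# interpolating with a natural cubic spline.
ratio_smooth = savgol_filter(ratio_sorted, window_length=11, polyorder=3)
lookup = CubicSpline(R_sorted, ratio_smooth, bc_type='natural')

# Look up mu_a for an observed reflectance, assuming mu_s' = 1000 1/m.
mu_a = lookup(0.45) * 1000.0
```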
2.2.3 Uniqueness of the inverse model solution Diffusion model simulations were run in order to investigate whether multiple optical parameters could yield the same $R$, and to find the shape of the relation among the optical properties, if any. A reflectance ($R = 0.6$) was picked, and a $\mu _a/\mu _s'$ ratio (1/200) was set in the deeper layer of a two-layer model. Top layer thicknesses ranging from 10 to 500 µm and scattering coefficients ranging from 10 to 100 000 m$^{-1}$ were put into the top layer. The absorption coefficient necessary to yield the assumed reflectance was derived for each parameter combination. The uniqueness of the obtained skin parameter solution was then investigated. The inverse model was run repeatedly on the same simulated spectrum from model C1 with randomly generated start parameters, in order to see whether it was possible to reach the same global solution. 2.2.4 Accuracy of the inverse modeled parameters In addition to the investigation of the uniqueness of the solution, it is desired to determine the accuracy of the inverse modeled parameters as compared to the input parameters in the Monte Carlo model. Three variations of the inverse model were run on the model C1 simulations: 1. Fit a single parameter, with all parameters except for one fixed. This estimates a baseline accuracy of the inverse model for each parameter. 2. Fit all parameters simultaneously. 3. Fit only thickness and melanin content, with scattering parameters fixed. Model C2 picks epidermal and dermal parameters randomly across a continuous range, and is suitable for evaluating the parameter resolving performance, evaluating at multiple basis spectra and evaluating the case where the basis spectrum is not known. The same epidermal scattering parameters are used, representing a case where the epidermal scattering is known to be homogeneous. Here, three new cases of the inverse model were run: 1. Set the scattering parameters to the true parameters, and estimate melanin content and thickness. This evaluates the inverse model error when the scattering is known. 2. Set the scattering parameters to the results from a multi-parameter optimization on the first spectrum, and estimate only melanin content and thickness for the rest. This outlines the estimated parameter behavior when the scattering is unknown but known to be homogeneous. 3. Set the scattering parameters to the true parameters and use the PCA inverse model in section 2.1.4 to fit the rest of the parameters. Here, all spectra without epidermis were used to fit a PCA transform, and the PCA transform is used to find a best basis spectrum for the spectrum at hand during optimization. These tests should then elucidate the accuracy of the inverse model under various conditions, from which conclusions may be drawn about its application to real measurement data. 2.3 Experimental tests Hyperspectral acquisition Reflectance data were acquired using a push-broom Hyspex VNIR-1600 hyperspectral camera (Norsk Elektro Optikk, Lillestrom, Norway). The images were acquired over the wavelength range 400-1000 nm, with a spectral resolution of 3.7 nm and an integration time of 7.5 ms per line of data. The camera has been radiometrically and spectrally calibrated using light sources with known characteristics. The radiometric calibration was used to apply correction factors to the images. The reflectance data were acquired with illumination from two linear light sources (Model 2900 Tungsten Halogen, Illumination Technologies, New York). Polarizers (VLR-100 NIR, 450-1100 nm, Meadowlark Optics, Frederick, Colorado) were mounted on the camera lens and the light sources in order to avoid specular reflection. A Spectralon reflectance target (WS-1-SL, Ocean Optics, Duiven, Netherlands) was included within each image and used to convert the raw data to reflectance spectra. Wound model Samples of the in vitro wound model were prepared from human abdominal skin. The project was approved by the regional ethical committee (REK-Midt-Norge), and informed consent was obtained from the donor. The sample used for demonstration in this study was prepared with a 4 mm wound using punch biopsy, and the full tissue sample was cut using an 8 mm punch biopsy. The sample was incubated for 22 days in Dulbecco's Modified Eagle Medium (Gibco, USA), with fetal calf serum (10%), penicillin (50 µg/ml), streptomycin (50 U/ml) and glutamine added. Hyperspectral images were acquired at days 1 and 2 and then every other day, and the medium was changed at every imaging session. The wound was exposed to air by resting the sample on a metallic grid, in order to ensure development and migration of multiple cell layers [36].
Re-epithelialization visible by visual inspection of the RGB images occurred during the last ten days. The sample was therefore investigated at days 12, 18 and 22. A larger image subset over the wound boundary was selected at approximately the same region across these three days. PCA transforms were fitted at each day to a subset of wound spectra that, by visual inspection and comparison of the spectra, had no obvious re-epithelialization present. Each wound subset consisted of 7400 spectra, and 3 PCA components were used (explaining 87.9% of the variance, with an average RMSE of 0.012 between raw and reconstructed reflectance spectra when running the forward and inverse transforms). The modified PCA-based inverse model in section 2.1.4 was then run on each pixel in the larger subsets in order to yield skin parameters for the top layer. 3. Results and discussion Simulations are presented in section 3.1. The modeling assumptions that form the basis of the inverse modeling technique are investigated using Monte Carlo simulations in section 3.1.1. The uniqueness and accuracy of inverse modeled parameters as compared to Monte Carlo simulations are given in sections 3.1.2 and 3.1.3. The simulation results are then summarized and discussed in section 3.1.4. The technique is used to estimate re-epithelialized layer thickness from hyperspectral images of wounds in section 3.1.5, and the performance of the technique is discussed in light of the findings from the simulation study. 3.1 Simulation study 3.1.1 Verification of modeling assumptions Two assumptions were given in section 2.1.1. These were that the reflectance of a one-layer model can be written as a function of $\mu _a/\mu _s'$, and that a multi-layer geometry with a top layer has a reflectance indistinguishable from a one-layer representation with the same top layer. The validity of the first assumption is confirmed in Fig. 4. The figure shows reflectance spectra acquired across a wide range of one-layer models, and a corresponding plot of the same reflectance values as a function of $\mu _a/\mu _s'$. The latter clearly demonstrates that there is a one-to-one correspondence between the $\mu _a/\mu _s'$ ratio and the reflectance. Other studies also confirm this fact [34]. Fig. 4. Simulated Monte Carlo reflectance spectra as a function of wavelength (left), and as a function of $\mu _a/\mu _s'$ (right). The reflectance is uniquely defined only down to the ratio $\mu _a/\mu _s'$. The simulations were run with 1 000 000 photons per wavelength. The second assumption was that a one-layer representation of a multi-layer model yields identical reflectance to the multi-layer model when a top layer is added to either. This is demonstrated in Fig. 5. Here, the reflectance from one-layer models constructed from each layer of the multi-layer model is used to verify that none of the upper layers completely shield the deeper layers. Further, the reflectance from the multi-layer model with an epidermis on top is compared to a one-layer representation with the same epidermis on top. As they are indistinguishable, the second assumption is verified. The various optical properties at the different wavelengths represent a wide range of layer combinations that all demonstrate the validity of the second assumption. This is also largely a consequence of the fact that placing a layer on top of some existing model will not modify the existing parts of the model. Here, the RMSE between the two example spectra was 0.0010. Further testing of the same assumption on 20 randomly generated multi-layer skin models yielded an RMSE of 0.0015.
Fig. 5. Demonstration using Monte Carlo simulations that a multi-layer model and its one-layer fit have identical reflectance spectra when adding additional top layers: Comparison of the reflectance spectrum from a multi-layered model and the reflectance spectra from a one-layer model constructed from each layer (top), and the reflectance from the multi-layered model with an epidermis on top compared to a single-layer approximation with the same epidermis on top (bottom). The consequence is that a one-layer model is appropriate for representing a wound spectrum when evaluating the effect of adding an epidermis to the wound spectrum. The simulations were run with 1 000 000 photons per wavelength. The consequence of the two assumptions is that a top layer can be investigated without having to fully model the deeper layers, if measurements with and without the top layer are available. 3.1.2 Uniqueness of the inverse modeled solution The combined results above mean that the properties of the deeper layer are scale-invariant. This is a consequence of the output reflectance having no units [34]. A top layer is independent of the deeper layers, and a similar scale-invariant relation must exist between $\mu _a$, $\mu _s'$ and $d_1$ in order to produce a unit-less reflectance at the output. The epidermal skin parameters will therefore likely be coupled. Parameter couplings between $d$ and $\mu _a$ for several $\mu _s'$ that yield the exact same reflectance are shown in Fig. 6. This demonstrates that there exists an infinite number of parameter sets that all yield the exact same reflectance, and sketches out the hypersurface on which the valid optical parameters reside. Fig. 6. Demonstration of the coupling between $d$, $\mu _a$ and $\mu _s'$ in the top layer in the two-layer diffusion model, for a single reflectance value. All these parameter combinations produce the same output reflectance ($R = 0.6$). To check the potential skin parameter coupling, the model was fitted from random start parameters; the results are shown in Fig. 7. A clear relation between the estimated $d_1$ and the melanin content is evident. Fixing the scattering to the true parameters yields the same solution regardless of the start parameters. The relation is less clear between the estimated $d_1$ and the scattering parameters, but there are indications of a noisy quadratic relation similar to the melanin content relation. The scattering parameters were not found to influence the reflectance as much as the other parameters within the varied range, which can explain the noisiness of the relation. The lack of influence can be observed in Fig. 6, where the relations change significantly only for larger scattering coefficients. Fig. 7. Results from the inverse model fitted from random start parameters, demonstrating the coupling of the fitted parameters. All parameter combinations produce the same reflectance spectra. Top left: Estimated layer thickness versus estimated melanin concentration. Top right: Estimated layer thickness versus scattering parameters. Bottom: Estimated layer thickness versus fit RMSE. Fitted parameters at random start parameters for thickness and melanin, with the scattering parameters fixed to the true scattering parameters, are marked as "Fixed scattering" in the plots. The RMSE shows that each of these solutions is identical with respect to the simulated reflectance. Fitting all unknown parameters at the same time is therefore not expected to yield a unique solution.
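The random-restart procedure behind Fig. 7 can be sketched as follows, reusing the hypothetical objective, forward_model and scale from the sketch in section 2.1.3; R_sim stands for the same simulated model C1 spectrum in every restart:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
fits = []
for _ in range(50):
    # Random start parameters in the rescaled unit interval.
    x0 = rng.uniform(0.0, 1.0, size=4)
    res = minimize(objective, x0, args=(R_sim, forward_model, scale),
                   method='L-BFGS-B', bounds=[(0.0, 1.0)] * 4)
    fits.append(res.x * scale)
fits = np.array(fits)
# Plotting fitted thickness against fitted melanin content across the
# restarts traces out the coupling seen in Fig. 7.
```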
The observed parameter non-uniqueness can be explained by the scale-invariance between $\mu _a$, $\mu _s'$ and $d_1$. The absorption coefficient $\mu _a$ is here modeled as a varying parameter multiplied by a fixed wavelength-dependency. Since the wavelength-dependency is fixed, it can be argued that the scale-invariance is translated into the parameter, and that this has a scale-invariant relation with $d_1$ and $\mu _s'$. The scattering model can be re-written as $\mu _s' = A\left [f (\lambda /500)^{-b_{\textrm {Mie}}} + (1-f)(\lambda /500)^{-4}\right ]$, where $A = \mu _{\textrm {s,Mie,500}}' + \mu _{\textrm {s,Ray,500}}'$ and $f = \frac {\mu _{\textrm {s,Mie,500}}'}{A}$. Since $f$ is a dimensionless number and the wavelength-dependencies are fixed, the scale-invariant relation can be translated into $A$ and thus to the scattering parameters $\mu _{\textrm {s,Mie,500}}'$ and $\mu _{\textrm {s,Ray,500}}'$. A coupling between the skin parameters is therefore expected. All of these solutions are valid with respect to the stated problem, as all yield the same fitted reflectance. It can be observed that they all apparently characterize a unique diffuse layer despite the large variation in skin parameters. Each of the valid parameter sets in Fig. 7 was used to construct a single-layer model with finite thickness, and Monte Carlo simulations were used to obtain reflectance and transmittance spectra. The spectra are plotted in Fig. 8. These are more or less identical. Deviations can be attributed to albedo-dependent inaccuracies in the diffusion model: parameter sets that yield identical diffusion model spectra can still produce small variations in the corresponding Monte Carlo simulations. Fig. 8. Monte Carlo simulations of reflectance and transmittance through epidermises fitted at random start parameters, 50 spectra in total. Multiple spectra overlap in the plot. The simulations were run with 1 000 000 photons per wavelength. 3.1.3 Accuracy of inverse modeled parameters 3.1.3.1 Systematic variation in input parameters (model C1) The error deviation results across the different parameters are shown in Fig. 9 for three cases: fit of the particular parameter only, fit of all parameters simultaneously, and fit of melanin content and layer thickness only. Averages over the corresponding reflectance errors between fitted and original spectra are shown in Fig. 10. Fig. 9. Inverse model deviations from the original modeled parameters, at the lowest and highest input value for each parameter: layer thickness (top left), melanin content (top right) and scattering (bottom). The range from minimum to maximum deviation illustrates the expected error, while the offset of the mean deviation from the zero-line expresses bias. The different cases are, respectively, a fit of the parameter at hand with all other parameters fixed to the original parameters, a fit of all parameters simultaneously, and a fit of layer thickness and melanin content simultaneously with scattering fixed. Fig. 10. Average absolute error as a function of wavelength between fitted and original MCML reflectance spectra for the various cases in Fig. 9. The case where all parameters are fitted simultaneously is first considered. Each fitted parameter deviates over a large range (highest relative error, low/high input parameter: thickness 69%/35%, melanin content 61%/40%, Mie scattering 116%/38%, Rayleigh scattering 93%/45%). This is to be expected due to the non-uniqueness of the solution. The variation in simulated reflectance likely triggers various local minima along the scale-invariant relation.
Fitting a single parameter is more interesting. The deviation range for thickness and melanin content is more limited in this case, due to the uniqueness of the solution (highest relative error, low/high input parameter: thickness 17%/14%, melanin content 14%/15%). The deviation of the scattering still matches the order of magnitude of the input, however (Mie scattering 122%/46%, Rayleigh scattering 61%/29%). Changing the epidermal scattering parameters does not change the reflectance as much as the absorption and layer thickness. The scattering can therefore not be reliably estimated, even if it is the only fitted parameter. The mismatch between the diffusion model and the Monte Carlo spectra at the true parameter leads to a bias in all parameters. Varying the particular parameter only does not bridge the mismatch entirely, as seen from the higher reflectance errors in Fig. 10 compared to the cases where multiple parameters are fitted. The parameter $b_{\textrm {Mie}}$ was not varied in the simulations. Recovered values (input 0.22) ranged from 0 to 3 in the full parameter fit, and from 0 to 2 when it was the only fitted parameter. Finally, fixing the scattering and fitting the rest of the parameters is considered. Fitting the thickness and melanin at the same time apparently yields a less biased estimate than when considering them apart. Further, the error is at most 80 m$^{-1}$ for larger melanin contents (highest relative error, low/high input: 12%/12%), while the thickness can be estimated with an error below 14 µm (11%/5%). 3.1.3.2 Random selection of input parameters (model C2) Parameter estimation results over random skin models are shown in Fig. 11. Using the true scattering yields a reasonable estimate of thickness and melanin content. The RMSEs are 17 µm (relative RMSE: 7%) and 57 m$^{-1}$ (8%) for the thickness and melanin contents, respectively. This is in line with the deviations found for thickness (below 14 µm) and melanin contents (below 80 m$^{-1}$) when fitting these simultaneously on the systematic variation earlier. Fig. 11. Modeled parameters versus estimated parameters across Monte Carlo simulations with random input parameters, fixed scattering. Two cases are shown: scattering parameters fixed to the true scattering (blue), and scattering parameters fixed to the parameters estimated from a single reflectance spectrum (green). The former leads to reasonable estimates of thickness and melanin content, while the latter leads to estimates that are only correlated with the true parameters. Exact knowledge of the correct scattering is rarely available, and it is expected that it must be guessed in the application at hand. A case where the scattering is estimated from a single reflectance spectrum and fixed for the rest of the spectra is shown in the same figure. Both melanin content and thickness estimates become biased, but they remain correlated with the true parameters and retain variations similar to the case where the true scattering is used. Although the scattering is not known and the thickness estimates are incorrect, the method can then still be used to detect relative thickness changes. The method is to be applied to hyperspectral images of wounds, where the basis spectrum is not known a priori. The modification of the technique, as outlined in section 2.1.4, was used to fit basis spectra along with the rest of the epidermal skin parameters.
The resulting RMSEs were 16 µm (relative RMSE: 6%) and 50 m$^{-1}$ (13%) for the thickness and melanin parameters, respectively, similar to the RMSEs in the case where the basis spectra are known exactly. 3.1.4 Summary and general discussion of the simulation results The inverse modeling method presented in this paper could be a valuable tool for characterizing hyperspectral images of re-epithelialized tissues in wounds. The main inverse modeling assumptions have been verified. The reflectance of a one-layer model can be written as a function of $\mu _a/\mu _s'$. This means that any combination of $\mu _a$ and $\mu _s'$ yields the same reflectance as long as they obey the given ratio. Further, an arbitrary multi-layer model can be represented by a one-layer model. Adding a top layer to either of these models yields indistinguishable reflectance spectra. Thus, the top layer can be fitted and investigated without considering the deeper layers. It has been shown that no unique solutions exist for the top layer. The solutions are coupled, however, and yield unique $R(\lambda )$ and $T(\lambda )$ through the upper layer that are common for all parameter sets. The fitted, diffuse layer is therefore unique, though the lack of a known thickness means that the optical properties cannot be determined. The wavelength dependency in $R$ and $T$ can be valuable for drawing conclusions about the nature of the layer. The solutions for some of the skin parameters were found to be robust to changes in the start parameters during fitting when at least one parameter was fixed. This indicates that unique solutions may be possible to find in such cases. The reflectance was not found to be very sensitive to changes in the scattering parameter within the expected range. The scattering parameter is therefore a natural first choice to fix. Fair estimates of the absorption parameter and layer thickness can be found when given the correct scattering, and the main expected levels can be discriminated. The method has been shown to yield reasonable relative estimates when the scattering is homogeneous but unknown. The parameter value will then vary around a mean level within a small deviation, and be correlated with the true value. Such estimates are useful for determining whether a given location has a top layer thickness greater or less than the thickness of some other location. In practice, the scattering parameters can be fixed to e.g. the low parameter values outlined in section 2.2.1. Another possibility is to let all parameters be fitted simultaneously for a single spectrum in order to estimate a best fit for the wavelength-dependency, and then fix the scattering parameters to these values for the rest of the spectra. The method is thus useful for in vitro wounds in two ways: first, by demonstrating whether the optical properties of various tissue regions can be explained by wound optical properties with an epidermal layer on top; second, by evaluating relative layer thicknesses at different positions, and further using these to explain spatial variations by layer thickness differences. Melanin and thickness parameter fits have been found to have relative errors from 5 to 12%. Inverse methods in other studies that estimate epidermal thickness and melanin content are reported to have errors in the range of 9% for epidermal thickness and 8-15% for melanin content [58], or 6-8% and 16-20% for epidermal thickness and below 0.5% for melanin content [31], subject to modeling details.
Relative errors of the current model are thus in the same range as those of methods reported in the literature. A weakness of the method is that the basis spectrum representing the deeper layers must be known. While known exactly for the simulations, wounds have inhomogeneities that leave no clear basis candidate. Taking a mean spectrum over the wound was not found to yield correct wavelength-dependencies. Iterating over all possible wound spectra and selecting the best candidate was found to yield better wavelength-dependencies and lower RMSEs, but this becomes computationally problematic for a larger number of pixels. Using a PCA transform to represent the possible wound spectra was found to be a viable alternative that could be fitted during optimization. This alternative has been shown to yield parameter RMSEs similar to the case where the basis spectrum is known exactly. It thus represents a suitable modification to the technique for hyperspectral images of wounds. The layer uniqueness results show that a more direct approach could technically be taken in obtaining the reflectance and transmittance of the diffuse top layer, using an approach similar to the adding-doubling technique [59]. Such a technique would obtain reflectance and transmittance directly, and a separate characterization using a one-layer model with finite thickness would be necessary for parameter estimation. The method in the current study obtains the parameters directly as a part of the procedure; obtaining reflectance and transmittance would have to be done as a second step. Which method is better would then depend on the application and the desired end result of the technique. A variant of adding-doubling would more clearly show that no unique parameters exist. It would not assume anything about the form of the optical properties of the top layer during fitting, which could be valuable as a more independent result. On the other hand, the form assumptions are necessary for enabling application of the PCA modification of the technique. For this study, an inverse diffusion model with an offset correction obtained from Monte Carlo simulations was used. Its performance would thus be similar to an inverse Monte Carlo model with some minor inaccuracies. This allows the technique to be evaluated in terms of the basic idea rather than being overshadowed by systematic errors, while making the model suitable for hyperspectral applications. Similar corrections include a background absorption included by Svaasand et al. [44] and the blood volume fraction scaling done by Randeberg et al. [46]. The offset correction is not expected to work outside the parameter range for which it was fitted, and is thus not appropriate for an arbitrary unknown spectrum. More elaborate correction schemes or better model approximations are then required. The model could be replaced by an empirical Monte Carlo model, or a diffusion approximation more appropriate for the absorption/scattering ratios in human tissue. Examples include the $\delta$-Eddington/$\delta$-P1 approximation [47,60], although even this will not entirely eliminate the offset between the model and the Monte Carlo spectra [47]. In addition, it should be noted that correction factors developed for simulated reflectance spectra in an integrating sphere geometry will not directly apply to reflectance spectra from hyperspectral images. Model replacement is out of scope for the current study, where the main aim is to present and demonstrate a proof of concept. Refining the core model will be a part of future studies.
The simulations have thus verified the applicability of the technique, identified limitations and indicated what it can be used for. The technique can then be applied to experimental data. Thickness estimation of re-epithelialized areas in hyperspectral images of wounds is used as an example application. 3.1.5 Experimental results In the following, hyperspectral images of an in vitro wound model sample were used to demonstrate the inverse modeling technique. The PCA-based modification in section 2.1.4 was used to represent the wound spectra during fitting. Layer thickness results over a hyperspectral image subset at days 12, 18 and 22 during the wound healing process are shown in Fig. 12. Model fits for selected spectra from day 18 are shown in Fig. 13. Fig. 12. Results from application of the inverse model to hyperspectral images of an in vitro wound model sample. Three measurements are shown: day 12 (top row), day 18 (center row) and day 22 (bottom row). For each measurement, three images are shown: the RGB image with a dotted square indicating the image subset considered in the inverse model, the RGB values of the reflectance from the one-layer dermis reconstruction within the subset, and the estimated thickness of the epidermis within the subset. RGB images were constructed from the hyperspectral images at the 615, 564 and 459 nm wavelength bands, and were gamma-corrected for increased contrast. The white-pink region corresponds to the wound, while the brown region is intact tissue. Coordinates of the spectra plotted in Fig. 13 are marked in the day 18 image. Fig. 13. Model fits at selected spectra from day 18. The labeled positions refer to the positions used in Fig. 14, and are marked in Fig. 12. The peaks between 690 and 750 nm are artifacts due to mismatch of the order sorting filter in the hyperspectral camera, and were not fitted by the model. Fitted parameters were the melanin content, layer thickness and scattering parameters in epidermis, and the three PCA coefficients for the wound basis in dermis. The first main conclusion to be drawn from these results is that the spectral properties of the edge of the wound are explained by a gradually increasing re-epithelialization. This is modeled as a diffuse, epidermal-like layer placed on top of reflectance representing wound tissue. The layer has been shown in the simulation study to be unique. The fitted model then works as an explanatory model. The model shows that these regions have re-epithelialized, and that the optical properties here are nothing more than the optical properties of the wound with a typical epidermis on top. The main strength of the technique is that this can be shown without having to consider the optical properties of the dermal layers. This is a major advantage of the method, as the optical properties of in vitro wound models are largely unknown. Minor changes that are challenging to identify in the RGB images can be found by the technique, as demonstrated by a thin epidermis apparently being present at the wound edge at day 12 in Fig. 12. Depth profiles along lines placed at approximately the same position across days are shown in Fig. 14. The estimated layer thicknesses here provide a relative estimate of the re-epithelialization thickness, given that the scattering properties are homogeneous, as shown by the simulation study.
In vitro wounds which are exposed to air and incubated in the medium used in this study are expected to have migration of multiple epidermal cell layers which quickly stratify into more mature epithelium [36]. This migration occurs from the edge of the wound towards the center of the wound, with a tip of non-cornified epidermis extending on the wound side of a more mature neo-epidermis [61]. Such behavior is consistent with the migration and increases in thickness represented by the depth profiles. An epidermal thickness on the order of 50 µm is expected [3], however, indicating that higher absorption or scattering magnitudes than those currently used in the model are in order. Histologies were not available for the wound model samples in this study, and repetition of the experiment is necessary for proper attribution of the reflectance changes to corresponding changes in epidermal layer composition. However, the epidermal layer presence indicated by the inverse model results is in agreement with statistical characterization of these data [38]. Fig. 14. Estimated depth profiles along lines placed at approximately the same region at days 12, 18 and 22. The strongest colored plotted line is along the chosen line in the image, while weaker lines of the same color are profiles offset 1 and 2 pixels from the main line. Variations in absorption properties are not expected to be decouplable from thickness variations for real measurements. The fitted inverse model is able to match the reality in the simulations, which gives a clear minimum of the RMSE during optimization. More complex geometries or changes to the assumed optical properties broaden the minimum for real measurements, due to the existence of multiple slightly sub-optimal solutions to the problem. Here, a melanin content range from 150 m$^{-1}$ to above 700 m$^{-1}$ and corresponding layer thicknesses yielded identical solutions. A clear minimum could only be found for high scattering levels, but in this case, this led to compensation by unrealistically high melanin contents. The only absorber included in epidermis was melanin. Inclusion of a background absorption is expected to perturb the fitted parameter results, and could reduce the required melanin absorption and make the minimum mentioned above clearer. This was not investigated further, however, and tuning of such modeling details should wait until confirmation of the epidermal composition by histology. This will therefore be investigated in future work. The simulation study indicates that at least one parameter should be fixed. All parameters were fitted simultaneously here, however, and no parameters were fixed, since the optimization seemed to produce stable estimates of both absorption and scattering properties. Only minor instabilities are evident in the day 12 profile in Fig. 14. This shows that fitting a multi-parameter model to some reflectance might apparently give stable, unique results, but only by accident. Care must be taken, since the end result is dependent on the start parameters. Fixing at least one of the parameters is necessary for trustworthy results, as shown by the simulation study. Yet, as the parameters were stable in the current case, these are the same results that would be obtained if e.g. the scattering parameters were fixed. Estimated parameters are then expected to correlate with the true parameters, as shown by the simulation study. An infinitely wide beam illuminating a spatially invariant slab is effectively assumed in the simulations.
The beam is sufficiently broad with respect to the extent of the wound model sample, but the illuminated geometry is not homogeneous. Edge effects will be present at sharp transitions between tissue types, where photons escape from one tissue type into the other. This will lead to an under- or over-estimation of thickness or absorption properties in some parts [32], but is unlikely to have a significant effect within the slowly varying parts of the tissue. Future studies could investigate adjusting the model to the measurement geometry. PCA was used to find a low-dimensional representation of the wound spectra that could be fitted during optimization. The PCA inverse transform with the selected number of components was found to be able to appropriately reproduce a given spectrum, and fitting the PCA scores during optimization gave reasonable results for the epidermal parameters. The simulation results indicate no significant decrease in estimation accuracy. However, correctness for measurements should be investigated further in future studies, as the method might need tuning in e.g. the number of components, or a different decomposition method than PCA might be more appropriate. The method is promising, however, and combines a data-driven, statistical approach to information extraction with physics-based photon transport modeling. The method was tailored towards wounds, as the basis spectrum is available and can be used to fit spectra with a top layer. With adaptation, it might be possible to use the technique to estimate relative variations in the epidermal skin thickness of pre-term newborns and to characterize burn wounds. Further, the technique is suitable for characterizing strongly absorptive inclusions in scattering media. A spectrum from a single pixel was on average fitted in 0.66 seconds on a single CPU core, and 0.14 seconds when naively parallelizing the fitting of different pixels across 8 CPU cores (Intel Core i7-3840QM, 2.80 GHz, 8 cores). The small subset of 50 × 40 pixels considered here would thus take 4 minutes and 40 seconds to fit using naive multiprocessing. The method currently runs a full, separate optimization of every pixel, which might not be needed. Future work will include adoption of GPU parallelization to reduce the running times. The current method, albeit slow, represents a proof of concept against which optimized solutions may be compared for correctness, and represents a first step towards a more scalable algorithm. 4. Conclusion A technique for estimating the skin parameters of the re-epithelialized layer in wounds has been developed. The method has been found to characterize a unique diffuse layer defined by a unique reflectance and transmittance spectrum. There exists an infinite number of valid skin parameters that might characterize this layer. Fixing e.g. the scattering parameters, however, can yield good relative estimates of layer thickness. The method has been used to characterize a larger area over the boundary of an in vitro wound model sample, showing the usefulness of the approach in characterizing the re-epithelialized layer. Here, a PCA modification to find the optimal wound basis spectrum has also been demonstrated, and represents a successful combination of data-driven techniques with physical photon transport modeling. Thanks to Ingvild Haneberg and Matija Milanic for acquisition of in vitro skin model data. Thanks to Brita Pukstad for collaboration in the in vitro wound model experiment.
Thanks to Terje Schjelderup for help with the proofreading. 1. T. A. Eikebrokk, B. S. Vassmyr, K. Ausen, C. Gravastrand, and B. Pukstad, "Cytotoxicity and effect on wound re-epithelialization after topical administration of tranexamic acid," BJS Open 3(6), 840–851 (2019). [CrossRef] 2. S. Lönnqvist, P. Emanuelsson, and G. Kratz, "Influence of acidic pH on keratinocyte function and re-epithelialisation of human in vitro wounds," J. Plast. Surg. Hand Surg. 49(6), 346–352 (2015). [CrossRef] 3. S. Lönnqvist, J. Rakar, K. Briheim, and G. Kratz, "Biodegradable gelatin microcarriers facilitate re-epithelialization of human cutaneous wounds - an in vitro study in human skin," PLoS One 10(6), e0128093 (2015). [CrossRef] 4. G. Kratz and C. C. Compton, "Tissue expression of transforming growth factor-β1 and transforming growth factor-α during wound healing in human skin explants," Wound Rep. Reg. 5(3), 222–228 (1997). 5. E. Nyman, F. Huss, T. Nyman, J. Junker, and G. Kratz, "Hyaluronic acid, an important factor in the wound healing properties of amniotic fluid: In vitro studies of re-epithelialisation in human skin wounds," J. Plast. Surg. Hand Surg. 47(2), 89–92 (2013). [CrossRef] 6. G. Lu and B. Fei, "Medical hyperspectral imaging: a review," J. Biomed. Opt. 19(1), 010901 (2014). [CrossRef] 7. L. Khaodhiar, T. Dinh, K. T. Schomacker, S. V. Panasyuk, J. E. Freeman, R. Lew, T. Vo, A. A. Panasyuk, C. Lima, J. M. Giurini, T. E. Lyons, and A. Veves, "The use of medical hyperspectral technology to evaluate microcirculatory changes in diabetic foot ulcers and to predict clinical outcomes," Diabetes Care 30(4), 903–910 (2007). [CrossRef] 8. M. Denstedt, B. S. Pukstad, L. A. Paluchowski, J. E. Hernandez-Palacios, and L. L. Randeberg, "Hyperspectral imaging as a diagnostic tool for chronic skin ulcers," Proc. SPIE 8565, 85650N (2013). [CrossRef] 9. A. Nouvong, B. Hoogwerf, E. Mohler, B. Davis, A. Tajaddini, and E. Medenilla, "Evaluation of diabetic foot ulcer healing with hyperspectral imaging of oxyhemoglobin and deoxyhemoglobin," Diabetes Care 32(11), 2056–2061 (2009). [CrossRef] 10. D. Yudovsky, A. Nouvong, and L. Pilon, "Hyperspectral imaging in diabetic foot wound care," J. Diabetes Sci. Technol. 4(5), 1099–1113 (2010). [CrossRef] 11. A. Holmer, J. Marotz, P. Wahl, M. Dau, and P. W. Kämmerer, "Hyperspectral imaging in perfusion and wound diagnostic - methods and algorithms for the determination of tissue parameters," Biomed. Tech. 63(5), 547–556 (2018). [CrossRef] 12. M. A. Calin, T. Coman, S. V. Parasca, N. Bercaru, and S. R. S. D. Manea, "Hyperspectral imaging-based wound analysis using mixture-tuned matched filtering classification method," J. Biomed. Opt. 20(4), 046004 (2015). [CrossRef] 13. M. A. Calin, S. V. Parasca, D. Manea, and R. Savastru, "Hyperspectral imaging combined with machine learning classifiers for diabetic leg ulcer assessment - a case study," in MIUA 2019: Medical Image Understanding and Analysis, vol. 1065, (2019), pp. 74–85. 14. G. Daeschlein, I. Langner, T. Wild, S. von Podewils, C. Sicher, T. Kiefer, and M. Jünger, "Hyperspectral imaging as a novel diagnostic tool in microcirculation of wounds," Clin. Hemorheol. Microcirc. 67(3-4), 467–474 (2017). [CrossRef] 15. L. A. Paluchowski, H. B. Nordgaard, A. Bjorgan, S. A. Berget, and L. L. Randeberg, "Can spectral-spatial image segmentation be used to discriminate burn wounds?" J. Biomed. Opt. 21(10), 101413 (2016). [CrossRef] 16. M. Halicek, G. Lu, J. V. Little, X. Wang, M. Patel, C. C. Griffith, M. W. El-Deiry, A. Y. Chen, and B.
Fei, "Deep convolutional neural networks for classifying head and neck cancer using hyperspectral imaging," J. Biomed. Opt. 22(6), 060503 (2017). [CrossRef] 17. H. Akbari, L. Halig, D. M. Schuster, B. Fei, A. Osunkoya, V. Master, P. Nieh, and G. Chen, "Hyperspectral imaging and quantitative analysis for prostate cancer detection," J. Biomed. Opt. 17(7), 0760051 (2012). [CrossRef] 18. J. Shapey, Y. Xie, E. Nabavi, R. Bradford, S. R. Saeed, S. Ourselin, and T. Vercauteren, "Intraoperative multispectral and hyperspectral label-free imaging: A systematic review of in vivo clinical studies," J. Biophotonics 12(9), e201800455 (2019). [CrossRef] 19. E. L. Larsen, L. L. Randeberg, E. Olstad, O. A. Haugen, A. Aksnes, and L. O. Svaasand, "Hyperspectral imaging of atherosclerotic plaques in vitro," J. Biomed. Opt. 16(2), 026011 (2011). [CrossRef] 20. A. O. H. Gerstner, W. Laffers, F. Bootz, D. L. Farkas, R. Martin, J. Bendix, and B. Thies, "Hyperspectral imaging of mucosal surfaces in patients," J. Biophotonics 5(3), 255–262 (2012). [CrossRef] 21. M. Wahabzada, M. Besser, M. Khosravani, M. T. Kuska, K. Kersting, A.-K. Mahlein, and E. Stürmer, "Monitoring wound healing in a 3d wound model by hyperspectral imaging and efficient clustering," PLoS One 12(12), e0186425 (2017). [CrossRef] 22. Y. Khouj, J. Dawson, J. Coad, and L. Vona-Davis, "Hyperspectral imaging and k-means classification for histologic evaluation of ductal carcinoma in situ," Front. Oncol. 8, 17 (2018). [CrossRef] 23. E. J. M. Baltussen, E. N. D. Kok, S. G. B. de Koning, J. Sanders, A. G. J. Aalbers, N. F. M. Kok, G. L. Beets, C. C. Flohil, S. C. Bruin, K. F. D. Kulhmann, H. J. C. M. Sterenborg, and T. J. M. Ruers, "Hyperspectral imaging for tissue classification, a way toward smart laparoscopic colorectal surgery," J. Biomed. Opt. 24(01), 1 (2019). [CrossRef] 24. B. Regeling, W. Laffers, A. O. H. Gerstner, S. Westermann, N. A. Müller, K. Schmidt, J. Bendix, and B. Thies, "Development of an image pre-processor for operational hyperspectral laryngeal cancer detection," J. Biophotonics 9(3), 235–245 (2016). [CrossRef] 25. A. Grigoroiu, J. Yoon, and S. E. Bohndiek, "Deep learning applied to hyperspectral endoscopy for online spectral classification," Sci. Rep. 10(1), 3947 (2020). [CrossRef] 26. D. Yudovsky, A. Nouvong, K. Schomacker, and L. Pilon, "Monitoring temporal development and healing of diabetic foot ulceration using hyperspectral imaging," J. Biophotonics 4(7-8), 565–576 (2011). [CrossRef] 27. S. Vyas, A. Banerjee, and P. Burlina, "Estimating physiological skin parameters from hyperspectral signatures," J. Biomed. Opt. 18(5), 057008 (2013). [CrossRef] 28. W. Feng, R. Shi, C. Zhang, T. Yu, and D. Zhu, "Lookup-table-based inverse model for mapping oxygen concentration of cutaneous microvessels using hyperspectral imaging," Opt. Express 25(4), 3481–3495 (2017). [CrossRef] 29. E. Zherebtsov, V. Dremin, A. Popov, A. Doronin, D. Kurakina, M. Kirillin, I. Meglinski, and A. Bykov, "Hyperspectral imaging of human skin aided by artificial neural networks," Biomed. Opt. Express 10(7), 3545–3559 (2019). [CrossRef] 30. M. Kewin, A. Rajaram, D. Milej, A. Abdalmalak, L. Morrison, M. Diop, and K. S. Lawrence, "Evaluation of hyperspectral nirs for quantitative measurements of tissue oxygen saturation by comparison to time-resolved nirs," Biomed. Opt. Express 10(9), 4789–4802 (2019). [CrossRef] 31. D. Yudovsky and L. 
Pilon, "Rapid and accurate estimation of blood saturation, melanin content, and epidermis thickness from spectral diffuse reflectance," Appl. Opt. 49(10), 1707–1719 (2010). [CrossRef] 32. A. Bjorgan, M. Milanic, and L. L. Randeberg, "Estimation of skin optical parameters for real-time hyperspectral imaging applications," J. Biomed. Opt. 19(6), 066003 (2014). [CrossRef] 33. M. S. Patterson, B. C. Wilson, and D. R. Wyman, "The propagation of optical radiation in tissue. ii: Optical properties of tissues and resulting fluence distributions," Lasers Med. Sci. 6(4), 379–390 (1991). [CrossRef] 34. G. Zonios and A. Dimou, "Modeling diffuse reflectance from semi-infinite turbid media: application to the study of skin optical properties," Opt. Express 14(19), 8661–8674 (2006). [CrossRef] 35. A. Kim and B. C. Wilson, Optical-Thermal Response of Laser-Irradiated Tissue (Springer, 2011), chap. Measurement of ex vivo and in vivo tissue optical properties: Methods and theories, pp. 267–319, 2nd ed. 36. K. Jansson, G. Kratz, and A. Haegerstrand, "Characterization of a new in vitro model for studies of reepithelialization in human partial thickness wounds," In Vitro Cell. Dev. Biol.: Anim. 32(9), 534–540 (1996). [CrossRef] 37. G. Kratz, "Modeling of wound healing processes in human skin using tissue culture," Microsc. Res. Tech. 42(5), 345–350 (1998). [CrossRef] 38. A. Bjorgan, B. Pukstad, and L. L. Randeberg, "Hyperspectral characterization of re-epithelialization in an in vitro wound model," J. Biophotonics (2020). 39. G. James, D. Witten, T. Hastie, and R. Tibshirani, An introduction to statistical learning (Springer, 2013), 1st ed. 40. G. Shaw and D. Manolakis, "Signal processing for hyperspectral image exploitation," IEEE Signal Process. Mag. 19(1), 12–16 (2002). [CrossRef] 41. J. Khodr and R. Younes, "Dimensionality reduction on hyperspectral images: A comparative review based on artificial datas," in 2011 4th International Congress on Image and Signal Processing (IEEE, 2011), pp. 1875–1883. 42. M. D. Farrell and R. M. Mersereau, "On the impact of pca dimension reduction for hyperspectral detection of difficult targets," IEEE Geosci. Remote Sens. Lett. 2(2), 192–195 (2005). [CrossRef] 43. L. Wang, S. L. Jacques, and L. Zheng, "Mcml monte carlo modeling of light transport in multi-layered tissues," Comput. Meth. Prog. Bio. 47(2), 131–146 (1995). [CrossRef] 44. L. Svaasand, L. Norvang, E. Fiskerstrand, E. Stopps, M. Berns, and J. Nelson, "Tissue parameters determining the visual appearance of normal skin and port-wine stains," Laser. Med. Sci. 10(1), 55–65 (1995). [CrossRef] 45. L. V. Wang and H. Wu, Biomedical Optics, Principles and Imaging (John Wiley & Sons, 2007). 46. L. L. Randeberg, A. Winnem, R. Haaverstad, O. A. Haugen, and L. O. Svaasand, "Performance of diffusion theory vs. monte carlo methods," Proc. SPIE 5862, 586200 (2005). [CrossRef] 47. T. Spott and L. O. Svaasand, "Collimated light sources in the diffusion approximation," Appl. Opt. 39(34), 6453–6465 (2000). [CrossRef] 48. R. R. Anderson and J. A. Parrish, "The optics of human skin," J. Invest. Dermatol. 77(1), 13–19 (1981). [CrossRef] 49. I. Fredriksson, O. Burdakov, M. Larsson, and T. Strömberg, "Inverse monte carlo in a multilayered tissue model: merging diffuse reflectance spectroscopy and laser doppler flowmetry," J. Biomed. Opt. 18(12), 127004 (2013). [CrossRef] 50. T. Lister, P. A. Wright, and P. H. Chappell, "Optical properties of human skin," J. Biomed. Opt. 17(9), 0909011 (2012). [CrossRef] 51. N. Verdel, A. Marin, M. 
Milanic, and B. Majaron, "Physiological and structural characterization of human skin in vivo using combined photothermal radiometry and diffuse reflectance spectroscopy," Biomed. Opt. Express 10(2), 944–960 (2019). [CrossRef] 52. L. Vidovic, M. Milanic, and B. Majaron, "Objective characterization of bruise evolution using photothermal depth profiling and monte carlo modeling," J. Biomed. Opt. 20(1), 017001 (2015). [CrossRef] 53. E. Alerstam, W. C. Y. Lo, T. D. Han, J. Rose, S. Andersson-Engels, and L. Lilge, "Next-generation acceleration and code optimization for light transport in turbid media using gpus," Biomed. Opt. Express 1(2), 658–675 (2010). [CrossRef] 54. E. Alerstam, T. Svensson, and S. Andersson-Engels, "Parallel computing with graphics processing units for high-speed monte carlo simulation of photon migration," J. Biomed. Opt. 13(6), 060504 (2008). [CrossRef] 55. T. Spott, L. O. Svaasand, R. E. Anderson, and P. F. Schmedling, "Application of optical diffusion theory to transcutaneous bilirubinometry," Proc. SPIE 3195, 234–245 (1998). [CrossRef] 56. A. N. Bashkatov, E. A. Genina, and V. V. Tuchin, "Optical properties of skin, subcutaneous, and muscle tissues: A review," J. Innov. Opt. Health Sci. 04(01), 9–38 (2011). [CrossRef] 57. S. L. Jacques, "Optical properties of biological tissues: a review," Phys. Med. Biol. 58(11), R37–R61 (2013). [CrossRef] 58. P. Naglic, L. Vidovic, M. Milanic, L. L. Randeberg, and B. Majaron, "Suitability of diffusion approximation for an inverse analysis of diffuse reflectance spectra from human skin in vivo," OSA Continuum 2(3), 905–922 (2019). [CrossRef] 59. S. A. Prahl, Optical-Thermal Response of Laser Irradiated Tissue (1995), chap. 5, The adding-doubling method, pp. 101–129. 60. S. A. Carp, S. A. Prahl, and V. Venugopalan, "Radiative transport in the delta-p1 approximation: accuracy of fluence rate and optical penetration depth predictions in turbid semi-infinite media," J. Biomed. Opt. 9(3), 632–647 (2004). [CrossRef] 61. G. D. Glinos, S. H. Verne, A. S. Aldahan, L. Liang, K. Nouri, S. Elliot, M. Glassberg, D. C. DeBuc, T. Koru-Sengul, M. Tomic-Canic, and I. Pastar, "Optical coherence tomography for assessment of epithelialization in a human ex vivo wound model," Wound Rep. Reg. 25(6), 1017–1026 (2017). [CrossRef]
Glassberg, D. C. DeBuc, T. Koru-Sengul, M. Tomic-Canic, and I. Pastar, "Optical coherence tomography for assessment of epithelialization in a human ex vivo wound model," Wound Rep. Reg. 25(6), 1017–1026 (2017). Aalbers, A. G. J. Abdalmalak, A. Akbari, H. Aksnes, A. Aldahan, A. S. Alerstam, E. Anderson, R. E. Anderson, R. R. Andersson-Engels, S. Ausen, K. Baltussen, E. J. M. Banerjee, A. Bashkatov, A. N. Beets, G. L. Bendix, J. Bercaru, N. Berget, S. A. Berns, M. Besser, M. Bjorgan, A. Bohndiek, S. E. Bootz, F. Bradford, R. Briheim, K. Bruin, S. C. Burdakov, O. Burlina, P. Bykov, A. Calin, M. A. Carp, S. A. Chappell, P. H. Chen, A. Y. Chen, G. Coad, J. Coman, T. Compton, C. C. Daeschlein, G. Dau, M. Davis, B. Dawson, J. de Koning, S. G. B. DeBuc, D. C. Denstedt, M. Dimou, A. Dinh, T. Diop, M. Doronin, A. Dremin, V. Eikebrokk, T. A. El-Deiry, M. W. Elliot, S. Emanuelsson, P. Farkas, D. L. Farrell, M. D. Fei, B. Feng, W. Fiskerstrand, E. Flohil, C. C. Fredriksson, I. Freeman, J. E. Genina, E. A. Gerstner, A. O. H. Giurini, J. M. Glassberg, M. Glinos, G. D. Gravastrand, C. Griffith, C. C. Grigoroiu, A. Haaverstad, R. Haegerstrand, A. Halicek, M. Halig, L. Han, T. D. Hastie, T. Haugen, O. A. Hernandez-Palacios, J. E. Holmer, A. Hoogwerf, B. Huss, F. Jacques, S. L. James, G. Jansson, K. Jünger, M. Junker, J. Kämmerer, P. W. Kersting, K. Kewin, M. Khaodhiar, L. Khodr, J. Khosravani, M. Khouj, Y. Kiefer, T. Kim, A. Kirillin, M. Kok, E. N. D. Kok, N. F. M. Koru-Sengul, T. Kratz, G. Kulhmann, K. F. D. Kurakina, D. Kuska, M. T. Laffers, W. Langner, I. Larsen, E. L. Larsson, M. Lawrence, K. S. Lew, R. Liang, L. Lilge, L. Lima, C. Lister, T. Little, J. V. Lo, W. C. Y. Lönnqvist, S. Lu, G. Lyons, T. E. M. Sterenborg, H. J. C. Mahlein, A.-K. Majaron, B. Manea, D. Manea, S. R. S. D. Manolakis, D. Marin, A. Marotz, J. Martin, R. Master, V. Medenilla, E. Meglinski, I. Mersereau, R. M. Milanic, M. Milej, D. Mohler, E. Morrison, L. Müller, N. A. Nabavi, E. Naglic, P. Nelson, J. Nieh, P. Nordgaard, H. B. Norvang, L. Nouri, K. Nouvong, A. Nyman, E. Nyman, T. Olstad, E. Osunkoya, A. Ourselin, S. Paluchowski, L. A. Panasyuk, A. A. Panasyuk, S. V. Parasca, S. V. Parascal, S. V. Parrish, J. A. Pastar, I. Patel, M. Patterson, M. S. Pilon, L. Popov, A. Prahl, S. A. Pukstad, B. Pukstad, B. S. Rajaram, A. Rakar, J. Randeberg, L. L. Regeling, B. Rose, J. Ruers, T. J. M. Saeed, S. R. Sanders, J. Savastru, R. Schmedling, P. F. Schmidt, K. Schomacker, K. Schomacker, K. T. Schuster, D. M. Shapey, J. Shaw, G. Shi, R. Sicher, C. Spott, T. Stopps, E. Strömberg, T. Stürmer, E. Svaasand, L. Svaasand, L. O. Svensson, T. Tajaddini, A. Thies, B. Tibshirani, R. Tomic-Canic, M. Tuchin, V. V. Vassmyr, B. S. Venugopalan, V. Vercauteren, T. Verdel, N. Verne, S. H. Veves, A. Vidovic, L. Vo, T. von Podewils, S. Vona-Davis, L. Vyas, S. Wahabzada, M. Wahl, P. Wang, L. V. Westermann, S. Wild, T. Wilson, B. C. Winnem, A. Witten, D. Wright, P. A. Wu, H. Wyman, D. R. Xie, Y. Yoon, J. Younes, R. Yu, T. Yudovsky, D. Zhang, C. Zheng, L. Zherebtsov, E. Zhu, D. Zonios, G. Biomed. Opt. Express (4) Biomed. Tech. (1) BJS Open (1) Clin. Hemorheol. Microcirc. (1) Comput. Meth. Prog. Bio. (1) Front. Oncol. (1) IEEE Geosci. Remote Sens. Lett. (1) IEEE Signal Process. Mag. (1) In Vitro Cell. Dev. Biol.: Anim. (1) J. Biomed. Opt. (14) J. Biophotonics (4) J. Diabetes Sci. Technol. (1) J. Innov. Opt. Health Sci. (1) J. Invest. Dermatol. (1) J. Plast. Surg. Hand Surg. (2) Laser. Med. Sci. (1) Lasers Med. Sci. (1) OSA Continuum (1) Phys. Med. Biol. (1) Sci. Rep. 
(1) Wound Rep. Reg. (2) Equations on this page are rendered with MathJax. Learn more. (1) μ a ϕ ( z ) − D d 2 d z 2 ϕ ( z ) = q ( z ) , (2) q i ( z ) = μ s , i ′ exp ⁡ ( − μ t r , i z ) ∏ j = 1 i − 1 exp ⁡ ( − μ t r , j d j ) , (3) ϕ i ( z ) = δ i μ s , i ′ D i ( 1 − μ t r , i δ i 2 ) exp ⁡ [ − μ t r , i ( z − d i ) ] ∏ j = 1 i − 1 exp ⁡ [ − μ t r , j ( d j − d j − 1 ) ] (4) + A i 1 exp ⁡ ( − x / δ i ) + A i 2 exp ⁡ ( x / δ i ) . (5) R d = j ( z = 0 ) . (6) μ a,e = μ a,m,694 ( λ / 694 ) − 3.46 , (7) μ a,d = μ oxy ( λ ) c oxy + μ deoxy ( λ ) c deoxy , (8) μ s ′ = μ s,Mie,500 ′ ( λ / 500 ) − b Mie + μ s,Ray,500 ′ ( λ / 500 ) − 4 . Parameters to be fitted during optimization, along with their lower and upper bounds and the scaling factor applied before they are input into the optimization method. Lower bound Layer thickness, d 1 0 500 500 μ m Melanin absorption, μ a,m,694 0 2000 1000 m − 1 Scattering, μ s,Ray,500 ′ 0 5000 5000 m − 1 Scattering, μ s,Mie,500 ′ 1 5000 5000 m − 1 Scattering, b Mie 0 4 4 - Low and high values for parameters varied in the Monte Carlo simulations. Low value Upper Layer thickness, d 1 50 500 μ m Upper Melanin absorption, μ a,m,694 150 700 m − 1 Upper/Deeper Scattering, μ s,Ray,500 ′ 1500 3000 m − 1 Upper/Deeper Scattering, μ s,Mie,500 ′ 1500 3000 m − 1 Deeper Oxy. blood fr., C oxy 0.01 0.05 - Deeper Deoxy. blood fr., C deoxy 0.01 0.05 -
CommonCrawl
Thermal analysis of greenhouses installed under a semi-arid climate

Kamel Mesmoudi* | Kheireddine Meguellati | Pierre-Emmanuel Bournet

Laboratory of Mechanics, Structures and Materials, University of Batna 2, Avenue Boukhlouf Mohamed El Hadi, 05000 Batna, Algeria
Department of Materials Science, University of Batna 1, Avenue Boukhlouf Mohamed El Hadi, 05000 Batna, Algeria
Agrocampus Ouest, EPHor, Environmental Physics and Horticulture Research Unit, F-49045 Angers, France
[email protected]

The greenhouse design, and the cover material properties in particular, may strongly impact the greenhouse energy balance. To study the effect of these parameters, three typical unheated greenhouses equipped with rows of canopy were considered. Experiments were launched to establish the boundary conditions and validate the model. Two parametric studies were carried out: one for the nocturnal period, when the energy performance of each type of greenhouse was investigated, and one for the diurnal period, when the sun path was simulated taking into account the type of cover and its spectral optical and thermal properties. Results indicate that during the nocturnal period the ambient air temperature in the tunnel and vertical-wall greenhouses was relatively homogeneous and warmer than in the Venlo greenhouse; the plastic greenhouses, and especially the tunnel one, performed better in terms of climate homogenization and thermal energy storage. Concerning the diurnal period, for both plastic greenhouses equipped with fully opened side vents, the air located between the rows of canopy and the ground surface remained very slow, not exceeding 0.2 m s-1; for the Venlo glasshouse, the recirculation loop situated above the crop improved the air mixing and induced a good homogenization. Results also indicate that the cover material with the highest absorptivity deteriorated the natural ventilation, increased the air temperature by convection, and reduced the available Photosynthetically Active Radiation.

Keywords: greenhouse design, thermal analysis, CFD simulation, radiation, coupled model

1. Introduction

A greenhouse is an enclosed structure that protects the crops from the outside environment by creating favorable conditions: it traps the short-wavelength solar radiation and stores the long-wavelength thermal radiation, creating a microclimate favorable to higher productivity, within limits that depend on the bioclimatic conditions of its location, the geometry of the structure and the spectral optical properties of the covering materials in particular. Managing the greenhouse microclimate is essential to maintain an optimum inside environment during the different stages of plant growth. Modeling is an interesting approach to assess the microclimate in greenhouses and to test different scenarios. Among the modeling tools, CFD (Computational Fluid Dynamics) is an advanced technique for design in engineering. It has been increasingly used in different types of agricultural studies, such as livestock houses, greenhouses and broiler houses. CFD offers many advantages to the food industry as it provides a means to test the influence of multiple variables at low economic cost (compared with experiments). Nowadays the CFD technique is recognized as a powerful tool to model the climate generated inside greenhouses and to test the performance of different structural designs.
Since the pioneering work of Nara [1], CFD simulation has been increasingly used to assess the distributed indoor climate for a wide range of greenhouse shapes, especially at northern latitudes. Several review papers present the state of the art concerning CFD developments. Reichrath and Davies [2] presented the main conclusions derived from the published material together with their latest results on greenhouse modeling. Norton et al. [3] provided a state-of-the-art review of CFD and its applications in the design of ventilation systems for agricultural production systems; they concluded that greenhouse CFD modeling was of a higher standard than that of animal housing, owing to the incorporation of the crop biological responses as a function of the local environmental conditions. The main factors governing air movements inside the greenhouse were analyzed by [4], with a particular focus on conclusions drawn from field experiments, laboratory-scale models and CFD simulations. The principles of CFD, the modeling approach and its adaptation to greenhouse climate simulation were described, paying attention to ventilation efficiency inside greenhouses with respect to the greenhouse geometry, opening arrangements, wind speed and direction, addition of insect-proof or shading screens, and interactions with the crop. More recently, Bartzanas et al. [5] presented a review of various CFD applications aimed at improving crop farming systems, such as soil tillage, sprayers, harvesting, machinery and greenhouses; they discussed the possibilities of incorporating CFD models in decision support tools for precision farming. Specific processes involved in greenhouses were also analyzed in detail in the literature. These processes include ventilation, interaction with the crop and radiative effects. The effect of vent arrangements on the ventilation and energy transfer in a multi-span glasshouse was studied by [6], using a bi-band radiation model. The analysis of humidity issues in greenhouse climate using CFD tools at different scales (the leaf, the canopy and the greenhouse itself) was also conducted by Bournet [7]. The effect of the crop is particularly important for greenhouses, as side openings may be partly obstructed by the crop rows. Hernandez et al. [8] studied the effects of crop row orientation (perpendicular or parallel to the wall equipped with side openings) on the ventilation and microclimate of a plastic multi-span greenhouse. More recently, based on a computational fluid dynamics (CFD) model and an experimental approach, Majdoubi [9] analyzed the effect of crop row orientation on the internal climate of a large greenhouse, and found the ventilation rate to be heavily dependent on the orientation of the crop rows with respect to the dominant wind direction. The relationship between ventilation and the characteristics of a tomato crop growing inside was systematically studied by [10] in a naturally ventilated tunnel greenhouse using the tracer gas method. It appears however that most early studies ignored or failed to consider the presence of the crop, and did not provide detailed information about the way solar and atmospheric radiation was taken into account. In recent years, the use of Computational Fluid Dynamics has made it possible to analyze the factors that determine greenhouse microclimate with respect to structural specifications and equipment [11, 12].
However, in those studies, radiation was not simulated directly through a radiative transfer equation; its effect was instead incorporated either through specific boundary conditions or through the addition of an extra source term in the energy transport equation. More recently, Kim et al. [13] included the short-wavelength radiation in their simulations (diurnal conditions), while Bournet et al. [6] implemented a bi-band radiation model distinguishing the short- and long-wavelength contributions to take account of the different optical properties of the glass within these bands. Moreover, few studies have addressed the question of the dynamics of solar radiation and temperature distribution, as [14] did for a tunnel greenhouse at a daily time scale. More recently, [15] presented numerical simulations of the climatic parameter distribution of a ventilated tunnel greenhouse on the basis of a 3D CFD approach using a bi-band discrete ordinates (DO) model, calculating the sensible and latent heat transfers between leaves and the surrounding air by including the long-wave and short-wave radiation fluxes in each crop control volume and taking account of the sun position at each time step. In the southern Mediterranean basin, the bioclimatic stage is semi-arid and the use of greenhouses for crop production is rapidly increasing. However, the characterization of the energy balance of greenhouses in this bioclimatic zone still remains to be done, and achieving a favorable environment becomes essential in order to warrant the feasibility of the greenhouse [16, 17]. Indeed, maintaining ventilation performance during the diurnal period and controlling the heat release during the nocturnal period are the major factors influencing both climate control and yield quality over much of the year. These aspects are major challenges still facing designers and growers. Nevertheless, few investigations of the performance of greenhouses in southern Mediterranean climates have been undertaken so far, and the physical mechanisms involved remain poorly understood. Some progress has been made in recent years, since the energy balance and the behavior of the indoor microclimate became a matter of concern in the studies conducted by [18, 19, 20, 21]. Performance criteria based on very different approaches are difficult to compare, and a common approach clearly based on the same bioclimatic stage is required so that greenhouse performance can be simulated and examined with respect to engineering design (both greenhouse geometry and covering material). Under arid climate conditions, few CFD works predicting and analyzing the microclimate of greenhouses exist [22]. In the present study we present a numerical analysis of the thermal environment of greenhouses in Batna (6˚11' East, 35˚33' North). The region lies at altitudes of 900-1000 m above sea level and is characterized by high winter insolation, varying from 10.5 to 14 hours/day between October and March, and by cold and dry winters, with average minimal temperatures between -5 ˚C and 2 ˚C during the night periods of January to March and low levels of moisture. The aim of the present study is to examine the influence of greenhouse configuration on the inside microclimate and energy consumption for three different unheated greenhouses (tunnel, Venlo and plastic vertical-wall greenhouse) during two periods (diurnal and nocturnal), focusing in particular on the ventilation mechanism, the thermal behavior and the heat losses.
To this end, a CFD model was used, and experiments were launched to establish the boundary conditions and validate the model.

2. Materials and methods

2.1 Experimental greenhouses

In order to estimate the ability of the model to correctly simulate the thermal characteristics of the microclimate of the tested greenhouses, production greenhouses were equipped with sensors to provide input data for the model and for its validation. The measurements were carried out in three experimental N-S oriented greenhouses (tunnel, Venlo and plastic vertical-wall greenhouse) located at the agricultural research farm of the Department of Agronomy of the University of Batna 1 (35.33˚ N, 6.11˚ E) in the north area of Eastern Algeria. The geometrical characteristics of the greenhouses were as follows: the tunnel and plastic vertical-wall greenhouses had an eaves height of 2.4 m, a ridge height of 3.4 m, a total width of 4 m and a total length of 8 m (Figure 1); the Venlo glasshouse was a standard structure 4 m wide, 3.60 m high under the ridge and 3.27 m high under the gutter (Figures 1 and 2). The glasshouse was covered with 4 mm thick horticultural glass and equipped with two opposite roof openings; the tunnel and plastic vertical-wall greenhouses were covered with a polyethylene sheet and equipped with two continuous side openings (roll-up type) located 0.6 m above the ground with a maximum opening of 0.9 m. The greenhouses were grown with a tomato crop, which reached a height of 1 m during the experiments.

Figure 1. Geometries of the greenhouses and configurations considered for the ventilation efficiency study: (a) plastic tunnel greenhouse with roll-up type openings; (b) plastic vertical-wall greenhouse with roll-up type openings; (c) Venlo glass greenhouse with pivoting roof-door type openings

2.2 Measurements

Two different types of measurements were conducted: (a) outside the greenhouse, to determine the characteristics of the atmospheric boundary layer and provide the boundary conditions of the model; (b) inside the greenhouse, to validate the simulations.

(a) Measurements of the weather conditions surrounding the greenhouse were conducted with sensors installed outside on a mast, 10 m away to the East of the greenhouse (Figure 1). External wind speed and direction were monitored by two cup anemometers (Model 100075, accuracy ±0.1 m s-1, Climatronic Corporation) and a wind vane (Model 100076, accuracy ±2°, Climatronic Corporation). The outside global solar radiation was measured with a pyranometer (SP Lite, Kipp & Zonen, Netherlands). The outside air temperature and humidity were also measured, using platinum probes in statically ventilated shelters (Model MP601A, accuracy ±0.2%, Rotronic Instrument Corp.) located at the same height as the outside pyranometer. All the above-mentioned measurements were recorded every 2 s and then averaged over 30 min periods, using a data logger system (Campbell Scientific Micrologger, CR3000, USA).

(b) Measurements of the temperature and humidity distribution in the middle section of the greenhouse were also conducted. The measurement locations were distributed along a cross-section at the center of each greenhouse, in the same vertical plane. The temperature and relative humidity of the interior air were recorded by means of a data logger (OAKTON Logger Plus) using a remote sensing system.
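Throughout the campaign, the raw records were reduced to fixed-interval averages before any further use. A minimal sketch of this averaging step is given below; the file name and columns are hypothetical placeholders, not data from the study.

```python
import pandas as pd

# Hedged sketch: reduce high-frequency sensor samples (e.g. one record every
# 2 s) to block averages (e.g. 30 min means), as done for the climate records.
# "sensors.csv" and its columns are hypothetical, not files from the study.
raw = pd.read_csv("sensors.csv", parse_dates=["timestamp"], index_col="timestamp")

# One row per 30 min window, each column averaged over the window.
half_hourly = raw.resample("30min").mean()
print(half_hourly.head())
```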
The temperatures of the solid surfaces (ground, underground and wall surfaces of the cover) were measured every 2 s with thermocouples, and then averaged over 30 min periods. The incoming solar radiation was measured with a pyranometer (SP Lite, Kipp & Zonen, Netherlands) placed inside the greenhouse, at the center and 1.5 m above the ground. The cover surface temperatures of the greenhouse were measured at six positions distributed along the greenhouse sides and roof, using stick-on thermocouples secured to the cover with transparent adhesive tape. The storage and processing of the data were carried out with the MicroLab Plus software. Figure 2 shows a sketch of the Venlo-type experimental greenhouse and the location of the sensors. The measurement ranges and the accuracy of all the sensors used are specified in Table A in the Appendix.

Figure 2. Sketch of the experimental Venlo greenhouse showing the location of the sensors (all distances are in meters)

2.3 Numerical model

The commercially available CFD code Fluent v.6.1 was used for this study. A 2-D grid was built for each case, and the model was run in order to compare the numerical results with the experimental data. Although 2D simulations do not represent precisely the reality inside the greenhouse, they are a computationally beneficial assumption for the investigation of the transport phenomena, especially at the middle section of a long structure with open side vents along its whole length. In addition, 2D modeling makes it possible to save significant computing time during model development, meshing and the convergence process.

2.3.1 Grid definition and numerical procedure

The calculation domain was restricted to the greenhouse itself, in the middle plane of the greenhouses, ensuring fast calculation. The grid was selected after several attempts in order to reduce the CPU time needed for convergence and to ensure the independence of the numerical results from the grid. The grid was an unstructured, quadrilateral mesh with a higher density in critical portions of the flow subject to strong gradients. After several trials with different densities, the calculations were based on a 70 by 90 cell grid (Figure 3). The area of calculation includes the canopy, the soil and the inner walls of the greenhouses. Different regions with adapted meshes were considered, and no mesh was applied to the outer space surrounding the greenhouse. The inner space was meshed using an unstructured grid with sizes varying from 0.2 m in the center of the greenhouse to 0.06 m near the greenhouse cover. The crop rows, considered as a porous medium, were meshed using a structured, cubic, 0.18 m grid, and the soil mesh under the greenhouse consisted of three layers (0.01 m, 0.15 m and 1.44 m) with 0.005 m, 0.03 m and 0.2 m structured meshes.

Figure 3. Geometry of the whole calculation domain and greenhouse mesh details

2.3.2 Governing equations

The CFD method allows the explicit calculation of the average velocity vector field of a flow by numerically solving the corresponding transport equations. The two-dimensional conservation equations describing the transport phenomena for steady flows are of the general form:

$\frac{\partial(U \phi)}{\partial x}+\frac{\partial(V \phi)}{\partial y}=\Gamma_{\phi} \nabla^{2} \phi+S_{\phi}$ (1)

In equation (1), ϕ represents the concentration of the transported quantity in a dimensionless form, namely the momentum (velocity components), mass and energy.
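As an illustration (this specialization is not spelled out in the paper), taking ϕ = T in equation (1) yields the steady convection-diffusion equation for temperature, here written with the usual effective-diffusivity closure assumed for the turbulent contribution:

$\frac{\partial(U T)}{\partial x}+\frac{\partial(V T)}{\partial y}=\Gamma_{T} \nabla^{2} T+S_{T}, \qquad \Gamma_{T}=\frac{\nu}{Pr}+\frac{\nu_{t}}{Pr_{t}}$

where ν and νt are the molecular and turbulent kinematic viscosities, Pr and Prt the corresponding Prandtl numbers, and the source term ST collects the volumetric heat sources, including the radiative contribution Sr of equation (4) below.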
In equation (1), U and V are the components of the velocity vector; $\Gamma_{\phi}$ is the diffusion coefficient; and Sϕ is the source term. The governing equations were discretized following the procedure described by Patankar [23], using the finite volume technique, which consists in integrating the governing equations over a control volume. The Boussinesq model was activated to take account of the buoyancy effects in the computational domain. The standard k-ε model assuming isotropic turbulence was adopted to describe turbulent transport, as it proved to be a good compromise between a realistic description of turbulence and computational efficiency, as reported by several studies of greenhouse microclimate. This semi-empirical model is based on additional transport equations for the turbulent kinetic energy (k) and its dissipation rate (ε). The complete set of equations of the k-ε model can be found in [24], and their commonly used set of (empirically determined) parameters is: Cμ = 0.09, Cε1 = 1.44, Cε2 = 1.92, σk = 1, and σε = 1.3 (Fluent, 1998).

Radiative sub-model (RTE, Radiative Transfer Equation). The discrete ordinates method has received significant attention due to its good compromise between accuracy, computational economy and flexibility [25, 26]. Up until now, however, most CFD studies did not include the interchange of both short- and long-wavelength radiation between the sky and the greenhouse cladding, and only indirectly introduced the effect of radiative transfers in the model. In order to simulate the effect of the incident solar radiation on the greenhouse cover, the discrete ordinates (DO) model was used. In this model it is assumed that radiation energy is 'convected' simultaneously in all directions through the medium at its own speed. The DO model available in Fluent makes it possible to solve the Radiative Transfer Equation (RTE) in semi-transparent media. It can be used to assess non-gray radiation using a gray-band model, so it is adequate for participating media with a spectral absorption coefficient αλ that varies in a stepwise fashion across the spectral bands. The discrete ordinates radiation model solves the RTE for a finite number of discrete solid angles, each associated with a vector direction $\vec{s}$ in the global Cartesian system (x, y, z). It transforms the RTE into a transport equation for the luminance in the spatial coordinates (x, y, z), and solves as many transport equations as there are directions $\vec{s}$. The RTE for the spectral intensity $I_{\lambda}(\vec{r}, \vec{s})$ is written as:

$\frac{d I_{\lambda}(\vec{r}, \vec{s})}{d s}+\left(\alpha_{\lambda}+\sigma_{s}\right) I_{\lambda}(\vec{r}, \vec{s})=\alpha_{\lambda} \frac{\sigma T^{4}}{\pi}+\frac{\sigma_{s}}{4 \pi} \int_{0}^{4 \pi} I_{\lambda}\left(\vec{r}, \vec{s}^{\prime}\right) \Phi\left(\vec{s}, \vec{s}^{\prime}\right) d \Omega^{\prime}$ (2)

where in Eq. (2) Iλ is the radiation intensity for wavelength λ (W m-2 sr-1), $\vec{r}$ the position vector, $\vec{s}$ the radiation direction vector, αλ the spectral absorption coefficient (m-1), λ the wavelength (m), σ the Stefan-Boltzmann constant (σ = 5.672 x 10-8 W m-2 K-4), σs the scattering coefficient (m-1), Φ the phase function, and Ω the solid angle. The refraction index, the scattering coefficient and the phase function were assumed to be independent of the wavelength. The angular space 4π at any spatial location was discretized into Nθ x Nφ solid angles of extent ωi, called control angles.
The angles θ and φ are the polar and azimuthal angles, and are measured with respect to the global Cartesian system (x, y, z). In our case a 3x3 pixelation was used. Although in this equation the refraction index is considered to be constant, the band-dependent values of the refractive index (provided in Table 1 and Table 2) were used in the calculation of black-body emission as well as in the calculation of the boundary conditions imposed by the semi-transparent walls. The RTE was integrated over each wavelength band. Then the total intensity $I(\vec{r}, \vec{s})$ at each position $\vec{r}$ and in each direction $\vec{s}$ was computed using equation (3):

$I(\vec{r}, \vec{s})=\sum_{k} I_{\lambda_{k}}(\vec{r}, \vec{s}) \Delta \lambda_{k}$ (3)

where the summation is undertaken over the wavelength bands. The RTE is coupled with the energy equation through a volumetric source term given by the following equation (4) [27]:

$S_{r}=-\frac{\partial q_{r}}{\partial x_{i}}=\alpha_{\lambda}\left[4 \pi I_{\lambda}^{0}(\vec{r})-\int_{0}^{4 \pi} I_{\lambda}(\vec{r}, \vec{s}) d \Omega\right]$ (4)

where Sr is the radiation source term (W m-3), qr the radiative flux (W), xi the component in the i-direction (m), and $I_{\lambda}^{0}$ the black-body intensity given by the Planck function (W m-2).

Table 1. Optical properties of the cover materials (4 mm horticultural glass; low-density polyethylene film): thickness (mm), absorptivity (α) and refractive index (n)

Table 2. Mean values of the thermal and optical properties of the greenhouse components: density (kg m-3), heat transfer conductivity (W m-1 K-1), specific heat capacity (J kg-1 K-1) and absorptivity (α)

Crop sub-model. The crop was simulated using the equivalent porous medium approach, through the addition of a momentum source term, due to the drag effect of the crop, to the standard fluid flow equations [28]. The plants were simulated as porous materials with a viscous resistance a-1 = 27380 m-2 and an inertial resistance C2 = 1.534 m-1 (both coefficients appear in the illustrative sketch further below). These parameters, used in the pressure drop expression for a tomato crop, were derived from [29] for a low velocity range. For the purpose of the study, sensible and latent heat transfers were omitted, and attention was rather paid to the mechanical interaction of the crop with its environment.

2.3.3 Boundary conditions

Boundary conditions for each transport variable ϕ must be specified for each boundary surface of the domain. In particular, the ϕ values for the upper boundary and the leeward lateral boundaries were determined with the assumption of a null gradient of ϕ. For the other boundaries, ϕ was determined either directly from experimental data bases or deduced from specific models. The left opening was supposed to face the East and the wind direction: the wind was normal to the ridge, and a parabolic wind profile was imposed at the opening of the greenhouse with a given velocity profile and a temperature of 300 K, which is considered to be the temperature of the ambient air around the greenhouse. This profile was determined from the measurements of the wind speed at each ventilation opening of the greenhouse. Figure 4 illustrates the corresponding imposed profile fitted by a parabolic law. At the inlet section, a fully developed turbulent profile was also considered. At the outlet section (leeward right opening), a constant pressure (P = Patm) was imposed.
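To make the inlet specification concrete, the short sketch below (not from the paper) builds a parabolic profile over the vent extent given in Section 2.1, together with commonly used inlet estimates for k and ε; the peak velocity, turbulence intensity and length scale are assumed placeholders, not the fitted values of Figure 4.

```python
import numpy as np

# Hedged sketch of a parabolic inlet profile over the side-vent extent
# (0.6 m to 1.5 m above ground, Section 2.1). U_MAX, I_TURB and the length
# scale are illustrative assumptions, not the fitted values of Figure 4.
C_MU = 0.09                # k-epsilon model constant
Z_BOT, Z_TOP = 0.6, 1.5    # vertical extent of the side vent, m
U_MAX = 1.0                # assumed peak velocity at mid-opening, m/s
I_TURB = 0.10              # assumed inlet turbulence intensity

def inlet_velocity(z):
    """Parabola vanishing at both vent edges and peaking at mid-height."""
    zc = 0.5 * (Z_BOT + Z_TOP)       # vent mid-height
    half = 0.5 * (Z_TOP - Z_BOT)     # vent half-height
    return U_MAX * max(0.0, 1.0 - ((z - zc) / half) ** 2)

def inlet_k_eps(u, length_scale=0.3):
    """Common inlet estimates: k = 1.5 (I u)^2, eps = C_mu^(3/4) k^(3/2) / l."""
    k = 1.5 * (I_TURB * u) ** 2
    eps = C_MU ** 0.75 * k ** 1.5 / length_scale
    return k, eps

for z in np.linspace(Z_BOT, Z_TOP, 5):
    u = inlet_velocity(z)
    k, eps = inlet_k_eps(u)
    print(f"z = {z:.3f} m: U = {u:.2f} m/s, k = {k:.2e} m2/s2, eps = {eps:.2e} m2/s3")
```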
Finally, the boundary conditions prescribed a wall-type boundary condition along the floor and walls, and the cover was considered as a finite-thickness wall consisting of semi-transparent materials.

Figure 4. Wind profile imposed at the opening of the plastic greenhouses

The optical and thermal properties of the components of the greenhouses are provided in Table 1 [30] and Table 2 [16]. A heat flux boundary condition was applied at the external boundary of the cover region. It is a mixed heat flux boundary condition (a combination of radiation and convection with a convective heat transfer coefficient). The corresponding convective coefficient depends on the wind speed, according to the law established by [16] on the same type of greenhouse and under similar climatic conditions, $h=2.56+2.3U^{0.69}$, where U is the mean wind speed along the roof of the greenhouse. The same kind of boundary condition was imposed along the internal wall surface where the solid and fluid zones are coupled, providing a conjugate heat transfer treatment in this specific area. The convective coefficient between the interior air and the interior wall depends on the temperature gradient (interior air minus interior wall), according to the following law [16]: $h=3.59\Delta T^{0.33}$ (both laws appear in the illustrative sketch further below). Fixed air temperatures were imposed along the ground. The side walls were considered as adiabatic and opaque, while the ground was considered as a diffusively radiating opaque material.

2.3.4 Numerical procedure

A second-order upwind discretization scheme was used for the momentum and turbulence transport equations. The convergence criterion for all variables was $10^{-6}$.

2.4 Parametric studies

Two parametric studies were carried out, under diurnal and nocturnal conditions, in order to investigate the effect of greenhouse geometry, as well as the effect of two cover materials with different optical characteristics, on the thermal behavior, heat losses and temperature patterns of the tested greenhouses. For the first parametric study, i.e. the diurnal period, a typical day of the spring season in the region of Batna was chosen for the simulations, and calculations were launched at a time corresponding to midday. The incident irradiance (the solar radiation reaching the earth) was distributed over three wavelength bands: the ultra-violet (λ = 0.01-0.4 μm), the visible or PAR (λ = 0.4-0.76 μm) and the near infrared (λ = 0.76-1.1 μm). The normal irradiances per wavelength band are presented in Table 3. In all cases a fraction of 24% diffuse radiation was considered. For the second parametric study, i.e. the nocturnal period, the same day was considered, but at midnight, and two cover materials with different optical properties were studied. The corresponding spectral optical properties are provided in Table 1 and Table 2.

3. Results and Discussions

The measurements were carried out on sunny spring days (March 10th to April 1st, 2015), at solar noon for the first parametric study and at midnight for the second parametric study. The required parameters were measured every 1 min at the locations shown in Figure 1, averaged every 15 min and recorded in a data logger (CR3000 Micrologger, Campbell Scientific, Inc.). These measurements were conducted over two periods, from March to April 2015, inside the tunnel, Venlo and plastic vertical-wall greenhouses. During the same period, climatic data were also recorded outside the greenhouses. These data sets were used not only to define the boundary conditions of the model, but also to validate the simulations.
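For illustration, the empirical closures quoted in Sections 2.3.2 and 2.3.3 (the canopy drag coefficients and the two convective-exchange laws) are gathered in the short sketch below; the air properties are assumed typical values, not taken from the paper.

```python
import numpy as np

# Hedged sketch of two empirical closures used by the model. The drag
# coefficients and the h-laws are those quoted in Sections 2.3.2 and 2.3.3;
# the air properties are generic assumptions.
MU_AIR = 1.8e-5     # dynamic viscosity of air, Pa.s (assumed)
RHO_AIR = 1.2       # air density, kg/m3 (assumed)
INV_K = 27380.0     # crop viscous resistance 1/K, 1/m2 (Section 2.3.2)
C2 = 1.534          # crop inertial resistance, 1/m (Section 2.3.2)

def crop_momentum_sink(u, v):
    """Darcy-Forchheimer sink (N/m3) of the canopy porous medium:
    S_i = -(mu/K) u_i - C2 * (rho |u| / 2) u_i."""
    speed = np.hypot(u, v)
    su = -(MU_AIR * INV_K * u + 0.5 * C2 * RHO_AIR * speed * u)
    sv = -(MU_AIR * INV_K * v + 0.5 * C2 * RHO_AIR * speed * v)
    return su, sv

def h_outside(wind_speed):
    """Wind-driven convective coefficient along the cover, W/(m2.K)."""
    return 2.56 + 2.3 * wind_speed ** 0.69

def h_inside(delta_t):
    """Free-convection coefficient, inside air to wall, W/(m2.K)."""
    return 3.59 * abs(delta_t) ** 0.33

# Example: canopy crossed at 0.2 m/s, 1 m/s outside wind, 3 K air-wall gap.
print(crop_momentum_sink(0.2, 0.0))
print(h_outside(1.0), h_inside(3.0))
```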
For the purpose of the study, two contrasted cases were analyzed: (a) the diurnal period and (b) the nocturnal period. The transport phenomena inside the experimental naturally ventilated greenhouses were investigated using the mean values of the outside climate conditions to specify the boundary conditions (Table 3).

Table 3. Mean values (min-max) of the outside climate conditions during the measurements: outside air temperature (K), 279-297.3; outside air humidity (%); outside air velocity (m s-1); global solar radiation (W m-2); UV (W m-2); PAR (W m-2); NIR (W m-2); hour angle (˚)

3.1 Validation of the model

In order to check the validity of the CFD model, the validation of the present work was undertaken on the basis of experimental field surveys conducted in the Venlo-geometry greenhouse covered with horticultural glass. Figure 5 shows the air temperature profiles for this glasshouse along the middle axis of the greenhouse, at 2 m from the inlet flow vent opening, both for the diurnal period (Figure 5a) and for the nocturnal period (Figure 5b).

Figure 5. Computed and experimental vertical profiles of the temperature (K) at the middle of the Venlo glasshouse: (a) diurnal period, (b) nocturnal period

For the diurnal case, a good agreement between the measured and simulated profiles was reached: a correlation coefficient of R2 = 0.9669 and a mean square error of χ = 2.2481 K were obtained, and the standard deviation of the temperature (±1 °C) may be ascribed to the experimental errors and to the models used for the determination of the temperature. Figure 5a shows that the temperature distribution along the vertical axis disclosed two distinct areas: one at the bottom of the domain, where the temperature remained relatively high due to the energy exchange with the plants, and a second one in the top half of the domain, where the temperature was clearly affected by the cooler entering stream. A difference of about 5 K was observed and simulated between these two areas. Concerning the nocturnal period, Figure 5b shows the vertical distribution of the measured and numerical air temperatures, again at 2 m from the sides of the greenhouse. In this case also a good agreement was reached between the numerical and the experimental values, with a correlation coefficient of R2 = 0.9440 and a mean square error of χ = 0.1036. Overall, Figure 5b discloses a good correlation between predicted and measured air temperatures. Contrary to the diurnal case, the air temperature profile was relatively homogeneous, except in areas close to the ground and roof where high temperature gradients were reported. These gradients were mainly induced by heat exchanges along the ground and roof.

3.1.1 Diurnal period

Flow field. The CFD results concerning the first parametric study, i.e. the diurnal period for a clear day of the spring season, are shown in Figures 6-10 for all considered geometries. The computed contours of the air velocities, stream function and air temperature, and the PAR radiation profiles at specific sections, are provided. From the results, it comes out that the main mechanism governing heat transfers is convection associated with the entering air stream, except in areas close to the cover and in the corners of the greenhouse, where incident solar radiation and heat storage mainly impact the temperature. As shown in Figs. 6a, 6b and 6c,
the computed contours of the air velocities for all cases showed that the flow was dominated by a strong convective airflow through the windward opening. The internal flow had the same direction as the wind and was damped by the plants (porosity). Due to the obstacle created by the crop, the flow separated into two unequal streams. The results thus indicate that the wind direction clearly influenced the air velocity inside the greenhouse and hence its ventilation rates.

Figure 6. Computed contours of the air velocity (m s-1) at the middle of the greenhouses: (a) tunnel plastic greenhouse; (b) straight-wall plastic greenhouse; (c) Venlo glasshouse

Modifying the optical properties of the covering material impacted the amount of solar energy entering the greenhouse, causing variability in the flow pattern for each type of greenhouse and for each studied cover material described in Table 2. Two recirculation loops appeared above and below the inlet, trapping small amounts of fluid. The optical properties of the cover determined not only the size but even the existence of the upper-corner recirculation for the plastic greenhouses (tunnel and straight-wall greenhouse), as can be seen in Figs. 8a and 8b. In these greenhouses, the main stream flowed above the plants and the smaller one inside the crop, with a lower speed. The differences in the streamline contours between the two plastic greenhouses were restricted to the upper domain close to the inlet. The dependence of the upper-corner recirculation on the cover material and on the greenhouse geometry is thus clear, and its size was probably determined by the combination of both effects (the tunnel one being smaller than that of the vertical-wall greenhouse). This recirculation plays a significant role in the total flow pattern and temperature distribution inside the greenhouse, since it divides the domain into two distinct areas. The recirculation formed at the bottom corner near the entrance, however, seemed to be independent of radiation. The air situated between the rows of canopy was hardly affected by the main entering stream, with velocities in this region not exceeding 0.2 m s-1. The flow decelerated as a consequence of the viscous and inertial resistances. Above the height of the ventilator (i.e. at 1.6 m), the air velocities progressively decreased. The computed contours of the air velocities obtained for these cases were characterized by a weak air current near the ground, and a recirculation loop with a slower speed near the roof, flowing counter-current with respect to the outside wind. This recirculation loop improved the air mixing, but most of the air left the greenhouse volume without a good homogenization. Contrary to the plastic greenhouses, for the Venlo greenhouse no air recirculation close to the roof was predicted, but a large loop trapping large amounts of fluid was simulated at the bottom. The existence of this recirculation was mainly governed by the geometry and the vent location. In addition, the air flow near the roof was mainly driven by the convective flow through the vent, reaching maximum values within the range [1.4; 1.7 m s-1]. Figs. 7a, 7b and 7c provide the horizontal u-velocity profiles at different locations inside the greenhouses, at a distance of 2 m, 4 m and 6 m from the inlet for the plastic greenhouses, and at 0.25 m, 2 m and 3 m for the Venlo glasshouse (the Venlo greenhouse is relatively small compared with the plastic ones).
Close to the ground, the velocity profiles were similar for all cases and characterized by low velocities caused by the resistance of the tomato plants combined with shear along the floor. A large peak appeared over the plants, where the flow accelerated. The peak position moved up with the distance from the inlet, while its magnitude decreased, following the spreading of the jet as it mixed with the ambient air. For the plastic greenhouses (Figs. 7a and 7b), close to the roof, negative values of the u-velocity were predicted, corresponding to an area of backflow. A difference in the profiles was observed for the straight-wall greenhouse in the upper part of the domain close to the vent height: the mean negative air velocity in this region had values within the range [-0.6; -0.1] m s-1 for the straight-wall greenhouse, and within the range [-0.4; -0.01] m s-1 for the tunnel greenhouse, showing the dependence of the flow (especially in terms of velocity magnitude) on the greenhouse design.

Figure 7. Computed profiles of air velocity (m s-1) at the middle of the greenhouses for three positions: x = 2 m, 4 m, 6 m from the inlet for cases (a) and (b), and x = 0.25 m, 2 m, 3.75 m for case (c)

Figure 8. Computed contours of stream function (m2 s-1) at the middle of the greenhouses

Temperature distribution. Figs. 9a, 9b and 9c show the air temperature distributions for the three types of greenhouse design (for the two studied covering materials) under diurnal conditions. Not surprisingly, the temperature distribution followed the air velocity distribution. In the area just above the crop, the air temperature was similar to that of the outside air (295-298 K) due to the strong air movement in this region. The temperatures in the center of the greenhouse were relatively homogeneous above the crop rows, while they varied strongly in the vicinity of the walls. Lower temperatures were predicted close to the main stream of air (coming from the outside), while higher temperatures were simulated near the ground and roof.

Figure 9. Computed contours of air temperature (K) at the middle of the greenhouses

For the plastic greenhouses, the main temperature gradients were predicted in the upper corner near the inlet, since in this region the total heat transfer was stronger. The rest of the domain presented similar patterns for the examined cases. For the vertical-wall greenhouse covered with a material characterized by a high absorptivity (plastic cover), the roof reached a relatively high equilibrium temperature, heating the nearby air by convection. The temperature of the backflow air trapped in this recirculation zone increased, as this dead zone favored the accumulation of the heat provided by the incoming transmitted radiation. The same behavior is also depicted in Figs. 10a and 10b, showing the temperature profiles at predefined positions.

Figure 10. Computed profiles of air temperature (K) at the middle of the greenhouses for three positions: x = 2 m, 4 m, 6 m from the inlet for cases (a) and (b), and x = 0.25 m, 2 m, 3.75 m for case (c)

Figs. 9c and 10c present the air temperature contours and temperature profiles for the Venlo glasshouse. Two distinct areas can be observed: one at the bottom of the domain, where the temperature distribution was mainly governed by the energy exchange with the plants and soil. In this region, the temperature was mostly affected by the reduced air velocity and the recirculation.
The second area is located at the top of the greenhouse, where the temperature was mainly affected by the ventilation flow. In this area the temperature was close to that of the entering stream. A core flow appeared at the center, where the air temperature remained roughly homogeneous. The recirculation region in the canopy zone was fully developed, contrary to the two other designs, resulting in a temperature rise.

Figure 11. Computed PAR (W m-2) profiles along the greenhouse width at a level of 1.5 m above the greenhouse ground

Radiation distribution. In Figs. 11a, 11b and 11c, the PAR radiation profiles at 1.5 m from the ground are presented for the three studied greenhouses, with the two different covering materials (thin plastic film and horticultural glass). The impact of the covering materials in terms of PAR penetrating into the greenhouse interior was of course directly linked to the material transmittance. From the figures, two groups of greenhouses presenting roughly similar behaviors may be distinguished: the tunnel and straight-wall greenhouses with thin plastic film (Figs. 11a and 11b) on the one hand, and the Venlo glass greenhouse (Fig. 11c) on the other hand. For the plastic greenhouses, the low value of the transmittance only allowed a small amount of PAR to enter the greenhouse, and the PAR distribution inside the greenhouse was almost uniform over the plants. Both plastic greenhouses had analogous performance, as they absorbed a significant amount of the incident solar PAR. In both cases, the maximum PAR intensity reached roughly the same value, proving the good functional performance of the tunnel roof design compared with the straight-wall greenhouse design, for which a lower percentage of the incoming PAR radiation reached the crop. Conversely, the Venlo greenhouse was covered with glass of a higher transmittance, and therefore made it possible for higher amounts of PAR to penetrate inside the shelter (meaning that a higher PAR reached the crop).

3.1.2 Nocturnal period

Figure 12. Computed contours of air velocity (m s-1) at the middle of the greenhouses

Flow field. During the night, the greenhouses were closed, unheated and deprived of any heating system. Under such conditions, the movement of the interior air was characterized by two counter-rotating convective loops guided by the greenhouse walls and following a circular trajectory along the internal surface of the walls and the roof. The ascent of the air in the center of the greenhouse was in that case mainly driven by the convection induced by the heat stored inside the ground during daytime (Figs. 12a, 12b and 12c) and released at night. Low air velocities were predicted in the vicinity of the canopy rows (0.01-0.03 m s-1). Velocity collapsed inside the crop as a consequence of the viscous and inertial resistances. It reached maximum values (0.15-0.19 m s-1) near the soil under the canopy, where the temperature gradients enhanced the buoyancy forces and air movements. The highest velocities were also predicted along the vertical median axis (ascending stream) and near the walls (descending stream) for each tested geometry.

Temperature variation. Figs. 13a, 13b and 13c show the air temperature distributions for the three greenhouse designs. For all cases, the predicted air temperature inside the greenhouse reached its maximum values near the ground and in the ascending streams at the center of the greenhouses (288-290 K).
The lowest values were obtained near the walls (285-287 K). The simulations also reveal that for the plastic greenhouses (and especially for the tunnel greenhouse), the ambient air temperature distribution was relatively homogeneous and higher compared with the temperature distribution inside the Venlo greenhouse. The average air temperature inside the tunnel greenhouse was 290 K (standard deviation ±0.45), against 288 K (±0.5) in the vertical-wall greenhouse and 287 K (±1.32) in the Venlo glasshouse. The tunnel greenhouse geometry disclosed the best air mixing during this period (at night), probably facilitated by the curvature of the roof of the greenhouse. The air temperature was almost uniformly distributed in the Venlo and vertical-wall greenhouses, but it was 2 K lower than the interior air temperature of the tunnel.

Figure 13. Computed contours of air temperature (K) at the middle of the greenhouses

Figure 14. Computed contours of stream function (m2 s-1) at the middle of the greenhouses

4. Conclusions

Assessing the microclimate in greenhouses is of prime interest, as climate is one of the main factors impacting plant growth and development. Indeed, the climate governs two important physiological mechanisms of the plants, namely transpiration and photosynthesis. It is highly dependent on the greenhouse geometry, the thermo-physical and optical properties of the covering material, and the outside weather conditions. In the present study, the influence of greenhouse design and configuration on the microclimate and energy consumption of unheated greenhouses under a semi-arid climate was numerically investigated using a commercially available CFD code. A field survey was undertaken in order to establish the boundary conditions and validate the model. Three different greenhouse designs with two different covering materials (plastic and glass) were investigated during two periods (night and daytime), resulting in different airflow and temperature patterns. The first parametric study, namely for daytime, indicates that in the greenhouses with the cover material of highest absorptivity (i.e. the plastic film), the available Photosynthetically Active Radiation (PAR) was reduced, but it was distributed evenly inside the greenhouse. It was also concluded that for the same greenhouses, equipped with fully opened side vents, the air located between the rows of canopy and near the roof remained very slow, not exceeding 0.2 m s-1. For the Venlo greenhouse, the recirculation loop situated above the crop improved the air mixing and appeared to induce a good homogenization compared with the plastic greenhouse geometry. The flow recirculation showed the importance of internal temperature gradients, although forced convection resulting from natural ventilation was dominant. Consequently, the Venlo greenhouse had the best performance in terms of ventilation, particularly in the area covered by the crop (0.4-0.6 m s-1), compared with the plastic greenhouses, for which air velocities of less than 0.3 m s-1 were predicted. The Venlo greenhouse also maintained a relatively low temperature difference with the outside air (6-7 K) compared with the plastic greenhouses (8-10 K). In the Venlo glasshouse, the canopy located in the middle of the greenhouse also received higher amounts of PAR compared with the plants located in the vicinity of the walls. Such heterogeneity in the PAR distribution may lead to an important variability in the crop activity, thus impacting crop growth and development.
Concerning the nocturnal case, the ambient air temperature in the tunnel and vertical wall greenhouses was relatively homogeneous and higher compared with the temperature distribution in the Venlo glasshouse. The air temperature at the center of the tunnel greenhouse was 290 K, while it was 288 K and 287 K in the vertical wall and Venlo glasshouses respectively. It can be concluded that for the nocturnal period, the plastic greenhouses, especially the tunnel one, performed better in terms of climate homogenization and thermal energy storage. This study paves the way for future investigations on the impact of greenhouse design and choice of the covering material on greenhouse climate in semi-arid areas. It also stresses the need to properly include thermal transfers as well as radiative transfers in the modeling approach in order to accurately predict canopy radiation absorption, photosynthesis and transpiration in the next developments of the numerical tool.

Nomenclature
Iλ: Spectral radiation intensity, Wm-2.sr-1
$I_{\lambda}^{0}$: Black body intensity given by the Planck function, W.m-2
p: Pressure, Pa
Pr: Prandtl number [Dimensionless]
Radiative flux, W
Re: Reynolds number [Dimensionless]
Surface, m2
Correlation coefficient [Dimensionless]
Radiation source term, J
Sϕ: Dimensionless source term
T: Temperature, K
U: Axial component of velocity vector, ms-1
V: Radial component of velocity vector, ms-1
Space component in i-direction, m
Convective heat transfer coefficient, Wm-2K-1

Greek symbols
aλ: Spectral absorption coefficient, m-1
$\Gamma_{\phi}$: Dimensionless diffusion coefficient
Polar angle, rd
λ: Wavelength, m
ρ: Density, kg.m-3
σ: Stefan-Boltzmann constant, σ = 5.672 × 10-8 W.m-2K-4
σs: Scattering coefficient, m-1
Azimuthal angle, rd
ϕ: Dimensionless concentration of the transported quantity
Phase function [Dimensionless]
Solid angle, sr
Mean square error

Subscripts
λ: depends on wavelength
ϕ: transported quantity like: U, V, T, C, k, ϵ
February 2012, 32(2): 605-617. doi: 10.3934/dcds.2012.32.605

On strange attractors in a class of pinched skew products

Àlex Haro, Departament de Matemàtica Aplicada i Anàlisi, Universitat de Barcelona, Gran Via de les Corts Catalanes 585, 08007 Barcelona

Received October 2010; Revised July 2011; Published September 2011

In this paper we construct strange attractors in a class of pinched skew product dynamical systems over homeomorphisms on a compact metric space. We assume that the maps between fibers satisfy Inada conditions and that the base space is a super-repeller (it is invariant and its vertical Lyapunov exponent is $+\infty$). In particular, we prove the existence of a measurable but non-continuous invariant graph whose vertical Lyapunov exponent is negative. Since the dynamics on the strange attractor is the one given by the base homeomorphism, we will say that it is a strange chaotic attractor or a strange non-chaotic attractor depending on whether the dynamics on the base is chaotic or non-chaotic. The results complement the paper by G. Keller on rigorous proofs of existence of strange non-chaotic attractors.

Keywords: Strange attractors, strange non-chaotic attractors, pinched skew products.

Mathematics Subject Classification: Primary: 37C60, 37C70; Secondary: 37B5.

Citation: Àlex Haro. On strange attractors in a class of pinched skew products. Discrete & Continuous Dynamical Systems - A, 2012, 32 (2): 605-617. doi: 10.3934/dcds.2012.32.605

References:

Lluís Alsedà and Michał Misiurewicz, Attractors for unimodal quasiperiodically forced maps, J. Difference Equ. Appl., 14 (2008), 1175. doi: 10.1080/10236190802332274.
Kristian Bjerklöv, Positive Lyapunov exponent and minimality for a class of one-dimensional quasi-periodic Schrödinger equations, Ergodic Theory Dynam. Systems, 25 (2005), 1015.
Kristian Bjerklöv, SNA's in the quasi-periodic quadratic family, Comm. Math. Phys., 286 (2009), 137. doi: 10.1007/s00220-008-0626-y.
Z. I. Bezhaeva and V. I. Oseledets, On an example of a "strange nonchaotic attractor", Funktsional. Anal. i Prilozhen., 30 (1996), 1.
Henk W. Broer, Carles Simó and Renato Vitolo, Chaos and quasi-periodicity in diffeomorphisms of the solid torus, Discrete Contin. Dyn. Syst. Ser. B, 14 (2010), 871. doi: 10.3934/dcdsb.2010.14.871.
Paul Glendinning, Global attractors of pinched skew products, Dyn. Syst., 17 (2002), 287. doi: 10.1080/14689360210160878.
Celso Grebogi, Edward Ott, Steven Pelikan and James A. Yorke, Strange attractors that are not chaotic, Phys. D, 13 (1984), 261. doi: 10.1016/0167-2789(84)90282-3.
Michael-R. Herman, Une méthode pour minorer les exposants de Lyapounov et quelques exemples montrant le caractère local d'un théorème d'Arnol'd et de Moser sur le tore de dimension $2$, Comment. Math. Helv., 58 (1983), 453.
Àlex Haro and Joaquim Puig, Strange nonchaotic attractors in Harper maps, Chaos, 16 (2006).
À. Haro and C. Simó, To be or not to be an SNA: That is the question, 2005. Available from: http://www.maia.ub.es/dsg/2005/0503haro.pdf.
Tobias H. Jäger, On the structure of strange non-chaotic attractors in pinched skew products, Ergodic Theory Dynam. Systems, 27 (2007), 493.
Tobias H. Jäger, Strange non-chaotic attractors in quasiperiodically forced circle maps, Comm. Math. Phys., 289 (2009), 253. doi: 10.1007/s00220-009-0753-0.
Tobias H. Jäger, The creation of strange non-chaotic attractors in non-smooth saddle-node bifurcations, Mem. Amer. Math. Soc., 201 (2009).
Àngel Jorba and Joan Carles Tatjer, A mechanism for the fractalization of invariant curves in quasi-periodically forced 1-D maps, Discrete Contin. Dyn. Syst. Ser. B, 10 (2008), 537. doi: 10.3934/dcdsb.2008.10.537.
Kunihiko Kaneko, Fractalization of torus, Progr. Theoret. Phys., 71 (1984), 1112. doi: 10.1143/PTP.71.1112.
Gerhard Keller, A note on strange nonchaotic attractors, Fund. Math., 151 (1996), 139.
Ken-Ichi Inada, On a two-sector model of economic growth: Comments and a generalization, The Review of Economic Studies, 30 (1963), 119. doi: 10.2307/2295809.
Awadhesh Prasad, Surendra Singh Negi and Ramakrishna Ramaswamy, Strange nonchaotic attractors, Internat. J. Bifur. Chaos Appl. Sci. Engrg., 11 (2001), 291. doi: 10.1142/S0218127401002195.
Polarization memory rate as a metric to differentiate benign and malignant tissues

Daniel C. Louie,1,2,3,4,* Lioudmila Tchvialeva,1,2,4 Sunil Kalia,1,2,4 Harvey Lui,1,2,4 and Tim K. Lee1,2,3,4

1Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
2Photomedicine Institute, Vancouver Coastal Health Research Institute, Vancouver, BC V6T 1Z4, Canada
3School of Biomedical Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
4Departments of Cancer Control Research and Integrative Oncology, BC Cancer, Vancouver, BC V5Z 1L3, Canada
*Corresponding author: [email protected]

Biomed. Opt. Express 13, 620-632 (2022)

Non-invasive optical methods for cancer diagnostics, such as microscopy, spectroscopy, and polarimetry, are rapidly advancing. In this respect, finding new and powerful optical metrics is an indispensable task. Here we introduce polarization memory rate (PMR) as a sensitive metric for optical cancer diagnostics. PMR characterizes the preservation of circularly polarized light relative to linearly polarized light as light propagates in a medium. We hypothesize that because of well-known indicators associated with the morphological changes of cancer cells, like an enlarged nucleus size and higher chromatin density, PMR should be greater for cancerous than for non-cancerous tissues. A thorough literature review reveals how this difference arises from the anomalous depolarization behaviour of many biological tissues. In physical terms, though most biological tissue primarily exhibits Mie scattering, it typically exhibits Rayleigh depolarization. However, in cancerous tissue the Mie depolarization regime becomes more prominent than Rayleigh. Experimental evidence for this metric is found in a preliminary clinical study using a novel Stokes polarimetry probe. We conducted in vivo measurements of 20 benign, 28 malignant and 59 normal skin sites with a 660 nm laser diode. The median PMR values are significantly higher for cancer than for non-cancer, which supports our hypothesis. The reported fundamental differences in depolarization may persist for other types of cancer and create a conceptual basis for further developments in polarimetry applications for cancer detection.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Tissue biopsy, the acknowledged gold standard for cancer detection, is based on the visualization of morphological changes in cancer tissue on a cellular level.
Current optical imaging aims to offer non-invasive methods which could be automated and provide diagnostics in vivo. Several types of light-tissue interaction have generated a diversity of imaging techniques. For example, photographic, multispectral, narrow-band, fluorescence, light scattering spectroscopy, and multi-photon imaging exploit mostly absorption in endogenous chromophores, whereas molecular scattering is of foremost importance in Raman, diffuse optical tomography, and diffuse reflectance [1]. Despite the wide variety of existing optical methods, the search for new approaches with different contrast mechanisms continues to receive attention. Coherence and polarization are optical properties connected to light phase and could create additional channels of information. The interaction of polarized light and biological tissue is a growing area in biomedical optics, with substantial development achieved recently [2,3]. Specially arranged polarization carrying optical angular momentum opens new perspectives for practical biomedical applications [4]. Coherent-domain optical metrology, which includes polarimetry, currently provides the highest sensitivity and accuracy among optical techniques at the scale of the micro-structural pathological changes occurring at the sub-cellular level [5,6]. Traditionally, polarization has been used for optical gating, to contrast shallow photons, which undergo a smaller number of scattering events, against deeper ones [7]. To this point, the combination of polarized images visualizes skin pathology with enhanced contrast [8]. Recently, modification of tissue intrinsic polarization properties was linked to tissue pathological alterations, which makes polarimetry a useful tool for skin [9–11], liver [12], breast [13–15], lung [16], prostate [17,18], and colon [19] cancer detection.

The changes in polarization properties like depolarization, birefringence, and diattenuation are the subject of polarimetry. Depolarization is the most prominent effect when polarized light propagates in scattering media, and it exceeds birefringence and attenuation by an order of magnitude [20]. It is a direct result of scattering and rises quickly over the course of several scattering events. It is expected that the characteristic depolarization length within a medium is of the same order as the transport length and depends on bulk optical coefficients [21]. While bulk optical coefficients demonstrate some differences for cancerous vs non-cancerous tissues, this dissimilarity is not considered a reliable diagnostic criterion [22]. However, depolarization behaviour can be different even in tissues with the same bulk optical properties [23], which makes it a more reliable parameter for tissue differentiation. The most common metrics quantifying depolarization are the Degree of Polarization (DOP) and the Polarization Memory Rate (PMR). These properties will be defined and explained in section 2. PMR is a relatively new polarization metric, and its diagnostic potential was recently demonstrated though not fully explored. For example, PMR was able to separate pig skin damaged by irradiation from healthy tissue [24]. In the present paper we explore the diagnostic potential of PMR for skin cancer detection. We hypothesize that morphological changes in cancer cells will lead to a noticeable increase in PMR value for cancerous tissue compared to benign.

2. Physical background

Polarization is the orientation of the tip of the electrical vector of an oscillating light wave.
In the fully polarized case, this wave is modeled as an elliptical trajectory that includes linear and circular components. Light can also be depolarized, in which case the oscillations are randomly oriented. The polarization state of light can be quantified through a Stokes vector S containing four Stokes parameters as per Eq. (1) [3,25].

$$S = \begin{bmatrix} S_0 \\ S_1 \\ S_2 \\ S_3 \end{bmatrix} = \begin{bmatrix} I_0 + I_{90} \\ I_0 - I_{90} \\ I_{45} - I_{135} \\ I_{RH} - I_{LH} \end{bmatrix} \tag{1}$$

Each of the Stokes parameters corresponds to the difference between two orthogonal components of polarization. S0 is the total intensity of polarized light and can be calculated as the sum of any two orthogonal components. S1 is the difference between the horizontally and vertically linearly polarized components, S2 is the difference between the linearly polarized components at +45° and −45° from the horizontal, and S3 is the difference in intensity between the right- and left-hand circularly polarized components. The Stokes parameters can be determined from six intensity measurements. In addition, due to the relationships between the Stokes parameters, it is possible to measure a Stokes vector using only four intensity measurements. From the Stokes vector, one can calculate the characteristics of the polarization ellipse and depolarization metrics such as the degree of polarization (DOP) and its component parts, the degree of circular polarization (DOCP) and the degree of linear polarization (DOLP), as in Eq. (2) [3,25].

$$DOCP = \frac{\sqrt{S_3^2}}{S_0}, \quad DOLP = \frac{\sqrt{S_1^2 + S_2^2}}{S_0}, \quad DOP = \frac{\sqrt{S_1^2 + S_2^2 + S_3^2}}{S_0} \tag{2}$$

The most common metric used to quantify depolarization behavior is the Degree of Polarization (DOP), which represents the fraction of backscattered light that remains polarized. It equals 0 for fully depolarized light, 1 for fully polarized light, and lies between 0 and 1 for partially polarized light. A non-negative form of DOCP is adopted in our model, noting that other conventions allow a negative DOCP to indicate the helicity of circular light [26]. However, while DOP depends on tissue optical properties, it can also vary due to light source characteristics, initial light polarization, and detection geometry [27]. As a result, DOP is difficult to standardize across different tissue polarimetry machines. The metric we focus on in this paper is the Polarization Memory Rate (PMR), so named after the concept of polarization memory introduced in [28], which is the ability of light to remain polarized and resist decay as it travels through a medium. PMR, as a measure of the relative rates of linear and circular polarization decay, was explored as a metric separating the Mie and Rayleigh scattering regimes in [29]. In the present paper we will follow [24], where PMR is defined as the ratio of the DOP of the circular component of polarized light (DOCP) to the DOP of the linear component of polarized light (DOLP), as in Eq. (3).

$$PMR = \frac{DOCP}{DOLP} \tag{3}$$

If the probing light has stronger circular polarization memory, PMR > 1; if linear polarization is better preserved, PMR < 1.
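To make these definitions concrete, the short sketch below assembles a Stokes vector from the six intensity measurements of Eq. (1) and evaluates DOCP, DOLP, DOP, and PMR per Eqs. (2)-(3). It is a minimal illustration of the formulas only, not the probe's acquisition code; the function names and the example intensities are our own hypothetical choices.

```python
import numpy as np

def stokes_from_intensities(I0, I90, I45, I135, IRH, ILH):
    """Assemble a Stokes vector [S0, S1, S2, S3] from six intensity
    measurements behind ideal analyzers, following Eq. (1)."""
    return np.array([
        I0 + I90,    # S0: total intensity
        I0 - I90,    # S1: horizontal minus vertical linear
        I45 - I135,  # S2: +45 deg minus -45 deg linear
        IRH - ILH,   # S3: right minus left circular
    ])

def depolarization_metrics(S):
    """Return (DOCP, DOLP, DOP, PMR) as defined in Eqs. (2)-(3),
    using the non-negative DOCP convention adopted in the text."""
    S0, S1, S2, S3 = S
    docp = np.sqrt(S3**2) / S0
    dolp = np.sqrt(S1**2 + S2**2) / S0
    dop = np.sqrt(S1**2 + S2**2 + S3**2) / S0
    return docp, dolp, dop, docp / dolp

# Hypothetical backscattered intensities (arbitrary units)
S = stokes_from_intensities(I0=0.62, I90=0.38, I45=0.55, I135=0.45,
                            IRH=0.57, ILH=0.43)
docp, dolp, dop, pmr = depolarization_metrics(S)
print(f"DOCP={docp:.3f} DOLP={dolp:.3f} DOP={dop:.3f} PMR={pmr:.2f}")
```

For these hypothetical intensities the result is PMR ≈ 0.54, i.e. linear polarization memory dominates, as is reported below for most skin measurements.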
As a relative measure, it is possible that this metric is more easily comparable between devices than DOP, and less affected by surface roughness [30], though these relationships will need to be verified in future study.

3. Physical model

3.1 Scattering regime

Since depolarization occurs due to scattering, it is important to introduce two basic scattering models, which can then lead to specific depolarization models. Small particles (where particle size < light wavelength) demonstrate isotropic scattering in the Rayleigh regime, whereas large particles (where size > wavelength) have a large forward scattering component in the Mie regime [31]. The boundary between the scattering regimes is evaluated by calculating the size parameter

$$X = \frac{\pi d\, n_m}{\lambda} \tag{4}$$

where d is the diameter of a spherical scatterer, nm is the refractive index of the surrounding media, and λ is the wavelength of the probing light [31]. Mie anisotropic scattering occurs when X >> 1, while Rayleigh isotropic scattering occurs when X << 1. The Rayleigh and Mie regimes represent two fundamental physical models of scattering with different theory and outcomes, which also encompass the depolarization of light [32]. However, depolarization behaviour is not as closely linked to scatterer size. Due to other factors described below, it is possible for Mie-sized particles to display depolarization found in the Rayleigh regime. Therefore the Rayleigh and Mie depolarization regimes must be identified independently. For a qualitative identification of these two depolarization regimes, let us explore the mechanisms of depolarization using the concept of electromagnetic waves.

3.2 Depolarization mechanisms

Two mechanisms of depolarization, "geometrical" and "dynamical", are introduced in [33]. Geometrical depolarization is caused by changes in the polarization plane orientation as a wave propagates in the scattering media. The random trajectory change caused by scattering results in randomized orientations of the wave polarization plane, and this increased randomness causes light to depolarize. Dynamical depolarization is caused by changes in the complex amplitudes of cross-polarized wave components (those parallel and perpendicular to the scattering plane). This type of depolarization occurs as these amplitudes disperse after multiple scattering events. A key differentiating factor is that the linear components of polarized light undergo both depolarization mechanisms, whereas the circularly polarized components undergo only dynamical depolarization, because circular polarization is independent of plane orientation due to its axial symmetry.

3.3 Depolarization regime vs. scattering regime

Relating the model of depolarization mechanisms to the scattering regimes, it was shown in [33] that the depolarization of light in anisotropic media composed of large Mie scatterers is dominated by geometrical depolarization. In such media, single scattering predominantly occurs in the forward direction, which more strongly affects polarization plane rotation than the amplitudes of the cross-polarized components. This leads to a dominance of geometrical over dynamical depolarization in the Mie anisotropic regime, resulting in a faster decay of the linear polarization component of light compared to the circular one per scattering event [28,34]. The opposite tendency is observed in the isotropic Rayleigh regime, where forward and backward scattering are nearly equally probable.
Backward scattering introduces an additional helicity flip in circular polarization, which results in a faster decay of the circularly polarized components relative to the linear ones [26]. These differences in light depolarization make PMR useful in characterizing the Mie and Rayleigh scattering regimes. Mie media with relatively large scattering particles should have a predominant circular polarization memory with PMR > 1; conversely, Rayleigh media with relatively small scatterers are expected to preserve linear polarization better, with PMR < 1 [29]. The scattering of light in biological tissue is a complex process involving particles of essentially different sizes and two types of scattering occurring simultaneously. Yet, it is possible to identify the average scattering regime by the value of the bulk anisotropy index g, which approaches 1 and 0 for Mie and Rayleigh scattering respectively [31]. Most biological tissues demonstrate anisotropic scattering with g (in the visible region, 600 nm) in the range of 0.7-0.9 [35–37]. This is typical of scatterers in the Mie scattering regime, and such tissues are expected to exhibit circular polarization memory with PMR > 1. However, recent studies show that colon, bladder, muscle, skin, liver, fat and many other tissues maintain linear polarization memory with PMR < 1 [19,24,38], which would indicate anomalous depolarization in the Rayleigh depolarization regime. Anomalous depolarization is a specific phenomenon that occurs due to a low relative refractive index between the scatterer and the medium, as described in section 3.4 below. Note that the preservation of linear polarization could also appear due to the backscattering observation mode [39]. Another important factor that could maintain linear polarization memory preferentially to circular polarization memory in tissue is the considerable number of small particles present in the extracellular matrix and cell compartments, like mitochondria, lysosomes, and collagen fibrils [19,40,41]. Ultimately, we expect that a combination of device geometry, ordinary depolarization from small particles, and anomalous depolarization from large particles contributes to the predominant Rayleigh depolarization [42]. Beyond the effect of malignancy on the cell nucleus, it is also necessary to consider stromal changes, such as the disordering of extracellular collagen. These changes are often observed in tissue polarimetry, visible through the birefringence and diattenuation imaging channels [20,43], though they are less relevant to depolarization mechanisms [44]. As stated above, the scattering of light in biological tissue is complex and multifaceted, and as such the explanation offered by our model limits its scope to depolarization caused by changes in the cell nucleus and refractive indices.

3.4 Anomalous depolarization

Anomalous depolarization behavior is linked to a low relative refractive index m = ns/nm, where ns and nm are the refractive indices of the scatterers and of the surrounding media respectively [45,42]. The Mie and Rayleigh depolarization regimes are better separated using a reduced size parameter $X'$ [42] dependent on m:

$$X' = (m - 1)X \tag{5}$$

The above cited work states that Mie media with $X' < 2.5$ and sizes X >> 1 will display Rayleigh depolarization with linear polarization memory, if (m − 1) << 1. The relative refractive index of cell compartments mostly satisfies the condition for anomalous depolarization behaviour [46].
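As a numerical companion to Eqs. (4)-(5), the sketch below computes X and X' for a spherical scatterer and labels the expected depolarization regime using the X' = 2.5 boundary quoted above (valid for X >> 1 and (m − 1) << 1). The function name and the example of a 6 µm scatterer in cytoplasm are hypothetical illustrations, not values from the cited studies.

```python
import math

XPRIME_BOUNDARY = 2.5  # Rayleigh/Mie depolarization boundary from [42]

def depolarization_regime(d_um, wavelength_um, n_scatterer, n_medium):
    """Classify the expected depolarization regime of a spherical scatterer
    from the reduced size parameter X' = (m - 1) * X (Eqs. 4 and 5)."""
    X = math.pi * d_um * n_medium / wavelength_um  # size parameter, Eq. (4)
    m = n_scatterer / n_medium                     # relative refractive index
    X_prime = (m - 1.0) * X                        # reduced size parameter, Eq. (5)
    regime = "Mie (PMR > 1)" if X_prime > XPRIME_BOUNDARY else "Rayleigh (PMR < 1)"
    return X, X_prime, regime

# Hypothetical example: a 6 um scatterer in cytoplasm probed at 0.63 um
X, X_prime, regime = depolarization_regime(6.0, 0.63, n_scatterer=1.41, n_medium=1.37)
print(f"X = {X:.0f}, X' = {X_prime:.2f} -> {regime}")
```

Note that the classification hinges on X', not X: the example scatterer is firmly in the Mie scattering regime (X ≈ 41), yet its small refractive index contrast places it in the Rayleigh depolarization regime.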
The depolarization areas are schematically shown in Fig. 1. The usual Rayleigh and Mie scattering regions occur to the left and right of the indicated scattering regime boundary; to the right of the boundary, all scattering is in the Mie regime. However, the depolarization regimes are separated by a different boundary, with the Rayleigh and Mie depolarization zones depicted in pink and blue respectively. The transition area between the two regimes is indicated in white. It is located under the borderline $X' = 2.5$ introduced in [42] and covers the transition and Mie scattering regions. Note that at higher m the depolarization regime boundary joins the scattering regime boundary.

Fig. 1. The diagram of the two fundamental regimes of depolarization in terms of relative index of refraction m and size parameter X [42]. The Rayleigh and Mie depolarization zones are depicted in pink and blue correspondingly. The transitional area between the two regimes is indicated in white.

4. Evidence and literature

Theoretical support for our hypothesis can be obtained from published data. It is well known that cancerous and precancerous cells present with a significant enlargement in nucleus diameter (increased X) and in the mass density of cell nuclei [47]. Cells with denser nuclei should have an increased refractive index ns (which will increase m); this has been confirmed for colon [48], brain [49], and breast cancer [50]. Larger X and m could push the depolarization regime toward Mie depolarization, which is a change measurable with PMR. For normal intestinal epithelial cells the diameters were obtained in the range of d = 5.0 ± 0.5 µm with m = 1.035. For T84 intestinal malignant cells the corresponding values were d = 9.8 ± 1.5 µm and m = 1.040 [47]. Taking into account the typical refractive index of cell cytoplasm nm = 1.37 (λ = 0.63 µm) [46] and using Eqs. (4) and (5), we can evaluate the approximate size parameters for the normal and cancerous nucleus as X = 34 ± 3 and X = 67 ± 10 respectively. Both numbers are much greater than 1 and correspond to the Mie scattering regime, as expected of biological tissue. However, the reduced size parameter for the normal cells, $X'$ = 1.2 ± 0.1, is less than the boundary value 2.5 and is linked to the Rayleigh depolarization regime with PMR < 1. As for the cancerous cells, the reduced size parameter $X'$ = 2.7 ± 0.4 exceeds the boundary between the two depolarization regimes, which should lead to PMR > 1. Using the refractive indices of brain tissues measured in [49], assuming similar nucleus diameters and nm, and following the calculations shown for the first example, it is possible to set the same approximate size parameters for gray matter, X = 34 (d = 5.0, ns = 1.39, nm = 1.37), and for glioblastoma, X = 67 (d = 9.8, ns = 1.45, nm = 1.37). As in the case of the intestinal epithelial cells, these numbers are associated with the Mie scattering regime. Due to the greater diversity of relative refractive indices m = ns/nm for brain tissue, the reduced size parameters are about $X'$ = 0.5 (Rayleigh depolarization, PMR < 1) for normal tissue and $X'$ = 3.9 for brain cancer (Mie depolarization, PMR > 1). These findings are summarized in Table 1. By the classical definition, one could classify both normal and cancerous tissue as Mie scatterers. However, the new classification based on the reduced size parameter $X'$ places cancerous tissue in the Mie depolarization regime while non-cancerous tissue mostly follows Rayleigh depolarization models.
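The arithmetic behind these size parameters is easy to verify; the self-contained script below recomputes X and X' for the four tissue cases just discussed, using the central values of the cited diameters and refractive indices (a sketch for checking the worked numbers, not a source of new data).

```python
import math

WAVELENGTH_UM = 0.63  # probing wavelength used in the text

# (label, nucleus diameter in um, scatterer index ns, medium index nm)
# For the intestinal cells m is given directly, so ns = m * nm.
cases = [
    ("normal intestinal epithelium", 5.0, 1.035 * 1.37, 1.37),
    ("T84 malignant cells",          9.8, 1.040 * 1.37, 1.37),
    ("brain gray matter",            5.0, 1.39,         1.37),
    ("glioblastoma",                 9.8, 1.45,         1.37),
]

for label, d, ns, nm in cases:
    X = math.pi * d * nm / WAVELENGTH_UM   # Eq. (4)
    X_prime = (ns / nm - 1.0) * X          # Eq. (5)
    regime = "Mie" if X_prime > 2.5 else "Rayleigh"
    print(f"{label:30s} X = {X:3.0f}  X' = {X_prime:.1f}  -> {regime} depolarization")
```

Running this reproduces the values quoted above: X ≈ 34 and 67, with X' ≈ 1.2 and 0.5 for the non-cancerous cases against X' ≈ 2.7 and 3.9 for the malignant ones.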
This fundamental shift could be evaluated by the Polarization Memory Rate, which exposes the relative decay of circular vs linear polarization as light propagates in a tissue.

Table 1. Summary of benign and malignant tissue differences in literature

5. Experimental background and methods

Practical implementation of polarimetry for bulk tissue cancer detection has been explored through a variety of methods [3], where measurement schemes are often tailored to the intended target site, whether ex vivo [51,52] or in vivo [43,53]. As shown in each example, complete Mueller matrix measurement is now the most common polarimetric analysis, allowing for the calculation of a large variety of depolarization metrics. Any of these metrics, or a combination thereof, may be the optimal method to measure the proposed fundamental shift in depolarization between benign and malignant tissues. Recognizing that PMR is one of many depolarization metrics, the present study measures PMR to take advantage of the simplicity and rapid measurement speed of Stokes vector polarimetry. The more lightweight instrumentation of Stokes vector polarimetry offers greater flexibility for applications on in vivo body sites other than skin, and ease of implementation into the clinical workflow. In addition, our instrumentation follows a line of research measuring polarization speckle. Mueller matrix imaging systems, such as in [54], optimize their images by employing mechanisms to erase speckle. With an aim to embrace speckle, a short measurement time on the order of 5 ms is required to eliminate the effect of movement when measuring a speckle pattern in vivo. It is a prominent future goal to measure the Mueller matrix in tandem with speckle, and to further investigate the fundamental depolarization shift through a Mueller matrix analysis.

The in vivo measurements in this study were taken by the handheld one-shot Stokes polarimetry probe shown in Fig. 2. This device offers rapid Stokes vector measurements and includes the ability to alternate between linear and circular initial polarization states. In the operational geometry of this probe, the backscattered laser light from the target site forms a far-field speckle pattern in the detector plane, which is expected to be statistically uniform over the four detectors. Rather than four division-in-time measurements at one point in space, measurement time is decreased by employing four division-in-space measurements at one point in time. The polarization state analyzer (PSA) is comprised of four photodiodes with film polarizers (Bolder Vision Optik) placed in front. A polarization state generator (PSG) comprised of a wire grid polarizer and a removable quarter-wave plate film was used to maintain the specific polarization of the diode laser (660 nm, 120 mW, Thorlabs, HL6545MG) in either the linear or right-hand circular polarization state. The diode laser is oriented with respect to the PSG such that a maximum intensity is transmitted through the system. The device is aligned on a skin target through the use of an adhesive sticker commonly used to identify points of interest in dermatological practice. The maximum outer dimensions of the probe are 20 cm in length and 5 cm in diameter. The acquisition time is 4.7 milliseconds. More details on the probe construction and operation can be found in [55].

Fig. 2. The portable polarization probe, diagram and photograph.

6. Experimental results

For the experimental validation we measured PMR for skin lesions in vivo.
Skin is an easily accessible organ providing an opportunity to explore cancerous lesions, benign lesions, and normal healthy skin tissue; the latter two tissue types are both examples of non-cancerous morphology. After building the Stokes polarization probe, we conducted a preliminary clinical study involving 71 volunteer patients at the Vancouver General Hospital Skin Care Centre in Vancouver, Canada. The study was approved by the University of British Columbia research ethics board (H06-&0281) and includes 6 malignant melanoma (MM), 14 basal cell carcinoma (BCC), 8 squamous cell carcinoma (SCC), 9 actinic keratosis (AK), 5 benign nevus (BN), 15 seborrheic keratosis (SK), and 59 normal skin cases. The diagnoses of the malignant cases were pathologically proven. Skin sites were grouped into three categories: cancerous lesions (28), benign lesions (20) and normal skin sites taken contralateral to the lesion (59). Actinic keratosis was not grouped as it is an intermediate pre-cancerous lesion. Using the Stokes probe (described in Methods), we measured DOCP and DOLP and calculated PMR for each case. The PMR values for each lesion type, as well as for the grouped categories of cancer (MM, BCC, SCC), benign (BN, SK) and normal skin sites, are shown in Fig. 3.

Fig. 3. Mean and standard error shown for each type of tested lesion (A), and categories of cancer, benign, and normal skin (B).

Most points have PMR values less than 1, indicating Rayleigh depolarization; however, the measurements from cancerous lesions show higher PMR values, indicating a shift toward Mie depolarization. The PMR measurements for the benign pathologies are similar to those for normal skin, with close mean values. Most PMR data for benign lesions are smaller than for skin cancer. Most of the points from cancerous lesions, and in particular MM, have PMR values closer to or greater than one compared with the benign lesion points, which demonstrates an approach to the Mie depolarization limit. The median PMR for cancer is greater than the median PMR for non-cancerous skin sites. In addition, measurements from actinic keratosis, an intermediate pre-cancerous lesion, lie between the cancerous and benign categories. Because the data are not normally distributed (according to a Shapiro-Wilk test), a non-parametric Mann-Whitney U-test was employed, yielding p = 0.018 for cancerous vs benign lesions and p = 0.005 for cancerous lesions vs. normal skin sites from benign lesion patients. These skin lesion data indicate that PMR could potentially detect specific cancer pathology. The whisker plot in Fig. 4 displays median values and inter-quartile range. Visually similar pairs of MM-BN and MM-SK images are shown with their PMR values indicated. An MM that resembles SK or BN could result in the misdiagnosis of a deadly skin cancer by practitioners. The results of the preliminary study validate the potential of the Polarization Memory Rate as a metric for cancer differentiation. PMR demonstrates a significant 20% increase in median value for cancerous lesions compared to benign lesions, which indicates behavior closer to the Mie depolarization regime. However, most of the measured PMR values for cancer still comply with the Rayleigh depolarization limit. In the Discussion section we explore the factors influencing PMR and address the reasons why we primarily observe PMR less than one.
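For readers who wish to reproduce this style of analysis, a minimal sketch of the normality check and the non-parametric comparison is given below using standard SciPy routines. The PMR arrays are illustrative placeholders only, not the study data.

```python
import numpy as np
from scipy import stats

# Placeholder PMR measurements (NOT the clinical data of this study)
pmr_cancer = np.array([0.92, 1.05, 0.88, 0.97, 1.10, 0.85, 0.99, 1.02])
pmr_benign = np.array([0.78, 0.83, 0.74, 0.80, 0.88, 0.71, 0.76])

# Shapiro-Wilk: a small p-value signals departure from normality,
# motivating the non-parametric test that follows.
for name, sample in (("cancer", pmr_cancer), ("benign", pmr_benign)):
    w, p = stats.shapiro(sample)
    print(f"Shapiro-Wilk ({name}): W = {w:.3f}, p = {p:.3f}")

# Mann-Whitney U compares the two groups without assuming normality.
u, p = stats.mannwhitneyu(pmr_cancer, pmr_benign, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")
print(f"median PMR: cancer {np.median(pmr_cancer):.2f}, "
      f"benign {np.median(pmr_benign):.2f}")
```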
Fig. 4. Whisker plot of cancerous and benign categories indicating medians and interquartile range. Enhanced markers indicate pairs (Malignant Melanoma-Nevus and Malignant Melanoma-Seborrheic Keratosis) with visually similar clinical presentation. The corresponding images for MM are on the left and for the benign lesions on the right.

7. Discussion

7.1 Discussion of results

As seen in Fig. 4, most measurements have PMR < 1, indicating the Rayleigh depolarization regime, even though theoretically we expect PMR points to appear in the Mie region. There are a few possible reasons. A first factor reducing PMR might be the wavelength of the laser. The red diode laser with wavelength 660 nm has a penetration depth on the order of a millimeter [56] and an emerging light spot of a few millimeters. Though thickness and depth data were not collected for all lesions in this study (as the benign lesions were not biopsied), the Breslow thicknesses of the collected melanoma lesions were each 1.5 mm or below. The majority of the cancerous lesions examined in this study were detected early by dermatologists and can be expected to be thin or superficial. As such, the propagating trajectories likely extend beyond the lesion volume, adding a significant portion of healthy tissue to the total signal. Under such circumstances the backscattered light will possibly obey the Rayleigh depolarization regime on average, as is typical of healthy skin. To address another aspect of geometry, we expect that lesion inhomogeneity would make only a minor contribution to the measurement, as the collected field is comprised of light emerging from a total volume that surpasses that of the lesion.

The second reason for the observed Rayleigh depolarization could be connected to a non-even distribution of depolarization over the backscattered area. This is not connected to sample inhomogeneity, but rather to the general behaviour of depolarization in backscattering geometry. Both experiments and Monte Carlo simulations show that for Mie depolarization media there is a local increase in linear polarization memory at the central zones of the emerging spot [39]. The size of this anomalous Rayleigh depolarization area is about the transport mean free path of the turbid medium, and it is located at the central area with maximal backscattered intensity. In this case the central backscattering signal with strong linear polarization memory might contribute to the total signal as well.

These PMR-reducing factors are mostly related to the detector placement within the probe geometry, which must ensure symmetrical detection of a uniform field. Notably, this aspect of the detection geometry is uniquely subject to user errors. A non-perpendicular orientation angle of the device with respect to the target could break the symmetry of the observed backscattered field, introducing further distortions. Finding the optimal light source wavelength for penetration control, and a device scheme resistant to orientation variance, will be a future engineering pursuit. However, even under these unfavorable experimental conditions, PMR demonstrates some diagnostic ability linked to the specific morphological modifications in cancer cells. We assume that this is the result of a reasonable choice of the laser type and probe geometry. Choosing coherent laser light as the light source for this polarimetry device results in the formation of a stochastic interference pattern (speckle pattern) on the detector area.
Speckle was initially treated as noise in many imaging techniques, but speckle patterns contain additional information because they encode light phase modifications; they have recently proven to have potential for medical diagnostics [57], becoming a quickly developing area in coherent optics [58]. Describing the components of a speckle pattern: each bright area (speckle) has an individual state of polarization (SOP), which is elliptical in general, but the parameters and orientation of the polarization ellipses may vary from speckle to speckle, forming a polarization speckle pattern [59]. The ability to evaluate skin lesions using speckle patterns has been demonstrated [60], though their discriminative ability has been strongly linked to the lesion's surface roughness [61]. Indeed, the influence of surface roughness on the DOP measurements of the probe used in this study has been explored [11], and there is a negative correlation between surface roughness and DOP. However, a subsequent examination of the effect of roughness on PMR, a metric comprised of two relative DOP measurements, indicates that the influence of surface roughness on the measurements is mitigated [30], though this mechanism is still to be explored in detail. To embrace speckle, and the sensitivity it offers to our measurements, we must design the probe with speckle behavior in mind.

7.2 Influence of speckle on results

There are three factors influencing polarization speckle: the light source, the scattering tissue, and the geometry of the device. To enhance the detection of the biological markers of interest, the first and third factors should be carefully addressed. Polarization is a statistical phenomenon mutually connected with light coherence [62]. Because the polarization and coherence correlation functions are of the same scale, light depolarization occurs at propagation distances comparable to light decorrelation, which is of the order of the temporal coherence length [63]. Most photons with trajectories close to the coherence length will participate in polarization speckle formation, while the deeper photons will produce a uniform intensity bias. In order to convey relevant tissue information, the average photon path length in tissue must match the order of the temporal coherence length of the light source [64], and therefore we require a laser with a coherence length of a few millimeters to match the expected skin depolarization length [56]. In our probe we use a low-coherence diode laser with a coherence length of ∼3 mm. The emerging light will be partially coherent and partially polarized, producing an informative value of DOP between 0 and 1. The diameter of the laser beam and the geometrical characteristics of the detection scheme define the spatial coherence length of the light, which is about the average size of a polarization speckle [65]. The number of speckles on the detector (equal to the detector area divided by the coherence area) should be optimized. As shown in [66], the spatial degree of polarization depends on the distribution of individual speckle SOPs over the speckle field. Therefore, if the number of speckles is high, in the case of single-point measurements (as with photodiode detectors) the spatial averaging over the detector area could affect the measured DOP and degrade the information about the tissue.
Alternatively, if the number of speckles is low, the device will measure the average light intensity, DOP, and PMR with a large error, which is inversely proportional to the square root of the number of speckles [67]. For future optimization of the spatial coherence length of the laser field we should map the spatial distributions DOP(x,y) and PMR(x,y) and explore their statistical moments.

8. Conclusion

Our hypothesis is that morphological changes in cancer cells will lead to a noticeable increase in PMR value for cancerous tissue compared to benign. This is due to the difference in depolarization regime displayed by the scattering in benign and malignant cells. Our results present a numerical validation of this hypothesis based on published data and an experimental validation based on measurements of skin tissue in vivo. Having chosen human skin as a convenient and easily accessible object, we designed a handheld polarization probe and conducted a preliminary clinical study including malignant lesions, benign lesions, and non-lesion skin sites. We discovered a statistically significant increase in PMR for malignant lesions relative to non-malignant sites, indicating that depolarization in skin cancer, compared to non-cancerous tissues, approaches the Mie depolarization regime. However, we recognize that this study is in its earliest stage, and we propose further testing to determine the extent to which this ultimate difference in PMR is measurable. We posit that this difference may be measurable in types of cancer beyond skin, to create a conceptual basis on which to advance the greater field of medical polarimetry.

Funding. VGH and UBC Hospital Foundation; Canadian Dermatology Foundation; BC Cancer Foundation; Natural Sciences and Engineering Research Council of Canada (2017-04932); Canadian Melanoma Foundation.

Acknowledgments. This work is supported in part by grants from the BC Cancer Foundation, Canadian Melanoma Foundation, Canadian Dermatology Foundation, NSERC, and VGH and UBC Hospital Foundation. We thank all our study participants, and the clinical staff at the Vancouver General Hospital Skin Care Centre, for their time and other contributions to this project.

Disclosures. The authors declare no relevant conflicts of interest.

Data availability. Data underlying the results presented in this paper contains confidential patient information and are not publicly available at this time, but may be obtained from the authors upon reasonable request.

References
1. D. J. Waterhouse, C. R. M. Fitzpatrick, B. W. Pogue, J. P. B. O'Connor, and S. E. Bohndiek, "A roadmap for the clinical implementation of optical-imaging biomarkers," Nat. Biomed. Eng. 3(5), 339–353 (2019).
2. J. C. Ramella-Roman, I. Saytashev, and M. Piccini, "A review of polarization-based imaging technologies for clinical and preclinical applications," J. Opt. 22(12), 123001 (2020).
3. C. He, H. He, J. Chang, B. Chen, H. Ma, and M. J. Booth, "Polarisation optics for biomedical and clinical applications: a review," Light: Sci. Appl. 10, 194 (2021).
4. I. Meglinski, T. Novikova, and K. Dholakia, "Polarization and Orbital Angular Momentum of Light in Biomedical Applications: feature issue introduction," Biomed. Opt. Express 12(10), 6255–6258 (2021).
5. J. Qi and D. Elson, "Mueller polarimetric imaging for surgical and diagnostic applications: A review," J. Biophotonics 10(10), 950–982 (2017).
6. V. V. Tuchin, L. Wang, and D. A. Zimnyakov, Optical Polarization in Biomedical Applications (Springer-Verlag Berlin Heidelberg, 2006).
7. L. V. Wang, G. L. Coté, and S. L. Jacques, "Special section guest editorial: tissue polarimetry," J. Biomed. Opt. 7(3), 278 (2002).
8. S. L. Jacques, J. C. Ramella-Roman, and K. Lee, "Imaging skin pathology with polarized light," J. Biomed. Opt. 7(3), 329–340 (2002).
9. P. Ghassemi, P. Lemaillet, T. Germer, J. Shupp, S. Venna, M. Boisvert, K. Flanagan, M. Jordan, and J. Ramella-Roman, "Out-of-plane Stokes imaging polarimeter for early skin cancer diagnosis," J. Biomed. Opt. 17(7), 760141 (2012).
10. J. F. de Boer, C. K. Hitzenberger, and Y. Yasuno, "Polarization sensitive optical coherence tomography - a review," Biomed. Opt. Express 8(3), 1838–1873 (2017).
11. D. C. Louie, J. Phillips, L. Tchvialeva, S. Kalia, H. Lui, W. Wang, and T. Lee, "Degree of optical polarization as a tool for detecting melanoma: proof of principle," J. Biomed. Opt. 23(12), 1 (2018).
12. Y. Wang, H. He, J. Chang, S. Lui, M. Li, N. Zeng, J. Wu, and H. Ma, "Mueller matrix microscope: a quantitative tool to facilitate detection and fibrosis scorings of liver cirrhosis and cancer tissues," J. Biomed. Opt. 21(7), 71112 (2016).
13. R. Patel, A. Khan, R. Quinlan, and A. Yaroslavsky, "Polarization-sensitive multimodal imaging for detecting breast cancer," Cancer Res. 74(17), 4685–4693 (2014).
14. M. Villiger, D. Lorenser, R. McLaughlin, B. Quirk, R. Kirk, B. Bouma, and D. Sampson, "Deep tissue volume imaging of birefringence through fibre-optic needle probes for the delineation of breast tumour," Sci. Rep. 6(1), 28771 (2016).
15. A. Golaraei, L. Kontenis, R. Cisek, D. Tokarz, S. Done, B. Wilson, and V. Barzda, "Changes of collagen ultrastructure in breast cancer tissue determined by second-harmonic generation double Stokes-Mueller polarimetric microscopy," Biomed. Opt. Express 7(10), 4054 (2016).
16. B. Kunnen, C. Macdonald, A. Doronin, S. Jacques, M. Eccles, and I. Meglinski, "Application of circularly polarized light for non-invasive diagnosis of cancerous tissues and turbid tissue-like scattering media," J. Biophotonics 8(4), 317–323 (2015).
17. V. Ushenko, A. Sdobnov, A. Syvokorovskava, A. Dubolazov, O. Vanchulyak, A. Ushenko, Y. Ushenko, M. Gorsky, M. Sidor, A. Bykov, and I. Meglinski, "3D mueller-matrix diffusive tomography of polycrystalline blood films for cancer diagnosis," Photonics 5(4), 54 (2018).
18. V. A. Ushenko, B. T. Hogan, A. Dubolazov, G. Piavchenko, S. L. Kuznetsov, A. G. Ushenko, Y. O. Ushenko, M. Gorsky, A. Bykov, and I. Meglinski, "3D Mueller matrix mapping of layered distributions of depolarisation degree for analysis of prostate adenoma and carcinoma diffuse tissues," Nat. Rev. Cancer 11(1), 5162 (2021).
19. T. Novikova, A. Pierangelo, A. De Martino, A. Benali, and P. Validire, "Polarimetric imaging for cancer diagnosis and staging," Opt. Photonics News 23(10), 26–33 (2012).
20. N. Ghosh and I. Vitkin, "Tissue polarimetry: concepts, challenges, applications, and outlook," J. Biomed. Opt. 16(11), 110801 (2011).
21. V. Tuchin, Tissue Optics: Light Scattering Methods and Instruments for Medical Diagnosis (SPIE, 2000).
22. A. Garcia-Uribe, J. Zou, M. Duvic, J. H. Cho-Vega, V. Prieto, and L. Wang, "In vivo diagnosis of melanoma and nonmelanoma skin cancer using oblique incidence diffuse reflectance spectrometry," Cancer Res. 72(11), 2738–2745 (2012).
23. A. Manzoor, S. Alali, A. Kim, M. F. G. Wood, M. Ikram, and I. A. Vitkin, "Do different turbid media with matched bulk optical properties also exhibit similar polarization properties?" Biomed. Opt. Express 2(12), 3248–3258 (2011).
24. F. Boulvert, Y. Piderriere, G. Le Brun, B. Le Jeune, and J. Cariou, "Comparison of entropy and polarization memory rate behavior through a study of weakly-anisotropic depolarizing biotissues," Opt. Commun. 272(2), 534–538 (2007). [CrossRef]
25. A. Vitkin, N. Ghosh, and A. De Martino, "Tissue Polarimetry," in Photonics: Scientific Foundations, Technology and Applications, Volume IV (John Wiley & Sons, Inc., 2015), pp. 203–321.
26. M. D. Singh and I. A. Vitkin, "Discriminating turbid media by scatterer size and scattering coefficient using backscattered linearly and circularly polarized light," Biomed. Opt. Express 12(11), 6831–6843 (2021). [CrossRef]
27. C. Amra, M. Zerrad, L. Siozade, G. Georges, and C. Deumié, "Partial polarization of light induced by random defects at surfaces or bulks," Opt. Express 16(14), 10372–10354 (2008). [CrossRef]
28. F. C. MacKintosh, J. X. Zhu, D. J. Pine, and D. A. Weitz, "Polarization memory of multiply scattered light," Phys. Rev. B 40(13), 9342–9345 (1989). [CrossRef]
29. D. Bicout, C. Brosseau, A. Martinez, and J. Schmitt, "Depolarization of multiply scattered waves by spherical diffusers: Influence of the size parameter," Phys. Rev. E 49(2), 1767–1770 (1994). [CrossRef]
30. L. Tchvialeva, D. C. Louie, Y. Wang, S. Kalia, H. Lui, and T. K. Lee, "Polarization-based skin cancer detection in vivo," in Proc. SPIE, Fifteenth International Conference on Correlation Optics (2021).
31. A. Ishimaru, Wave Propagation and Scattering in Random Media (Academic Press, 1978).
32. L. F. Rojas-Ochoa, D. Lacoste, R. Lenke, P. Schurtenberger, and F. Scheffold, "Depolarization of backscattered linearly polarized light," J. Opt. Soc. Am. A 21(9), 1799–1804 (2004). [CrossRef]
33. E. E. Gorodnichev, A. I. Kuzovlev, and D. B. Rogozkin, "Multiple scattering of polarized light in a turbid medium," J. Exp. Theor. Phys. 104(2), 319–341 (2007). [CrossRef]
34. G. Lewis, D. Jordan, and J. Roberts, "Backscattering target detection in a turbid medium by polarization discrimination," Appl. Opt. 38(18), 3937–3944 (1999). [CrossRef]
35. A. N. Bashkatov, E. A. Genina, and V. V. Tuchin, "Tissue Optical Properties," in Handbook of Biomedical Optics (Taylor & Francis Group, 2011), pp. 67–100.
36. A. N. Bashkatov, E. A. Genina, V. Kochubey, V. S. Rubtsov, E. A. Kolesnikova, and V. V. Tuchin, "Optical properties of human colon tissues in the 350–2500 nm spectral range," Quantum Electron. 44(8), 779–784 (2014). [CrossRef]
37. S. L. Jacques, "Optical properties of biological tissues: a review," Phys. Med. Biol. 58(11), R37–R61 (2013). [CrossRef]
38. V. Sankaran, J. Walsh, and D. Maitland, "Comparative study of polarized light propagation in biologic tissues," J. Biomed. Opt. 7(3), 300–306 (2002). [CrossRef]
39. X. Wang, J. Lai, Y. Song, and Z. Li, "The anomalous depolarization anisotropy in the central backscattering area for turbid medium with Mie scatterers," J. Opt. 20(5), 55601 (2018). [CrossRef]
40. C. A. Nader, R. Nassif, F. Pellen, B. Le Jeune, G. Le Brun, and M. Abboud, "Influence of size, proportion, and absorption coefficient of spherical scatterers on the degree of light polarization and the grain size of speckle pattern," Appl. Opt. 54(35), 10369–10375 (2015). [CrossRef]
41. C. M. Macdonald, S. L. Jacques, and I. V. Meglinski, "Circular polarization memory in polydisperse scattering media," Phys. Rev. E 91(3), 33204 (2015). [CrossRef]
42. N. Ghosh, P. Gupta, A. Pradhan, and S. Majumder, "Anomalous behavior of depolarization of light in a turbid medium," Phys. Lett. A 354(3), 236–242 (2006). [CrossRef]
43. J. Chue-Sang, N. Holness, M. Gonzalez, J. Greaves, I. Saytashev, S. Stoff, A. Gandjbakhche, V. V. Chernomordik, G. Burkett, and J. C. Ramella-Roman, "Use of Mueller matrix colposcopy in the characterization of cervical collagen anisotropy," J. Biomed. Opt. 23(12), 121605 (2018). [CrossRef]
44. A. Vahidnia, K. Madanipour, R. Abedini, R. Karimi, J. Sanderson, Z. Zare, and P. Parvin, "Quantitative polarimetry Mueller matrix decomposition for diagnosing melanoma and non-melanoma human skin cancer," OSA Continuum 4(11), 2862–2875 (2021). [CrossRef]
45. A. D. Kim and M. Moscoso, "Influence of the relative refractive index on the depolarization of multiply scattered waves," Phys. Rev. E 64(2), 26612 (2001). [CrossRef]
46. P. Y. Liu, L. K. Chin, W. Ser, H. F. Chen, C. M. Hsieh, C. H. Lee, K. B. Sung, T. C. Ayi, P. H. Yap, B. Liedberg, K. Wang, T. Bourouina, and Y. Leprince-Wang, "Cell refractive index for cell biology and disease diagnosis: past, present and future," Lab Chip 16(4), 634–644 (2016). [CrossRef]
47. L. Perelman, "Optical diagnostic technology based on light scattering spectroscopy for early cancer detection: expert review of medical devices," Expert Rev. Med. Devices 3(6), 787–803 (2006). [CrossRef]
48. R. S. Gurjar, V. Backman, L. T. Perelman, I. Georgakoudi, K. Badizadegan, I. Itzkan, R. R. Dasari, and M. S. Feld, "Imaging human epithelial properties with polarized light-scattering spectroscopy," Nat. Med. 7(11), 1245–1248 (2001). [CrossRef]
49. T. Biswas and T. Luu, "In vivo MR measurement of refractive index, relative water content and T2 relaxation time of various brain lesions with clinical application to discriminate brain lesions," The Internet Journal of Radiology 13(1) (2009).
50. Z. Wang, K. Tangella, A. Kajdacsy-Balla, and G. Popescu, "Tissue refractive index as marker of disease," J. Biomed. Opt. 16(11), 116017 (2011). [CrossRef]
51. S. Alali and I. A. Vitkin, "Polarized light imaging in biomedicine: emerging Mueller matrix methodologies for bulk tissue assessment," J. Biomed. Opt. 20(6), 61104 (2015). [CrossRef]
52. J. Qi and D. S. Elson, "A high definition Mueller polarimetric endoscope for tissue characterisation," Sci. Rep. 6(1), 25953 (2016). [CrossRef]
53. B. Varin, J. Rehbinder, J. Dellinger, C. Heinrich, M. Torzynski, C. Spenlé, D. Bagnard, and J. Zallat, "Monitoring subcutaneous tumors using Mueller polarimetry: study on two types of tumors," Biomed. Opt. Express 12(10), 6055–6065 (2021). [CrossRef]
54. O. Rodríguez-Núñez, P. Schucht, E. Hewer, T. Novikova, and A. Pierangelo, "Polarimetric visualization of healthy brain fiber tracts under adverse conditions: ex vivo studies," Biomed. Opt. Express 12(10), 6674–6685 (2021). [CrossRef]
55. D. C. Louie, L. Tchvialeva, S. Kalia, H. Lui, and T. K. Lee, "Constructing a portable optical polarimetry probe for in-vivo skin cancer detection," J. Biomed. Opt. 26(3) (2021).
56. X. Guo, M. F. G. Wood, and A. Vitkin, "Monte Carlo study of pathlength distribution of polarized light in turbid media," Opt. Express 15(3), 1348–1360 (2007). [CrossRef]
57. A. Ushenko, A. Sdobnov, A. Dubolazov, M. Gritsuk, Y. Ushenko, A. Bykov, and I. Meglinski, "Stokes-correlometry analysis of biological tissues with polycrystalline structure," IEEE J. Sel. Top. Quantum Electron. 25(1), 1–12 (2019). [CrossRef]
58. C. Zhang, S. Horder, T. K. Lee, and W. Wang, "Development of polarization speckle based on random polarization phasor sum," J. Opt. Soc. Am. A 36(2), 277–282 (2019). [CrossRef]
59. M. Takeda, W. Wang, and S. G. Hanson, "Polarization speckles and generalized Stokes vector wave: a review," Proc. SPIE 7387, 73870V (2010). [CrossRef]
60. L. Tchvialeva, G. Dhadwal, H. Lui, S. Kalia, H. Zeng, D. I. McLean, and T. K. Lee, "Polarization speckle imaging as a potential technique for in vivo skin cancer detection," J. Biomed. Opt. 18(6), 61211 (2012). [CrossRef]
61. L. Tchvialeva, H. Zeng, I. Markhvida, D. I. McLean, H. Lui, and T. K. Lee, "Skin roughness assessment," in New Developments in Biomedical Engineering (Intech, 2010), pp. 341–358.
62. A. Dogariu and R. Carminati, "Electromagnetic field correlations in three-dimensional speckles," Phys. Rep. 559(10), 1016 (2015). [CrossRef]
63. E. Wolf, Introduction to the Theory of Coherence and Polarization of Light (Cambridge University, 2007).
64. L. Tchvialeva, T. K. Lee, I. Markhvida, D. I. McLean, and H. Z. H. Lui, "Using a zone model to incorporate the influence of geometry on polychromatic speckle contrast," Opt. Eng. 47(7), 74201 (2008). [CrossRef]
65. P. Elies, L. Bernard, F. Leroy-Brehonnet, J. Cariou, and J. Lotrian, "Experimental investigation of the speckle polarization for a polished aluminium sample," J. Phys. D: Appl. Phys. 30(1), 29–39 (1997). [CrossRef]
66. W. Wang, S. Hanson, and M. Takeda, "Statistics of polarization speckle: theory versus experiment," Proc. SPIE 7388(10), 1117 (2009).
67. L. Tchvialeva, I. Markhvida, and T. K. Lee, "Error analysis for polychromatic speckle contrast measurements," Opt. Lasers Eng. 49(12), 1397–1401 (2011). [CrossRef]
(1) $$S=\begin{bmatrix} S_{0}\\ S_{1}\\ S_{2}\\ S_{3}\end{bmatrix}=\begin{bmatrix} I_{0}+I_{90}\\ I_{0}-I_{90}\\ I_{45}-I_{135}\\ I_{RH}-I_{LH}\end{bmatrix}$$

(2) $$DOCP=\frac{\sqrt{S_{3}^{2}}}{S_{0}},\qquad DOLP=\frac{\sqrt{S_{1}^{2}+S_{2}^{2}}}{S_{0}},\qquad DOP=\frac{\sqrt{S_{1}^{2}+S_{2}^{2}+S_{3}^{2}}}{S_{0}}$$

(3) $$PMR=\frac{DOCP}{DOLP}$$

(4) $$X=\frac{\pi d\, n_{m}}{\lambda}$$

(5) $$X'=(m-1)X$$

Summary of benign and malignant tissue differences in the literature (d: scatterer diameter; m: relative refractive index; X, X′: size parameters of Eqs. (4) and (5); PMR as in Eq. (3)):

| Ref. | Tissue type | d (µm) | m | X | X′ | Scattering regime | Depolarization regime | PMR |
|------|-------------|--------|-------|----|-----|-------------------|------------------------|-----|
| [47] | Epithelial intestine | 5.0 | 1.035 | 34 | 1.2 | Mie | Rayleigh | <1 |
| [47] | Malignant intestine | 9.8 | 1.040 | 67 | 2.7 | Mie | Mie | >1 |
| [49] | Gray matter | 5.0 | 1.015 | 34 | 0.5 | Mie | Rayleigh | <1 |
| [49] | Glioblastoma | 9.8 | 1.058 | 67 | 3.9 | Mie | Mie | >1 |
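To make Eqs. (1)–(5) and the table concrete, here is a small Python sketch (ours, not from the paper) that computes the polarization metrics from intensity measurements and reproduces the size-parameter arithmetic; the medium refractive index and wavelength below are illustrative assumptions, not the paper's values:

```python
import math

def stokes_from_intensities(I0, I90, I45, I135, Irh, Ilh):
    """Stokes vector from the six polarimetric intensity measurements, Eq. (1)."""
    return (I0 + I90, I0 - I90, I45 - I135, Irh - Ilh)

def polarization_metrics(S):
    """DOP, DOLP, DOCP and the polarization memory rate PMR, Eqs. (2)-(3).
    With N independent speckles, averaged estimates carry ~1/sqrt(N) error [67]."""
    S0, S1, S2, S3 = S
    dolp = math.sqrt(S1**2 + S2**2) / S0
    docp = math.sqrt(S3**2) / S0            # = |S3| / S0
    dop = math.sqrt(S1**2 + S2**2 + S3**2) / S0
    return dop, dolp, docp, docp / dolp     # PMR > 1 suggests Mie-like depolarization

def size_parameters(d_um, m, n_medium=1.33, wavelength_um=0.63):
    """Size parameter X (Eq. 4) and reduced parameter X' (Eq. 5)."""
    X = math.pi * d_um * n_medium / wavelength_um
    return X, (m - 1.0) * X

# Reproduce the table's benign vs malignant rows:
for d, m in [(5.0, 1.035), (9.8, 1.040)]:
    X, Xp = size_parameters(d, m)
    regime = "Mie" if Xp > 1 else "Rayleigh"
    print(f"d={d} um, m={m}: X={X:.0f}, X'={Xp:.1f} -> {regime} depolarization")
```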
Page: Sliding contact bearing design approach

The following methods are used in the design of sliding contact bearings.

1. PETROFF'S EQUATION

Petroff's equation is used to determine the coefficient of friction in journal bearings. It is based on the following assumptions:
(i) The shaft is concentric with the bearing
(ii) The bearing is subjected to a light load

In practice, such conditions do not exist. However, Petroff's equation is important because it defines the group of dimensionless parameters that govern the frictional properties of the bearing. It is given by

$$f=2 \pi^{2}\left(\frac{r}{c}\right)\left(\frac{\mu n_{s}}{p}\right)$$

where f is the coefficient of friction, r the journal radius, c the radial clearance, $\mu$ the lubricant viscosity, $n_s$ the journal speed in rev/s, and p the unit bearing pressure. Petroff's equation indicates that there are two important dimensionless parameters, $\frac{r}{c}$ and $\frac{\mu n_{s}}{p}$, that govern the coefficient of friction and other frictional properties such as frictional torque, frictional power loss, and temperature rise in the bearing.
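As a quick numerical illustration of Petroff's equation, here is a minimal Python sketch (ours, not from the page); the input values are assumed for illustration and are not taken from any worked example:

```python
import math

def petroff_friction(r, c, mu, n_s, p):
    """Coefficient of friction from Petroff's equation.
    r: journal radius, c: radial clearance (same units as r),
    mu: viscosity (Pa*s), n_s: speed (rev/s), p: unit bearing pressure (Pa)."""
    return 2 * math.pi**2 * (r / c) * (mu * n_s / p)

# Assumed example: 25 mm radius, 25 um radial clearance (r/c = 1000),
# oil at 20 mPa*s, 30 rev/s, 1 MPa bearing pressure.
f = petroff_friction(r=0.025, c=0.000025, mu=0.020, n_s=30, p=1.0e6)
print(f"f = {f:.4f}")   # ~0.0118, a typical hydrodynamic value
```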
2. MCKEE'S EQUATION

The bearing modulus is the dimensionless parameter $ZN'/p$ on which the coefficient of friction in a bearing depends (the source page refers to a figure of this curve, not reproduced here). In the region to the left of point C, operating conditions are severe and mixed lubrication occurs. A small change in speed or increase in load can reduce $ZN'/p$, and a small reduction in $ZN'/p$ can increase the coefficient of friction drastically. This increases heat, which reduces the viscosity of the lubricant. This further reduces $ZN'/p$, leading to a further increase in friction.

3. REYNOLDS' EQUATION

The theory of hydrodynamic lubrication is based on a differential equation derived by Osborne Reynolds. This equation is based on the following assumptions:
(i) The lubricant obeys Newton's law of viscosity
(ii) The lubricant is incompressible
(iii) The inertia forces in the oil film are negligible
(iv) The viscosity of the lubricant is constant
(v) The effect of the curvature of the film with respect to film thickness is neglected; it is assumed that the film is so thin that the pressure is constant across the film thickness
(vi) The shaft and the bearing are rigid
(vii) There is a continuous supply of lubricant

Reynolds' equation is as follows:

$$\frac{\partial}{\partial x}\left(h^{3} \frac{\partial p}{\partial x}\right)+\frac{\partial}{\partial z}\left(h^{3} \frac{\partial p}{\partial z}\right)=6 \mu U\left(\frac{\partial h}{\partial x}\right)$$

There is no exact analytical solution of this equation for bearings with finite length. Theoretically, exact solutions can be obtained if the bearing is assumed to be either infinitely long or very short. These two solutions are called Sommerfeld's solutions. Approximate solutions using numerical methods are available for bearings with finite length.

4. RAIMONDI AND BOYD METHOD

There is no exact solution to Reynolds' equation for a journal bearing having a finite length. However, A. A. Raimondi and John Boyd of Westinghouse Research Laboratory solved this equation on a computer using the iteration technique. The results of this work are available in the form of charts and tables (PSG 7.36-7.39). In the Raimondi and Boyd method, the performance of the bearing is expressed in terms of the following dimensionless parameters: L/D, $\epsilon$, $2h_o/C$, S, $\phi$, $\mu(D/C)$, $4q/(DCn'L)$, $q_{s}/q$, $\rho C' \Delta t_{o}/P$, and $P/P_{max}$. The parameters are explained below.

Length to diameter ratio (L/D ratio): It is the ratio of bearing length to journal diameter. (Assume 1 as the default value.) If L/D > 1, the bearing is said to be a long bearing. If L/D < 1, the bearing is said to be a short bearing. If L/D = 1, the bearing is said to be a square bearing.

Attitude or eccentricity ratio ($\epsilon$): It is the ratio of the eccentricity to the radial clearance, $\epsilon = 2e/C = 1 - 2h_o/C$.

Film thickness variable: It is the ratio of the minimum film thickness to the radial clearance, $2h_o/C = h_o/(C/2)$.

Sommerfeld number (S): The Sommerfeld number is also a dimensionless parameter used extensively in the design of journal bearings:

$$S=\left(\frac{Zn'}{P}\right)\left(\frac{D}{C}\right)^{2}$$

Attitude angle ($\phi$): It is the angle between the load axis and the line of centers of the journal and bearing.

Coefficient of friction variable (CFV): It is the dimensionless parameter $\mu(D/C)$, used to calculate the coefficient of friction between bearing and journal.

Flow variable (FV): It is the dimensionless parameter $4q/(DCn'L)$, used to calculate the flow rate of oil required for lubrication of the bearing.

Side flow variable (SFV): It is the dimensionless parameter $q_s/q$, defined as the ratio of the side flow rate of oil to the total flow rate of oil.

Temperature variable (TV): It is the dimensionless parameter $\rho C' \Delta t_o/P$, used to calculate the temperature rise of the oil film of the bearing.

Pressure variable (PV): It is the dimensionless parameter $P/P_{max}$, used to calculate the pressure of oil inside the bearing.
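The dimensionless bookkeeping of the Raimondi and Boyd method is easy to script, although the chart lookups themselves still require the published tables. A minimal Python sketch (ours), with assumed example values rather than chart data:

```python
def sommerfeld(Z, n_s, P, D, C):
    """Sommerfeld number S = (Z*n'/P) * (D/C)^2, in consistent units
    (Z in Pa*s, n' in rev/s, P in Pa)."""
    return (Z * n_s / P) * (D / C) ** 2

def eccentricity_ratio(h0, C):
    """Attitude epsilon = 1 - 2*h0/C, from the film-thickness variable."""
    return 1.0 - 2.0 * h0 / C

# Assumed example: 50 mm journal, 50 um diametral clearance, 20 mPa*s oil,
# 30 rev/s, 1 MPa unit pressure, 15 um minimum film thickness.
print(f"S = {sommerfeld(Z=0.020, n_s=30, P=1.0e6, D=0.050, C=0.000050):.2f}")  # 0.60
print(f"eps = {eccentricity_ratio(h0=0.000015, C=0.000050):.2f}")              # 0.40
```

With S and L/D in hand, one would then read $\epsilon$, $\mu(D/C)$, and the flow and temperature variables from the Raimondi-Boyd charts.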
Results for 'Elliot C. Welch' (1000+ found)

Self-Predication in Plato's Euthyphro? Elliot C. Welch - 2008 - Apeiron 41 (4):193-210.
Plato: Euthyphro in Ancient Greek and Roman Philosophy

Intentional Action Processing Results From Automatic Bottom-Up Attention: An EEG-Investigation Into the Social Relevance Hypothesis Using Hypnosis. Eleonore Neufeld, Elliot C. Brown, Sie-In Lee-Grimm, Albert Newen & Martin Brüne - 2016 - Consciousness and Cognition 42:101-112.
Social stimuli grab our attention: we attend to them in an automatic and bottom-up manner, and ascribe them a higher degree of saliency compared to non-social stimuli. However, it has rarely been investigated how variations in attention affect the processing of social stimuli, although the answer could help us uncover details of social cognition processes such as action understanding. In the present study, we examined how changes to bottom-up attention affect neural EEG-responses associated with intentional action processing. We induced an increase in bottom-up attention by using hypnosis. We recorded the electroencephalographic µ-wave suppression of hypnotized participants when presented with intentional actions in first and third person perspective in a video-clip paradigm. Previous studies have shown that the µ-rhythm is selectively suppressed both when executing and observing goal-directed motor actions; hence it can be used as a neural signal for intentional action processing. Our results show that neutral hypnotic trance increases µ-suppression in highly suggestible participants when they observe intentional actions. This suggests that social action processing is enhanced when bottom-up attentional processes are predominant. Our findings support the Social Relevance Hypothesis, according to which social action processing is a bottom-up driven attentional process, and can thus be altered as a function of bottom-up processing devoted to a social stimulus.
Attention, Misc in Philosophy of Mind
Mindreading in Philosophy of Cognitive Science
Neuroscience in Cognitive Sciences

Reward in the Mirror Neuron System, Social Context, and the Implications on Psychopathology. Elliot C. Brown & Martin Brüne - 2014 - Behavioral and Brain Sciences 37 (2):196-197.
Philosophy of Neuroscience in Philosophy of Cognitive Science

Beyond Civil Rights: A Comparative Analysis of Wage-Earning Employment for Women in Iran and Turkey. Margaret Leahy & Elliot C. Rau - 2012 - Muslim World Journal of Human Rights 9 (1).
This paper argues that a thorough discussion of women's rights needs to go beyond progress in civil and political rights. Looking comparatively at Iran and Turkey, it posits that the legal and political context of secularism v. theocracy has much less of an impact than expected, and suggests that other variables offer much more compelling explanations. The paper combines feminist constructivist insights with the broader lens of world-system theory to hypothesize that the explanation for women's inequality in employment is found at the intersection of endogenous gender constructions heavily influenced by culture and religion, and of an international economic order that contributes to the reproduction of a patriarchal structure in poorer countries, regardless of their political system.
Feminist Ethics in Normative Ethics

Reexamination of the Role of the Hypothalamus in Motivation. Elliot S. Valenstein, Verne C. Cox & Jan W. Kakolewski - 1970 - Psychological Review 77 (1):16-31.
Mental States and Processes in Philosophy of Mind
Color and Psychological Functioning: The Effect of Red on Performance Attainment. Andrew J. Elliot, Markus A. Maier, Arlen C. Moller, Ron Friedman & Jörg Meinhardt - 2007 - Journal of Experimental Psychology: General 136 (1):154-168.
Color in Philosophy of Mind

Psalm 119: The Exaltation of Torah. Mark S. Smith, David Noel Freedman, Jeffrey C. Geoghehan & Andrew Welch - 2001 - Journal of the American Oriental Society 121 (1):149.

Humans Use Directed and Random Exploration to Solve the Explore–Exploit Dilemma. Robert C. Wilson, Andra Geana, John M. White, Elliot A. Ludvig & Jonathan D. Cohen - 2014 - Journal of Experimental Psychology: General 143 (6):2074-2081.

J.-J. STAMM, "Erlösen und Vergeben im Alten Testament" - Adam C. WELCH, "Prophet and Priest in Old Testament" - H.-J. EBELING, "Das Messiasgeheimnis und die Botschaft des Marcus-Evangelisten" - J. BOISSET, "La primauté de l'Esprit dans le message évangélique" - "Speculum inclusorum", a cura di P. Livario OLIGER - P. TOURNIER, "Médecine de la personne". Arnold Reymond - 1941 - Revue de Théologie et de Philosophie: 181.

Incidental Regulation of Attraction: The Neural Basis of the Derogation of Attractive Alternatives in Romantic Relationships. Meghan L. Meyer, Elliot T. Berkman, Johan C. Karremans & Matthew D. Lieberman - 2011 - Cognition and Emotion 25 (3):490-505.
Emotion and Consciousness in Psychology in Philosophy of Cognitive Science

Evaluation Anxiety. Moshe Zeidner, Gerald Matthews, A. J. Elliot & C. S. Dweck - 2005 - In Andrew J. Elliot & Carol S. Dweck (eds.), Handbook of Competence and Motivation. The Guilford Press.

The Philosophy of Creativity. Elliot Samuel Paul & Scott Barry Kaufman (eds.) - 2014 - Oxford University Press.
Creativity pervades human life. It is the mark of individuality, the vehicle of self-expression, and the engine of progress in every human endeavor. It also raises a wealth of neglected and yet evocative philosophical questions: What is the role of consciousness in the creative process? How does the audience for a work of art influence its creation? How can creativity emerge through childhood pretending? Do great works of literature give us insight into human nature? Can a computer program really be creative? How do we define creativity in the first place? Is it a virtue? What is the difference between creativity in science and art? Can creativity be taught?
The new essays that comprise The Philosophy of Creativity take up these and other key questions and, in doing so, illustrate the value of interdisciplinary exchange. Written by leading philosophers and psychologists involved in studying creativity, the essays integrate philosophical insights with empirical research.
CONTENTS
I. Introduction: Introducing The Philosophy of Creativity (Elliot Samuel Paul and Scott Barry Kaufman)
II. The Concept of Creativity: 1. An Experiential Account of Creativity (Bence Nanay)
III. Aesthetics & Philosophy of Art: 2. Creativity and Insight (Gregory Currie); 3. The Creative Audience: Some Ways in which Readers, Viewers and/or Listeners Use their Imaginations to Engage Fictional Artworks (Noël Carroll); 4. The Products of Musical Creativity (Christopher Peacocke)
IV. Ethics & Value Theory: 5. Performing Oneself (Owen Flanagan); 6. Creativity as a Virtue of Character (Matthew Kieran)
V. Philosophy of Mind & Cognitive Science: 7. Creativity and Not So Dumb Luck (Simon Blackburn); 8. The Role of Imagination in Creativity (Dustin Stokes); 9. Creativity, Consciousness, and Free Will: Evidence from Psychology Experiments (Roy F. Baumeister, Brandon J. Schmeichel, and C. Nathan DeWall); 10. The Origins of Creativity (Elizabeth Picciuto and Peter Carruthers); 11. Creativity and Artificial Intelligence: a Contradiction in Terms? (Margaret Boden)
VI. Philosophy of Science: 12. Hierarchies of Creative Domains: Disciplinary Constraints on Blind-Variation and Selective-Retention (Dean Keith Simonton)
VII. Philosophy of Education (& Education of Philosophy): 13. Educating for Creativity (Berys Gaut); 14. Philosophical Heuristics (Alan Hájek)
Agency in Philosophy of Action
Creativity in Philosophy of Mind
Natural Kinds in Metaphysics
Philosophy of Artificial Intelligence in Philosophy of Cognitive Science
Philosophy of Education in Philosophy of Social Science

On Writing History, C. Knapp. John C. Welch - 1923 - Classical World: A Quarterly Journal on Antiquity 17:176.

Lv Welch. S. G. Simpson, T. A. Slaman, J. R. Steel, W. H. Woodin, R. I. Soare, M. Stob, C. Spector & A. M. Turing - 1999 - In Edward R. Griffor (ed.), Handbook of Computability Theory. Elsevier. pp. 153.

Elliot Is Brahman the Power of Children as Symbols Tillich, Whitehead, the Gita, and Sacredness. C. Robert Mesle - 2010 - Tattva - Journal of Philosophy 2 (2):1-8.

Utilitarianism for Animals, Kantianism for People? Harming Animals and Humans for the Greater Good. Lucius Caviola, Guy Kahane, Jim A. C. Everett, Elliot Teperman, Julian Savulescu & Nadira S. Faber - forthcoming - Journal of Experimental Psychology: General.

ELLIOT, H. S. R. - The Letters of John Stuart Mill. [REVIEW] C. Read - 1911 - Mind 20:97.
John Stuart Mill in 19th Century Philosophy

Reimagining Schools: The Selected Works of Elliot W. Eisner. Elliot W. Eisner - 2005 - Routledge.
Elliot Eisner has spent the last 40 years researching, thinking and writing about some of the key and enduring issues in Arts Education, Curriculum Studies and Qualitative Research. He has contributed over 20 books and 500 articles to the field. In this book, Professor Eisner has compiled a career-long collection of his finest pieces (extracts from books, key articles, salient research findings and major theoretical contributions) so the world can read them in a single manageable volume.
Through these books, readers can chase up the themes and strands that have been lodged in a lifetime's work, and so follow the development of these scholars' contributions to the field, as well as the development of the fields themselves. Other scholars included in the series: Richard Aldrich, Stephen J. Ball, John Elliott, Howard Gardner, John Gilbert, Ivor F. Goodson, David Hargreaves, David Labaree, E.C. Wragg, John White. (shrink) Beyond Market Fundamentalism: A Review Essay.Lisa C. Welch - 2005 - CLR James Journal 11 (1):127-133.details Imagination and Human Nature. [REVIEW]H. T. C. & Livingston Welch - 1935 - Journal of Philosophy 32 (19):529.details Cyril Welch, "Linguistic Responsibility". [REVIEW]C. G. Prado - 1989 - Dialogue 28 (4):667.details Environmental Philosophy a Collection of Readings /Edited by Robert Elliot and Arran Gare. --. --.Robert Elliot & Arran Gare - 1983 - Pennsylvania State University Press, C1983.details Contents: Ethical principals for environmental protection / Robert Goodin -- Political representation for future generations / Gregory S. Kavka and Virginia L. Warren -- On the survival of humanity / Jan Narveson -- On deep versus shallow theories of environmental pollution / C.A. Hooker -- Preservation of wilderness and the good life / Janna L. Thompson -- The rights of the nonhuman world / Mary Anne Warren -- Are values in nature subjective or objective? / Holmes Rolston III - Duties (...) concerning islands / Mary Midgley -- Gaia and the forms of life / Stephen R.L. Clark -- Western traditions and environmental ethics / Robin Attfield -- Traditional American Indian and traditional western European attitudes toward nature / J. Baird Callicott -- Roles and limits of paradigms in environmental thought and action / Richard Routley. [Book Synopsis]. (shrink) Environmental Philosophies, Misc in Philosophy of Biology Cyril Welch, "The Sense of Language". [REVIEW]James C. Morrison - 1979 - Man and World 12 (1):97.details Phenomenology in Continental Philosophy The Evolution of Soundscape Appraisal Through Enactive Cognition.Kirsten A.-M. van den Bosch, David Welch & Tjeerd C. Andringa - 2018 - Frontiers in Psychology 9.details Comments on "The Educational Imagination"The Educational Imagination.Monroe C. Beardsley & Elliot W. Eisner - 1981 - Journal of Aesthetic Education 15 (1):115.details Linguistic Responsibility Cyril Welch Victoria: Sono Nis Press, 1988. Pp. 400. $20.00.C. G. Prado - 1989 - Dialogue 28 (4):667-.details Japanese Sword-Mounts.William Elliot Griffis & Helen C. Gunsaulus - 1925 - Journal of the American Oriental Society 45:88.details Japanese Philosophy, Misc in Asian Philosophy Basohli Painting.Stuart C. Welch & M. S. Randhawa - 1961 - Journal of the American Oriental Society 81 (4):440.details Les Peintures des Manuscrits Safavis de 1502 À 1587Les Peintures des Manuscrits Safavis de 1502 a 1587.Stuart C. Welch & Ivan Stchoukine - 1960 - Journal of the American Oriental Society 80 (3):271.details Protestant Thought in the Nineteenth Century.C. Welch - 1972details El egoísmo psicológico.Elliot Sober - 1998 - Isegoría 18:47-70.details El egoísmo psicológico es una teoría sobre la motivación que afirma que nuestros deseos últimos son autoccntrados. 
Las crío ticas contra el egoísmo psicológico se pueden dividir en tres categorías: a) se dice que no es una auténtica teoría; b) que es una teoría refutada por la observación de la conducta; c) que se debería rechazar en favor de una teoría alternativa según la cual los seres humanos tienen deseos últimos tanto egoístas como altruistas. Se analizan estos tres tipos de (...) crítica y se concluye que la situación del debate entre egoísmo y pluralismo motivacional es de tablas. Situación que puede encontrar alguna salida a partir de consideraciones evolucionistas. El egoísmo no merece ser considerado como hipótesis por defecto. Aunque sea en un grado pequeño, el peso de la evidencia favorece al pluralismo. (shrink) Alfred North Whitehead an Anthology. Selected by F.S.C. Northrop and Mason W. Gross; Introductions and a Note on Whitehead's Terminology.Alfred North Whitehead, Mason Welch Gross & F. S. C. Northrop - 1953 - At the University Press.details Alfred North Whitehead in 20th Century Philosophy William Henry Welch and the Heroic Age of American Medicine by Simon Flexner; James Thomas Flexner. [REVIEW]J. De C. M. Saunders - 1943 - Isis 34:381-382.details History of Science in General Philosophy of Science William H. Welch and the Rise of Modern Medicine by Donald Fleming; Oscar Handlin. [REVIEW]J. De C. M. Saunders - 1955 - Isis 46:382-383.details History of Science, Misc in General Philosophy of Science Philosophy of Medicine, Miscellaneous in Philosophy of Science, Misc Scientific Change, Misc in General Philosophy of Science A Journey To Brindisi In 37 B.C.Alistair Elliot - 1993 - Arion 1 (1).details Review of Experimentelle Unterstuhungen?ber die Helligkeit der Farben, Unterstuhungen?ber den Lichtsinn, Zur Lehre von den Gesichtsempfindungen welche aus successiven Reizen resultiren, The Use of the Rotating Sectored Disk in Photometry, and Ueber den kieinsten Gesichtswinkel. [REVIEW]C. L. Franklin - 1894 - Psychological Review 1 (4):428-431.details The Extent of Computation in Malament–Hogarth Spacetimes.P. D. Welch - 2008 - British Journal for the Philosophy of Science 59 (4):659-674.details We analyse the extent of possible computations following Hogarth ([2004]) conducted in Malament–Hogarth (MH) spacetimes, and Etesi and Németi ([2002]) in the special subclass containing rotating Kerr black holes. Hogarth ([1994]) had shown that any arithmetic statement could be resolved in a suitable MH spacetime. Etesi and Németi ([2002]) had shown that some relations on natural numbers that are neither universal nor co-universal, can be decided in Kerr spacetimes, and had asked specifically as to the extent of computational limits there. (...) The purpose of this note is to address this question, and further show that MH spacetimes can compute far beyond the arithmetic: effectively Borel statements (so hyperarithmetic in second-order number theory, or the structure of analysis) can likewise be resolved: Theorem A. If H is any hyperarithmetic predicate on integers, then there is an MH spacetime in which any query ? n H ? can be computed. In one sense this is best possible, as there is an upper bound to computational ability in any spacetime, which is thus a universal constant of that spacetime. Theorem C. Assuming the (modest and standard) requirement that spacetime manifolds be paracompact and Hausdorff, for any spacetime there will be a countable ordinal upper bound, , on the complexity of questions in the Borel hierarchy computable in it. 
Introduction 1.1 History and preliminaries Hyperarithmetic Computations in MH Spacetimes 2.1 Generalising SADn regions 2.2 The complexity of questions decidable in Kerr spacetimes An Upper Bound on Computational Complexity for Each Spacetime CiteULike Connotea Del.icio.us What's this? (shrink) Metaphysics of Spacetime in Philosophy of Physical Science Special Relativity in Philosophy of Physical Science Direct download (10 more) Weak Systems of Determinacy and Arithmetical Quasi-Inductive Definitions.P. D. Welch - 2011 - Journal of Symbolic Logic 76 (2):418 - 436.details We locate winning strategies for various ${\mathrm{\Sigma }}_{3}^{0}$ -games in the L-hierarchy in order to prove the following: Theorem 1. KP+Σ₂-Comprehension $\vdash \exists \alpha L_{\alpha}\ models"\Sigma _{2}-{\bf KP}+\Sigma _{3}^{0}-\text{Determinacy}."$ Alternatively: ${\mathrm{\Pi }}_{3}^{1}\text{\hspace{0.17em}}-{\mathrm{C}\mathrm{A}}_{0}\phantom{\rule{0ex}{0ex}}$ "there is a β-model of ${\mathrm{\Delta }}_{3}^{1}-{\mathrm{C}\mathrm{A}}_{0}\text{\hspace{0.17em}}\text{\hspace{0.17em}}+\text{\hspace{0.17 em}}{\mathrm{\Sigma }}_{3}^{0}$ -Determinacy." The implication is not reversible. (The antecedent here may be replaced with ${\mathrm{\Pi }}_{3}^{1}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left({\mathrm{\Pi }}_{3}^{1}\right)-{\mathrm{C}\mathrm{A}}_{0}:\text{\hspace{0.17em}}{\mathrm{\Pi }}_{3}^{1}$ instances of Comprehension with only ${\mathrm{\Pi }}_{3}^{1}$ -lightface definable parameters—or even weaker theories.) Theorem 2. KP +Δ₂-Comprehension +Σ₂-Replacement + ${\mathrm{\Sigma }}_{3}^{0}\phantom{\rule{0ex}{0ex}}$ -Determinacy. (Here AQI (...) is the assertion that every arithmetical quasi-inductive definition converges.) Alternatively: $\Delta _{3}^{1}{\rm CA}_{0}+{\rm AQI}\nvdash \Sigma _{3}^{0}$ -Determinacy. Hence the theories: ${\mathrm{\Pi }}_{3}^{1}-{\mathrm{C}\mathrm{A}}_{0},\text{\hspace{0.17em}}{\mathrm{\Delta }}_{3}^{1}-{\mathrm{C}\mathrm{A}}_{0}+\text{\hspace{0.17em}}{\mathrm{\Sigma }}_{3}^{0}-\mathrm{D}\mathrm{e}\mathrm{t}\phantom{\rule{0ex}{0ex}}$ -Det, ${\mathrm{\Delta }}_{3}^{1}-{\mathrm{C}\mathrm{A}}_{0}+\mathrm{A}\mathrm{Q}\mathrm{I}$ , and ${\mathrm{\Delta }}_{3}^{1}-{\mathrm{C}\mathrm{A}}_{0}\phantom{\rule{0ex}{0ex}}$ are in strictly descending order of strength. (shrink) Model Theory in Logic and Philosophy of Logic The Career of M. Aemilius Lepidus 49-44 B.C.Kathryn Welch - 1995 - Hermes 123 (4):443-454.details Elementary Classics. Eutropius Adapted for the Use of Beginners.M. W., W. Welch & C. G. Duffield - 1885 - American Journal of Philology 6 (4):500.details Classics in Arts and Humanities Avoidance Motivation and Conservation of Energy.Marieke Roskes, Andrew J. Elliot, Bernard A. Nijstad & Carsten K. W. De Dreu - 2013 - Emotion Review 5 (3):264-268.details Compared to approach motivation, avoidance motivation evokes vigilance, attention to detail, systematic information processing, and the recruitment of cognitive resources. From a conservation of energy perspective it follows that people would be reluctant to engage in the kind of effortful cognitive processing evoked by avoidance motivation, unless the benefits of expending this energy outweigh the costs. We put forward three empirically testable propositions concerning approach and avoidance motivation, investment of energy, and the consequences of such investments. Specifically, we propose that (...) 
compared to approach-motivated people, avoidance-motivated people (a) carefully select situations in which they exert such cognitive effort, (b) only perform well in the absence of distracters that occupy cognitive resources, and (c) become depleted after exerting such cognitive effort. (shrink) Emotions in Philosophy of Mind Boekbesprekingen.P. Huizing, J. Beyer, A. van Kol, S. Trooster, P. Fransen, F. De Raedemaeker, H. Geurtsen, J. De Munter, J. Nota, P. de Bruin, L. Steins Bisschop, M. De Tollenaere, A. Poncelet, W. Couturier, L. Vander Kerken, A. Snoeck, F. Malmberg, A. Kuylaars, A. Raignier, Fr Elliot, E. Huffer, M. Dierickx, J. Rupert, R. Leijs, J. Houben, J. VanderMeersch, E. J. Vandenbussche, J. De Fraine, I. de la Potterie, P. Smulders, P. Ploumen, J. Van Torre, H. Somers, C. Sträter & E. Vandenbussche - 1952 - Bijdragen 13 (1):76-116.details Assessing Decision Making Capacity for Do Not Resuscitate Requests in Depressed Patients: How to Apply the "Communication" and "Appreciation" Criteria.Benjamin D. Brody, Ellen C. Meltzer, Diana Feldman, Julie B. Penzner & Janna S. Gordon-Elliot - 2017 - HEC Forum 29 (4):303-311.details The Patient Self Determination Act of 1991 brought much needed attention to the importance of advance care planning and surrogate decision-making. The purpose of this law is to ensure that a patient's preferences for medical care are recognized and promoted, even if the patient loses decision-making capacity. In general, patients are presumed to have DMC. A patient's DMC may come under question when distortions in thinking and understanding due to illness, delirium, depression or other psychiatric symptoms are identified or suspected. (...) Physicians and other healthcare professionals working in hospital settings where medical illness is frequently comorbid with depression, adjustment disorders, demoralization and suicidal ideation, can expect to encounter ethical tension when medically sick patients who are also depressed or suicidal request do not resuscitate orders. (shrink) Biomedical Ethics in Applied Ethics Some Must Watch While Some Must Sleep by William C. Dement, And: Advances in Sleep Research Ed. By Elliot D. Weitzman.Jacobus W. Mostert - 1977 - Perspectives in Biology and Medicine 20 (3):469-469.details Consciousness, Sleep, and Dreaming in Philosophy of Cognitive Science American Catholic Philosophical Quarterly 842.John Lemos, Thomas J. McPartland, John C. Médaille, Robert J. Spitzer, Runar M. Thorsteinsson, John R. Welch & Notre Dame - 2010 - American Catholic Philosophical Quarterly 84 (4).details Downey, R., Fiiredi, Z., Jockusch Jr., CG and Ruhel, LA.W. I. Gasarch, A. C. Y. Lee, M. Groszek, T. Hummel, V. S. Harizanov, H. Ishihara, B. Khoussainov, A. Nerode, I. Kalantari & L. Welch - 1998 - Annals of Pure and Applied Logic 93:263.details Review Essay / Rational Police.Elliot D. Cohen - 1990 - Criminal Justice Ethics 9 (2):64-71.details Edwin J. Delattre, Character and Cops: Ethics in Policing Washington, D.C.: American Enterprise Institute for Public Policy Research, 1989, xviii + 247 pp. Political Ethics in Applied Ethics Miscarriage, Abortion or Criminal Feticide: Understandings of Early Pregnancy Loss in Britain, 1900–1950.Rosemary Elliot - 2014 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 47:248-256.details Reproductive Ethics in Applied Ethics La théorie du développement moral défendue par Elliot Turiel et Larry P. 
Nucci peut-elle apporter un fondement empirique à l'éthique minimale ?Nathalie Maillard - 2013 - Les ateliers de l'éthique/The Ethics Forum 8 (1):4-27.details Les recherches menées dans le champ de la psychologie morale par Larry P. Nucci et Elliot Turiel conduisent à identifier le domaine moral avec le domaine des jugements prescriptifs concernant la manière dont nous devons nous comporter à l'égard des autres personnes. Ces travaux empiriques pourraient apporter du crédit aux propositions normatives du philosophe Ruwen Ogien qui défend une conception minimaliste de l'éthique. L'éthique minimale exclut en particulier le rapport à soi du domaine moral. À mon avis cependant, ces (...) travaux de psychologie morale ne permettent pas du tout d'affirmer que nous sommes, empiriquement parlant, des minimalistes moraux. Les résultats des recherches de Nucci et Turiel montrent que les personnes considèrent intuitivement que le domaine personnel – le domaine des actions qui affectent prioritairement l'agent lui-même – doit échapper au contrôle ou à l'interférence des autres personnes. Mais affirmer que c'est l'agent lui-même qui possède l'autorité légitime de décider dans le domaine personnel ne signifie pas que tout ce qu'il y fait soit moralement indifférent. (shrink) Psychology of Ethics in Normative Ethics Hacia una lógica de analogía.John R. Welch - 1994 - Revista Latinoamericana de Filosofia 20 (1):161-167.details How do we distinguish good and bad analogies? Luis A. Camacho proposed that false analogies be construed as false material conditionals. This article offers a counter-proposal: analogies of all sorts can be understood as singular inductive inferences. For the sake of simplicity, this proposal is illustrated with reference to Carnap's favorite inductive method c*. Applications of Probability, Misc in Philosophy of Probability Argument in Epistemology Inductive Logic in Logic and Philosophy of Logic 1 — 50 / 1000 Page generated Mon Jan 17 14:27:21 2022 on philpapers-web-dd886cf8-6zvlz
Title: Scalable Distributed Deep Learning With Model Parallelism Authors: Saar Eliad Supervisors: Assaf Schuster Abstract: This work addresses a particular case of deep learning in which the model is too large to fit fully into the memory of a single GPU during training. Fundamentally, it focuses on fine-tuning giant neural networks on commodity hardware with automatic pipeline model parallelism. Fine-tuning is an increasingly common technique that leverages transfer learning to dramatically expedite the training of huge, high-quality models. Critically, it holds the potential to make giant state-of-the-art models, pre-trained on high-end supercomputing-grade systems, readily available to users who lack access to such costly resources. Unfortunately, this potential is still difficult to realize because the models often do not fit in the memory of a single commodity GPU, making fine-tuning a challenging problem. We present FTPipe, a system that explores a previously unexplored dimension of pipeline model parallelism, making the efficient multi-GPU execution of fine-tuning tasks for giant neural networks readily accessible. A key novel concept, called Mixed-pipe, allows balancing the compute and memory load on the GPUs by partitioning the model into computational blocks of any granularity while relaxing model topology constraints. Our system goes beyond the synchronization and topology limitations of previous pipeline-parallel approaches, efficiently training a new family of models, including the current state of the art. Our extensive experiments on giant NLP models (BERT-340M, GPT2-1.5B, and T5-3B) show that FTPipe achieves up to 3$\times$ speedup and state-of-the-art accuracy when fine-tuning giant transformers with billions of parameters. These models require from 12 GB to 59 GB of GPU memory, and FTPipe executes them on 8 commodity RTX 2080 Ti GPUs, each with 11 GB of memory and standard PCIe.
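The core idea behind Mixed-pipe, assigning model blocks of arbitrary granularity to GPUs so that compute and memory stay balanced, can be illustrated with a toy heuristic. The Python sketch below is ours and is NOT FTPipe's published partitioning algorithm; the block costs, the greedy scoring rule, and all names are assumptions for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class Gpu:
    mem_limit: float            # e.g. 11.0 GB for an RTX 2080 Ti
    compute: float = 0.0        # accumulated compute cost (arbitrary units)
    mem: float = 0.0            # accumulated parameter/activation memory (GB)
    blocks: list = field(default_factory=list)

def greedy_partition(blocks, gpus):
    """blocks: list of (name, compute_cost, mem_gb). Heaviest blocks first,
    each placed on the currently least-loaded GPU with enough memory room.
    Note that a GPU may receive non-adjacent blocks, loosely mirroring the
    relaxed topology constraints of Mixed-pipe."""
    for name, cost, mem in sorted(blocks, key=lambda b: -b[1]):
        candidates = [g for g in gpus if g.mem + mem <= g.mem_limit]
        if not candidates:
            raise MemoryError(f"block {name} ({mem} GB) fits on no GPU")
        target = min(candidates, key=lambda g: g.compute)
        target.blocks.append(name)
        target.compute += cost
        target.mem += mem
    return gpus

# Toy workload: 24 blocks of 2.5 GB each spread over 8 commodity GPUs.
gpus = greedy_partition(
    blocks=[(f"layer{i}", 1.0 + 0.1 * i, 2.5) for i in range(24)],
    gpus=[Gpu(mem_limit=11.0) for _ in range(8)],
)
for i, g in enumerate(gpus):
    print(i, g.blocks, round(g.compute, 1), g.mem)
```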
The numbers in "Tails You Win: the Science of Chance"
Submitted by david on Tue, 16/10/2012 - 8:32am

Here are brief background notes and links for the numbers used in the programme "Tails You Win: the Science of Chance".

Warning: as it says in the programme, all probabilities are a matter of judgement. Therefore all the numbers given below are open to dispute, and certainly do not apply to each individual with their unique circumstances. They are intended to represent reasonable betting odds given the limited information provided.

Probability of falling in while punting: 0.005. Scudamore's Punting Company estimate 750-850 hires per week during the busy season, with accidental immersions running at less than 0.5% (personal communication).

Probability of a random Cambridge student winning a Nobel prize: 1 in 5,000. 65 Cambridge graduates have won Nobel prizes. They were students between around 1861 (Rayleigh) and 1972 (Tsien), say 100 years. Now, in each current year around 6,500 new students are admitted (around 50% undergraduates); 100 years ago, it was around half that. So say there have been 320,000 new students over the 100 years. The proportion who won Nobel prizes is 65/320,000 = 0.02%, or 1 in 5,000. Of course we have to assume this past rate represents future probability, but I have no reason to believe the contrary.

Probability of a random Cambridge student not finishing their degree: 1 in 100. Degree completion in Cambridge is 99%, the highest in the country.

"On average each person in Britain has a 1 in a million daily chance of some kind of violent or accidental death." In 2010, out of a population of 55,000,000 in England and Wales, 17,201 died of accidents and violence, known as external causes of morbidity and mortality. That is 17,201/55 ≈ 313 micromorts a year, or around 1 a day.

"1 in a million is roughly the chance of flipping heads twenty times." Each head has probability $\frac{1}{2}$, so 20 in a row has probability $\frac{1}{2}$ to the power 20, which is 1 in $2^{20} = 1,048,576$.

"The likelihood of a major earthquake hitting the Bay area is something like 63% over the next thirty years." From Mary Lou Zoback. See the US Geological Survey's excellent earthquake information, including risk maps.

"With two dice, and 36 possible combinations, there's only one way to throw a 2. But you're much more likely to get a seven." I need two 1's to get a total of 2, with probability 1/6 × 1/6 = 1/36. But there are six ways to get a 7, and so the probability is 6/36 = 1/6.

Chances of winning at roulette (American rules). A US roulette table has 2 zeros, so there are 38 equally likely pockets.
Straight, 37-1 against: if I place $1 on a single number, there is one way to win and 37 ways to lose (35 other numbers and 2 zeros).
Split, 18-1 against: if I place $1 on two adjacent numbers, there are 2 ways to win and 36 to lose.
Street, 11.667-1 against: if I place $1 on three adjacent numbers, there are 3 ways to win and 35 to lose, which is 35/3 = 11 2/3 to 1 against.
Red/Black, 1.111-1 against: if I place $1 on, say, red, there are 18 ways to win and 20 to lose, which is 20/18 = 10/9 ≈ 1.11 to 1 against.

"In roulette the house advantage is 5.26 percent under American rules." If I place $1 on a single number there is a probability of 1/38 that I end up with $36, and a 37/38 chance that I end up with nothing. The expected gain is therefore 36/38 − 1 = −0.0526, a 5.26% house edge.
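These odds and the house edge can be checked mechanically; a minimal sketch, assuming the standard American payouts (35-1 for a straight, 17-1 for a split, 1-1 for red/black):

```python
from fractions import Fraction

POCKETS = 38  # American wheel: 1-36 plus 0 and 00

def house_edge(ways_to_win: int, payout_to_1: int) -> Fraction:
    """Expected loss per unit staked on a bet paying `payout_to_1` to 1."""
    p_win = Fraction(ways_to_win, POCKETS)
    return 1 - p_win * (payout_to_1 + 1)  # stake returned plus winnings

print(float(house_edge(1, 35)))   # straight:   0.0526...
print(float(house_edge(2, 17)))   # split:      0.0526...
print(float(house_edge(18, 1)))   # red/black:  0.0526...
```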
Margaret de Valois, actuary, Mazars: 36, non-smoker. Her chance of living to 100: 21.1%. This actuarial assessment takes into account assumed improvements in survival over the next 60 years. Life tables that do this are known as "cohort" life tables and come in different versions depending on whether a low, principal or high projection is made.

Logarithmic plot of Halley data. Copy of Halley's original data.

"I was born in the 1950s and back then my expected lifespan was just 67 years." See Social Trends 41 - Health.

"The average lifespan is actually rising by 3 months a year." See Social Trends 41 - Health.

"Between 1930 and 2009 period life expectancy at birth in the UK increased by around 20 years for both sexes."

"If I were born today, I could expect to live to 78." ... "So at my age now, I can expect to live to … 82." See 2010 Interim Life Tables for England and Wales.

The various statements about "expecting to lose ½ an hour" are based on the following procedure (see our Microlives page for a full description):
1. Use epidemiological studies to estimate the annual excess risk for all-cause mortality (hazard ratio) associated with a life-long behaviour.
2. Translate the hazard ratios into changes in life expectancy using the current life tables for England and Wales.
3. Calculate the ratio (change in life expectancy)/(number of days past 35 years old) to obtain a pro-rata loss or gain associated with each day with a habit or behaviour past 35 years old.

An annual excess risk of around 9% (hazard ratio 1.09) translates to an approximate loss of ½ an hour each day past 35. Of course we cannot know or measure the precise effect of, say, smoking 2 cigarettes: the "½ hour" is a mathematical construct that averages over populations and lifetimes. We also cannot conclude that changes in behaviour will result in subsequent benefits.

"Research tells us that for every day you're five kilos overweight, like I am, you can expect to lose half an hour off your life." A recent meta-analysis based on over 66,000 deaths estimated a hazard ratio of 1.29 for all-cause mortality per 5 kg/m² increase in body mass index (BMI) over the optimum of 22.5 to 25 kg/m². For a man/woman of average height (1.75 m/1.62 m), this corresponds to a hazard ratio of around 1.09/1.10 per 5 kg overweight.

"Sad to say, if you're a man sinking three pints a day then that's also half an hour." In a recent meta-analysis of studies involving over 1,000,000 subjects and 94,000 deaths, one drink (10 g of alcohol) per day was associated with an adjusted hazard ratio of around 0.83 for men. Each successive daily drink was associated with a hazard ratio of around 1.06, up to 6 drinks a day. The protective effect of alcohol on all-cause mortality is controversial due to the possibility of residual confounding and of ex-drinkers having stopped due to ill health: Di Castelnuovo and colleagues consider a cautious hazard ratio for low consumption to be 0.9, which we have assumed. For a man, 3 pints of low-strength beer (3.5%) per day, containing around 6 UK units (48 g of alcohol), would then be associated with a hazard ratio of around 0.9 × 1.06 × 1.06 × 1.06 × 1.06 ≈ 1.14, corresponding to around ½ hour each day loss in life expectancy.
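The chain of hazard ratios in the three-pints example is just a product; a quick check, using exactly the figures quoted above:

```python
# Cautious protective hazard ratio of 0.9 for low consumption, then 1.06 for
# each further daily drink, as quoted above for the three-pints example.
hr_three_pints = 0.9 * 1.06**4
print(round(hr_three_pints, 2))  # 1.14 -- an excess risk in the ~9-14% region,
                                 # hence roughly half an hour lost per day
```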
"A regular run of half an hour and you can expect to live longer – half an hour longer." We assume this regular exercise is on top of being reasonably active. A meta-analysis of 22 studies and over 52,000 deaths estimated an adjusted all-cause hazard ratio of 0.81 for 2.5 hours per week (20 minutes a day) of moderate exercise compared to no activity. There were strong diminishing marginal returns, with 7 hours a week associated with a hazard ratio of 0.76, or 0.76/0.81 = 0.94 when compared to 2.5 hours a week. So the extra ½ hour a day of exercise was associated with slightly less than ½ hour average gain in life expectancy. Also, in a study of over 400,000 people in Taiwan, an extra 15 minutes of activity each day was associated with a hazard reduction of 4%, so ½ hour was associated with a hazard ratio of 0.92, corresponding to around ½ hour gain in life expectancy. These results are for "moderate exercise", and so the "run" is more of the "brisk walk" variety.

"Two cigarettes costs half an hour." Doll and Peto estimated a standardised mortality ratio of 2.17 for smoking 15-24 cigarettes per day. If we assume a hazard ratio of 2.2 for 20 cigarettes, and a constant effect of each cigarette on the risk, then pro rata this translates to a hazard ratio of 1.08 per 2 cigarettes, which translates to around ½ hour a day loss of life expectancy.

"I'm 58 now. As the years roll by, in more and more of these possible futures I die, until by the age of 82 about half of my future selves will be dead and about half still alive." Numbers surviving are derived from the 2010 Interim Life Tables for England and Wales. For smokers, a hazard ratio of 2 is assumed.

Weekend ski trip: 3 micromorts. Downhill skiing in the Alps has an estimated average daily risk of 1.1 micromorts; we assume a long weekend.

Scuba dive: 5 micromorts. The British Sub-Aqua Club report that "the fatality rate for BSAC members is 0.54 fatalities per 100,000 dives" (it is twice as high for non-members).

Marathon run: 7 micromorts. A US study reports 26 cardiac deaths in 3,292,268 marathon runs.

"There's only about a 7 in a million chance of death. 7 micromorts." The US Parachuting Association reports 21 deaths in 3,000,000 jumps in 2010. The British Parachuting Association reports an average of around 10 micromorts a jump for trained parachutists, but only 3 micromorts for a tandem jump (based on 1 fatality in 380,000 jumps). My judgement was around 7.

"That's the equivalent of riding about forty miles on a motorbike." Around 7 miles on a motorbike per micromort in the UK in 2010.

Being 18: 500 micromorts. Annual hazard from all causes for an 18-year-old, 2010 Interim Life Tables for England and Wales.

Being 58: 7,000 micromorts. Annual hazard from all causes for a 58-year-old, 2010 Interim Life Tables for England and Wales.

Chance of each Premium Bond today winning a prize … 24,000-1. Premium Bonds website.

"A perfect sequence of five numbers – there should be 50 of these in the book." The first digit of the run must be 1, 2, 3, 4 or 5 (probability 1/2). Each subsequent digit has a 1/10 chance of being correct. So the expected number of sequences of $5$ numbers is $$ 1,000,000 \times \frac{1}{2} \times \frac{1}{10} \times \frac{1}{10} \times \frac{1}{10} \times \frac{1}{10} = 50. $$

"And the same number five times in a row, there should be about 100 of those." The expected number of runs of length $n$ in $N$ digits between 0 and 9 is $$ E[R_n|N] = N \times \left(\frac{1}{10} \right)^{n-1}. $$ If we take $N = 1,000,000$ and $n = 5$, then the expected number of runs of 5 identical digits occurring in the book is $$ 1,000,000 \times \left(\frac{1}{10} \right)^{4} = 100. $$

"You can even expect somewhere in a million random numbers, the same number to occur seven times in a row": $$ 1,000,000 \times \left(\frac{1}{10} \right)^{6} = 1. $$
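These expectations are easy to confirm by simulation; a minimal sketch (NumPy 1.20 or later is assumed for sliding_window_view):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 1_000_000, 5
digits = rng.integers(0, 10, size=N)

# flags marking positions where consecutive digits are equal
equal = digits[1:] == digits[:-1]
# a window of n-1 consecutive "equal" flags marks n identical digits in a row;
# overlapping windows are counted, matching E[R_n] = N * (1/10)**(n-1)
windows = np.lib.stride_tricks.sliding_window_view(equal, n - 1)
print(windows.all(axis=1).sum())   # typically close to the expected 100
```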
"Taking all the results together, the totals match the shape of randomness remarkably well." At each draw, a number has a 6/49 chance of being picked. The number of times each number occurs after $N$ draws therefore has a binomial distribution with mean $6N/49$ and variance $N \times \frac{6}{49} \times \frac{43}{49}$. A normal approximation to this distribution can be applied to the histogram of the counts.

"There's only a 1 in 14 million chance of me winning the jackpot." See standard calculations for lottery odds.

"There's only a 1 in 56 chance of me getting the smallest prize of £10."

"Overall the lottery only pays back 45% of the money it takes in." National Lottery website.

"A family having three children all with the same birthday, born in different years, but all three born on the same day of the year. Wow, what are the chances of that?" We assume no family planning and no February 29th births, so babies essentially pop out at random, equally likely to be born on any of 365 days. The important thing is that the first birthday is irrelevant: it can be anything. But the next two must be on the same day as the first, and this occurs with probability 1/365 × 1/365, about 1 in 133,000. See our web page on this.

"Since there's a million families in this country with three children..." There are 24,000,000 households in Great Britain, and 1,000,000 of them are made up of a couple and 3 or more dependent children (Social Trends 37, page 14, 2007).

An hour watching TV: LOSE 15 MINUTES OF YOUR LIFE! Based on 1,270 deaths, the EPIC Norfolk study reported a hazard ratio of 1.04 per hour of television per day, adjusted for other lifestyle factors including overall activity, corresponding to ¼ hour off your life expectancy for each hour watching TV.
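Two of these figures can be reproduced directly; a small sketch:

```python
import math

# Jackpot: match all 6 numbers drawn from 49
print(math.comb(49, 6))        # 13983816 -> "1 in 14 million"

# Three siblings sharing a birthday: the first is free, the next two must match
p_triple = (1 / 365) ** 2
print(round(1 / p_triple))     # 133225 -> about 1 in 133,000

# With ~1,000,000 three-child families, the expected number of such families
print(1_000_000 * p_triple)    # ~7.5
```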
Kinetic & Related Models, December 2018, 11(6): 1333-1358. doi: 10.3934/krm.2018052

A pedestrian flow model with stochastic velocities: Microscopic and macroscopic approaches

Simone Göttlich, Stephan Knapp and Peter Schillen
Department of Mathematics, University of Mannheim, 68131 Mannheim, Germany (corresponding author: S. Göttlich)
Received March 2017; Revised December 2017; Published June 2018

Abstract: We investigate a stochastic model hierarchy for pedestrian flow. Starting from a microscopic social force model, in which the pedestrians switch randomly between the two states stop and go, we derive an associated macroscopic model of conservation law type. To this end we use a kinetic mean-field equation and introduce a new problem-oriented closure function. Numerical experiments are presented to compare the above models and to show their similarities.

Keywords: interacting particle system, stochastic processes, mean field equations, hydrodynamic limit, macroscopic pedestrian model, numerical simulations.
Mathematics Subject Classification: Primary: 90B20, 65Cxx, 35L60.
Citation: Simone Göttlich, Stephan Knapp, Peter Schillen. A pedestrian flow model with stochastic velocities: Microscopic and macroscopic approaches. Kinetic & Related Models, 2018, 11(6): 1333-1358. doi: 10.3934/krm.2018052
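To make the stop-or-go switching concrete, here is a deliberately stripped-down sketch: one-dimensional, with no social-force interactions, and with switching rates and walking speed invented purely for illustration (the paper's actual model is richer):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt = 200, 10.0, 0.01
rate_stop, rate_go = 0.5, 1.0   # illustrative go->stop and stop->go rates

x = rng.uniform(0.0, 10.0, N)   # positions of N pedestrians on a line
going = np.ones(N, dtype=bool)  # True = "go", False = "stop"
v_des = 1.3                     # assumed desired walking speed (m/s)

for _ in range(int(T / dt)):
    to_stop = going & (rng.random(N) < rate_stop * dt)   # Markov jumps
    to_go = ~going & (rng.random(N) < rate_go * dt)
    going = (going & ~to_stop) | to_go
    x += np.where(going, v_des, 0.0) * dt

# the fraction walking approaches rate_go / (rate_go + rate_stop) = 2/3
print(going.mean())
```

The macroscopic conservation-law model in the paper arises by passing to a mean-field limit of dynamics of this kind, with interactions included.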
Figure captions: Fig. 1, velocity vector at the boundary. Fig. 2, overview of the deterministic and stochastic model hierarchy equations. Fig. 3, densities at different times: $u_{ij}^{\text{Mic},n}$ for the microscopic and $u_{ij}^{\text{Mac},n}$ for the macroscopic model. Fig. 4, mass balances at $x = -1$ and $x = 0$. Fig. 5, $L^1$ and $L^2$ error. Fig. 6, densities for $\lambda_1$ at different times. Fig. 8, mass balances at $x = 1$. Fig. 9, $L^1$ and $L^2$ errors.

Table 1 (numerical error and EOOC for the first example): $\Delta x = 1/5$: err 0.3251, EOOC -; $\Delta x = 1/10$: err 0.1755, EOOC 0.8897.
Table 2 (numerical error and EOOC for the second example with rate function $\lambda_1$): $\Delta x = 1/5$: err 0.4457, EOOC -; $\Delta x = 1/10$: err 0.2215, EOOC 1.0085.
Nexus Network Journal, April 2014, Volume 16, Issue 1, pp 69-87

Design and Fabrication of Free-Form Reciprocal Structures
Dario Parigi and Poul Henning Kirkegaard

Due to their non-hierarchical nature, the geometry of reciprocal assemblies cannot be described conveniently with the available CAD modelling tools or by hierarchical, associative parametric modellers. The geometry of a network of reciprocally connected elements is a characteristic that emerges, bottom-up, from the complex interaction between all the elements' shapes, topology and positions, and requires numerical solution of the elements' geometric compatibility. A computational method, the "Reciprocalizer", has been developed by the authors to predict and control the geometry of large networks of reciprocally connected elements; it has now been perfected and included in a larger procedure that can be regarded as an extremely flexible and capable design tool for the generation of free-form reciprocal structures. The design tool has been applied to the design and realization of a free-form structure composed of 506 round, un-notched wooden elements with a diameter of 22 mm. This paper focuses on the geometry of reciprocal systems and the unique issues of fabrication posed by such assemblies.

Keywords: reciprocal frames, free-form, geometry, fabrication

Reciprocal structures have been studied and used in the past for different needs and purposes, and their presence throughout history is scattered and discontinuous (Pugnale and Kirkegaard 2011). Recently they have been classified alongside other woven structures such as knitting, fabrics and basket works. This is due to the fact that the elements forming them are interwoven with one another, with the peculiarity that reciprocal structures use elements which are stiff and short compared to the size of the entire structure (Baverel and Popovic 2011). In the world of construction, the application of the principle of reciprocity requires: the presence of at least two elements allowing the generation of a certain forced interaction; that each element of the composition support and be supported by another one; and that every supported element meet its support along the span and never at the vertices (Pugnale and Kirkegaard 2011). When a superimposition joint is used (i.e., un-notched bars sit on the top and on the bottom of each other, as shown in Fig. 4a or 5a), reciprocal structures develop naturally out-of-plane because the elements' axes are not aligned; for this reason they can be defined as intrinsically three-dimensional. Due to their non-hierarchical nature, the geometry of a reciprocal assembly is extremely difficult to predict and control, and it cannot be described with available CAD software or by hierarchical, associative parametric modellers. The geometry of a network of reciprocally connected elements is a characteristic that emerges, bottom-up, from the complex interaction between all the elements' shapes, topology and positions. The three-dimensionality can be regarded as a design opportunity, for the possibility to create three-dimensional configurations with simple joints and standardized elements. A computational method called the "Reciprocalizer" has been developed by the authors to predict and control the geometry of large networks of reciprocally connected elements (Parigi and Sassone 2012; Parigi and Kirkegaard 2013, 2014).
In the first part of the present article, the peculiarities of the geometry of reciprocal structures are presented, a tentative classification of different configurations on the basis of the joint type is discussed, and an explanation is provided of how the Reciprocalizer handles those configurations and the morphological characteristics of the resulting assemblies. In the second part, we deal with the issues of fabrication and the solutions devised for an efficient fabrication process. In the third part, we describe the entire process, from design to fabrication, of a free-form reciprocal structure composed of 506 round, un-notched wooden sticks of 22 mm diameter (Fig. 1), with a particular focus on fabrication and on the passage from the digital model to the prototype (Fig. 2). (Fig. 1: a starting free-form geometry, b reciprocal mesh generated to closely fit the starting geometry, c the geometry and the mesh overlapped. Fig. 2: a digital model of the three-dimensional reciprocal geometry, b realized prototype of the three-dimensional reciprocal geometry.)

Geometry of Reciprocal Structures

As we said, reciprocal structures that use a superimposition joint are intrinsically three-dimensional. The extent and direction of the out-of-plane deviation depend on the values of three geometric parameters necessary to describe each connection between two elements b_i and b_j:
- the engagement length l_ij, which measures where each element is supported along the supporting element (Fig. 3: a engagement length l_ij, b effect of the engagement length on the assembly morphology);
- the eccentricity e_ij, which measures the distance between the elements' axes and depends directly on the elements' thickness and shape (Fig. 4: a eccentricity e_ij, b effect of different eccentricity values on the assembly morphology);
- the specification of whether element b_i sits on the top or on the bottom of element b_j with respect to a reference vector r_j, whose tip indicates the top position (Fig. 5: a top/bottom position, b effect of different combinations of top/bottom support positions on the assembly morphology).

It is possible to observe the effect of changing each of the geometric parameters on the morphology of a three-element fan (Figs. 3b, 4b, 5b). The out-of-plane deviation increases as the eccentricity e_ij increases; on the contrary, for large values of the engagement length l_ij the out-of-plane deviation is low, and it greatly increases as the engagement length approaches zero. The top/bottom position involves a topological change in the nature of the joint. In a three-element fan, the four possible combinations of top/bottom positions determine four distinct characteristic geometries. When all elements are placed on the top, or all on the bottom, of each other, respectively a dome-like or a reversed dome-like geometry is created. On the other hand, when elements connect with mixed top/bottom positions, geometries are created that will prove to be fundamental for the creation of free-form shapes. As a consequence of the non-hierarchical nature of reciprocal assemblies, these parameters are not independent, and any change in one parameter value must be followed by the simultaneous adjustment of all the other parameter values in the assembly in order to maintain geometric compatibility. This is why the geometry of a reciprocal assembly cannot be conveniently described with available CAD modelling tools or by hierarchical, associative parametric modellers.
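As a data-structure sketch (the names are ours, not the Reciprocalizer's), each connection can be captured by exactly these three parameters; for round un-notched bars in contact, the eccentricity is simply the sum of the two radii:

```python
from dataclasses import dataclass

@dataclass
class Connection:
    """Geometric parameters of one reciprocal connection b_i -> b_j."""
    i: int               # index of the supported element b_i
    j: int               # index of the supporting element b_j
    engagement: float    # l_ij: where the contact sits along the supporting element
    eccentricity: float  # e_ij: distance between the two element axes
    on_top: bool         # True if b_i sits on top of b_j (w.r.t. reference vector r_j)

def contact_eccentricity(radius_i: float, radius_j: float) -> float:
    """For round bars touching each other, e_ij = r_i + r_j."""
    return radius_i + radius_j

print(contact_eccentricity(11.0, 11.0))  # 22.0 mm for the 22 mm-diameter sticks
```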
The geometry of a network of reciprocally connected elements requires the use of iterative numerical methods to solve the physical interplay between all the sticks in the assembly. Physical models can be used at an exploratory level, to discover new configurations (Parigi and Pugnale 2014). However, the use and development of the typology requires a design tool that allows the geometry of large networks of reciprocally connected elements to be predicted and controlled. The Reciprocalizer mentioned earlier is capable of handling different families of reciprocal systems based on the joint type.

Classification and Morphology of Families of Reciprocal Structures Based on the Joint Type

In the simplest possible reciprocal configurations, each locally supported element sits on the top of the supporting element with a superimposition joint. When structures are composed with this rule, the friction of the joint is sufficient to keep them stable, and thus they can be erected by stacking one element on top of another. The arrangements sketched by Leonardo da Vinci in the Codex Atlanticus belong to this family (see Houlsby 2014). However, the rule places severe constraints on which geometries can be obtained; generally speaking, the structures obtained have positive Gaussian curvature. When considering practical applications, a friction-only joint should be avoided in favour of a bilateral joint to improve the robustness of the assembly. When a bilateral joint is used instead of a friction-only joint, the structures continue to exhibit all of the properties that characterize reciprocal systems as defined in the Introduction. With a bilateral joint, the supported element can sit either on the top or on the bottom of the supporting element; the joint will exhibit prevalent compression when sitting on the top and prevalent tension when sitting on the bottom. The use of both top and bottom support positions activates one of the most intriguing characteristics of reciprocal systems: the possibility to create free-form shapes with standardized elements. Additionally, when a bilateral joint is used, elements can be arranged without the superimposition joint, with their axes aligned. Historically, the proposals for a reciprocal roof by Sebastiano Serlio belong to this family (see Houlsby 2014). We can thus describe three families of reciprocal systems based on their joint type:
1. friction-only joint (top support position);
2. bilateral superimposition joint (top or bottom support position);
3. bilateral joint with aligned element axes.

Each of these typologies presents interesting and unique properties, in terms of both geometry and possible practical applications.

Reciprocal Structures Families Obtained with the Reciprocalizer

The architecture of the Reciprocalizer algorithm was set up for the highest generality, to provide the possibility to generate configurations in any of the three families of reciprocal structures just listed. The Reciprocalizer is thus capable of generating configurations in all three conditions: where each supported element meets the supporting element in the top position, where each supporting element is met indifferently in the top or in the bottom position, and where elements join with their axes aligned. The diagram in Fig. 6a represents a starting test configuration made of nine elements connected at their ends and converging, in groups of three, to four central nodes. Each element is defined by the specification of the nodal coordinates of its end points.
An additional table must indicate the topology of connections between elements. The Reciprocalizer determines the geometry of the reciprocal joint by computing the three geometric parameters (eccentricity e_ij, engagement length l_ij, and top/bottom support position) for each b_i-b_j connection and adjusting the values according to the goal configuration, which may fall into any one of the three families described above. (Fig. 6: a starting configuration, b solution 1: top support configuration, c solution 2: mixed top/bottom support configuration, d solution 3: aligned-axes joint.) In solution 1 (Fig. 6b), the support position is set for all connections b_i-b_j to be the top position. Therefore, each locally supported element meets the supporting element in the top position. The resulting configuration can be erected with friction-only joints, stacking one element on top of another. In solution 2 (Fig. 6c), the algorithm ignores the specification of the top/bottom support position. Therefore, any top/bottom position is accepted, and the only condition set at each connection b_i-b_j is that geometric compatibility is ensured by the contact position. As a consequence, the algorithm computes the reciprocal configuration which is spatially closest to the starting configuration geometry, and it will generate configurations with mixed top/bottom support positions. With those settings the algorithm exhibits the ability to adapt reciprocal assemblies to any free-form shape. A requirement for such configurations is that the joint must be bilateral if the supported element connects to the supporting element in the bottom position. In solution 3 (Fig. 6d), elements join with their axes aligned. In this case, the value of the eccentricity is set to 0, regardless of the elements' shape. In these configurations the elements' shape does not affect the eccentricity value, and the top/bottom position parameter does not apply. This solution has interesting practical applications, for the possibility to create flat configurations and for the independence of the overall geometry of the assembly from the elements' shape.

Issues of Fabrication

Despite the apparent simplicity of the superimposition joint, when large, controlled networks of elements must be assembled, reciprocal structures require remarkable precision and uncommon expedients for their fabrication, because the overall geometry of the assemblage is the result of the complex and simultaneous interaction between all the elements' positions and sizes. The values of the geometric parameters at each connection therefore depend on the values of all the others in the assembly. The practical consequence is that the elements must be assembled according to the exact parameter values: if even a single connection sits outside a small tolerance, all the other elements' positions are affected and the overall geometry is modified. The extent of these modifications might require adjusting all the other connections in the assembly in order to restore geometric compatibility. Therefore, if one element is misplaced during assembly, it can cause a geometric incompatibility with the other elements assembled according to the design parameter values.
This requirement for precision is challenged by the fact that each joint parameter is unique in free-form geometries, and in principle an element's spatial position requires six coordinates to be determined (either the positions of its two end points with their coordinates x, y, z, or, alternatively, the coordinates of one end point of the element and its rotations φ_x, φ_y, φ_z around the x-, y- and z-axes). For the realization of the reciprocal mesh dome of the Rokko Observatory, whose architectural concept by architect Hiroshi Sambuichi had been constructively and structurally developed by ARUP (Kidokoro and Goto 2011), the complexity of assembling a three-dimensional reciprocal structure was immediately evident to the engineers and the contractors. The complexity was tackled by creating the geometry of the dome from the repetition, in a symmetrical circular array, of an identical "master" slice. This strategy made it possible to create a single "gig", or temporary scaffolding, for the master slice, which supports the steel elements in their final position before they are fixed through welding.

The Scaffolding-Free Fabrication

We propose a method for the fabrication of free-form reciprocal structures that does not require temporary scaffolding to keep the elements in their final position before they can be fixed. The method is based on considerations regarding the kinematic determinacy of reciprocal systems, with a particular focus on networks composed of three-element fans. A single three-element fan whose elements are connected with internal hinges is a kinematically determinate assembly in both two- and three-dimensional configurations (i.e., it exhibits only the free-body motions when unconstrained to the ground; Parigi et al. 2014). As a direct consequence, each fan, when complete, is rigid and retains its shape (Fig. 7: a a three-element assembly is internally kinematically determinate and exhibits only the three rigid-body motions; b a four-element assembly is kinematically indeterminate and exhibits one inextensional mechanism besides the three rigid-body motions).
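These mobility counts can be reproduced with the planar Grübler formula for unconstrained linkages; this is a simplified planar check consistent with the figures quoted from Parigi et al. (2014), whose general kinematic formulation is richer:

```python
def planar_mobility(n_bodies: int, n_pin_joints: int) -> int:
    """Total degrees of freedom of n rigid bodies connected by pin joints
    in the plane, with no ground constraints (each pin removes 2 DOF)."""
    return 3 * n_bodies - 2 * n_pin_joints

print(planar_mobility(3, 3))  # 3 -> only the three rigid-body motions (determinate fan)
print(planar_mobility(4, 4))  # 4 -> three rigid-body motions + one inextensional mechanism
```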
The geometry of a network of fans can be understood as being formed at the local level of the rigid fan units, which, when connected among themselves into a network, determine the global geometry. Therefore, if the network is assembled by the successive addition of individual elements, starting from an arbitrary position, each fan retains, upon completion, its final shape and position with respect to the adjacent fans. The geometry is assembled bottom-up, by enabling, at the local level of the fan, the complex interaction between the elements' shape, topology and position that is characteristic of reciprocal systems. Our ability to build the overall geometry relies uniquely on our ability to precisely assemble each element with the adjacent one. When circular elements are used, each connection between elements b_i-b_j is uniquely determined by a single contact point on each of the two connecting elements, whose position must be identified on their surfaces, and which we call P_ij on element b_i and P_ji on element b_j. We should then be able to connect the elements so that P_ij and P_ji are coincident. A statically determinate three-element fan is formed upon completion of the connections between elements b_i-b_j, b_j-b_k, b_k-b_i, or, respectively, the coupling of points P_ij-P_ji, P_jk-P_kj, P_ki-P_ik. When the three connections are completed, the fan is rigid, and each element retains its three-dimensional positioning with respect to the others in the assembly (Fig. 8: the three connections required to form a three-element fan). The opportunity to build the assembly bottom-up is provided by the fact that the three-bar fan is kinematically determinate. When we consider, on the other hand, a single four-element fan whose elements are connected with internal hinges, it is a kinematically indeterminate assembly in both two- and three-dimensional configurations (i.e., it exhibits, besides the free-body motions, one additional inextensional mechanism; Fig. 7b). The assembly is a mechanism. However, the contact points between bars do not change during the mechanism, and therefore a similar assembly procedure might be adopted here as well. Further studies might suggest that it is convenient to suppress, at least temporarily, the additional inextensional mechanism for the purpose of fabrication. A network of four-bar fans is kinematically indeterminate only if the assembly is formed by a regular repetition of fans; the network is kinematically determinate if it is formed by a non-regular repetition of fans (Parigi et al. 2014).

The Prototype: From Digital Design to Manufacturing

The Reciprocalizer was included in a larger procedure that can be regarded as an extremely flexible and capable design tool for free-form reciprocal structures. This design tool has been applied to the design and realization of a free-form reciprocal structure composed of 506 round, un-notched wooden sticks of 22 mm diameter. This paper describes the overall process from design to fabrication and focuses especially on the unique fabrication issues of such assemblies. The structure was built by the students of the Master of Science programme in "Architectural Design" during a 1-week construction workshop that we organized at Aalborg University in the 2012 fall semester (Fig. 9: night shot of the final prototype exhibited along the Limfjord, on the Aalborg waterfront).

The design process includes: the definition of the starting free-form surface; the definition of a network of elements connected at their ends with no axis eccentricity; the generation of the corresponding reciprocal configuration with the Reciprocalizer algorithm; the calculation and output of the fabrication data; and the fabrication itself.

The Starting Free-Form Surface

The design begins with the definition of a starting geometry. In this case a doubly-curved, free-form surface with no axis of symmetry (Fig. 10) was chosen in order to test both the capability of the reciprocal system to fit highly irregular surfaces with both positive and negative Gaussian curvature (Fig. 11), and the ability of the Reciprocalizer to transform each joint into a corresponding reciprocal joint. (Fig. 10: the starting free-form geometry, a elevation aa, b elevation bb, c plan, d perspective. Fig. 11: curvature analysis on the starting free-form geometry.)

The Network of Elements

The next step is the definition of a network of elements connected at their ends with no axis eccentricity, drawn upon the previously defined surface. Elements at this stage have no thickness, and only their axes are represented. In this case, a Voronoi diagram was chosen to cover the free-form surface (Fig. 12); a sketch of this step appears below. The resulting mesh consists of 506 elements distributed over the surface.
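A minimal sketch of generating such a network in a surface's two-dimensional parameter domain, using SciPy's Voronoi routine (mapping the diagram back onto the 3D surface, which the actual workflow requires, is omitted here):

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)
seeds = rng.uniform(0.0, 1.0, size=(50, 2))  # seed points in the UV domain
vor = Voronoi(seeds)

# Each finite Voronoi ridge becomes the axis of one structural element;
# ridges reaching infinity (marked by index -1) are skipped.
axes = [vor.vertices[pair] for pair in vor.ridge_vertices if -1 not in pair]
print(len(axes), "element axes")
```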
(Fig. 12: Voronoi diagram on the starting free-form geometry, a elevation aa, b elevation bb, c plan, d perspective.)

The Reciprocalizer

Starting from the Voronoi diagram, and after assigning the elements' thickness, the corresponding reciprocal configuration is obtained. Each node of the Voronoi diagram where three elements converge is turned into a corresponding three-element fan (Fig. 13). The Reciprocalizer is set to ignore the specification of the top/bottom support position and to compute the geometric compatibility of elements by physical contact only. As a consequence, the algorithm generates a configuration with mixed top/bottom support positions and variable engagement lengths, so that the overall geometry closely fits the starting free-form geometry. (Fig. 13: the reciprocal configuration, a elevation aa, b elevation bb, c plan, d perspective.)

The Fabrication Data

A table of values outputs the data needed for fabrication (i.e., for the element b_i, the position of the contact point P_ij with each of the connecting elements b_j). Point P_ij is located on the surface of element b_i, and its position can be described with two values: its distance D_ij from one reference end, and the angle α_ij that it forms with a reference line arbitrarily set on the side of the element, measured from the element's axis in a perpendicular plane (Fig. 14: the distance D_ij and the angle α_ij shown on a sample element). Except for the elements on the boundaries, each element in general connects with four adjacent elements, and therefore four contact points must be described (Fig. 15: a 4 × 4 sample of the fabrication table; the complete table has dimension 2,024 × 4). In this case the table has 2,024 lines, each one describing the position of one contact point. In principle, this table is the only document needed for the entire fabrication process. However, the digital model of the structure was also supplied as a tool to visually locate and double-check the position of the elements in space.

The Fabrication Process

The fabrication process starts with cutting the elements to length, labelling them with their identification ID and marking the position of the contact points P_ij on each element (Fig. 16: the marking process).
After the sphere is placed, the elements need to be fixed in position, and the knot in Fig. 21 is used. Joint 1 In joint 2, the drill hole goes through the elements, and a rope is threaded through both elements (Fig. 20), and an initial knot is used to keep the elements in place, before the final fixing knot of Fig. 21 is used. Joint 2 was used for the final realization of the prototype. The knot fixing the joint The sequence of construction starts from the centre and gradually extends to the sides. This assembling sequence allows for immediate detection of, if any, errors in the construction, because the geometric compatibility is checked each time a ring of the dome is complete (Fig. 22). The joint adopted allowed a two-step fixing: initially elements are presented and loosely fixed with a knot passing through the elements, and then the final fixing knot is performed in a second stage. The assembling process was smooth and done with the simultaneous collaboration of groups of around thirty people. The final prototype closely matches the digital model shape (Figs. 2, 23). a Plan of the construction sequence in steps, b step i, e step n. b–e Construction sequence Picture of the final prototype and side-by-side comparison with the digital design model. Photo: Mathias Sønderskov Our experience has proved that the structural typology of reciprocally connected elements with superimposition joints, if supported by a custom developed design and fabrication process, is well-suited to build low-cost, free-form shapes with standardized elements. The design and fabrication of the prototype had two main objectives: (1) to test the results of the Reciprocalizer algorithm, and (2) to test a fast, efficient construction method, scaffolding-free, suited for any irregular free-form geometries. The experiment was successful in confirming and validating the output obtained from the Reciprocalizer algorithm and the ability to match the shape of the digital model into the prototype by connecting the elements according to the precise position indicated in the fabrication table. It was also successful in demonstrating the possibility to build the structure without scaffolding, by starting from a central point and gradually extending to the sides, taking advantage of the kinematical determinacy of the three-element reciprocal unit. The construction of the dome also proved to be an engaging and almost magical experience when the expected shape emerged from the complex interaction of hundreds of precisely assembled elements with a low-tech fabrication technique. The authors would like to thank the Department of Architecture, Design and Media Technology and the Department of Civil Engineering, Aalborg University, for providing working spaces and funds for the construction workshop, the students of the first semester of the Master of Science program in "Architectural Design" at Aalborg University for their intense participation and Eng. Ryota Kidokoro, invited guest of the workshop. All photos are by Dario Parigi except Fig. 23 by Mathias Sønderskov. Baverel, O., and O. Popovic Larsen. 2011. A review of woven structures with focus on reciprocal systems—nexorades. International Journal of Space Structures 26: 4.Google Scholar Houlsby, G.T. 2014. John Wallis and the numerical analysis of structures. Nexus Network Journal 16: 1. (in this same issue).CrossRefGoogle Scholar Kidokoro, R., and K. Goto. 2011. Rokko observatory—application of geometric engineering. 
In Proceedings of the international symposium on algorithmic design for architecture and urban design, 14–16 March 2011. Tokyo: ALGODE.Google Scholar Parigi, D., and P.H. Kirkegaard. 2013. The reciprocalizer: a design tool for reciprocal structures. In Proceedings of CC2013, 14th international conference on civil, structural and environmental engineering computing, 3–6 September 2013. Italy: Cagliari.Google Scholar Parigi, D., and P.H. Kirkegaard. 2014. The reciprocalizer: an agile design tool for reciprocal structures. Nexus Network Journal 16: 1. (in this same issue).CrossRefGoogle Scholar Parigi, D., Kirkegaard, P.H., and M. Sassone. 2012. Hybrid optimization in the design of reciprocal structures. In Proceedings of the IASS symposium 2012: from spatial structures to space structures, 21–24 May 2012, Seoul.Google Scholar Parigi, D., M. Sassone, P.H. Kirkegaard, and P. Napoli. 2014. General kinematic formulation of planar reciprocal systems. Nexus Network Journal 16: 1. (in this same issue).CrossRefGoogle Scholar Parigi, D., and A. Pugnale. 2014. Three-dimensionality in structural reciprocity: concepts and generative rules. Nexus Network Journal 16: 1. (in this same issue).CrossRefGoogle Scholar Pugnale, A., Parigi, D., Sassone, M., and P.H. Kirkegaard. 2011. The principle of structural reciprocity: history, properties and design issues. In Proceedings of the IABSE–IASS, symposium 2011: taller, longer, lighter, 20–23 September 2011, London.Google Scholar © Kim Williams Books, Turin 2014 1.Department of Civil EngineeringAalborg UniversityAalborgDenmark 3.Department of Civil EngineeringAarhus UniversityAarhusDenmark Parigi, D. & Kirkegaard, P.H. Nexus Netw J (2014) 16: 69. https://doi.org/10.1007/s00004-014-0177-9 Publisher Name Springer Basel
Discrete & Continuous Dynamical Systems - S, June 2012, 5(3): 657-670. doi: 10.3934/dcdss.2012.5.657

Support properties of solutions to nonlinear parabolic equations with variable density in the hyperbolic space

Fabio Punzo, Dipartimento di Matematica "G. Castelnuovo", Università di Roma "La Sapienza", P.le A. Moro 5, I-00185 Roma
Received June 2010; Revised August 2010; Published October 2011

Abstract: We consider the Cauchy problem for a class of nonlinear parabolic equations with variable density in the hyperbolic space, assuming that the initial datum has compact support. We provide simple conditions, involving the behaviour of the density at infinity, under which the support of every nonnegative solution is not compact at some positive time, or instead remains compact for every positive time. These results extend to the case of the hyperbolic space those given in [8] for the Cauchy problem in $\mathbb{R}^n$.

Keywords: Sub- and supersolutions, Laplace-Beltrami operator, support of solutions, comparison principles, hyperbolic space.
Mathematics Subject Classification: Primary: 35K61, 35K67, 35B99; Secondary: 35B40, 35B5.
Citation: Fabio Punzo. Support properties of solutions to nonlinear parabolic equations with variable density in the hyperbolic space. Discrete & Continuous Dynamical Systems - S, 2012, 5(3): 657-670. doi: 10.3934/dcdss.2012.5.657
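The abstract does not restate the equation class; a typical prototype in this line of work (compare the inhomogeneous porous medium equation of Kamin-Rosenau and Reyes-Vazquez, references [10] and [15] below, transplanted from $\mathbb{R}^n$ to $\mathbb{H}^n$) would read

$$ \rho(x)\, \partial_t u = \Delta_{\mathbb{H}^n}\left(u^m\right), \qquad u(\cdot,0) = u_0 \ \text{with compact support}, \quad m \ge 1, $$

where $\rho > 0$ is the variable density and $\Delta_{\mathbb{H}^n}$ denotes the Laplace-Beltrami operator on the hyperbolic space; the precise class studied in the paper may differ.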
[14] F. Punzo, Well-posedness of the Cauchy problem for nonlinear parabolic equations with variable density in the hyperbolic space, Nonlin. Diff. Eq. Appl.
[15] G. Reyes and J. L. Vazquez, The Cauchy problem for the inhomogeneous porous medium equation, Netw. Heterog. Media, 1 (2006), 337. doi: 10.3934/nhm.2006.1.337.
Formalized Mathematics (ISSN 1426-2630), Volume 8, Number 1 (1999).

Yatsuka Nakamura, Andrzej Trybulec, Czeslaw Bylinski. Bounded Domains and Unbounded Domains, Formalized Mathematics 8(1), pages 1-13, 1999. MML Identifier: JORDAN2C
Summary: First, notions of inside components and outside components are introduced for any subset of $n$-dimensional Euclid space. Next, notions of the bounded domain and the unbounded domain are defined using the above components. If the dimension is larger than 1, and if a subset is bounded, an unbounded domain of the subset coincides with an outside component (which is unique) of the subset. For a sphere in $n$-dimensional space, a similar fact is true for a bounded domain. In 2-dimensional space, any rectangle also has this property. We discuss relations between the Jordan property and the concept of boundary, which are necessary to find points in domains near a curve. In the last part, we give a sufficient criterion for belonging to the left component of some clockwise oriented finite sequences.

Andrzej Trybulec. Rotating and Reversing, Formalized Mathematics 8(1), pages 15-20, 1999. MML Identifier: REVROT_1
Summary: Quite a number of lemmas for the Jordan curve theorem, as yet in the case of special polygonal curves, have been proved. By ``special'' we mean that it is a polygonal curve with edges parallel to the axes; actually the lemmas have been proved, mostly, for the triangulations, i.e., for the finite sequences that define the curve. Moreover some of the results deal only with a special case: \begin{itemize} \item[-] finite sequences are clockwise oriented, \item[-] the first member of the sequence is the member with the lowest ordinate among those with the highest abscissa (N-min $f,$ where $f$ is a finite sequence, in the Mizar jargon). \end{itemize} In the change of the orientation one has to reverse the sequence (the operation introduced in \cite{FINSEQ_5.ABS}) and to change the second restriction one has to rotate the sequence (the operation introduced in \cite{FINSEQ_6.ABS}). The goal of the paper is to prove mostly simple facts about the relationship between properties and attributes of a finite sequence and its rotation (similar results about reversing had been proved in \cite{FINSEQ_5.ABS}). Some of them deal with recounting parameters, others with properties that are invariant under rotation. We also prove that a finite sequence is either clockwise oriented or becomes so after reversing. Everything is proved for the so-called standard finite sequences, which means that if a point belongs to the sequence then every point with the same abscissa or the same ordinate that belongs to the polygon also belongs to the finite sequence. It does not seem that this requirement causes serious technical obstacles.

Andrzej Trybulec, Yatsuka Nakamura. On the Components of the Complement of a Special Polygonal Curve, Formalized Mathematics 8(1), pages 21-23, 1999. MML Identifier: SPRECT_4
Summary: By a special polygonal curve we mean a simple closed curve that is a polygon and moreover has edges parallel to the axes. We continue the formalization of the Takeuti-Nakamura proof \cite{TAKE-NAKA} of the Jordan curve theorem. In the paper we prove that the complement of a special polygonal curve consists of at least two components. Together with the theorem that it has at most two components, this completes the proof that a special polygonal curve cuts the plane into exactly two components.

Czeslaw Bylinski.
Gauges, Formalized Mathematics 8(1), pages 25-27, 1999. MML Identifier: JORDAN8

Christoph Schwarzweller. The Ring of Integers, Euclidean Rings and Modulo Integers, Formalized Mathematics 8(1), pages 29-34, 1999. MML Identifier: INT_3
Summary: In this article we introduce the ring of integers, Euclidean rings and integers modulo $p$. In particular we prove that the ring of integers is a Euclidean ring and that the integers modulo $p$ constitute a field if and only if $p$ is a prime.

Yatsuka Nakamura. Logic Gates and Logical Equivalence of Adders, Formalized Mathematics 8(1), pages 35-45, 1999. MML Identifier: GATE_1
Summary: This is an experimental article which shows that logical correctness of logic circuits can be easily proven by the Mizar system. First, we define the notion of logic gates. Then we prove that an MSB carry of a `4 Bit Carry Skip Adder' is equivalent to an MSB carry of a normal 4-bit adder. In the last theorem, we show that the outputs of the `4 Bit Carry Look Ahead Adder' are equivalent to the corresponding outputs of the normal 4-bit adder. The policy here is as follows: when the functional (semantic) correctness of a system is already proven, and the correspondence of the system to a (normal) logic circuit is given, it is enough to prove the logical equivalence between them to establish the correctness of the new circuit. Although the article is very fundamental (it contains few environment files), it can be applied to real problems. The key of the method introduced here is to put the specification of the logic circuit into Mizar propositional formulae, and to use the strong inference ability of the Mizar checker. The proof is done formally so that the automation of the proof writing is possible. Even in the 5.3.07 version of Mizar, it can handle a formula of more than 100 lines, and a formula which contains more than 100 variables. This means that the Mizar system is powerful enough to prove the logical correctness of middle-scale logic circuits.

Bartlomiej Skorulski. The Sequential Closure Operator in Sequential and Frechet Spaces, Formalized Mathematics 8(1), pages 47-54, 1999. MML Identifier: FRECHET2

Adam Grabowski. Properties of the Product of Compact Topological Spaces, Formalized Mathematics 8(1), pages 55-59, 1999. MML Identifier: BORSUK_3

Artur Kornilowicz. Compactness of the Bounded Closed Subsets of ${\cal E}^2_{\rm T}$, Formalized Mathematics 8(1), pages 61-68, 1999. MML Identifier: TOPREAL6
Summary: This paper contains theorems which describe the correspondence between topological properties of subsets of real numbers introduced in \cite{RCOMP_1.ABS} and introduced in \cite{PRE_TOPC.ABS}, \cite{COMPTS_1.ABS}. We also show the homeomorphism between the cartesian product of two copies of $R^1$ and ${\cal E}^2_{\rm T}$. The compactness of the bounded closed subset of ${\cal E}^2_{\rm T}$ is proven.

Adam Grabowski. Hilbert Positive Propositional Calculus, Formalized Mathematics 8(1), pages 69-72, 1999. MML Identifier: HILBERT1

Artur Kornilowicz. Homeomorphism between [:${\cal E}^i_{\rm T}, {\cal E}^j_{\rm T}$:] and ${\cal E}^{i+j}_{\rm T}$, Formalized Mathematics 8(1), pages 73-76, 1999. MML Identifier: TOPREAL7
Summary: In this paper we introduce the cartesian product of two metric spaces. As the distance between two points in the product we take the maximal distance between the coordinates of these points. In the main theorem we show the homeomorphism between [:${\cal E}^i_{\rm T}, {\cal E}^j_{\rm T}$:] and ${\cal E}^{i+j}_{\rm T}$.

Katsumi Wasaki, Noboru Endou. Full Subtracter Circuit.
Part I, Formalized Mathematics 8(1), pages 77-81, 1999. MML Identifier: FSCIRC_1
Summary: We formalize the concept of the full subtracter circuit, define the structures of bit subtract/borrow units for binary operations, and prove the stability of the circuit.

Yuguang Yang, Katsumi Wasaki, Yasushi Fuwa, Yatsuka Nakamura. Correctness of Binary Counter Circuits, Formalized Mathematics 8(1), pages 83-85, 1999. MML Identifier: GATE_2
Summary: This article introduces the verification of the correctness of the operations and the specification of the 3-bit counter. Both cases, without reset input and with reset input, are considered. The proof was proposed by Y. Nakamura in \cite{GATE_1.ABS}.

Yuguang Yang, Katsumi Wasaki, Yasushi Fuwa, Yatsuka Nakamura. Correctness of Johnson Counter Circuits, Formalized Mathematics 8(1), pages 87-91, 1999. MML Identifier: GATE_3
Summary: This article introduces the verification of the correctness of the operations and the specification of the Johnson counter. We formalize the concepts of 2-bit, 3-bit and 4-bit Johnson counter circuits with a reset input, and define the specification of the state transitions without the minor loop.

Noboru Endou, Artur Kornilowicz. The Definition of the Riemann Definite Integral and some Related Lemmas, Formalized Mathematics 8(1), pages 93-102, 1999. MML Identifier: INTEGRA1
Summary: This article introduces the Riemann definite integral on a closed interval of reals. We present the definitions and related lemmas of the closed interval. We formalize the concept of the Riemann definite integral and the division of the closed interval of reals, and prove the additivity of the integral.

Takashi Mitsuishi, Yuguang Yang. Properties of the Trigonometric Function, Formalized Mathematics 8(1), pages 103-106, 1999. MML Identifier: SIN_COS2
Summary: This article introduces the monotone increase and decrease of {\em sinus} and {\em cosine}, and definitions of hyperbolic {\em sinus}, hyperbolic {\em cosine} and hyperbolic {\em tangent}, together with some related formulas about them.

Shunichi Kobayashi, Yatsuka Nakamura. Predicate Calculus for Boolean Valued Functions. Part II, Formalized Mathematics 8(1), pages 107-109, 1999. MML Identifier: BVFUNC_4
Summary: In this paper, we have proved some elementary predicate calculus formulae containing the quantifiers of Boolean valued functions with respect to partitions. Such a theory is an analogue of usual predicate logic.

Shunichi Kobayashi, Yatsuka Nakamura. Propositional Calculus for Boolean Valued Functions. Part I, Formalized Mathematics 8(1), pages 111-113, 1999. MML Identifier: BVFUNC_5
Summary: In this paper, we have proved some elementary propositional calculus formulae for Boolean valued functions.

Shunichi Kobayashi, Yatsuka Nakamura. Propositional Calculus for Boolean Valued Functions. Part II, Formalized Mathematics 8(1), pages 115-117, 1999. MML Identifier: BVFUNC_6

Jing-Chao Chen. Insert Sort on SCMFSA, Formalized Mathematics 8(1), pages 119-127, 1999. MML Identifier: SCMISORT
Summary: This article describes the insert sorting algorithm using macro instructions such as the if-Macro (conditional branch macro instructions), for-loop macro instructions and While-Macro instructions, etc. From the viewpoint of initialization, we generalize the halting and computing problem of the While-Macro. Generally speaking, it is difficult to judge whether the While-Macro is halting or not by way of loop inspection.
For this reason, we introduce a practical and simple method, called body-inspection. That is, in many cases, we can prove the halting of the While-Macro by only verifying the nature of the body of the While-Macro, rather than the While-Macro itself. In fact, we have used this method in justifying the halting of the insert sorting algorithm. Finally, we prove that the insert sorting algorithm given in the article is autonomic and its computing result is correct.

Yuguang Yang, Katsumi Wasaki, Yasushi Fuwa, Yatsuka Nakamura. Correctness of a Cyclic Redundancy Check Code Generator, Formalized Mathematics 8(1), pages 129-132, 1999. MML Identifier: GATE_4
Summary: We prove the correctness of the division circuit and the CRC (cyclic redundancy check) circuit by verifying the contents of the register after one shift. Circuits with a 12-bit register and a 16-bit register are taken as examples. All the proofs are done formally.

Andrzej Trybulec. Defining by Structural Induction in the Positive Propositional Language, Formalized Mathematics 8(1), pages 133-137, 1999. MML Identifier: HILBERT2
Summary: The main goal of the paper consists in proving schemes for defining by structural induction in the language defined by Adam Grabowski \cite{HILBERT1.ABS}. The article consists of four parts. Besides the preliminaries, where we prove some simple facts still missing in the library, they are: \item{-} ``About the language'', in which the consequences of the fact that the algebra of formulae is free are formulated, \item{-} ``Defining by structural induction'', in which two schemes are proved, \item{-} ``The tree of the subformulae'', in which a scheme proved in the previous section is used to define the tree of subformulae; also some simple facts about the tree are proved.

Czeslaw Bylinski. Some Properties of Cells on Go-Board, Formalized Mathematics 8(1), pages 139-146, 1999. MML Identifier: GOBRD13

Shunichi Kobayashi. Propositional Calculus for Boolean Valued Functions. Part III, Formalized Mathematics 8(1), pages 147-148, 1999. MML Identifier: BVFUNC_7

Shunichi Kobayashi. Propositional Calculus for Boolean Valued Functions. Part IV, Formalized Mathematics 8(1), pages 149-150, 1999. MML Identifier: BVFUNC_8

Akihiko Uchibori, Noboru Endou. Basic Properties of Genetic Algorithm, Formalized Mathematics 8(1), pages 151-160, 1999. MML Identifier: GENEALG1
Summary: We define the set of genes, the space treated by the genetic algorithm and the individuals of the space. Moreover, we define some genetic operators such as one-point crossover and two-point crossover, and prove the validity of many of their characteristics.

Shunichi Kobayashi. Propositional Calculus for Boolean Valued Functions. Part V, Formalized Mathematics 8(1), pages 161-162, 1999. MML Identifier: BVFUNC_9

Artur Kornilowicz. Properties of Left and Right Components, Formalized Mathematics 8(1), pages 163-168, 1999. MML Identifier: GOBRD14

Christoph Schwarzweller. Noetherian Lattices, Formalized Mathematics 8(1), pages 169-174, 1999. MML Identifier: LATTICE6
Summary: In this article we define noetherian and co-noetherian lattices and show how some properties concerning upper and lower neighbours, irreducibility and density can be improved when restricted to these kinds of lattices. In addition we define atomic lattices.

Jing-Chao Chen. A Small Computer Model with Push-Down Stack, Formalized Mathematics 8(1), pages 175-182, 1999. MML Identifier: SCMPDS_1
Summary: The SCMFSA computer can prove the correctness of many algorithms.
Unfortunately, it cannot prove the correctness of recursive algorithms. For this reason, this article improves the SCMFSA computer and presents a Small Computer Model with Push-Down Stack (called SCMPDS for short). In addition to conventional arithmetic and "goto" instructions, we introduce two new instructions, "return" and "save instruction-counter", in order to be able to design recursive programs.

Jing-Chao Chen. The SCMPDS Computer and the Basic Semantics of its Instructions, Formalized Mathematics 8(1), pages 183-191, 1999. MML Identifier: SCMPDS_2
Summary: The article defines the SCMPDS computer and its instructions. The SCMPDS computer consists of such instructions as conventional arithmetic, ``goto'', ``return'' and ``save instruction-counter'' (``saveIC'' for short). The address used in the ``goto'' instruction is an offset value rather than a pointer in the standard sense. Thus, we don't define a halting instruction directly but define it by the ``goto 0'' instruction. The ``saveIC'' and ``return'' instructions correspond closely to the call and return statements of a usual high-level programming language. Theoretically, the SCMPDS computer can implement all algorithms described by a usual high-level programming language, including recursive routines. In addition, we describe the execution semantics and halting properties of each instruction.

Jing-Chao Chen. Computation and Program Shift in the SCMPDS Computer, Formalized Mathematics 8(1), pages 193-199, 1999. MML Identifier: SCMPDS_3
Summary: A finite partial state is said to be autonomic if the computation results in any two states containing it are the same on its domain. On the basis of this definition, this article presents some computation results about autonomic finite partial states of the SCMPDS computer. Because the instructions of the SCMPDS computer are more complicated than those of the SCMFSA computer, the results given by this article are weaker than those reported previously by the article on the SCMFSA computer. The second task of this article is to define the notion of program shift. The importance of this notion is that the computation of some program blocks can be simplified by shifting a program block to the initial position.

Jing-Chao Chen. The Construction and Shiftability of Program Blocks for SCMPDS, Formalized Mathematics 8(1), pages 201-210, 1999. MML Identifier: SCMPDS_4
Summary: In this article, a program block is defined as a finite sequence of instructions stored consecutively on initial positions. Based on this definition, any program block with more than two instructions can be viewed as the combination of two smaller program blocks. To describe the computation of a program block by the result of its two sub-blocks, we introduce the notions of paraclosed, parahalting, valid, and shiftable, the meaning of which may be stated as follows: \begin{itemize} \item[-] a program is paraclosed if and only if any state containing it is closed, \item[-] a program is parahalting if and only if any state containing it is halting, \item[-] in a program block, a jumping instruction is valid if its jumping offset is valid, \item[-] a program block is shiftable if it does not contain any return and saveIC instructions, and each instruction in it is valid. \end{itemize} When a program block is shiftable, its computing result does not depend on its storage position.

Jing-Chao Chen. Computation of Two Consecutive Program Blocks for SCMPDS, Formalized Mathematics 8(1), pages 211-217, 1999.
MML Identifier: SCMPDS_5
Summary: In this article, a program block without halting instructions is called a No-StopCode program block. If a program consists of two blocks, where the first block is parahalting (i.e., halting for all states) and No-StopCode, and the second block is parahalting and shiftable, it can be computed by combining the computation results of the two blocks. For a program which consists of an instruction and a block, we obtain a similar conclusion. For a large class of programs, the computation method given in the article is useful, but it is not suitable for recursive programs.

Jing-Chao Chen. The Construction and Computation of Conditional Statements for SCMPDS, Formalized Mathematics 8(1), pages 219-234, 1999. MML Identifier: SCMPDS_6
Summary: We construct conditional statements, like those of the usual high-level programming languages, out of program blocks of SCMPDS. Roughly speaking, the article justifies the fact that when the condition of a conditional statement is true (false), and the true (false) branch is shiftable, parahalting and does not contain any halting instruction, and the false branch is shiftable, then the statement is halting and its computation result equals that of the true (false) branch. Parahalting means that a program halts for all states; this is a strong condition. For this reason, we introduce the notions of "is\_closed\_on" and "is\_halting\_on". The predicate "A is\_closed\_on B" denotes that program A is closed on state B, and "A is\_halting\_on B" denotes that program A is halting on state B. We obtain a theorem similar to the above fact by replacing parahalting by "is\_closed\_on" and "is\_halting\_on".
Hyperspectral multiphoton microscopy for in vivo visualization of multiple, spectrally overlapped fluorescent labels

Amanda J. Bares,1 Menansili A. Mejooli,1 Mitchell A. Pender,1 Scott A. Leddon,2 Steven Tilley, II,1 Karen Lin,1 Jingyuan Dong,1 Minsoo Kim,2 Deborah J. Fowell,2 Nozomi Nishimura,1 and Chris B. Schaffer1,*

1The Nancy E. and Peter C. Meinig School of Biomedical Engineering, Cornell University, Ithaca, New York 14853, USA
2Center for Vaccine Biology and Immunology, Dept. of Microbiology and Immunology, University of Rochester Medical Center, Rochester, New York 14642, USA
*Corresponding author: [email protected]

Citation: Amanda J. Bares, Menansili A. Mejooli, Mitchell A. Pender, Scott A. Leddon, Steven Tilley, Karen Lin, Jingyuan Dong, Minsoo Kim, Deborah J. Fowell, Nozomi Nishimura, and Chris B. Schaffer, "Hyperspectral multiphoton microscopy for in vivo visualization of multiple, spectrally overlapped fluorescent labels," Optica 7, 1587-1601 (2020). https://doi.org/10.1364/OPTICA.389982. Original manuscript received February 12, 2020.

Abstract: The insensitivity of multiphoton microscopy to optical scattering enables high-resolution, high-contrast imaging deep into tissue, including in live animals. Scattering does, however, severely limit the use of spectral dispersion techniques to improve spectral resolution. In practice, this limited spectral resolution together with the need for multiple excitation wavelengths to excite different fluorophores limits multiphoton microscopy to imaging a few, spectrally distinct fluorescent labels at a time, restricting the complexity of biological processes that can be studied. Here, we demonstrate a hyperspectral multiphoton microscope that utilizes three different wavelength excitation sources together with multiplexed fluorescence emission detection using angle-tuned bandpass filters. This microscope maintains scattering insensitivity, while providing high enough spectral resolution on the emitted fluorescence and capitalizing on the wavelength-dependent nonlinear excitation of fluorescent dyes to enable clean separation of multiple, spectrally overlapping labels, in vivo. We demonstrated the utility of this instrument for spectral separation of closely overlapped fluorophores in samples containing 10 different colors of fluorescent beads, live cells expressing up to seven different fluorescent protein fusion constructs, and in multiple in vivo preparations in mouse cortex and inflamed skin, with up to eight different cell types or tissue structures distinguished.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement Fluorescence multiplexing has become a critical tool for multiparameter studies of cell phenotypes. Fluorescence multiplexing, or the simultaneous identification of multiple fluorescent labels in a sample, is often used in flow cytometry for the visualization of many cell-surface and intracellular markers. Multiplexing in flow cytometry enables high throughput analysis of single-cell characteristics such as complex cellular phenotypes and cell size, alongside population measurements [1,2]. The continued development of fluorescent markers and corresponding flow cytometry instrumentation over the last few decades have expanded label identification capabilities from a handful of labels [3,4] to panels consisting of 28 targets [5,6]. Flow cytometry is now regularly used as an exploratory tool to identify unique cell types in a range of tissue environments in various disease states [7–11]. However, spatial tissue information is lost during sample preparation for flow cytometry, so multiplexed immunofluorescence in formalin-fixed paraffin-embedded tissue sections or biopsies has gained traction in recent years [12–14]. As the number of fluorescent labels increases alongside development of corresponding slide scanning instrumentation for spectral discrimination of multiple labels, researchers can identify more advanced cellular phenotypes with regard to their position in the tissue—for example, immune cells' proximity to a tumor [15–17]. Cell-resolved imaging in live animal models offers the opportunity to follow the dynamic interactions among cells driving normal and disease-state physiological processes and would benefit from the kind of fluorescence multiplexing now used in flow cytometry and immunohistology to enable the simultaneous visualization of a broad variety of cell types. For example, the inflammatory response in mouse models of disease or the immune response to infectious agents involves the recruitment of a complex array of different inflammatory and immune cell types, whose interactions with each other and with surrounding cells and tissue structures influences cellular behaviors. Visualizing such inflammatory or immune responses a few cell types at a time could limit the ability to identify critical cell interactions. Current microscope designs that can effectively separate signals from multiple spectrally overlapping fluorescent markers—typically utilizing spectrally resolved detection—are poorly suited to imaging in scattering samples, limiting their utility for in vivo measurements. Two-photon excited fluorescence (2PEF) microscopy has become the technique of choice for imaging deep into scattering samples and enables visualization of fluorescently labeled features at subcellular resolution. In 2PEF, a tightly focused, infrared wavelength (700–1300 nm), femtosecond duration pulsed laser source provides nonlinear excitation of fluorescent markers that is restricted to a submicrometer focal volume. Images are formed by raster scanning this focal volume and recording the fluorescence intensity point by point. This approach provides high-contrast, high-resolution imaging of labeled structures deep in mouse cortex (${\sim}{1}\;{\rm mm}$), despite optical scattering [18]. However, typical 2PEF systems only have two to four different wavelength channels, limiting measurements to just a few fluorescent markers, constraining the complexity of cellular interactions that can be studied. 
A number of methods have been developed to increase spectral resolution and enable 2PEF to distinguish a larger number of fluorescent species. Many commercial (Zeiss Meta/Quasar, Leica SP) and custom systems [19–22] utilize dispersive optics, such as a diffraction grating or prism, to spectrally separate emitted fluorescence for detection on an array of optical detectors. The scattering of fluorescence photons before they are collected by the microscope objective leads to a divergent cone of light at the back aperture of the objective. Dispersive detection methods are angle-dependent, so this divergent light leads to a dramatic loss of spectral resolution—over 100 nm of spectral blurring per degree of divergence with a diffraction grating (Fig. S1 and Supplement 1). On the other hand, blocking this scattered light (e.g., with a confocal pinhole) strongly reduces the signal strength and eliminates much of the benefit of 2PEF for deep imaging [23]. Other approaches to increase spectral resolution in 2PEF have focused on optical designs that simply increase the number of detector channels and/or use multiple excitation laser wavelengths [24–28]. End users may be able to modify commercial instruments to acquire spectral data sets by tuning between multiple laser wavelengths between acquisitions, or manually swapping filter sets. Although adequate for some multiplexed experiments, these approaches are often difficult to implement or are not fully optimized for efficient spectral detection in scattering samples for increasingly multiplexed applications. Here, we describe a hyperspectral multiphoton microscope (HMM) that temporally multiplexes three excitation wavelengths and tunable spectral detection, on a frame-by-frame basis, to image with high spectral resolution. Emitted fluorescence was spectrally resolved using angle-tuned bandpass filters (ATBFs) with an optical system designed to collect nearly the full cone of divergent light from the objective back aperture, without loss of spectral resolution. A total of 12 successive image scans were acquired over about 35 s to produce a 48-channel hyperspectral data set. We demonstrated hyperspectral imaging in a variety of samples, including multiple colors of fluorescent beads in a gel, multiple colors of fluorescent proteins fused to intracellular proteins in cultured cells, and multiple cell types and tissue structures labeled using both exogenous dyes and fluorescent protein expression in mouse cortex and skin, in vivo. A. Hyperspectral Multiphoton Imaging Setup Imaging was performed on a custom-built laser scanning microscope incorporating three femtosecond laser sources (Ti:Sapphire: Chameleon Vision and Mira, Coherent; Yb:fiber: Satsuma, Amplitude Systèmes) [Fig. 1(a)]. Laser beams were combined and spatially overlapped using two dichroic mirrors with cutoff wavelengths of 875 and 1000 nm (FF875-Di01, Semrock; DMSP1000R, Thorlabs). Beam powers were independently controlled by servo-controlled half-wave plates and a polarizing beam splitter and were monitored using photodiodes. The three laser beams were expanded using independent two-lens telescopes, so the ${1/e}$ diameter just filled the 4-mm clear aperture of the galvanometric scan mirrors (Cambridge Technologies). These telescopes served a dual purpose: to overfill the back aperture of the objective and to allow for independent tuning of the axial focal location for each excitation laser to ensure parfocality. 
Two objectives were used for imaging: a ${25\times}$, 1.05 numerical aperture (NA) water immersion objective for bead samples and in vivo imaging (Olympus, XLPLN25XWMP2), and a ${63\times}$, 1.2 NA water immersion objective for live cell imaging (Zeiss). To overlap the excitation volumes of the three lasers, we ensured all three beams were collinear for 3 m of propagation, and we varied the divergence of the shortest and longest wavelength lasers so that the focal planes of all three lasers overlapped. Axial alignment of the focal planes was typically within 1 µm with the 1.05 NA objective.

Fig. 1. The HMM multiplexed three excitation lasers and 16 spectral emission bands to provide a 48-channel image. (a) Schematic diagram of the HMM. Three different wavelength femtosecond laser beams passed through mechanical shutters, were combined with long-pass (LP) and short-pass (SP) dichroics and directed to galvanometric scan mirrors (SMs). Fluorescence emission was collected via the microscope objective and routed using LP dichroics to four separate, broad color channels (Chan A–D). ATBFs were placed before each detector to select small passbands of emission light. (b) Plot of the transmission through the ATBFs for the angles used in hyperspectral imaging. The vertical dashed lines represent the cutoff wavelengths for the LP secondary dichroics and the 700-nm SP blocking filter. A hyperspectral image was acquired by taking a four-channel image using one excitation laser for the first filter position (1), then repeating this image across the other two excitation lasers, and then repeating this entire process across the other filter angles (2–4).

Fluorescence was epidetected from the sample and directed to custom detection optics with a dichroic mirror (700dcxru, Chroma). Fluorescence from the sample was first sent through a 720-nm short-pass filter to block excitation light (FF01-720/SP, Semrock) and then divided into four broad color channels using three long-pass dichroic mirrors with cutoffs at 495, 552, and 624 nm (FF495-Di03, FF552-Di02, FF624-Di01, Semrock). In each channel, an ATBF (TBP01-490/15, TBP01-550/15, TBP01-620/14, TBP01-700/13, VersaChrome, Semrock) was attached to a high-torque motor (S9352HV, Futaba) via a custom aluminum frame. Signals were detected with GaAsP photomultiplier tubes (PMTs, H10770PB-40SEL, Hamamatsu) with gain control and conditioned with a custom preamplifier and a four-channel lowpass filter set to a 1-MHz cutoff frequency (3944 Four Channel, Krohn-Hite). Signal acquisition and scan mirror signals were controlled with a PCI-6610 DAQ board (National Instruments). Wave plate rotation, shutters, and filter motors were controlled with a PCI-6602 DAQ board (National Instruments). All images were acquired with ScanImage software (Vidrio) in MATLAB (MathWorks), with custom scripts for hyperspectral acquisition.

B. Scattering-Insensitive Detection Optics Design

When imaging in scattering samples, the 2PEF exits the back aperture of a microscope objective in a divergent cone. Detection optics were designed and ray-traced in Zemax (Zemax, LLC) to capture as much of this divergent light as possible and deliver it to the active area of the PMT, while simultaneously minimizing the spread in incidence angles on the ATBF to minimize spectral broadening and maximize filter transmission (Fig. S2, Supplement 1).
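In pseudocode form, the interleaved acquisition described in the Fig. 1 caption (and detailed in Section D below) amounts to a pair of nested loops over filter angle and excitation laser. Every hardware-control call in this sketch is a hypothetical placeholder, not the actual ScanImage/DAQ API, and the filter angles stand in for the values of Table S1.

```matlab
% Pseudocode for one 48-channel hyperspectral acquisition (Fig. 1(b)).
% rotateFilters, openShutter, closeShutter, and grabFourChannelFrame are
% hypothetical placeholders for the instrument-control calls.
filterAngles = [theta1, theta2, theta3, theta4];  % four ATBF angles (Table S1); placeholders
lasers = [800 900 1030];                          % excitation wavelengths, nm
frames = cell(numel(filterAngles), numel(lasers));
for f = 1:numel(filterAngles)
    rotateFilters(filterAngles(f));               % move all four ATBFs in lockstep
    for l = 1:numel(lasers)
        openShutter(lasers(l));                   % one excitation laser at a time
        frames{f, l} = grabFourChannelFrame();    % Chan A-D recorded in parallel
        closeShutter(lasers(l));
    end
end
% 4 angles x 3 lasers x 4 PMT channels = 48 spectral channels per image plane
```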
Optics were designed around the 13.5° half-angle of light that exited the back aperture of the 1.05 NA objective (Olympus) when highly diffuse light entered the objective. This approximated the distribution of light from a highly scattering sample. A four-lens optical detection system was designed, with the first lens (80-mm focal length, AC508-080-A, Thorlabs) shared by all detection channels and sitting 75 mm from the back aperture of the objective. A second lens (175-mm focal length, LA1399A.1, Thorlabs) was positioned 175 mm from the first lens, with fixed dichroics that split the emitted fluorescence into four channels between these two lenses. After this second lens, the light passed through the ATBF, with a maximal angular divergence of a 7.7° half-angle. Finally, a pair of lenses (75-mm focal length at 126 mm from second lens and 16-mm focal length at 33 mm further behind; LA1145-A and ACL25416U-A, Thorlabs) condensed the light onto the 5-mm active area of the PMT. C. Channel Transmittance Measurements The transmittance of the ATBFs in our setup was measured as a function of angle (Fig. S3, Supplement 1). To acquire these spectra, the PMT for each channel was replaced by an integrating sphere (IS200-4, Thorlabs) with an optical fiber attached to a side port. The collected light was spectrally resolved on a commercial spectrometer (HR4000, Ocean Optics). A halogen lamp with a diffusing plate was placed beyond the focal length of the microscope objective to provide diffuse, spectrally broad illumination. The transmitted spectrum was first measured with the ATBF removed, and then with the filter in place for 1° increments of the filter angle over 60°. The transmittance of the ATBF was calculated and smoothed over a 1.5-nm window. D. Image Acquisition Before acquiring a hyperspectral image, it was necessary to first optimize the gain of the four PMTs and the power of the three lasers to achieve high signal-to-noise detection of all fluorescent species while avoiding PMT saturation across all angles of the ATBFs and all excitation sources. Images were acquired from all four PMTs simultaneously. An example imaging sequence was acquired as follows: all four ATBFs were rotated to their first imaging angle (the farthest blueshifted within their channel), the shutter for the first laser opened, a four-channel image was acquired, the shutter closed, and the next laser shutter opened, and a second four-channel image was acquired; this was repeated so that images with all three excitation lasers were acquired. Then, the ATBFs were rotated to the next imaging angle, and the process repeated, until images were acquired across all desired filter angles for all three lasers. In most of the imaging shown in this paper, four distinct angles of the ATBFs [Table S1, of Supplement 1, and Fig. 1(b)] were used, which provided coverage of the visible spectrum (420–700 nm) with ${\sim}{20\,\,\rm nm}$ spectral resolution. To create a 3D image stack, the imaging depth would then be adjusted and the whole process repeated for the new image plane. E. Spectral Calibration Because PMT gain settings were optimized for each image, it was necessary to calibrate the signal across PMT channels if determining real spectral profiles was necessary (Fig. S4, Supplement 1). The spectrum of an incandescent flashlight with color-balancing filters (FGT165, Thorlabs) was multiplied by the measured transmission profile for the ATBFs for each acquisition angle (Fig. 
S3, Supplement 1), providing a "predicted" PMT signal for the incandescent spectrum. After each hyperspectral image acquisition and before changing PMT gains, the flashlight was directed toward a white piece of card stock placed directly below the objective, and the angle of the flashlight was manually adjusted to avoid PMT saturation. The PMT signal was then measured at the ATBF angles used for hyperspectral imaging, as well as at 1° increments over $\pm 4^\circ$ around those angles. For each ATBF angle, these data were used to find a calibration factor that fit the measured PMT signal from the flashlight across those nine angles to the predicted PMT signal. We thus generated 16 such calibration factors (one for each PMT at each of the four ATBF angles commonly used), each averaged across the nine, closely spaced angles where measurements were made. These calibration factors were used to scale the spectral end members to compare with actual emission spectra. To test the accuracy of this calibration procedure, it was used to measure the spectrum of fluorescent dyes with variable PMT gain and laser power settings (Fig. S5, Supplement 1). Pools of 25 µM fluorescein and 25 µM sulforhodamine 101 (S359, Thermo Fisher Scientific) dye were placed on glass slides, covered with a glass coverslip, and sealed. Images were acquired across 16 emission detection channels with the 900-nm laser source, for three different settings of PMT gain and laser power. These gain/power combinations were chosen to ensure a detectable signal while avoiding saturation, and were representative of settings that would be chosen during image acquisition. After each acquisition, the spectral calibration procedure was performed, and the extracted spectra compared to published spectra and to the predicted spectra for our system, with good agreement all around (Fig. S5, Supplement 1). F. General Image Analysis In images where data were acquired with bidirectional scans, we made small shifts in every other row to minimize mismatch between forward and backward scans. A two-dimensional (2D) median filter with a radius of 1 pixel was applied to each image using FIJI [29]. For analysis methods relying on literature spectra of fluorescent species, images were scaled by the appropriate calibration factors. Due to the chromatic aberration induced by the excitation optics across the three laser wavelengths used for excitation, identical structures imaged with different excitation sources were separated spatially, especially toward the edges of large-area scans [typically less than ${\sim}{2}\;{\unicode{x00B5}{\rm m}}$ for a field of view (FOV) of ${\sim}{300}\;{\unicode{x00B5}{\rm m}}$]. Spectral channels containing similar structures across all lasers were identified, and images for the 800- and 1030-nm excitation were registered to the 900-nm image using a 2D affine transformation (affine2d function in MATLAB). The calculated transform was then applied to all images corresponding to that laser. Using a nonnegative least squares approach (lsqnonneg function in MATLAB), the hyperspectral data for each voxel in the image were fit as a linear sum across multiple spectral end members. Each fluorescent species or other nonlinear signal [e.g., second harmonic generation (SHG)] in the image was represented by one of these spectral end members. For each set of imaging experiments, the derivation of these end members was unique and is discussed in detail for each experiment. 
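For concreteness, a minimal MATLAB sketch of this per-voxel nonnegative least squares fit is shown below. The function name, the array layout, and the choice of residual normalization are illustrative assumptions on our part, not the authors' actual analysis scripts; only the lsqnonneg call itself is named in the text.

```matlab
% Minimal sketch of per-voxel nonnegative least squares unmixing.
% I: X-by-Y-by-48 hyperspectral image (double); E: 48-by-K matrix of
% spectral end members, one column per fluorescent species or SHG.
function [A, res] = unmix_nnls(I, E)
    [nx, ny, nc] = size(I);                 % nc = 48 spectral channels
    P = reshape(I, nx*ny, nc)';             % nc-by-(nx*ny) pixel spectra
    K = size(E, 2);
    A = zeros(nx*ny, K);                    % abundance of each end member
    res = zeros(nx*ny, 1);                  % normalized residual per pixel
    for p = 1:nx*ny
        [a, rnorm] = lsqnonneg(E, P(:, p)); % min ||E*a - pixel||^2, a >= 0
        A(p, :) = a';
        % rnorm is the squared 2-norm of the residual; the normalization
        % by the pixel's signal power is one plausible choice (assumed)
        res(p) = rnorm / max(norm(P(:, p))^2, eps);
    end
    A = reshape(A, nx, ny, K);              % one unmixed image per species
    res = reshape(res, nx, ny);
end
```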
The normalized squared 2-norm of the residual was used to evaluate the goodness of this fit and to identify image regions that the chosen spectral end members did not represent well. For relevant data sets, regions of interest (ROIs) were chosen to evaluate the goodness of fit (Supplement 1). G. Multicolor Fluorescent Bead Sample Ten colors of dye-loaded, 15-µm diameter polystyrene beads (F8837 Blue, F8838 Blue-green, F21010 Green, F8844 Yellow-green, F21011 Yellow, F8841 Orange, F21012 Red-orange, F8842 Red, F21013 Carmine, F8839 Crimson, Thermo Fisher Scientific) were embedded in 2% (wt/vol) agarose gel. The bead solutions were concentrated to achieve a density high enough to visualize all bead colors in a single FOV. Briefly, ${\sim}{0.5}\;{\rm mL}$ of bead solution was centrifuged, and the supernatant removed. Additional colors were sequentially added and centrifuged until a pellet with all colors was produced. A small amount of hot agarose was added to the bead pellet, allowed to briefly solidify, and then scooped onto the lid of a cell culture dish for imaging. Samples that contained each bead color individually were similarly prepared. The multicolor bead sample was imaged first, with PMT gains and laser powers set to visualize all beads without saturation. Without changing any imaging parameters, the single-color bead samples were imaged, and the flashlight calibration described above was performed. Image slices were acquired at 2-µm steps, with eight slices for the mixed sample and five slices for the single-color bead samples. Bead data were preprocessed using the standard workflow. All single-color images were maximum-projected, and both single-color images and mixed-bead images were registered using the affine transformation. Spectral end members were determined by averaging across five different manually determined ROIs for each single-color sample. A single slice of the mixed-bead image was unmixed, and each of the unmixed images assigned a false color to create the color composite. Once data were unmixed, new spectral end members were chosen from the mixed-bead sample, using the unmixed image to identify beads of each color (Fig. S6, Supplement 1). Unmixed images [Fig. 2(c)] were again assigned a false color to create the color composite [Fig. 2(d)]. For the 48-channel image array [Fig. 2(b)] and extracted bead spectra [Fig. 2(f)], data were scaled according to the calibration procedure. The bead spectra were also measured with a spectrometer for comparison with the spectra published by the manufacturer [Fig. 2(a)] and the extracted bead spectra from the HMM [Fig. 2(f), Fig. S7 of Supplement 1]. Fig. 2. Hyperspectral image of 10 colors of 15-µm diameter fluorescent beads embedded in agarose gel. (a) Fluorescence emission spectra of the 10 bead colors, as reported by the manufacturer (Thermo Fisher); (b) 48-channel image array of the bead sample; images acquired at a given laser wavelength are shown in rows, and at a given filter angle in columns (denoted by the passband center wavelength). The colored bands indicate the four broad color channels, Chan A–D. (c) Unmixed images of individual bead colors; (d) false-color composite image of spectrally unmixed bead sample; some beads appear to range in size from 15 to 20 µm in diameter. (e) Intensity values of beads for all 10 unmixed images across Lines 1 and 2 in d; (f) calibrated bead spectra across the three excitation lasers measured using the HMM. 
Spectra were normalized to the brightest dye (Carmine, 800-nm excitation) for relative comparison of bead brightness. Scale bars, 50 µm.

To quantify the quality of unmixing, the number of pixels that overlapped between multiple unmixed images was determined. A Rényi entropy filter (FIJI) was applied to each unmixed image to mask out background pixels and set signal pixels to 1. The number of pixels present in more than one image was measured by summing these thresholded images and identifying pixels with a value higher than 1. The number of pixels present in all image pairs was counted to evaluate significant crossover between different colors of the unmixed image (Table S2). A 2D correlation coefficient was also computed between all unmixed images (Table S3, Supplement 1). The signal-to-noise ratio (SNR) was calculated by selecting an ROI including background pixels to represent background, and multiple ROIs inside the same color beads to represent the bead signal. The SNR was calculated as the ratio of the mean bead signal to the standard deviation of the background signal (Table S4, Supplement 1). To visually evaluate overlap, pixel intensity values were measured across two, single-pixel-wide lines in the unmixed 10-color image [Fig. 2(d)] and the intensity values plotted as a function of distance [Fig. 2(e)]. To evaluate the suitability of our data for pixel-clustering and segmentation, beads were clustered in 10-dimensional space. Each of the 10 unmixed images was thresholded using the Rényi entropy function in FIJI and maximum projected to mask out the background. A range filter (rangefilt function in MATLAB) was applied with a neighborhood of nine pixels to include texture information in addition to spectral information in clustering, as beads should contain pixels belonging to only one cluster. ${K}$-means clustering was performed using 10 clusters (kmeans function in MATLAB) on all nonbackground pixels. Each cluster was seeded with the average intensity of a particular color bead in its respective unmixed image and zeros in other images. An image was then created that classified all pixels into the 10 clusters identified by ${K}$-means [Fig. S8(a), Supplement 1] and plotted the 10-dimensional profile of 5000 random points belonging to each cluster [Fig. S8(b), Supplement 1]. Because beads cannot physically overlap in the sample, each pixel should belong to only one bead color. Images collected in each PMT channel (Chan A–D) across all ATBF angles for the 800-nm laser source were summed to simulate a four-channel microscope image [Fig. S9(a), Supplement 1]. The same ROIs used to select spectral end members for 48-channel unmixing were used to extract four-channel end members in the simulated four-channel data set. The four-channel mixed bead image was unmixed and used to generate a composite [Fig. S9(b), Supplement 1], with each of the unmixed images assigned the same false colors as the 48-channel composite [Fig. S9(c), Supplement 1]. The same bead, one for each color, was extracted from both the four-channel and 48-channel composites for direct comparison [Fig. S9(d), Supplement 1].

H. Cultured Cells Labeled with Multiple, Differently Colored Fluorescent Protein Fusion Constructs

To demonstrate multicolor imaging in live cell culture, we expressed fusion proteins that linked different color fluorescent proteins to various intracellular proteins (Tables S5 and S6), using transduced cell lines and additional transient transfection (Table S7, see Supplement 1).
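A minimal MATLAB sketch of the seeded clustering step described above follows. Because the text does not specify exactly how the range-filtered texture was combined with the unmixed intensities, the concatenation used here (with zero-padded seed rows) is one plausible reading, and all variable names are assumptions.

```matlab
% Seeded K-means classification of the unmixed bead images (sketch).
% U: X-by-Y-by-10 stack of thresholded, unmixed images (double);
% seedMeans: 10-by-10 matrix whose i-th row holds the mean intensity of
% color-i beads in its own unmixed image and zeros in the other images.
T = zeros(size(U));
for c = 1:size(U, 3)
    T(:, :, c) = rangefilt(U(:, :, c), true(3));   % 9-pixel local range (texture)
end
X  = [reshape(U, [], size(U, 3)), reshape(T, [], size(T, 3))]; % 20 features/pixel
fg = any(reshape(U, [], size(U, 3)) > 0, 2);       % keep nonbackground pixels only
seeds = [seedMeans, zeros(10, size(U, 3))];        % pad seeds for texture features
idx = kmeans(X(fg, :), 10, 'Start', seeds);        % one seeded cluster per color
```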
We needed cells to exhibit similar, high expression levels of all fluorescent proteins, as dim fluorescent markers are often difficult to image with high signal to noise in the presence of bright markers, even with hyperspectral detection. Simultaneous transient transfection with all vectors would lead to poor expression of some labels, making it difficult to locate cells that expressed all labels. Thus, we generated stable HeLa cell lines expressing one or two pairs of the fusion constructs using lentiviral methods (Table S6). This was done by creating polycistronic plasmids that coded for the production of two different fluorescent proteins, each fused to a different intracellular protein, with a self-cleaving peptide sequence between the two fusion proteins (Figs. S10 and S11 of Supplement 1) [30]. These plasmids were packaged into lentiviral vectors (Fig. S12, Supplement 1). HeLa cells were then transduced with one of these polycistronic constructs, and cells that stably expressed both fusion proteins, mNeon-Keratin/mTFP1-CEBP237, were selected. These cells were then transduced with a second lentiviral vector and cells stably expressing four fusion proteins (mNeon-Keratin/mTFP1-CEBPA237, and sfGFP-Caveolin/mKO2-TOMM20) were selected. Starting with these cells, we then transfected with additional plasmids coding for single or paired fusion proteins to label additional structures before imaging. Fig. 3. Hyperspectral images of live HeLa cells labeled with up to seven different fluorescent protein fusion constructs. (a) Fluorescence emission spectra of seven different fusion constructs, indicating both the fluorescent protein and the fused cellular protein. Note that mAmetrine-LaminB1 is denoted in black in the emission plot and white in the images for optimal contrast. (b) Spectrally unmixed, false-color composite image of cells expressing three fusion proteins with highly overlapping spectral emission [bold curves in (a)]; (c)–(e) spectrally unmixed, false-color composite images of groups of HeLa cells expressing up to seven constructs. Insets (i) and (ii) show cells with multiple nuclear labels. Inset (iii) shows a cell with the signal from mTurquoise-VASP digitally removed to more clearly show the four labels near the cell nucleus, while (iii') shows the same cell with tdTomato-Calreticulin, also removed for further clarity. See Table S7 of Supplement 1 for cell transfection details for each image panel. Spectra sources, mTurquoise and dKatushka (Addgene, see Supplement 1), mTFP1 [35], sfGFP [36], mNeon [37], mAmetrine [38], tdTomato (Thermo Fisher SpectraViewer). Scale bars. (b) 5 µm; (c)–(e) 25 µm. I. Imaging and Analysis of Cultured Cells Multicolor cells were plated in 60-mm cell culture dishes, and media were replaced with phosphate-buffered saline (PBS) for imaging. All cell images were acquired with a ${63\times}$, 1.2 NA water immersion objective (Zeiss). Groups of multicolor cells (Table S7) were imaged with fixed laser power and PMT gain settings. Cells expressing a single fusion protein were then imaged with the same settings to enable selection of spectral end members for unmixing. Images were median-filtered, registered, and maximum-projected across multiple slices. After projection, spectral end members were extracted from single-color images (Fig. S13, Supplement 1) and used for nonnegative least squares unmixing of mixed-color samples (Fig. 3; Figs. S14–S17 of Supplement 1). 
Because each cell expressed an unknown combination of fluorescent labels, residual images and a priori knowledge of structure morphology were used in an iterative manner to identify the labeled structures present (and thus the relevant spectral end members) in the cells in each image. J. Multicolor Labeling of Brain Cells Using Transgenic Mice and Exogenous Dyes All animal procedures were approved by the Cornell Institutional Animal Care and Use Committee and were performed under the guidance of the Cornell Center for Animal Resources and Education. Mice expressing yellow fluorescent protein (YFP) in pyramidal neurons (Thy1-YFP-H) were crossed with mice expressing green fluorescent protein (GFP) in microglia (Cx3cr1-GFP) to achieve homozygous expression of both genes. Optical access to the cortex was achieved through a glass-covered cranial window. Animals were anesthetized using isoflurane (1.5%–2% in oxygen) and placed on heating pads with feedback control for body temperature maintenance at 37 ˚C (40-90-8D, FHC, Inc.). Mice were given atropine sulfate subcutaneously (0.005 mg/100 g mouse weight; 54925-063-10, Med-Pharmex Inc.) to prevent lung secretions. Mice were also given dexamethasone subcutaneously (0.025 mg/100 g mouse weight; 07-808-8194, Phoenix Pharm Inc.) and ketoprofen subcutaneously (0.5 mg/100 g mouse weight; Zoetis Inc.) to preemptively reduce pain and inflammation. Bupivacaine (0.1 mL, 0.125%; Hospira Inc.) was subcutaneously administered at the incision site to provide a local nerve block. A 6-mm diameter bilateral hole over the cerebral cortex was created with a dental drill (HP4-917-21, Fordom). Approximately 45 min before removing a portion of the skull, 50 µL of 1 mM Alexa Fluor 633 hydrazide (A30634, Life Technologies) in saline was injected retro-orbitally to label elastin in arterioles [31]. Once the skull piece was removed carefully with forceps, 500 nL of 20 µM SR101 (S359, Thermo Fisher) in saline was pressure-injected through a pulled glass pipette ${\sim}{300}\;{\unicode{x00B5}{\rm m}}$ deep into the cortex, while saline was used on the brain surface to prevent drying of the tissue. After injection, the cranial opening was covered with fresh saline and an 8-mm glass coverslip was glued to the skull using cyanoacrylate adhesive (Loctite) and dental cement (Co-Oral-Ite Dental). The animal was immediately moved for hyperspectral imaging. Prior to imaging, 50 µL of 10% weight/volume (w/v) Cascade Blue dextran (D1976, Thermo Fisher) was injected retro-orbitally to label the vasculature. An hourly dose of atropine sulfate and 5% w/v glucose was administered subcutaneously while imaging. After imaging, some mice were allowed to recover from anesthesia for future imaging sessions. In these animals, recovery was allowed for three weeks and then mice were imaged again. Three hours prior to subsequent imaging sessions, mice were briefly anesthetized and injected with 100 µL of 5 mM SR101 and 30 µL of 1 mM Alexa Fluor 633 hydrazide retro-orbitally to allow time for the dye to clear from the vasculature and to label their respective structures. The mouse was allowed to wake up for a 3 h waiting period, and then reanesthetized for retro-orbital injection of Cascade Blue dextran (as above) and imaging. Images were median-filtered, registered, and maximum-projected across multiple slices. After projection, spectral end members were extracted from within the image and used for nonnegative least squares unmixing of that image (Fig. 4, Figs. S18–S22 of Supplement 1). 
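A compact MATLAB sketch of this shared preprocessing chain (median filter, affine registration of the 800- and 1030-nm images to the 900-nm reference, then maximum projection) is given below. The array shapes, the variable names, and the manual control-point step are assumptions for illustration, not the original scripts.

```matlab
% I800: X-by-Y-by-nChan-by-nZ stack from the 800-nm laser (double).
% movingPts/fixedPts: matched control points in the 800- and 900-nm
% images of shared structures (assumed picked manually, e.g., cpselect).
tform = fitgeotrans(movingPts, fixedPts, 'affine');   % yields an affine2d object
ref   = imref2d([size(I800, 1), size(I800, 2)]);      % keep the 900-nm geometry
for c = 1:size(I800, 3)                               % every spectral channel
    for z = 1:size(I800, 4)                           % every image plane
        sl = medfilt2(I800(:, :, c, z), [3 3]);       % 1-pixel-radius median filter
        I800(:, :, c, z) = imwarp(sl, tform, 'OutputView', ref);
    end
end
mip800 = max(I800, [], 4);                            % maximum projection over depth
```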
Residual images and a priori knowledge of structure morphology were used in an iterative manner to select end members until most structures were accurately defined. To enable visualization of detailed features in some images, logarithmic functions were applied to the image in FIJI.

Fig. 4. Hyperspectral images of multiple exogenous dyes and transgenic labels in live mouse cortex. (a) Fluorescence emission spectra of five different fluorescent labels (Thermo Fisher SpectraViewer), indicating both the fluorescent label and the corresponding labeled structure; (b) spectrally unmixed, false-color composite image of five labels 25 µm below the cortical surface. Large blood vessels (cyan) are identified as arterioles due to the elastin lining (magenta) along the vessel edge. Astrocyte end feet (red) also line the vessel wall, with dendrites (yellow) and microglia (green) scattered throughout the FOV. The density of labeling obscures many structural subtleties, and digital removal of the YFP-dendrite and SR101-astrocyte colors reveals small morphological features of microglia processes and elastin along the arteriole (right). (c)–(d) Spectrally unmixed, false-color composite images of four labels 100 and 200 µm below the cortical surface, respectively. The vascular dye decreases in signal rapidly with depth as the 800-nm laser source used for its excitation becomes power-limited. Scale bars, 50 µm.

To demonstrate image unmixing with fewer spectral channels, five of the 48 channels were selected, each containing the brightest signal from one of the five labels used in the in vivo cortical imaging. Spectral end members from the same ROIs as the 48-channel data set were extracted and used for unmixing [Fig. S23(a), Supplement 1]. In addition to the five-channel data set, another data set was created by choosing additional channels that could be collected simultaneously from the other PMTs, adding no acquisition time. This 12-channel data set was unmixed in the same manner [Fig. S23(b), Supplement 1]. We also created mice that expressed variable amounts of different color fluorescent proteins in cortical neurons, using viral transfection techniques and relying on the stochasticity of viral transfection (Figs. S24–S26; see Supplement 1).

K. Imaging Invasion of Immune and Inflammatory Cells in Skin after Inflammatory Stimulus

We took a multipronged strategy to label different immune cells and stromal components. Three different types of fluorescently labeled antigen-specific CD4 T cells were adoptively transferred into transgenic hosts with tissue-specific expression of GFP. Additional subsets of immune cells and stroma were labeled by acute intravascular (IV) and intradermal injections with fluorescently labeled antibodies and small molecule detection reagents. First, naïve CD4 T cells were isolated from OT-II T cell receptor (specific for ovalbumin 323-339 complexed to I-Aᵇ) expressing mice (strain 004194, Jackson Laboratory) that lacked expression of a fluorescent protein, or expressed either tdTomato [by breeding OT-II mice with CD4 CRE mice (022071, Jackson Laboratory) and Stop-floxed tdTomato mice (007909, Jackson Laboratory)] or ECFP [by breeding OT-II mice with CAG-ECFP mice (004218, Jackson Laboratory)]. Next, naïve fluorescent protein expressing CD4 cells were differentiated in vitro into TH2 cells, while the unlabeled naïve CD4 T cells were differentiated into TH1 cells.
Briefly, naïve OT-II CD4 T cells were enriched from the spleen and lymph nodes by complement-mediated lysis of MHC class II (clone M5/114, produced in-house), CD8α (clone 3.115, produced in-house), and CD24 (clone J11D, produced in-house) expressing cells. These CD4 cells were further enriched for naïve cells by positive selection for CD62L (clone MEL-14, BD Biosciences) expression with paramagnetic beads. Antigen-presenting cells (APCs) were prepared from spleens of syngeneic mice by depleting T cells via complement-mediated lysis of CD90 expressing cells (clone JIJ, produced in-house). To drive TH1 differentiation, naïve CD4 T cells were cocultured with irradiated APCs, OVA peptide (chicken ovalbumin 323-339AA, 1 µM, Biopeptide Company), anti-IL4 antibody (clone 11B11, 40 µg/mL, Bio X Cell), rmIL-12 (20 ng/mL, Peprotech), and rhIL-2 (10 U/mL, NIH). TH2 differentiation was achieved by coculturing naïve CD4 T cells with irradiated APCs in the presence of OVA peptide (1 µM), anti-IFNγ (clone XMG1.2, 50 µg/mL, Bio X Cell), rmIL-4 (50 ng/mL, Peprotech), and rhIL-2 (10 U/mL, NIH) (see Ref. [32] for detailed methods). After five days of cell culture, unlabeled TH1 cells were stained with Cell Tracker Deep Red (Thermo Fisher). To visualize TH1 and TH2 CD4 T cells in vivo, 3 × 10⁶ Deep Red TH1, ECFP TH2, and tdTomato TH2 cells were transferred by tail vein injection into MacGreen (strain 018549, Jackson Laboratory) mice after 5 days of in vitro differentiation. MacGreen mice express GFP under the control of the csf1r promoter, thus labeling Langerhans cells in the epidermis as well as macrophages and inflammatory monocytes in the dermis. Immediately after cell transfer, the pinna of the ear was immunized with 10 µL of OVA peptide emulsified in complete Freund's adjuvant (CFA). Ears were imaged 3 days after the OVA injection. 500 ng of each of the following antibodies was coinjected intradermally at the imaging site 2–3 h before the start of imaging: anti-CD8-BV650 (clone H35-17.2, BD Biosciences) to label CD8 T cells, anti-CD11c-PE-CF594 (clone N418, BD Biosciences) to label dendritic cells, and anti-GR1-Alexa 594 (clone RB6-8C5, BD Biosciences) to label neutrophils. To decrease background staining due to Fc capture of fluorescently labeled antibodies, anti-CD16/CD32 (clone 2.4G2, produced in-house) was coinjected with the labeled antibodies. Just prior to imaging, elastin, endothelial cells, and lymphatic vessels were labeled, respectively, with an intravenous injection of 5 µg of Alexa Fluor 633 hydrazide (Thermo Fisher), 25 µg anti-CD31-BV421 (clone 390, BD Biosciences), and 25 µg anti-LYVE1-Alexa647 (clone 223322, R&D Systems), all dissolved in saline. The ear of the mouse was then imaged at the site of the OVA injection on the HMM. Briefly, the mouse was anesthetized using isoflurane and received hourly subcutaneous injections of atropine and glucose, as described above. Body temperature was maintained at 37 °C with a heating blanket. The ear was mounted on a custom-built mouse ear imaging platform, following the approach outlined in Ref. [33]. Power for the three lasers and PMT gains were set for the 16 emission channels to ensure detectable signal across all 48 channels of the HMM without saturation. An imaging field was identified that contained as many differently labeled cell types and tissue structures as could be found.
A 3D image stack was taken containing 4-µm spaced slices across a depth of 100 µm, beginning 20 µm beneath the surface of the ear. The mouse was euthanized at the conclusion of the imaging experiment. Using custom-written MATLAB code, background subtraction was performed for each channel, where the background signal was taken as the mean of the dimmest 10% of the pixels in each channel. The images were then median-filtered using a 1-pixel radius median filter. Spectral end members were found by manually selecting ROIs for cell-shaped objects or tissue structures that appeared to contain single labels, using the full 3D image stack. Using these pure label ROIs, end-member spectra for each label were extracted. To identify the cells and tissue structures imaged, we calibrated the spectral end members with the flashlight procedure described above and compared these to literature spectra that had been scaled by the measured spectral transmission of the ATBFs at the angles used in the HMM image acquisition [Figs. 5(a) and 5(d)]. The extracted 48-element end-member spectra were then used to carry out nonnegative least squares unmixing of the full image stack to generate pure label images [Figs. 5(b) and 5(c); Fig. S27 of Supplement 1]. Maximum projections across 3 to 10 slices were generated for different labels at different depths in the image stack to generate single-label images.

Fig. 5. Hyperspectral images of multiple cell types and tissue structures in a multicolor fluorescently labeled mouse ear after a dermal inflammatory injury. (a) Literature emission spectra of the six exogenous labels that were identified (with the corresponding marker and the labeled structure) in the region with the most different label types present (Thermo Fisher SpectraViewer, BioLegend Spectra Analyzer); (b) 3D rendering of the stack of image slices taken across a depth of 100 µm at 4-µm intervals beginning at 20 µm below the ear surface. The different cell types and tissue structures are false-colored as indicated in (a). (c) Spectrally unmixed grayscale single-label images obtained from maximum projections across three slices for the eight distinct spectral end members identified in the image stack. The projections were centered at different depths in the overall image stack: blood vessels, 40 µm; hair, 16 µm (10 slices); collagen, 28 µm; macrophages, 36 µm; T helper-2 cells, 92 µm; dendritic cells, 64 µm; elastin, 16 µm; lymphatic vessels, 92 µm; (d) plots of the normalized spectral end members for each label, extracted from pure pixels in the raw image data. Scale bar in panel (c), 50 µm.

A. HMM

The HMM was a custom-built setup with a four-channel detection design [Fig. 1(a)]. In place of standard interference bandpass filters, we placed ATBFs in front of each PMT for improved spectral discrimination (Fig. S3, Table S1 of Supplement 1). The ATBFs shift transmission to shorter wavelengths as the incidence angle increases, as all interference filters do, but maintain better transparency, bandwidth, and shape of the transmission profile at high incidence angles, as compared to standard filters. In addition, we routed three different-wavelength, femtosecond laser sources into the system to provide a range of excitation conditions for a variety of fluorescent labels. Detection optics were ray-traced to collect as much of the divergent cone of light coming from the back aperture of the objective as possible (Fig. S2, Supplement 1).
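For reference, the blue-shift of an interference filter's passband with incidence angle θ is well approximated by the standard thin-film tuning relation

$$\lambda_c(\theta) = \lambda_c(0)\sqrt{1 - \frac{\sin^2\theta}{n_{\mathrm{eff}}^2}},$$

where λc(0) is the normal-incidence center wavelength and n_eff is the filter's effective refractive index, which sets how rapidly the passband tunes with angle. This relation is a textbook approximation rather than a specification of the particular ATBFs used here.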
Hyperspectral images were acquired by alternating laser excitation source and ATBF angle on a frame-by-frame basis [Fig. 1(b)]. To compensate for differences in PMT gain, we imaged a calibration light source and calculated appropriate image scaling factors for the PMT gain settings (Fig. S4, Supplement 1). The accuracy of this calibration technique was confirmed by reconstructing known fluorophore spectra (Fig. S5, Supplement 1). Each fluorophore in a sample exhibits a unique combination of excitation and emission intensities, defined as the spectral end member for that fluorophore. Images containing multiple fluorophores were unmixed using nonnegative least squares to fit each pixel's 48-channel hyperspectral signal to a linear combination of spectral end members. End members were based on literature curves for fluorophore excitation and emission or were extracted from single-color samples or from the mixed-color samples themselves, depending on the specific sample features. Finally, false-color composite images were generated that color-code each structure by its fluorophore identity.

B. Fluorescent Bead Samples

We demonstrated the ability of the HMM to differentiate highly overlapped fluorescent labels by imaging 10 colors of fluorescent polystyrene beads embedded in agarose gel [Fig. 2(a)]. A 48-channel hyperspectral image was acquired for the bead mixture [Fig. 2(b)] and for samples containing only one bead color (data not shown). We first extracted spectral end members from the single-color samples and used these to unmix the image of the mixed-bead sample [Fig. S6(a), Supplement 1]. We then selected beads in the mixed-bead image that represented the 10 different colors (now easily distinguishable after the first iteration of unmixing) and created a refined set of spectral end members that were used to again unmix the 48-channel data from the mixed-bead sample, yielding an image with 10 separate colors [Figs. 2(c) and 2(d); Fig. S6(b) of Supplement 1]. This iterative approach allowed an unambiguous identification of beads representing each of the 10 colors in the mixed-bead sample, so we could define spectral end members that were minimally affected by any drift in imaging parameters (e.g., laser powers) over the ~4 h required to take image stacks of each single-color and the mixed-bead sample. This iterative approach resulted in a modestly smaller residual after unmixing [Figs. S6(c) and S6(d), Supplement 1]. Single-pixel wide intensity profiles show high contrast and good signal to noise across all 10 unmixed colors [Fig. 2(e)]. Some cross talk between the blue and blue-green colors, as well as the red-orange and red colors, was apparent. To assess how well separated the 10 colors in the final unmixed image were, we thresholded the image for each color using the Rényi entropy method (FIJI) and found that less than 8.6% of total image pixels were above threshold for more than one color, with most of the overlap occurring between the blue and blue-green (4.8%), red-orange and red (1.7%), and carmine and crimson (0.3%) colors (Table S2, Supplement 1). Further, we correlated the intensity across the 10-color unmixed image pixel-by-pixel and found that all color combinations had a correlation coefficient of less than 0.1, except for three bead pairs: blue versus blue-green (0.5), yellow versus orange (0.16), and red-orange versus red (0.2) (Table S3, Supplement 1).
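A minimal sketch of such a pixelwise cross-talk check, reusing the hypothetical unmixed array from the sketches above:

% Pairwise correlation between unmixed color images as a cross-talk metric.
flat = reshape(unmixed, [], nLabels); % one column per unmixed color
R = corrcoef(flat);                   % nLabels x nLabels correlation matrix
% Off-diagonal entries near zero indicate well-separated colors; larger
% values (e.g., ~0.5 for blue versus blue-green) flag residual cross talk.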
Several factors likely contributed to overlap among these unmixed images, including the short wavelength limit of our detection cutting off the emission of the blue beads (~72% of the area under the emission curve cut off, leading to a low SNR for the blue beads and similarity between the end members for the blue and blue-green beads), the crossover between channels B and C occurring right between the peaks for the yellow and orange emitting beads, and the similar shape of the spectral end members (although different brightness) for carmine and crimson. In addition, we measured actual bead spectra with a spectrometer and found that many of the emission profiles were redshifted and broader when compared to the spectra provided by the manufacturer, likely because the spectra provided by the manufacturer were measured for dyes in aqueous solution at low concentration (Fig. S7, Supplement 1). After calibration, the spectral end members we measured with the hyperspectral microscope [Fig. 2(f)] closely matched the spectrometer-measured spectral shapes, but varied widely in brightness across different bead colors as well as across different excitation laser wavelengths, highlighting the importance of multiple excitation sources for clean unmixing. Both the yellow/orange and red-orange/red pairs exhibited significant spectral overlap in measurements of the real emission spectra, likely leading to the difficulty in separation we observed in our unmixing analysis. We estimated the signal-to-background and signal-to-noise ratios by selecting individual beads and clear background areas for each color channel. We found that signal to background ranged from 15 to 2400 across the 10 unmixed colors, while the SNR varied from 13 to 440 (Table S4, Supplement 1). To assess how well our 10-color image data could be used for clustering and image segmentation, we applied K-means clustering to the unmixed data, assuming 10 clusters and masking away the background. We found that the beads clustered very well, with some confusion between the blue and blue-green beads (Fig. S8, Supplement 1). Finally, we estimated what a more standard, 4-channel, single excitation source image of the mixed-bead sample would look like by summing the data across each ATBF angle for channels A–D for the 800-nm excitation (Fig. S9, Supplement 1). This calculated 4-channel data set was unable to visualize three bead colors (yellow-green, red-orange, and red) and exhibited overlap between bead colors that were well separated in the 48-channel image (e.g., yellow versus orange, carmine versus crimson), frequently leading to misidentification of beads (e.g., ~170% and ~380% false-positive rates for blue and orange beads, respectively).

C. Cells Labeled with Multiple Fusion Proteins

In order to test the capability of the HMM in cultured cells, we generated HeLa cells with multiple fusion proteins (fluorescent proteins fused to different subcellular proteins) [Fig. 3(a), Table S5 of Supplement 1]. In cells transfected with a single plasmid, fusion proteins could be readily imaged on our system (Fig. S13, Supplement 1). However, we found that using standard transient transfection techniques with multiple fusion proteins created cells with such varying expression levels of the different fusion proteins that it was difficult to separate the colors due to large intensity differences, even with hyperspectral detection.
Therefore, we used a polycistronic expression system under the control of one promoter that enabled more even expression of a pair of fusion constructs that were separated by 2A self-cleaving peptide sites (Fig. S10, Table S6 of Supplement 1) [34]. We then generated cell lines using lentiviral transduction of one or two of these paired fusion constructs. We were not able to efficiently two-photon excite mKO2 (fused to TOMM20), one of the fusion constructs expressed in a cell line, so the cell lines provided two or three colors total. We then transiently transfected other plasmids that drove expression of additional single-fusion constructs or paired-fusion constructs in these cells. Using this approach, we were able to readily find fields of view containing single cells with up to seven colors, with sufficient signal to noise for each color to be unmixed from the 48-channel data (Table S7, Supplement 1). To evaluate the ability of the HMM to differentiate highly spatially and spectrally overlapped fluorescent labels, we imaged live cells stably expressing mTFP1-CEBPA/mNeon-Keratin (labeling nucleus and keratin fibers, respectively) and transiently transfected to express sfGFP-Caveolin (cell membrane invaginations). We clearly separated the mTFP1, sfGFP, and mNeon labels in the final image, although they exhibited emission peaks separated by only 18 and 7 nm, respectively [Fig. 3(b); Fig. S14 of Supplement 1]. Caveolin appeared as small dots and keratin as fine meshwork, as expected. To establish the capability of the HMM to separate more colors in live cells, we imaged a FOV containing cells stably expressing mTFP1-CEBPA237/mNeon-Keratin and sfGFP-Caveolin/mKO2-TOMM20, additionally transfected with mTurquoise-VASP (focal adhesions), mAmetrine-LaminB1 (nuclear membrane), tdTomato-Calreticulin (endoplasmic reticulum), and dKatushka-CENPB (nucleus). Each of the seven colors was well separated despite significant spatial and spectral overlap [Fig. 3(c); Fig. S15 of Supplement 1]. We found the lateral spatial resolution of the unmixed images to be ~0.5 µm by measuring the full width at half-maximum of small keratin extensions in these cells [Fig. 3(c), inset]. We then imaged another group of cells expressing the same combination of labels [Fig. 3(d); Fig. S16 of Supplement 1], although tdTomato-Calreticulin was not visible in this FOV. We found two cells, each expressing a different combination of four labels [Fig. 3(d), insets i and ii]. We saw that mAmetrine-LaminB1 outlined the nucleus, which was labeled throughout by dKatushka-CENPB. sfGFP-Caveolin was distributed throughout the cell and was contained in small extensions near the cell edge. mTurquoise-VASP appeared to be expressed throughout the cell membrane, creating a "haze" in the final projected image. In another group of cells [Fig. 3(e); Fig. S17 of Supplement 1], we found a cell in which we could visualize seven different labels simultaneously. We digitally removed the mTurquoise-VASP [Fig. 3(e), inset iii] and the tdTomato-Calreticulin colors [Fig. 3(e), inset iii'] to provide visual clarity for each label. We saw that mTFP1-CEBPA tended to label most of the nuclear body, while dKatushka-CENPB primarily labeled punctate structures within the nucleus. We also observed that mNeon-Keratin tended to form dense rings around the nucleus and a fine meshwork throughout the rest of the cell body.
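A resolution estimate of this kind reduces to measuring the full width at half-maximum (FWHM) of a line profile. A crude MATLAB sketch, with profile (a single-pixel-wide intensity trace across a thin keratin extension) and umPerPixel (the lateral pixel size) as hypothetical inputs:

% Crude FWHM estimate from a 1D intensity profile (assumes a single peak).
profile = double(profile);
profile = profile - min(profile);        % remove local background offset
halfMax = max(profile) / 2;
above = find(profile >= halfMax);        % indices above half maximum
fwhmPixels = above(end) - above(1) + 1;
fwhmMicrons = fwhmPixels * umPerPixel;   % convert to micrometers

A more careful estimate would interpolate the half-maximum crossings or fit a Gaussian, but the pixel-counting version conveys the idea.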
D. In vivo Mouse Cortex Labeled with Exogenous Dyes and Fluorescent Proteins

We demonstrated the ability of the HMM to differentiate multiple spectral labels in the living cerebral cortex of mice. We used both fluorescent protein expression and exogenous dyes to label multiple cell types and tissue structures simultaneously. We crossed mice expressing YFP in pyramidal neurons and GFP in microglia. We applied sulforhodamine 101 (SR101), which has been shown to label astrocytes and oligodendrocytes, both by pressure injection directly into the cortex and by IV injection [39]. We labeled the elastin along arterioles and the blood plasma through IV injection of Alexa Fluor 633 hydrazide [31] and Cascade Blue-conjugated dextran, respectively [Fig. 4(a)]. Animals were imaged either immediately after performing a craniotomy or after ~3 weeks of recovery. We were able to visualize all tagged structures at and beneath the cortical surface [Fig. 4(b); Fig. S18 of Supplement 1]. Several expected anatomical features were observed, including the presence of astrocytes and their end feet along the outside of surface arterioles, along with an ~1-µm-wide band of elastin adjacent to the vessel lumen. The imaged structures were dense, and digital removal of the SR101 and YFP colors enabled visualization of microglial processes along an arteriole [Fig. 4(b), right]. Due to the large dynamic range of the hyperspectral image, setting the contrast across each unmixed image for clear visualization of all structures simultaneously did not convey the amount of detail present. Taking the logarithm of the intensity values helped to elucidate these structures (Fig. S18, Supplement 1). We compared an image of microglia from a standard 4-channel microscope and our 48-channel hyperspectral microscope for both linear and logarithmic intensity values and found that details were comparable (Fig. S19, Supplement 1). We also demonstrated spectral unmixing at depths of up to 200 µm in the mouse cortex [Figs. 4(c) and 4(d); Figs. S20 and S21 of Supplement 1], although the Cascade Blue signal dropped off quickly with depth due to power limitations of the 800-nm laser source. We further separated a number of intrinsic and exogenous signals in the dura mater, including SHG from collagen (seen with both 900- and 1030-nm excitation, yielding a hyperspectral SHG end member), GFP-expressing patrolling monocytes, likely present due to postsurgical inflammation, and several unidentified autofluorescent species (Fig. S22, Supplement 1). One concern is the potential for variation in the spectral signature of fluorescent labels with imaging depth, as scattering-induced changes in the incidence angles on the ATBFs could broaden the observed emission spectrum. To evaluate the severity of these effects, we took images of IV FITC-dextran at different depths in the cortex of a live, wild-type mouse at finely spaced emission wavelength settings to determine a depth-dependent emission profile of this dye. We found minimal changes in spectral shape at depths of up to 0.5 mm (Fig. S28, Supplement 1). As the number of spectral channels greatly exceeded the number of fluorescent labels, we also demonstrated unmixing with a reduced number of channels (Fig. S23, Supplement 1).
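One simple way to choose such a reduced channel set is to keep, for each label, the channel in which its end member is brightest; the selection actually used here is described in the next paragraph. A sketch under the same hypothetical variable names as above:

% Reduced-channel selection: keep the brightest channel per end member.
[~, brightestChannel] = max(endMembers, [], 1); % 1 x nLabels channel indices
keepChannels = unique(brightestChannel);
reducedStack = hyperstack(:, :, keepChannels);
reducedEndMembers = endMembers(keepChannels, :);
% Unmixing then proceeds as before, with fewer spectral samples per pixel.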
We selected five channels corresponding to the brightest signal from one of the labeled structures for unmixing, and also demonstrated unmixing with 12 channels, including the other channels that could be acquired simultaneously alongside the five selected channels. We found that unmixed signal quality remained similar to that of the 48-channel unmixed images, while the acquisition time dropped to approximately 11 s from the 35 s required for a 48-channel frame. SNR was measured for the same areas across the 48-, 12-, and 5-channel images. Cascade Blue and Alexa 633 SNRs remained similar despite changes in channel number, but SNR was reduced by approximately 68%, 25%, and 42% for the 12-channel GFP, YFP, and SR101 unmixed images, respectively, and 60%, 28%, and 42% for the 5-channel GFP, YFP, and SR101 unmixed images, respectively. In most cases, these changes were not visually apparent, and all SNRs remained above 40. Additionally, we used the HMM to distinguish the hues of color expressed by neurons in a Brainbow-like sample [40]. We simultaneously injected multiple adeno-associated viral (AAV) particles that each coded for the expression of a different color fluorescent protein. Different neurons took up different amounts of each AAV and thus expressed different ratios of each color, giving cells a unique hue (Fig. S24, Supplement 1). We used synthetically derived end members from literature spectra and two-photon cross sections for this unmixing. Although the unmixing was broadly successful, residuals were higher than for other samples with image-derived end members, as expected (Figs. S25–S26, Supplement 1).

E. In vivo Imaging of Immune Cell Invasion after Induction of Dermal Inflammation

We further demonstrated the potential applications of the HMM by visualizing the immune response to immunization with a protein antigen (OVA) and CFA in the skin. We used a combination of strategies to label multiple tissue structures and cell types, including transgenic mice, adoptive cell transfer, and injection of fluorescently labeled antibodies and small molecules, resulting in a mouse with 10 different cell types or tissue structures fluorescently labeled [Fig. 5(a)]. HMM image stacks were acquired at different locations near the inoculation site. We were not able to find a location where all 10 labeled cells or tissue structures were visible within a single FOV, but we did identify an imaging field where six of the ten labels were visible, along with two endogenous signals, autofluorescence and SHG [Fig. 5(b); Fig. S27 of Supplement 1]. Macrophages and monocytes (csf1r-GFP mouse) were found at a high density near the epidermis [Fig. 5(c)]. Elastin (Alexa 633 hydrazide) was also clearly visible towards the epidermis. We identified blood vessels (BV421 anti-CD31), which were dimly visible (likely because much of the emission of BV421 is at a shorter wavelength than the bluest channel on the HMM). Moving deeper into the skin, we spotted dendritic cells (PE-CF594 anti-CD11c) and a large lymphatic vessel (Alexa 647 anti-LYVE-1). Despite significant overlap of the emission spectra of their labels [Fig. 5(d)], we were able to clearly distinguish lymphatic vessels and elastin. At a depth of ~100 µm, we were able to identify a single brightly labeled Th2 cell (adoptive transfer of OVA-sensitized T cells expressing tdTomato). The emission spectra for the Th2 cell and dendritic cell labels also had a high degree of overlap
[Figs. 5(a) and 5(d)], but the HMM was able to clearly separate them [Fig. 5(c)]. In addition to exogenous labels, we were able to identify and extract spectra for endogenous signals such as SHG from collagen (spectral end member with peaks at 450 and 515 nm with 900- and 1030-nm excitation, respectively) and autofluorescence from hair shafts (visible only with 800-nm excitation, but spectrally broad).

Our HMM produced high-contrast images of multiple fluorescent labels with submicrometer spatial resolution and the capability to image hundreds of micrometers into scattering samples. Utilizing multiple different-wavelength excitation sources enabled the use of a wide variety of fluorescent labels (28 different fluorescent dyes, proteins, and beads used in this paper, not counting autofluorescent species and SHG), and multiplexed detection provided fine spectral resolution for differentiation between labels with similar emission characteristics. We demonstrated unmixing of 10 colors of fluorescent beads with highly overlapped emission spectra. We also imaged seven fluorescent protein fusion constructs in live cells, opening the door to studies of complex intracellular interactions involving cell membrane, organelle, and nuclear proteins. We further imaged five labeled structures simultaneously in a live mouse cortex, distinguished different neurons based on spectral hue in a Brainbow-like sample, and imaged multiple classes of immune and inflammatory cells at a site of skin immunization. These in vivo demonstrations establish the capability of the HMM to image the interactions of multiple cell types in animal models for normal and disease state physiology studies, using the full breadth of fluorescent labeling strategies available to the research community. Our HMM design compares favorably with a number of other approaches. Many multiplexed confocal fluorescence methods have been developed to address fluorescent label overlap [41–44], although they have not been translated to in vivo studies, largely due to the poor performance of confocal microscopy in the presence of optical scattering. Some multiphoton designs have utilized the wavelength tuning capabilities of modern femtosecond laser systems to separate different fluorescent labels by their excitation characteristics [25–28]. However, laser tuning requires long delays during image acquisition, on the order of seconds per frame, depending on the source. In addition, the small steps in excitation wavelength (on the order of 2–10 nm) do not generate sufficient variations in emission intensity to provide much of an advantage over a few discrete excitation wavelengths. Our use of 800-, 900-, and 1030-nm excitation provides large differences in spectral emission signatures and enables excitation of labels that emit from blue to red. Recently available femtosecond lasers that can tune 100 nm in 25 ms, coupled with, for example, electro-optic tuning of beam divergence to maintain a common focal plane, could enable similar imaging to be accomplished with a single excitation source (at least for excitation wavelengths within the tuning range of the laser). Other techniques utilize multiplexed detection, such as prism- or grating-based spectrometers [20,45], including most spectral detectors on commercial instruments.
These are poorly suited for imaging in scattering samples, as such systems must utilize a descanned, confocal detection path to maintain spectral resolution and thus cannot make use of highly scattered emitted fluorescence for image formation. Lansford et al. utilized a liquid crystal tunable filter (LCTF) in place of a standard glass interference filter to increase spectral resolution [46]. However, LCTFs exhibit low passband transmission and poor out-of-band blocking, and a single unit costs as much as a high-end microscope objective, with multiple LCTFs required for multichannel detection. Ricard et al. utilized five different wavelength detection channels (using standard filters) over two different excitation wavelengths and demonstrated unmixing of six labels in a mouse glioblastoma model [27]. However, our system provides a higher level of multiplexing with ATBF detection, enabling the differentiation of closely overlapped labels. Currently, some commercially available instruments provide capabilities broadly comparable to the HMM. These instruments have the ability to vary both the center wavelength and the bandwidth of each of four detection channels. Such flexible tuning of detection channels can be achieved through the use of combinations of angle-tuned short- and long-pass filters, or with translated, continuously variable dichroics [47]. With four tunable detection channels, such an instrument is well suited for rapidly changing detection bands for a particular experiment's dye configuration. With appropriate excitation lasers and scripting, end users may be able to replicate the data acquired in this paper, although the speed at which the channels can be spectrally tuned remains unclear (our setup aimed to minimize the time for angle adjustments of the ATBFs). We expect that other multiplex-specific systems will continue to be developed to take full advantage of the wide range of fluorescent labels currently available. Features other than emission spectra may also be utilized for the differentiation of similar fluorescent species. In fluorescence lifetime imaging microscopy (FLIM), image contrast is generated by differences in exponential fluorescence lifetime decay rates, independent of spectral characteristics [48]. This imaging modality has found particular use in the identification of autofluorescent species and their varying redox states, which are often spectrally similar but exhibit large differences in fluorescence lifetimes [49–51]; the technique may also be applied to standard fluorescent labels [52–54]. The fluorescence lifetime of a fluorophore can depend significantly on local environmental conditions, such as pH, molecular conformation, and temperature [55–57], and thus adds both information and complexity to the unmixing data set. Interestingly, studies in confocal microscopy and macroscopic fluorescence imaging have demonstrated the utility of combining spectral detection with FLIM [42,58], and future iterations of the HMM would likely benefit from fewer spectral channels and the addition of lifetime measurements for particularly overlapped labels or the simultaneous identification of varying autofluorescent species. Although the HMM enabled visualization of multiple fluorescent labels in vivo, there were some significant limitations that future instrument iterations could address.
The frame-by-frame acquisition approach limited imaging speed to ~35 s for a 48-channel image, which is too slow to catch fast cell dynamics such as calcium spiking, blood flow, or rapid cell migration, but fast enough for time-lapse imaging of cell dynamics over the scale of minutes or longer. As with any linear unmixing problem, theoretically only N channels must be acquired to identify N fluorophores. Therefore, for many of the samples in this paper, analysis of the full 48-channel data set was not required to separate the fluorophores present. Several examples of subsampled data sets were analyzed and showed favorable unmixing results compared to the larger data set (Fig. S23, Supplement 1). For future in vivo experiments with this instrument, multiplexed samples could first be imaged across all 48 channels to identify the channels with high SNR and unique features for the dyes in that particular sample. Any channels with little or no signal could be left out of further images, significantly reducing acquisition time and data set size, and the remaining channels could be reordered to minimize scan time using a basic algorithm (i.e., acquiring as many channels as possible in parallel). In addition, the spectral width of channels was not optimized here and depended on which ATBFs were commercially available. In hyperspectral imaging, SNR is generally improved by reducing the channel number, using spectrally broader channels, and rejecting channels with low SNR in the unmixing (provided a sufficient number of channels for unmixing is retained) [59]. A book chapter by Zimmermann provides a detailed discussion of the influence of factors such as channel number, channel bandwidth and separation, reference spectra, and SNR on unmixing precision in fluorescence microscopy [59]. To further increase speed, new instrument iterations could also include two-color two-photon excitation using synchronized pulses to excite fluorophores not otherwise usable [60–64], or temporal multiplexing of excitation wavelengths alongside time-gated signal detection to multiplex across excitation wavelengths in a pixel-by-pixel manner, dramatically increasing acquisition speed to detect faster cell dynamics and avoiding concerns about sample motion between different excitation lasers. In addition to scan speed limitations, we were concerned that imaging the same area with three excitation lasers risked sample damage. However, we imaged live cells typically sensitive to high levels of radiation (dendrites and microglia) with no signs of cell toxicity or damage. We also noticed that shorter laser wavelengths failed to penetrate deep into tissue, leading to the inability to excite certain dyes throughout the entire imaged tissue volume [for example, Cascade Blue dextran and the 800-nm source, Figs. 4(c) and 4(d)]. As demonstrated in this paper, in addition to instrumentation capable of spectral detection, effective multiplexing in vivo requires the development of robust labeling methods and access to the imaging site. Researchers have long realized the need for complex in vivo studies in rodent disease models and have developed a wide range of mouse models that enable the study of Alzheimer's disease, stroke, and cancer, among many others. Within these models, surgical techniques have been developed to provide optical access for imaging in various organ systems, including the brain [65], spinal cord [66], heart [67–69], gut [70,71], kidney [72], and lung [73].
In addition, new fluorescent labeling strategies provide unprecedented specificity in identifying structures and cell types. Confetti-based multicolor methods for cell identification within a population have been demonstrated in kidney [74] and gut [75]; a similar method termed multiaddressable genome-integrative color (MAGIC) markers has been demonstrated in embryonic neural progenitors in mouse and chick [76]; Brainbow labeling has been shown in a variety of organs [40,77–79]; and cobbled-together multicolor labeling strategies utilizing fluorescent protein expression as well as exogenous labels have been developed for multicellular interaction studies in cancer [27] and immunology [80–84]. Efforts to generate mouse lines with spectrally distinct labels for a wide range of cells (immune cells, vasculature, neurons, cancer cells, etc.) would enable a multitude of in vivo studies of multicellular interactions when combined with the surgical approaches that provide long-term optical access to different organs and imaging instruments that enable clean separation of different fluorescent labels even in scattering tissue, such as the HMM described here.

Congressionally Directed Medical Research Programs (W81XWH-16-1-0666); National Institute of Biomedical Imaging and Bioengineering (EB002019); National Institute of Allergy and Infectious Diseases (AI102851).

We thank Semrock, Inc. for the donation of some optical filters and for advice on the use of the ATBFs. We are grateful to Prof. Frank Wise and Dr. Chi-Yong Eom for providing comments on a draft of this paper, and to Daniel Rivera for help with image registration methods.

See Supplement 1 for supporting content.

1. C. B. Black, T. D. Duensing, L. S. Trinkle, and R. T. Dunlay, "Cell-based screening using high-throughput flow cytometry," Assay Drug Dev. Technol. 9, 13–20 (2011). [CrossRef] 2. P. O. Krutzik, M. R. Clutter, A. Trejo, and G. P. Nolan, "Fluorescent cell barcoding for multiplex flow cytometry," Curr. Protoc. Cytom. 55, 6–31 (2011). [CrossRef] 3. S. C. De Rosa, J. M. Brenchley, and M. Roederer, "Beyond six colors: a new era in flow cytometry," Nat. Med. 9, 112–117 (2003). [CrossRef] 4. L. A. Herzenberg, D. Parks, B. Sahaf, O. Perez, M. Roederer, and L. A. Herzenberg, "The history and future of the fluorescence activated cell sorter and flow cytometry: a view from Stanford," Clin. Chem. 48, 1819–1827 (2002). [CrossRef] 5. T. Liechti and M. Roederer, "OMIP-051–28-color flow cytometry panel to characterize B cells and myeloid cells," Cytometry Part A 95, 150–155 (2019). [CrossRef] 6. M. Solomon, M. DeLay, and D. Reynaud, "Phenotypic analysis of the mouse hematopoietic hierarchy using spectral cytometry: from stem cell subsets to early progenitor compartments," Cytometry Part A 15, 1057–1065 (2020). [CrossRef] 7. P. K. Chattopadhyay, C. M. Hogerkorp, and M. Roederer, "A chromatic explosion: the development and future of multiparameter flow cytometry," Immunology 125, 441–449 (2008). [CrossRef] 8. F. Tarín, F. López-Castaño, C. García-Hernández, P. Beneit, H. Sarmiento, P. Manresa, O. Alda, B. Villarrubia, M. Blanes, J. Bernabéu, C. Amorós, S. Sánchez-Sánchez, C. Fernández-Miñano, F. De Paz, J. Verdú-Belmar, P. Marco, and E. Matutes, "Multiparameter flow cytometry identification of neoplastic subclones: a new biomarker in monoclonal gammopathy of undetermined significance and multiple myeloma," Acta Haematol. 141, 1–6 (2019). [CrossRef] 9. C. S. Ma and S. G.
Tangye, "Flow cytometric-based analysis of defects in lymphocyte differentiation and function due to inborn errors of immunity," Front. Immunol. 10, 2108 (2019). [CrossRef] 10. C. A. Gedye, A. Hussain, J. Paterson, A. Smrke, H. Saini, D. Sirskyj, K. Pereira, N. Lobo, J. Stewart, C. Go, J. Ho, M. Medrano, E. Hyatt, J. Yuan, S. Lauriault, M. Meyer, M. Kondratyev, T. van den Beucken, M. Jewett, P. Dirks, C. J. Guidos, J. Danska, J. Wang, B. Wouters, B. Neel, R. Rottapel, and L. E. Ailles, "Cell surface profiling using high-throughput flow cytometry: a platform for biomarker discovery and analysis of cellular heterogeneity," PLoS ONE 9, e105602 (2014). [CrossRef] 11. M. J. Soloski and F. J. Chrest, "Multiparameter flow cytometry for discovery of disease mechanisms in rheumatic diseases," Arthritis Rheum. 65, 1148–1156 (2013). [CrossRef] 12. S. Blom, L. Paavolainen, D. Bychkov, R. Turkki, P. Mäki-Teeri, A. Hemmes, K. Välimäki, J. Lundin, O. Kallioniemi, and T. Pellinen, "Systems pathology by multiplexed immunohistochemistry and whole-slide digital image analysis," Sci. Rep. 7, 15580 (2017). [CrossRef] 13. E. C. Stack, C. Wang, K. A. Roman, and C. C. Hoyt, "Multiplexed immunohistochemistry, imaging, and quantitation: a review, with an assessment of Tyramide signal amplification, multispectral imaging and multiplex analysis," Methods 70, 46–58 (2014). [CrossRef] 14. P. Hofman, C. Badoual, F. Henderson, L. Berland, M. Hamila, E. Long-Mira, S. Lassalle, H. Roussel, V. Hofman, E. Tartour, and M. Ilié, "Multiplexed immunohistochemistry for molecular and immune profiling in lung cancer-just about ready for prime-time?" Cancers 11, 283 (2019). [CrossRef] 15. T. N. Gide, I. P. Silva, C. Quek, T. Ahmed, A. M. Menzies, M. S. Carlino, R. P. M. Saw, J. F. Thompson, M. Batten, G. V. Long, R. A. Scolyer, and J. S. Wilmott, "Close proximity of immune and tumor cells underlies response to anti-PD-1 based therapies in metastatic melanoma patients," Oncoimmunology 9, 1659093 (2020). [CrossRef] 16. M. P. Humphries, V. Bingham, F. Abdullahi Sidi, S. G. Craig, S. McQuaid, J. James, and M. Salto-Tellez, "Improving the diagnostic accuracy of the PD-L1 test with image analysis and multiplex hybridization," Cancers 12, 1114 (2020). [CrossRef] 17. S. Coy, R. Rashid, J. R. Lin, Z. Du, A. M. Donson, T. C. Hankinson, N. K. Foreman, P. E. Manley, M. W. Kieran, D. A. Reardon, P. K. Sorger, and S. Santagata, "Multiplexed immunofluorescence reveals potential PD-1/PD-L1 pathway vulnerabilities in craniopharyngioma," Neuro Oncol. 20, 1101–1112 (2018). [CrossRef] 18. D. Kobat, N. G. Horton, and C. Xu, "In vivo two-photon microscopy to 1.6-mm depth in mouse cortex," J. Biomed. Opt. 16, 106014 (2011). [CrossRef] 19. R. Cicchi, A. Crisci, G. Nesi, A. Cosci, S. Giancane, M. Carini, and F. S. Pavone, "Multispectral multiphoton lifetime analysis of human bladder tissue," Proc. SPIE 7161, 716116 (2009). [CrossRef] 20. S. Kumazaki, M. Hasegawa, M. Ghoneim, Y. Shimizu, K. Okamoto, M. Nishiyama, H. Oh-Oka, and M. Terazima, "A line-scanning semi-confocal multi-photon fluorescence microscope with a simultaneous broadband spectral acquisition and its application to the study of the thylakoid membrane of a cyanobacterium Anabaena PCC7120," J. Microsc. 228, 240–254 (2007). [CrossRef] 21. I. Paylova, K. R. Hume, S. A. Yazinski, J. Flanders, T. L. Southard, R. S. Weiss, and W. W. Webb, "Multiphoton microscopy and microspectroscopy for diagnostics of inflammatory and neoplastic lung," J. Biomed. Opt. 17, 036014 (2012). [CrossRef] 22. W. R. 
Zipfel, R. M. Williams, R. Christie, A. Y. Nikitin, B. T. Hyman, and W. W. Webb, "Live tissue intrinsic emission microscopy using multiphoton-excited native fluorescence and second harmonic generation," Proc. Natl. Acad. Sci. USA 100, 7075–7080 (2003). [CrossRef] 23. M. Oheim, E. Beaurepaire, E. Chaigneau, J. Mertz, and S. Charpak, "Two-photon microscopy in brain tissue: parameters influencing the imaging depth," J. Neurosci. Methods 111, 29–37 (2001). [CrossRef] 24. M. Brondi, S. S. Sato, L. F. Rossi, S. Ferrara, and G. M. Ratto, "Finding a needle in a haystack: identification of EGFP tagged neurons during calcium imaging by means of two-photon spectral separation," Front. Mol. Neurosci. 5, 96 (2012). [CrossRef] 25. R. Orzekowsky-Schroeder, A. Klinger, B. Martensen, M. Blessenohl, A. Gebert, A. Vogel, and G. Huttmann, "In vivo spectral imaging of different cell types in the small intestine by two-photon excited autofluorescence," J. Biomed. Opt. 16, 116025 (2011). [CrossRef] 26. A. J. Radosevich, M. B. Bouchard, S. A. Burgess, B. R. Chen, and E. M. C. Hillman, "Hyperspectral in vivo two-photon microscopy of intrinsic contrast," Opt. Lett. 33, 2164–2166 (2008). [CrossRef] 27. C. Ricard and F. C. Debarbieux, "Six-color intravital two-photon imaging of brain tumors and their dynamic microenvironment," Front. Cellular Neurosci. 8, 57 (2014). [CrossRef] 28. L. E. Grosberg, A. J. Radosevich, S. Asfaha, T. C. Wang, and E. M. Hillman, "Spectral characterization and unmixing of intrinsic contrast in intact normal and diseased gastric tissues using hyperspectral two-photon microscopy," PLoS ONE 6, e19925 (2011). [CrossRef] 29. J. Schindelin, I. Arganda-Carreras, E. Frise, V. Kaynig, M. Longair, T. Pietzsch, S. Preibisch, C. Rueden, S. Saalfeld, B. Schmid, J. Y. Tinevez, D. J. White, V. Hartenstein, K. Eliceiri, P. Tomancak, and A. Cardona, "Fiji: an open-source platform for biological-image analysis," Nat. Methods 9, 676–682 (2012). [CrossRef] 30. E. Provost, J. Rhee, and S. D. Leach, "Viral 2A peptides allow expression of multiple proteins from a single ORF in Transgenic zebrafish embryos," Genesis 45, 625–629 (2007). [CrossRef] 31. Z. M. Shen, Z. Y. Lu, P. Y. Chhatbar, P. O'Herron, and P. Kara, "An artery-specific fluorescent dye for studying neurovascular coupling," Nat. Methods 9, 273–277 (2012). [CrossRef] 32. A. Gaylo-Moynihan, H. Prizant, M. Popovic, N. R. J. Fernandes, C. S. Anderson, K. K. Chiou, H. Bell, D. C. Schrock, J. Schumacher, T. Capece, B. L. Walling, D. J. Topham, J. Miller, A. V. Smrcka, M. Kim, A. Hughson, and D. J. Fowell, "Programming of distinct chemokine-dependent and -independent search strategies for Th1 and Th2 cells optimizes function at inflamed sites," Immunity 51, 298–309 (2019). [CrossRef] 33. A. Gaylo, M. G. Overstreet, and D. J. Fowell, "Imaging CD4 T cell interstitial migration in the inflamed dermis," Jove-J. Vis. Exp. 109, e53585 (2016). [CrossRef] 34. F. Sainsbury, M. Benchabane, M. C. Goulet, and D. Michaud, "Multimodal protein constructs for herbivore insect control," Toxins 4, 455–475 (2012). [CrossRef] 35. H. W. Ai, J. N. Henderson, S. J. Remington, and R. E. Campbell, "Directed evolution of a monomeric, bright and photostable version of Clavularia cyan fluorescent protein: structural characterization and applications in fluorescence imaging," Biochem. J. 400, 531–540 (2006). [CrossRef] 36. M. Cotlet, P. M. Goodwin, G. S. Waldo, and J. H. 
Werner, "A comparison of the fluorescence dynamics of single molecules of a green fluorescent protein: one- versus two-photon excitation," Chemphyschem 7, 250–260 (2006). [CrossRef] 37. N. C. Shaner, G. G. Lambert, A. Chammas, Y. Ni, P. J. Cranfill, M. A. Baird, B. R. Sell, J. R. Allen, R. N. Day, M. Israelsson, M. W. Davidson, and J. Wang, "A bright monomeric green fluorescent protein derived from Branchiostoma lanceolatum," Nat Methods 10, 407–409 (2013). [CrossRef] 38. D. M. Shcherbakova, M. A. Hink, L. Joosen, T. W. Gadella, and V. V. Verkhusha, "An orange fluorescent protein with a large Stokes shift for single-excitation multicolor FCCS and FRET imaging," J. Am. Chem. Soc. 134, 7913–7923 (2012). [CrossRef] 39. R. A. Hill and J. Grutzendler, "In vivo imaging of oligodendrocytes with sulforhodamine 101," Nat. Methods 11, 1081–1082 (2014). [CrossRef] 40. J. Livet, T. A. Weissman, H. N. Kang, R. W. Draft, J. Lu, R. A. Bennis, J. R. Sanes, and J. W. Lichtman, "Transgenic strategies for combinatorial expression of fluorescent proteins in the nervous system," Nature 450, 56–62 (2007). [CrossRef] 41. F. Bestvater, Z. Seghiri, M. S. Kang, N. Groner, J. Y. Lee, K. B. Im, and M. Wachsmuth, "EMCCD-based spectrally resolved fluorescence correlation spectroscopy," Opt. Express 18, 23818–23828 (2010). [CrossRef] 42. T. Niehorster, A. Loschberger, I. Gregor, B. Kramer, H. J. Rahn, M. Patting, F. Koberling, J. Enderlein, and M. Sauer, "Multi-target spectrally resolved fluorescence lifetime imaging microscopy," Nat. Methods 13, 257–262 (2016). [CrossRef] 43. M. B. Sinclair, D. M. Haaland, J. A. Timlin, and H. D. T. Jones, "Hyperspectral confocal microscope," Appl. Opt. 45, 6283–6291 (2006). [CrossRef] 44. J. Y. Tseng, A. A. Ghazaryan, W. Lo, Y. F. Chen, V. Hovhannisyan, S. J. Chen, H. Y. Tan, and C. Y. Dong, "Multiphoton spectral microscopy for imaging and quantification of tissue glycation," Biomed. Opt. Express 2, 218–230 (2011). [CrossRef] 45. F. Deng, C. Ding, J. C. Martin, N. M. Scarborough, Z. Song, G. S. Eakins, and G. J. Simpson, "Spatial-spectral multiplexing for hyperspectral multiphoton fluorescence imaging," Opt. Express 25, 32243–32253 (2017). [CrossRef] 46. R. Lansford, G. Bearman, and S. E. Fraser, "Resolution of multiple green fluorescent protein color variants and dyes using two-photon microscopy and imaging spectroscopy," J. Biomed. Opt. 6, 311–318 (2001). [CrossRef] 47. H. Gugel, I. Böhm, and F. Neugart, "Freely tunable spectral detection for multiphoton microscopy," Proc. SPIE 10882, 108820S (2019). [CrossRef] 48. R. Datta, T. M. Heaster, J. T. Sharick, A. A. Gillette, and M. C. Skala, "Fluorescence lifetime imaging microscopy: fundamentals and advances in instrumentation, analysis, and applications," J. Biomed. Opt. 25, 1–43 (2020). [CrossRef] 49. A. V. Meleshina, V. V. Dudenkova, A. S. Bystrova, D. S. Kuznetsova, M. V. Shirmanova, and E. V. Zagaynova, "Two-photon FLIM of NAD(P)H and FAD in mesenchymal stem cells undergoing either osteogenic or chondrogenic differentiation," Stem Cell Res. Ther. 8, 15 (2017). [CrossRef] 50. P. M. Schaefer, S. Kalinina, A. Rueck, C. A. F. von Arnim, and B. von Einem, "NADH autofluorescence-a marker on its way to boost bioenergetic research," Cytometry Part A 95, 34–46 (2019). [CrossRef] 51. T. S. Blacker, Z. F. Mann, J. E. Gale, M. Ziegler, A. J. Bain, G. Szabadkai, and M. R. Duchen, "Separating NADH and NADPH fluorescence in live cells and tissues using FLIM," Nat. Commun. 5, 3936 (2014). [CrossRef] 52. G. J. Kremers, E. B. van Munster, J. 
Modeling optimal treatment strategies in a heterogeneous mixing model

Seoyun Choe & Sunmi Lee

Abstract

Background: Many mathematical models assume random or homogeneous mixing for various infectious diseases. Homogeneous mixing can be generalized to mathematical models with multi-patches or age structure by incorporating contact matrices to capture the dynamics of heterogeneously mixing populations. Contact or mixing patterns are difficult to measure in many infectious diseases, including influenza, yet they are considered to be one of the critical factors for infectious disease modeling.

Methods: A two-group influenza model is considered to evaluate the impact of heterogeneous mixing on the influenza transmission dynamics. Heterogeneous mixing between two groups with different activity levels includes proportionate mixing, preferred mixing and like-with-like mixing. Furthermore, an optimal control problem is formulated for this two-group influenza model to identify group-specific optimal treatment strategies at a minimal cost. We investigate group-specific optimal treatment strategies under various mixing scenarios.

Results: The characteristics of the two-group influenza dynamics have been investigated in terms of the basic reproduction number and the final epidemic size under various mixing scenarios. As the mixing pattern becomes more proportionate, the basic reproduction number becomes smaller; however, the final epidemic size becomes larger. This is because the number of infected people increases only slightly in the higher activity level group, while it increases much more significantly in the lower activity level group. Our results indicate that more intensive treatment of both groups at the early stage is the most effective approach regardless of the mixing scenario. However, proportionate mixing requires more treated cases for all combinations of group activity levels and group population sizes.

Conclusions: Mixing patterns can play a critical role in the effectiveness of optimal treatments. As the mixing becomes more like-with-like, treating the higher activity group is almost as effective as treating the entire population: it reduces the number of disease cases nearly as much while requiring a comparable amount of treatment. The gain becomes more pronounced as the basic reproduction number increases. This is a critical issue that must be considered for future pandemic influenza interventions, especially when only limited resources are available.

Background

Timely and effective countermeasures against influenza challenge global health experts around the world, especially when limited resources are available. Mathematical modeling has made significant contributions to understanding the spread of influenza and has provided useful insights for controlling or reducing the disease burden [1–4]. A number of mathematical models assume random or homogeneous mixing for the influenza dynamics, which can provide a good approximation to real epidemiological phenomena [5, 6]. This simple assumption of homogeneous mixing can be extended to more general mathematical models with multi-patches or age structure by incorporating contact matrices to capture the dynamics of heterogeneously mixing populations [1, 7]. In general, contact or mixing patterns are difficult to measure in many infectious diseases, including influenza. There is no doubt that mixing patterns are one of the critical factors for infectious disease modeling.
There are many different approaches that allow us to investigate the impact of contact patterns on the transmission dynamics of infectious diseases. Age-dependent contact matrices based on empirical social data have been estimated [8, 9]. Age-dependent transmission matrices that describe the mixing and the probability of infection have been studied using synthetic data [10]. In these studies, it has been noted that contact patterns depend strongly on distinct age groups, and therefore the heterogeneity of contact patterns should be recognized as an important feature for the realistic modeling of many infectious diseases. Moreover, individual-based or network-based models can provide more detail on the disease dynamics by studying the effects of heterogeneous and clustered contact patterns. Contact patterns and their underlying network structures have been shown to be one of the critical factors determining the characteristics of infectious disease transmission [11, 12]. Also, several different types of heterogeneity, such as susceptibility, infectivity and mixing patterns, have been incorporated into infectious disease models [13, 14]. Agent- or network-based models have been developed to study effective controls in an influenza pandemic [2, 15, 16].

Deterministic models, which are much simpler than network-based models, have been successfully employed to study the transmission dynamics of various infectious diseases and have continued to produce valuable insights. Preferred mixing has been used to highlight the role of contact patterns in the HIV transmission dynamics [17, 18]. The impact of selective mixing has been studied in the transmission of STDs [19]. In these studies, the contact rate matrices are formulated in terms of activity levels and subpopulation sizes by using a proportionate mixing assumption. The relation between the basic reproduction number and the initial exponential growth rate of an epidemic has been extended to models with heterogeneous mixing [20–22]. The authors show that an epidemic with heterogeneous mixing may have a quite different epidemic size than an epidemic with homogeneous mixing, even though the two may have the same reproduction number and initial exponential growth rate. Determination of the final size of an epidemic under the assumption of heterogeneous mixing requires additional data from the initial exponential growth stage of the epidemic [21]. More recently, a two-group influenza model has been used to study the impact of heterogeneous mixing on the probability of the extinction of influenza [23, 24]. It has been pointed out that heterogeneous mixing between two subgroups plays a key role in explaining the delays in the geographic spread of the 2009 H1N1 pandemic observed in Mexico and Japan.

In this manuscript, a deterministic two-group model is used to study the influenza transmission dynamics in heterogeneous environments. A two-group model allows for different activity levels and heterogeneous mixing between subgroups. In particular, the two groups are coupled by a mixing matrix whose entries \(p_{ij}\), \(i, j = 1, 2\), represent the proportion of contacts that members of group \(i\) have with members of group \(j\). This two-group model of influenza is an extension of the prototype model by Brauer [20], in which a two-group model is used to investigate the impact of proportionate mixing on the basic reproduction number and the final epidemic size. Our model involves more extensive heterogeneous mixing scenarios between the two groups.
Specifically, several mixing scenarios are considered, including proportionate mixing, preferred mixing and like-with-like mixing, by varying the group mixing fractions. We explore how these mixing patterns can affect the basic reproduction number and the final epidemic size. Heterogeneous mixing certainly changes the reproduction number and the final epidemic size, but it is not trivial to determine whether different mixing assumptions change them substantially or not. The level of transmissibility, measured by the basic reproduction number \(\mathcal{R}_0\), is varied to highlight the differences and similarities among the results under several heterogeneous mixing scenarios. Moreover, we formulate an optimal control framework to investigate how these mixing patterns influence the effectiveness of group-specific treatment strategies in the two-group model. Under various mixing scenarios, optimal group-specific treatment strategies and the corresponding influenza outcomes are compared. This can help us address important issues such as allocating optimal treatments for future pandemic preparedness plans.

Methods

A heterogeneous mixing model

We consider a two-group influenza model based on a standard compartmental SITR model. Two additional compartments, a latent class and an asymptomatic class, are included due to the characteristics of influenza. Each class is divided into two subpopulations of sizes \(N_1\) and \(N_2\). For each group \(i = 1, 2\), we have a susceptible class \(S_i\), a latent class \(L_i\), an infected class with symptoms \(I_i\), an asymptomatic infected class without symptoms \(A_i\) and a treated class \(T_i\). The model involves two different age groups, which are connected by a mixing matrix \((p_{ij})\), \(i, j = 1, 2\), allowing for subgroups with different activity levels and heterogeneous mixing between these subgroups. The two-group influenza model can be written as
$$ \begin{aligned} S_{1}'&=-a_{1}\left[p_{11}\frac{S_{1}(I_{1}+\sigma T_{1}+\delta A_{1})}{N_{1}}+p_{12}\frac{S_{1}(I_{2}+\sigma T_{2}+\delta A_{2})}{N_{2}}\right],\\ L_{1}'&=a_{1}\left[p_{11}\frac{S_{1}(I_{1}+\sigma T_{1}+\delta A_{1})}{N_{1}}+p_{12}\frac{S_{1}(I_{2}+\sigma T_{2}+\delta A_{2})}{N_{2}}\right]-\kappa_{1}L_{1},\\ I_{1}'&=p\kappa_{1}L_{1}-(\alpha_{1}+u_{1})I_{1},\\ A_{1}'&=(1-p)\kappa_{1}L_{1}-\eta_{1}A_{1},\\ T_{1}'&=u_{1}I_{1}-\alpha_{T,1}T_{1},\\ D_{1}'&=d_{1}\alpha_{1}I_{1}+d_{T,1}\alpha_{T,1}T_{1}, \end{aligned} $$
$$ \begin{aligned} S_{2}'&=-a_{2}\left[p_{21}\frac{S_{2}(I_{1}+\sigma T_{1}+\delta A_{1})}{N_{1}}+p_{22}\frac{S_{2}(I_{2}+\sigma T_{2}+\delta A_{2})}{N_{2}}\right],\\ L_{2}'&=a_{2}\left[p_{21}\frac{S_{2}(I_{1}+\sigma T_{1}+\delta A_{1})}{N_{1}}+p_{22}\frac{S_{2}(I_{2}+\sigma T_{2}+\delta A_{2})}{N_{2}}\right]-\kappa_{2}L_{2},\\ I_{2}'&=p\kappa_{2}L_{2}-(\alpha_{2}+u_{2})I_{2},\\ A_{2}'&=(1-p)\kappa_{2}L_{2}-\eta_{2}A_{2},\\ T_{2}'&=u_{2}I_{2}-\alpha_{T,2}T_{2},\\ D_{2}'&=d_{2}\alpha_{2}I_{2}+d_{T,2}\alpha_{T,2}T_{2}. \end{aligned} $$
For each group \(i\), \(\kappa_i\) is the rate of passage from the latent class to the symptomatic or asymptomatic infective classes; \(p\) is the fraction of latent members who become symptomatic infectious, while the remaining fraction \((1-p)\) progresses to the asymptomatic stage; \(\delta\) is the infectivity reduction factor for the asymptomatic class and \(\sigma\) is the infectivity reduction factor for treated members; \(\alpha_i\) (\(\alpha_{T,i}\)) is the natural recovery rate from the infected (treated) stage to the removed stage, and \(\eta_i\) is the rate of passage from the asymptomatic to the removed stage. Also, \(u_i\) is a constant treatment rate, which will be replaced by a time-dependent treatment rate in a later section, and \(d_i\) (\(d_{T,i}\)) is the disease-induced death rate from the infected (treated) class.
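To make the system concrete for numerical experiments, the sketch below encodes the right-hand side of model (1) in Python. It is a minimal illustration rather than the authors' code: the function name `two_group_rhs` and the parameter-dictionary layout are conventions introduced here, and all parameter values must be supplied by the user (e.g., from Table 1).

```python
import numpy as np

def two_group_rhs(t, y, params):
    """Right-hand side of the two-group S-L-I-A-T(-D) system (1).

    y packs the state as [S1, L1, I1, A1, T1, D1, S2, L2, I2, A2, T2, D2].
    params holds numpy arrays/tuples: a (activity levels), P (2x2 mixing
    matrix p_ij), N (group sizes), and the per-group rates of the model.
    """
    S1, L1, I1, A1, T1, D1, S2, L2, I2, A2, T2, D2 = y
    a, P, N = params["a"], params["P"], params["N"]
    kappa, p = params["kappa"], params["p"]
    sigma, delta = params["sigma"], params["delta"]
    alpha, alpha_T = params["alpha"], params["alpha_T"]
    eta, u = params["eta"], params["u"]
    d, d_T = params["d"], params["d_T"]

    # Effective infectivity of each group, (I + sigma*T + delta*A)/N
    lam = np.array([(I1 + sigma * T1 + delta * A1) / N[0],
                    (I2 + sigma * T2 + delta * A2) / N[1]])
    # Force of infection on group i: a_i * sum_j p_ij * lam_j
    foi = a * (P @ lam)

    dS1 = -foi[0] * S1
    dL1 = foi[0] * S1 - kappa[0] * L1
    dI1 = p * kappa[0] * L1 - (alpha[0] + u[0]) * I1
    dA1 = (1 - p) * kappa[0] * L1 - eta[0] * A1
    dT1 = u[0] * I1 - alpha_T[0] * T1
    dD1 = d[0] * alpha[0] * I1 + d_T[0] * alpha_T[0] * T1

    dS2 = -foi[1] * S2
    dL2 = foi[1] * S2 - kappa[1] * L2
    dI2 = p * kappa[1] * L2 - (alpha[1] + u[1]) * I2
    dA2 = (1 - p) * kappa[1] * L2 - eta[1] * A2
    dT2 = u[1] * I2 - alpha_T[1] * T2
    dD2 = d[1] * alpha[1] * I2 + d_T[1] * alpha_T[1] * T2

    return [dS1, dL1, dI1, dA1, dT1, dD1, dS2, dL2, dI2, dA2, dT2, dD2]
```

Packed this way, the function can be passed directly to scipy.integrate.solve_ivp, as illustrated in a later sketch.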
In this manuscript, we generalize the proportionate mixing assumption to preferred mixing and carry out the mathematical analysis under different mixing patterns. Suppose that the members of group \(i\) make \(a_i\) contacts per unit time and that the fraction of contacts made by members of group \(i\) with members of group \(j\) is \(p_{ij}\), for \(i, j = 1, 2\); then we have
$$ p_{11} + p_{12} = p_{21} + p_{22} = 1. $$
For our two-group influenza model, we consider preferred mixing, in which a fraction \(\pi_i\) of the contacts of each group is reserved for its own group and the remaining contacts are distributed proportionately. Thus, preferred mixing is given by
$$ \begin{aligned} p_{11} &= \pi_{1} + (1-\pi_{1})p_{1},&&&&p_{12} = (1-\pi_{1})p_{2},\\ p_{21} &= (1-\pi_{2})p_{1}, &&&&p_{22} = \pi_{2}+(1-\pi_{2})p_{2},\\ \end{aligned} $$
where
$$ p_{i} = \frac{(1-\pi_{i})a_{i} N_{i}} {(1-\pi_{1})a_{1} N_{1} + (1-\pi_{2})a_{2} N_{2}}. $$
More details on the preferred mixing formulation can be found in previous studies [18, 20].

The impact of mixing patterns on the contact matrix

Let us investigate the impact of different mixing patterns on the contact matrix, defined as the product of the group activity levels \(a_i\) and the group mixing proportions \(p_{ij}\) (\(i, j = 1, 2\)) given in (2):
$$ C = \left[ \begin{array}{cc} a_{1}p_{11} & a_{1}p_{12} \\ a_{2}p_{21} & a_{2}p_{22} \end{array} \right]. $$
To illustrate the impact of different mixing patterns, several group mixing fractions are chosen: \(C_1\) (\(\pi_1 = \pi_2 = 0\)), \(C_2\) (\(\pi_1 = 0.25, \pi_2 = 0.75\)), \(C_3\) (\(\pi_1 = \pi_2 = 0.5\)), \(C_4\) (\(\pi_1 = 0.75, \pi_2 = 0.25\)) and \(C_5\) (\(\pi_1 = \pi_2 = 1\)), using \(a_1 = 0.5260\), \(a_2 = 0.2670\), \(N_1 = 1750\) and \(N_2 = 250\). Then we get the following contact matrices:
$$ C_{1} = \left[ \begin{array}{cc} 0.4904 & 0.0356 \\ 0.2489 & 0.0181 \end{array} \right], C_{2} = \left[ \begin{array}{cc} 0.5167 & 0.0093 \\ 0.0652 & 0.2018 \end{array} \right], C_{3} = \left[ \begin{array}{cc} 0.5082 & 0.0178 \\ 0.1245 & 0.1425 \end{array} \right], $$
$$ C_{4} = \left[ \begin{array}{cc} 0.5025 & 0.0235 \\ 0.1645 & 0.1025 \end{array} \right], C_{5} = \left[ \begin{array}{cc} 0.5260 & 0 \\ 0 & 0.2670 \end{array} \right]. $$
When the group mixing fractions are \(\pi_1 = \pi_2 = 0\), we have proportionate mixing, which is a special case of preferred mixing (\(C_1\)). It is also possible to have like-with-like mixing, when \(\pi_1 = \pi_2 = 1\), in which members of each group mix only with members of the same group. That is, for like-with-like mixing, \(p_{11} = p_{22} = 1\) and \(p_{12} = p_{21} = 0\) (\(C_5\)), and the contact matrix is diagonal.
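The preferred-mixing construction is easy to check numerically. The sketch below (our own helper, not from the paper) builds the mixing matrix \((p_{ij})\) and the contact matrix \(C = (a_i p_{ij})\), and reproduces \(C_1\) for the proportionate case; setting pi = [1.0, 1.0] recovers the diagonal like-with-like matrix \(C_5\).

```python
import numpy as np

def mixing_matrix(a, N, pi):
    """2x2 preferred-mixing matrix (p_ij) for activity levels a,
    group sizes N and group mixing fractions pi."""
    a, N, pi = (np.asarray(v, dtype=float) for v in (a, N, pi))
    w = (1.0 - pi) * a * N                            # weights (1 - pi_i) a_i N_i
    p = w / w.sum() if w.sum() > 0 else np.zeros(2)   # proportionate shares p_i
    return np.outer(1.0 - pi, p) + np.diag(pi)        # (1 - pi_i) p_j + pi_i on diagonal

a, N = [0.5260, 0.2670], [1750.0, 250.0]
P = mixing_matrix(a, N, [0.0, 0.0])    # proportionate mixing (pi_1 = pi_2 = 0)
C = np.asarray(a)[:, None] * P         # contact matrix a_i * p_ij
print(np.round(C, 4))                  # [[0.4904 0.0356] [0.2489 0.0181]], i.e. C_1
```

Each row of the returned matrix sums to 1, consistent with \(p_{i1} + p_{i2} = 1\).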
The basic reproduction number

One of the most important quantities in mathematical epidemiology is the basic reproduction number, the average number of secondary infections produced when one infectious individual is introduced into a wholly susceptible population. The basic reproduction number can be calculated by using the next-generation matrix approach outlined in [25, 26]. Since the model includes treatment, we also compute the controlled reproduction number in the presence of constant treatment rates \(u_i\). Let \(x = (L_1, I_1, A_1, T_1, L_2, I_2, A_2, T_2)^T\), let \(F(x)\) represent all the new infection rates, and let \(V(x)\) represent the net transition rates out of the corresponding compartments. We then find the Jacobian matrices of \(F(x)\) and \(V(x)\) evaluated at the disease-free equilibrium point \(x^*\), at which \(S_1 = N_1\), \(S_2 = N_2\) and all other components are zero. The spectral radius of the matrix \(FV^{-1}\) yields the basic reproduction number (\(\mathcal{R}_0\)) and the controlled reproduction number (\(\mathcal{R}_c\)) in the presence of treatment (more details are given in Appendix A). As a result, the basic reproduction number \(\mathcal{R}_0\) with \(u_1 = u_2 = 0\) is
$$ \mathcal{R}_{0}=\frac{1}{2}\left(a_{1}p_{11}\Phi_{1}+a_{2}p_{22}\Phi_{2}+\sqrt{(a_{1}p_{11} \Phi_{1}-a_{2}p_{22}\Phi_{2})^{2}+4a_{1}a_{2}p_{12}p_{21}\Phi_{1}\Phi_{2}} \right), $$
where \(\Phi_{1}=\left(\frac{\delta (1-p)}{\eta_{1}}+\frac{p}{\alpha_{1}} \right)\) and \(\Phi_{2}=\left(\frac{\delta (1-p)}{\eta_{2}}+\frac{p}{\alpha_{2}} \right)\). The controlled reproduction number \(\mathcal{R}_c\) is
$$ \mathcal{R}_{c}=\frac{1}{2}\left(a_{1}p_{11}\Gamma_{1}+a_{2}p_{22}\Gamma_{2}+\sqrt{(a_{1}p_{11}\Gamma_{1}-a_{2}p_{22}\Gamma_{2})^{2}+4a_{1}a_{2}p_{12}p_{21}\Gamma_{1}\Gamma_{2}} \right), $$
where \(\Gamma_{1}=\left(\frac{\delta (1-p)}{\eta_{1}}+\frac{p(\alpha_{T,1}+\sigma u_{1})}{\alpha_{T,1}(\alpha_{1}+u_{1})} \right)\) and \(\Gamma_{2}=\left(\frac{\delta (1-p)}{\eta_{2}} + \frac{p(\alpha_{T,2}+\sigma u_{2})}{\alpha_{T,2}(\alpha_{2}+u_{2})} \right)\).

These expressions for \(\mathcal{R}_0\) and \(\mathcal{R}_c\) generalize the ones obtained under proportionate mixing [20] to preferred mixing. The activity levels and the mixing fractions play a critical role in the basic reproduction number. For instance, taking partial derivatives of \(p_{ij}\) with respect to \(\pi_i\), we can show that \(p_{11}\) and \(p_{22}\) increase while \(p_{12}\) and \(p_{21}\) decrease as either \(\pi_1\) or \(\pi_2\) increases to 1. As a result, the basic reproduction number increases as preferred mixing approaches like-with-like mixing. Numerical sensitivity analysis of \(\mathcal{R}_0\) and \(\mathcal{R}_c\) is carried out in the next section.
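The closed-form expression above is the spectral radius of a 2x2 reduced next-generation matrix with entries \(K_{ij} = a_i p_{ij} \Phi_j\), which a few lines of code can verify. The sketch below is illustrative: Phi and reproduction_number are hypothetical helper names, and the parameter values are placeholders rather than the paper's Table 1 values.

```python
import numpy as np

def Phi(p, delta, eta, alpha):
    """Phi_i = delta*(1-p)/eta_i + p/alpha_i (no treatment)."""
    return delta * (1.0 - p) / eta + p / alpha

def reproduction_number(a, P, Phi_vec):
    """Closed-form R0 for the two-group model (use Gamma_i in place of Phi_i for R_c)."""
    m11 = a[0] * P[0, 0] * Phi_vec[0]
    m22 = a[1] * P[1, 1] * Phi_vec[1]
    disc = (m11 - m22) ** 2 + 4 * a[0] * a[1] * P[0, 1] * P[1, 0] * Phi_vec[0] * Phi_vec[1]
    return 0.5 * (m11 + m22 + np.sqrt(disc))

# Illustrative placeholder values
a = np.array([0.526, 0.267])
P = np.array([[0.9324, 0.0676], [0.9324, 0.0676]])  # proportionate mixing
Phi_vec = np.array([Phi(2 / 3, 0.5, 0.244, 0.244)] * 2)

R0 = reproduction_number(a, P, Phi_vec)
# Cross-check against the spectral radius of K_ij = a_i * p_ij * Phi_j
K = a[:, None] * P * Phi_vec[None, :]
assert np.isclose(R0, np.abs(np.linalg.eigvals(K)).max())
```

Replacing Phi_vec with the corresponding \(\Gamma_i\) values for given treatment rates \(u_i\) yields \(\mathcal{R}_c\) in the same way.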
The final size relation

For a one-group epidemic model, there is a final size relation that makes it possible to calculate the size of the epidemic from the reproduction number [5, 22, 27, 28]. In this section, we establish a final size relation for the two-group model (1) with \(u_1 = u_2 = 0\). This relation does not involve the basic reproduction number explicitly, but it still makes it possible to calculate the size of the epidemic from the model parameters. The final size relation of the model (1) can be obtained as
$$ \left[ \begin{array}{c} \ln\frac{S_{1}(0)}{S_{1}(\infty)} \\ \ln\frac{S_{2}(0)}{S_{2}(\infty)} \end{array} \right] = \left[ \begin{array}{c} a_{1} p_{11}\Phi_{1}\left(1-\frac{S_{1}(\infty)}{N_{1}(0)} \right)+ a_{1} p_{12}\Phi_{2}\left(1-\frac{S_{2}(\infty)}{N_{2}(0)}\right) \\ a_{2} p_{21}\Phi_{1}\left(1-\frac{S_{1}(\infty)}{N_{1}(0)} \right)+ a_{2} p_{22}\Phi_{2}\left(1-\frac{S_{2}(\infty)}{N_{2}(0)}\right) \end{array} \right]. $$
To relate the final size relation to the basic reproduction number, we use the eigenvector \(\mathbf{v}\) associated with \(\mathcal{R}_0\), as in [21]:
$$ \mathbf{v}=[v_{1},0,0,0,v_{2},0,0,0]^{T}, $$
with
$$ v_{1}=\frac{a_{1} p_{11} \Phi_{1} -a_{2} p_{22} \Phi_{2} + \sqrt{(a_{1}p_{11}\Phi_{1} -a_{2}p_{22}\Phi_{2})^{2}+4a_{1}a_{2}p_{12}p_{21}\Phi_{1}\Phi_{2}}}{2a_{1} p_{2} \Phi_{1} (1-\pi_{2})}, \qquad v_{2}=1. $$
The eigenvalue equation can be written as
$$ \left[ \begin{array}{cc} p_{11} \Phi_{1} v_{1} & (1-\pi_{1})p_{1} \Phi_{2} v_{2} \\ (1-\pi_{2})p_{2} \Phi_{1} v_{1} & p_{22} \Phi_{2} v_{2} \end{array} \right]\left[ \begin{array}{c} a_{1} \\ a_{2} \end{array}\right] = \left[\begin{array}{c} \mathcal{R}_{0} v_{1} \\ \mathcal{R}_{0} v_{2} \end{array} \right]. $$
Also, the activity levels can be found in terms of \(\mathcal{R}_0\) using (4) and (5):
$$ a_{1}=\mathcal{R}_{0}\left(\frac{v_{1}p_{22}-v_{2}(1-\pi_{1})p_{1}}{p_{11}p_{22}-p_{12}p_{21}}\right)\left(\frac{1}{\Phi_{1} v_{1}}\right), \qquad a_{2}=\mathcal{R}_{0}\left(\frac{v_{2}p_{11}-v_{1}(1-\pi_{2})p_{2}}{p_{11}p_{22}-p_{12}p_{21}}\right)\left(\frac{1}{\Phi_{2} v_{2}}\right). $$
When these values are substituted into the final size system, \(S_1(\infty)\) and \(S_2(\infty)\) can be expressed in terms of the model parameters. As the analytic expressions above show, the group-specific final sizes are coupled with each other in a complex way. In a previous study [21], the system simplifies under proportionate mixing, and it can be shown that the final epidemic size in group 1 is larger than in group 2 when \(a_1 > a_2\). It has also been pointed out that \(\mathcal{R}_0\) alone is not enough to determine the final epidemic size, because of this complex coupling. Since it is difficult to see analytically how different mixing patterns affect the final size relation, we carry out the sensitivity analysis numerically as the mixing patterns are varied in the following sections. Details on the computation of the final size relation are given in Appendix A and in the references [20, 21].
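Although the final size system is implicit, it can be solved by a simple fixed-point iteration. The sketch below is one such solver, written under the assumption of an almost fully susceptible initial population (\(S_i(0) \approx N_i(0)\)); final_size is a hypothetical helper name, not the authors' code.

```python
import numpy as np

def final_size(a, P, Phi_vec, s0=(0.999, 0.999), tol=1e-12, max_iter=10_000):
    """Solve the two-group final-size relation by fixed-point iteration.

    Returns s_i = S_i(inf)/N_i(0); the attack ratio in group i is 1 - s_i.
    s0 holds the initial susceptible fractions S_i(0)/N_i(0).
    """
    a, P, Phi_vec, s0 = map(np.asarray, (a, P, Phi_vec, s0))
    s = np.zeros(2)           # start below the epidemic fixed point (monotone convergence)
    for _ in range(max_iter):
        # exponent_i = a_i * sum_j p_ij * Phi_j * (1 - s_j), from the relation above
        expo = a * (P @ (Phi_vec * (1.0 - s)))
        s_next = s0 * np.exp(-expo)
        if np.max(np.abs(s_next - s)) < tol:
            return s_next
        s = s_next
    return s
```

With the proportionate-mixing matrix and the placeholder \(\Phi_i\) values from the previous sketch, final_size returns the group-specific susceptible fractions \(s_i\), from which the attack ratios \(1 - s_i\) follow directly.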
Modeling optimal treatment strategy

Optimal control theory has been used frequently in a number of biological and epidemiological models (see [29] and the references therein). For influenza transmission models, optimal interventions have been identified and their impact on the influenza dynamics investigated [30–32]. Various intervention strategies such as vaccination, antiviral treatment and isolation controls have been studied; optimal strategies for the 1918 influenza pandemic with limited resources [33] and age-dependent optimal vaccination strategies in the context of the transmission dynamics of the 2009 influenza pandemic [34, 35] have been investigated. We employ optimal control theory to explore the impact of antiviral treatment in situations that mimic 1918-like influenza pandemic scenarios. We modify model (1) by incorporating time-dependent control functions to measure the effectiveness of group-specific treatment strategies.

Intervention strategies (policies) are modeled by the functions \(u_i(t)\) (\(i = 1, 2\)) that externally control the number of treated cases. The objective functional \(\mathcal{F}\) over a finite time interval \([0, T]\) is given by the expression
$$ \mathcal{F}(u_{1}(t),u_{2}(t)) = \int_{0}^{T} \left(C_{1}I_{1}(t)+C_{2}I_{2}(t)+\frac{W_{1}}{2}u_{1}^{2}(t)+\frac{W_{2}}{2}u_{2}^{2}(t)\right) dt. $$
We choose to model the control efforts via a linear combination of quadratic terms \(u_i^2(t)\) (\(i = 1, 2\)). The constants \(C_1, C_2\) are the weight constants for infected individuals and \(W_1, W_2\) are the relative costs of the interventions. We might include the cost of deaths in the objective functional in order to emphasize the cost of disease-induced deaths; however, it turns out that the results with and without the cost of deaths are almost indistinguishable (results not shown here). The optimal control problem is that of finding optimal functions \((u_1^*(t), u_2^*(t))\) such that
$$ \mathcal{F}(u_{1}^{*}(t), u_{2}^{*}(t)) = \min_{\Omega} \mathcal{F}(u_{1}(t),u_{2}(t)), $$
where \(\Omega = \{(u_1(t), u_2(t)) \in (L^1(0,T))^2 \mid 0 \le u_1(t), u_2(t) \le b,\ t \in [0, T]\}\), subject to the state equations given by (1) with initial conditions. The existence of optimal controls is guaranteed by standard results of optimal control theory [36]. Pontryagin's Maximum Principle is used to establish the necessary conditions that must be satisfied by an optimal solution [37]; derivations of the necessary conditions are shown in Appendix B. A two-point boundary method [29] is employed to find numerical solutions to (7). First, the state system (1) is solved forward in time with initial conditions. Then, the adjoint system with transversality conditions is solved backward in time. Finally, the optimality condition is updated, and the whole procedure is iterated until convergence is achieved.
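The forward-backward sweep just described can be condensed into a short loop. To keep the adjoint system readable, the sketch below uses a one-group S-I toy version of the treatment problem (the full two-group adjoints are derived in the paper's Appendix B); all parameter values are placeholders chosen only for illustration.

```python
import numpy as np

beta, alpha, N = 0.4, 0.244, 2000.0   # toy transmission, recovery and population
C, W, b = 1.0, 50.0, 0.5              # objective weights and control upper bound
T, n = 100.0, 2000                    # time horizon and grid size
dt = T / n

S = np.empty(n + 1); I = np.empty(n + 1)
lamS = np.zeros(n + 1); lamI = np.zeros(n + 1)
u = np.zeros(n + 1)                   # initial control guess

for _ in range(200):
    # 1) state system forward in time (S' = -beta S I/N, I' = beta S I/N - (alpha + u) I)
    S[0], I[0] = N - 10.0, 10.0
    for k in range(n):
        inc = beta * S[k] * I[k] / N
        S[k + 1] = S[k] + dt * (-inc)
        I[k + 1] = I[k] + dt * (inc - (alpha + u[k]) * I[k])
    # 2) adjoint system backward in time with transversality lam(T) = 0
    lamS[n] = lamI[n] = 0.0
    for k in range(n, 0, -1):
        dlamS = (lamS[k] - lamI[k]) * beta * I[k] / N
        dlamI = -C + (lamS[k] - lamI[k]) * beta * S[k] / N + lamI[k] * (alpha + u[k])
        lamS[k - 1] = lamS[k] - dt * dlamS
        lamI[k - 1] = lamI[k] - dt * dlamI
    # 3) optimality condition u* = min(b, max(0, lamI * I / W)), with a relaxed update
    u_new = np.clip(lamI * I / W, 0.0, b)
    if np.max(np.abs(u_new - u)) < 1e-6:
        break
    u = 0.5 * u + 0.5 * u_new
```

The relaxed (convex-combination) control update in step 3 is a standard device to stabilize the sweep; the two-group problem follows the same pattern with one adjoint variable per state compartment.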
The baseline parameter values are given in Table 1 and have been taken from [20].

Table 1. Parameter definitions and baseline values used in the numerical simulations.

Results

We present the numerical simulations associated with implementing optimal treatment control functions, as well as their effect on the two-group influenza dynamics, under different mixing patterns. In order to investigate the impact of mixing patterns, the group mixing fractions \(\pi_i\) are varied from 0 to 1 for each \(i = 1, 2\), including proportionate mixing (\(\pi_i = 0\)), half mixing (\(\pi_i = 0.5\)) and like-with-like mixing (\(\pi_i = 1\)).

The results in the absence of treatments

First, we illustrate the influenza dynamics of (1) in the absence of treatments (\(u_1 = u_2 = 0\)). Figure 1 compares the group-specific incidence under three mixing scenarios: proportionate mixing (dotted curve), half mixing (solid curve) and like-with-like mixing (dashed curve). The results are shown for two different values of \(\mathcal{R}_0\) (a moderate value and a higher value, on the left and right, respectively). The group 2 incidence is smaller under like-with-like mixing than under proportionate mixing, while the group 1 incidence is larger. The incidence in the lower activity level group 2 grows much more strongly than that in the higher activity group as the mixing becomes more proportionate. Hence, the total incidence, and thus the final epidemic size, becomes smaller as the mixing approaches like-with-like mixing. The differences in the final epidemic size also become more significant as \(\mathcal{R}_0\) gets larger (right panels).

Fig. 1. The impact of mixing patterns on the group-specific incidence. The incidence for each group is displayed under proportionate mixing (dotted), half mixing (solid) and like-with-like mixing (dashed). The left panels show the results for the moderate value \(\mathcal{R}_0 = 1.32\), while the right panels show the results for the higher value \(\mathcal{R}_0 = 2.45\).

Next, the basic reproduction number \(\mathcal{R}_0\) is displayed as a function of the group mixing fractions in Fig. 2. The left panel shows \(\mathcal{R}_0\) for a moderate value of the activity levels (\(\mathcal{R}_0 \in [1.35, 1.45]\)), while the right panel uses a higher value of the activity levels (\(\mathcal{R}_0 \in [2.45, 2.55]\)). Both panels show that the basic reproduction number gets slightly larger as preferred mixing approaches like-with-like mixing (either \(\pi_1\) or \(\pi_2\) approaches 1). This is consistent with the analytic expression for \(\mathcal{R}_0\): since \(p_{11}\) and \(p_{22}\) increase and \(p_{12}\) and \(p_{21}\) decrease as either \(\pi_1\) or \(\pi_2\) tends to 1, the basic reproduction number increases as preferred mixing approaches like-with-like mixing. For the parameter values used here, it is worth mentioning that the effect of the lower activity group mixing fraction (\(\pi_2\)) on \(\mathcal{R}_0\) is slightly more significant than that of the higher activity group mixing fraction (\(\pi_1\)): the slope along the \(\pi_2\) axis increases more than the slope along the \(\pi_1\) axis in both panels.

Fig. 2. The impact of mixing patterns on the basic reproduction number \(\mathcal{R}_0\). The basic reproduction number is displayed as a function of \(\pi_1\) and \(\pi_2\) for fixed activity levels (a moderate \(\mathcal{R}_0\) in the left panel and a higher \(\mathcal{R}_0\) in the right panel).

We compute the final epidemic size, the number of members of the population who are infected over the course of the epidemic, \(N - S_\infty\) with \(S_\infty = \lim_{t \to \infty} S(t)\). This can be described in terms of the final attack ratio, \(1 - S_\infty/N\). In Fig. 3, the final attack ratio is displayed under various mixing scenarios (six different combinations of \(\pi_1\) and \(\pi_2\)). The left and middle panels show the final attack ratios for group 1 and group 2, respectively, while the right panel shows the total final attack ratio. Note that the range of \(\mathcal{R}_0\) is between 1.72 and 1.82 (x-axis), using \(a_1 = 0.526\) and \(a_2 = 0.267\), as the mixing fractions are varied. The final attack ratio for group 1 gets larger as the mixing becomes like-with-like (\(\pi_1 = \pi_2 = 1\)), while it becomes significantly smaller in group 2. Consequently, the total final attack ratio becomes smaller as the mixing becomes like-with-like. Proportionate mixing makes the individuals in group 2 more likely to get infected than like-with-like mixing, significantly increasing the final attack ratio in group 2. As a result, the total final attack ratio follows exactly the same order (i.e., it becomes smaller as \(\pi_2\) tends to 1 in the right panel). Moreover, all results follow the order of the group 2 mixing fraction (\(\pi_2\)), whether the final attack ratio is decreasing or increasing (all panels). Therefore, the basic reproduction number and the final attack ratio are not ordered consistently, which reconfirms that the basic reproduction number alone is not sufficient to determine whether preferred mixing increases the final epidemic size or not [21].

Fig. 3. The impact of mixing patterns on the final attack ratio. The impact of different mixing patterns on the final attack ratio is displayed using \(a_1 = 0.526\) and \(a_2 = 0.267\). All results are in order of \(\pi_2\), whether decreasing or increasing, regardless of the different combinations of mixing fractions.
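As a consistency check on attack ratios of this kind, one can also integrate the untreated model directly and read off \(1 - S_i(\infty)/N_i\). The sketch below reuses the two_group_rhs function defined earlier; every numerical value in the parameter dictionary that is not quoted in the text (e.g., \(\kappa_i\), \(p\), \(\alpha_i\)) is a placeholder, not a value from Table 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

params = {
    "a": np.array([0.526, 0.267]),        # activity levels quoted in the text
    "N": np.array([1750.0, 250.0]),       # group sizes quoted in the text
    "P": np.array([[0.9324, 0.0676],      # proportionate mixing (pi_1 = pi_2 = 0)
                   [0.9324, 0.0676]]),
    "kappa": (0.526, 0.526), "p": 2.0 / 3.0,
    "sigma": 0.4, "delta": 0.5,
    "alpha": (0.244, 0.244), "alpha_T": (0.3, 0.3),
    "eta": (0.244, 0.244), "u": (0.0, 0.0),   # no treatment
    "d": (0.0, 0.0), "d_T": (0.0, 0.0),
}

y0 = [1745, 0, 5, 0, 0, 0, 249, 0, 1, 0, 0, 0]   # a few initial infectives
sol = solve_ivp(two_group_rhs, (0.0, 1000.0), y0, args=(params,),
                rtol=1e-8, atol=1e-10)

S_inf = np.array([sol.y[0, -1], sol.y[6, -1]])
attack = 1.0 - S_inf / params["N"]
print("group attack ratios:", np.round(attack, 3),
      "total:", round(1.0 - S_inf.sum() / params["N"].sum(), 3))
```

The total attack ratio computed this way can be compared against the fixed-point solution of the final size relation as a sanity check on both sketches.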
These results depend on the group activity level, the group mixing fraction and the group population size. A summary of the impact of these parameters on the final attack ratio, as group activity levels and group population sizes are varied from the baseline scenario, is presented in a later section.

The impact of mixing patterns on the group-specific optimal treatment

We now present the numerical simulations associated with implementing optimal treatment control functions, as well as their effect on the two-group influenza dynamics, under different mixing patterns. The impact of different levels of transmissibility is also investigated by varying the basic reproduction number. Figures 4 and 5 show the results under three different mixing patterns using a moderate value of \(\mathcal{R}_0 \in [1.73, 1.79]\). As in the previous section, the three mixing patterns are proportionate mixing, half mixing and like-with-like mixing. In Fig. 4, the proportions of incidence and cumulative incidence in the presence of optimal treatments (red curves) are compared with the results in the absence of treatments (black curves). The results show that there is no outbreak in the presence of treatments, indicating that group-specific optimal treatments are effective enough to prevent outbreaks regardless of the mixing pattern.

Fig. 4. The impact of optimal group-specific controls under different mixing patterns. The proportion of group-specific incidence in the presence of optimal treatment (red curves) is displayed under the three mixing patterns for a moderate value of \(\mathcal{R}_0\). The results are compared with the ones in the absence of treatment (black curves).

Fig. 5. Optimal group-specific controls under different mixing patterns. Optimal group-specific treatment and the corresponding incidence are displayed (top panels) under the three mixing patterns using a moderate value of \(\mathcal{R}_0\). The proportions of cumulative treated and cumulative infected individuals are displayed under proportionate mixing, half mixing and like-with-like mixing (bottom panels).

Figure 5 illustrates the impact of mixing patterns on the group-specific optimal treatment controls and the proportion of incidence under the three mixing patterns. Note that the time window of treatment is wider for group 1 under all mixing patterns, while the treatment period becomes shorter for group 2 as the mixing becomes like-with-like (top panels). This is because group 1 has a higher activity level and a larger population size than group 2. Also, the cumulative treated proportion becomes smaller for group 2 as the mixing becomes like-with-like, while the opposite holds for group 1. As noted in the previous section, the incidence in the lower activity level group 2 grows much more strongly than that in the higher activity group 1 as the mixing becomes more proportionate.
Hence, the final epidemic size gets smaller and less treatment is required as mixing becomes like-with-like (bottom panels). Figures 6 and 7 show the results under the three mixing patterns using a higher value of \(\mathcal {R}_{0} \in [2.45, 2.53]\) and higher activity levels \(a_{1}=0.742\), \(a_{2}=0.377\). Again, in Fig. 6 the proportions of incidence and cumulative incidence in the presence of optimal treatment (red curves) are compared with the results in the absence of treatment (black curves). Since the basic reproduction number is higher, optimal treatment cannot stop the outbreaks under any mixing pattern. Figure 7 displays the group-specific optimal treatment controls and the proportion of incidence. We observe that the time period of treatment gets shorter for group 1 but longer for group 2 than in the moderate-\(\mathcal {R}_{0}\) case. As a result, the cumulative treated cases increase significantly in both groups. As \(\mathcal {R}_{0}\) becomes higher, the number of infected individuals in group 2 increases dramatically as the mixing becomes proportionate. Note that group 2 (the lower-activity group) is more sensitive to the mixing pattern.

The impact of optimal age-specific controls under different mixing patterns. The proportion of group-specific incidence in the presence of optimal treatment (red curves) is displayed under the three mixing patterns for a higher value of \(\mathcal {R}_{0}\). The results are compared with the ones in the absence of treatment (black curves).

Optimal age-specific controls under different mixing patterns. Optimal group-specific treatment and the corresponding incidence are displayed (top panels) under the three mixing patterns using a higher value of \(\mathcal {R}_{0}\). The proportions of cumulative treated and cumulative infected individuals are displayed under proportionate mixing, half mixing and like-with-like mixing (bottom panels).

The impact of mixing patterns on the group-specific final epidemic size

Figure 8 displays the group-specific final attack ratio and the total final attack ratio as a function of \({\mathcal {R}}_{0}\) in the absence of treatment under the three distinct mixing patterns. The basic reproduction number increases as the activity levels are increased in the ranges \(a_{1} \in [0.1, 0.827]\) and \(a_{2} \in [0.05, 0.42]\). The left panel shows that the final attack ratio for group 1 is almost indistinguishable regardless of mixing. The final attack ratio for group 2 has the largest value under proportionate mixing (circles), while it becomes smaller as the mixing becomes like-with-like (triangles). Consequently, the total final attack ratio (both groups) follows the order of group 2.

Final attack ratio in the absence of treatment. The final attack ratio in the absence of treatment is displayed as a function of \(\mathcal {R}_{0}\) under the three mixing patterns. The basic reproduction number increases as the activity levels increase in the ranges \(a_{1} \in [0.1, 0.827]\) and \(a_{2} \in [0.05, 0.42]\).

Figure 9 presents comparisons of the final attack ratio and the proportion of cumulative treated as a function of \({\mathcal {R}}_{0}\) in the presence of treatment. It is clear how optimal treatment strategies and the mixing fractions affect the final attack ratio. Similar to the results in the absence of treatment, the final attack ratio for group 1 is almost indistinguishable under all mixing patterns.
Again, the final attack ratio for group 2 becomes smaller as the mixing becomes like-with-like. Moreover, these results show that optimal treatment strategies can significantly limit the severity of outbreaks when \({\mathcal {R}}_{0}\) is brought below a certain threshold (the controlled reproduction number, \({\mathcal {R}}_{c}\)). In particular, the reduction is dramatic for the lower-activity group (triangles in the top middle panel). The cumulative treated results are consistent with the final attack ratio results (more infections require more treatment; bottom panels).

Final attack ratio in the presence of treatment. The impact of the mixing patterns on the final attack ratio is illustrated. The final attack ratio in the presence of optimal treatment strategies is displayed as a function of \(\mathcal {R}_{0}\) under the three mixing patterns.

The impact of control parameters

There are two critical control parameters that greatly change the corresponding dynamics. One of them is the control upper bound, \(b_{i}\), which represents the maximum level of effectiveness for implementing the treatment. The influenza outcomes depend on the control upper bound in a straightforward fashion. As the control upper bound is decreased, the magnitude of control decreases for all mixing patterns, as shown in Fig. 10. This leads to a longer time of treatment in both groups and, interestingly, results in larger costs (more treated cases) and more infected cases. This indicates that a higher treatment rate is more effective (fewer treated cases and fewer infections, as seen in Fig. 5).

The impact of different control upper bounds. The optimal group-specific treatment and the corresponding incidence are displayed under the three mixing patterns. The results are presented using a lower control bound (\(b_{1}=b_{2}=0.2\)).

The other control parameters are the weight constants, the relative costs of treatment, which can play a critical role in the influenza dynamics. As we increase these parameters, the relative cost of control increases and therefore the magnitude of the optimal controls decreases, resulting in an increase in infected individuals and in the final epidemic size in both groups, regardless of the mixing pattern (Fig. 11).

The impact of different weight constants. The optimal group-specific treatment and the corresponding incidence are displayed under the three mixing patterns. The results are presented using a higher weight constant (\(W_{1}=W_{2}=100\)).

The impact of different activity levels and subpopulation sizes

All simulation results so far have been based on the case where group 1 has a higher activity level and a larger population size than group 2 (\(a_{1}>a_{2}\), \(N_{1}>N_{2}\)). Now we investigate the impact of different group activity levels and subpopulation sizes on the optimal treatments and the resulting two-group influenza dynamics. There are a total of nine scenarios as we vary the group activity levels (\(a_{1}, a_{2}\)) and subpopulation sizes (\(N_{1}, N_{2}\)); only selected results are presented because the remaining cases are essentially identical.

Baseline scenario: \(a_{1}>a_{2}\), \(N_{1}>N_{2}\)
Scenario 1: \(a_{1}>a_{2}\), \(N_{1}=N_{2}\)
Scenario 2: \(a_{1}>a_{2}\), \(N_{1}<N_{2}\)
Scenario 3: \(a_{1}=a_{2}\), \(N_{1}=N_{2}\)
Scenario 4: \(a_{1}<a_{2}\), \(N_{1}<N_{2}\)

1. Let us first consider Scenario 1, with \(a_{1}>a_{2}\) and \(N_{1}=N_{2}\): the numbers of infected and treated cases for group 1 under the baseline scenario are larger than those under Scenario 1, while the opposite holds for group 2 under all mixing patterns.
However, the total numbers of infected and treated cases are larger under the baseline scenario than under Scenario 1. When mixing is proportionate (\(\pi_{i}=0\)), the total cumulative treated (infected) number is 403 (894) under the baseline scenario and 376 (837) under Scenario 1. When mixing is like-with-like (\(\pi_{i}=1\)), the total cumulative treated (infected) number is 399 (885) under the baseline scenario and 236 (522) under Scenario 1. Interestingly, the baseline scenario requires more treatment and also yields more infected people than Scenario 1, and the difference becomes more significant as mixing becomes more like-with-like.

2. Scenario 2 is \(a_{1}>a_{2}\) and \(N_{1}<N_{2}\): here the higher-activity group has a smaller population size than the lower-activity group. The total cumulative treated (infected) number is 360 (799) under proportionate mixing and 73 (159) under like-with-like mixing. This demonstrates that optimal treatment has a more significant impact on reducing the epidemic size as mixing becomes like-with-like when the higher-activity group has a smaller subpopulation size. The treatment period for group 2 is longer than that for group 1 under proportionate mixing. However, the treatment period for group 2 decreases as \(\pi_{1}\) increases, so that the treatment period for group 1 becomes longer than that for group 2 as mixing becomes like-with-like. This suggests that the duration of treatment depends on the group activity levels (higher activity requires a longer treatment period).

3. In Scenario 3, the activity level and the population size are the same for group 1 and group 2. The basic reproduction number, the final attack ratios and the optimal group-specific treatments are almost the same under all mixing patterns. This shows that the impact of mixing patterns is not significant when the groups have the same activity level and the same population size.

Lastly, Scenario 4, with \(a_{1}<a_{2}\) and \(N_{1}<N_{2}\), is exactly identical to the baseline scenario (with the group labels exchanged), so the results are omitted.

Discussion

We have studied the dynamics of influenza transmission in a two-group model in which the two groups are connected via a mixing matrix. The model proposed here represents two age groups with different activity levels and distinct mixing patterns. Several mixing patterns are considered, such as proportionate and preferred mixing, by varying the group mixing fractions \(\pi_{i}\) for \(i=1,2\). The impact of these mixing patterns on the basic reproduction number and the group-specific final epidemic size is illustrated. Also, the intensity of \({\mathcal {R}}_{0}\) is varied by using different values of the group-specific activity levels. The basic reproduction number \({\mathcal {R}}_{0}\) increases as the mixing becomes like-with-like. Interestingly, in the absence of treatment, the opposite is true for the final epidemic size, which gets smaller as mixing becomes like-with-like, as reported in [23]. This is consistent with the observation that the basic reproduction number alone is not enough to determine the final epidemic size in a heterogeneous model [21]. However, the basic reproduction number and the final epidemic size depend on the group activity level and the group population size as well.
Under our baseline scenario (\(a_{1}>a_{2}\) and \(N_{1}>N_{2}\)), the final attack ratios decrease as \(\pi_{2}\) increases to 1, which implies that the group 2 mixing fraction determines the order of the final attack ratio. With different parameter sets, the order may change depending on either \(\pi_{1}\) or \(\pi_{2}\). Furthermore, we formulated an optimal control framework to investigate group-specific optimal treatment strategies under various mixing scenarios. For a moderate value of \(\mathcal {R}_{0}\), optimal treatment can prevent the outbreaks in both groups under all mixing patterns. The treatment time is longer for group 1 and its treated cases are more numerous, since group 1 has a higher activity level and a larger population size, regardless of the mixing pattern. For a higher value of \(\mathcal {R}_{0}\), optimal treatment cannot stop the outbreaks in either group under any mixing pattern. Compared with the moderate-\(\mathcal {R}_{0}\) results, the treatment time decreases for group 1 but increases for group 2. This is because the number of those infected in the lower-activity group gets significantly larger as the mixing becomes more proportionate. Therefore, when the two groups mix proportionately, more people must be treated, with emphasis on the lower-activity group. Optimal treatment strategies can significantly limit the severity of outbreaks when \({\mathcal {R}}_{0}\) is brought below a certain threshold (the controlled reproduction number, \({\mathcal {R}}_{c}\)). Under optimal treatment in both groups, the controlled reproduction number \({\mathcal {R}}_{c}\) and the final attack ratio decrease slightly as mixing becomes like-with-like. Again, the final attack ratio is ordered by either \(\pi_{1}\) or \(\pi_{2}\). Preferred mixing changes the basic reproduction number, the controlled reproduction number and the final epidemic size in a rather complex way, and the effect becomes more substantial as the epidemic becomes more severe. Further, the impact of different group activity levels and subpopulation sizes on the optimal treatments and the resulting two-group influenza dynamics is explored. First, the group mixing fractions can play a key role in the final attack ratio. For the case \(a_{1}>a_{2}\) and \(N_{1}=N_{2}\), \(\pi_{2}\) determines the order of the final attack ratio, i.e., the total final attack ratio becomes smaller as \(\pi_{2}\) approaches 1. For the case \(a_{1}>a_{2}\) and \(N_{1}<N_{2}\), the total final attack ratio becomes smaller as \(\pi_{1}\) approaches 1. Based on these results, the effectiveness of optimal treatment depends on the group-specific parameters. In general, optimal treatment becomes more efficient as the mixing becomes more like-with-like. The efficiency of optimal treatment is more substantial when the higher-activity group has a smaller population size (\(a_{1}>a_{2}\) and \(N_{1}<N_{2}\)). Also, the duration of treatment depends on the group activity levels, while the final attack ratios are more sensitive to the group population sizes. A sensitivity analysis for the control upper bound and the weight constant has been carried out. The results using a lower upper bound (\(b=0.2\)) and a higher weight constant (\(W=100\)) show that the magnitude of the treatment controls decreases and, therefore, the total numbers of treated and infected cases increase in both groups under all mixing patterns. Clearly, this indicates that a more intensive treatment, i.e., a higher treatment rate, reduces the total number of infected individuals more efficiently and with less treatment.
For the parameters used here, our results indicate that treating both groups at a higher rate is the most effective strategy, regardless of the mixing scenario. However, proportionate mixing requires more treated cases for all combinations of group activity levels and group population sizes. In other words, as the mixing becomes more like-with-like, treating the more active group in the population is almost as effective as treating the entire population, since it reduces the number of disease cases effectively while requiring a similar amount of treatment. The gain is more pronounced as the basic reproduction number increases. This can be a critical issue to consider for future epidemic interventions, especially when resources are limited. This study focuses on a two-group influenza model to explore the effect of heterogeneous mixing on group-specific optimal treatment. This simple two-group model can be applied to any disease involving two different activity levels and different mixing patterns. Furthermore, this work can be generalized to a multi-group influenza model (with more age groups) to capture more interesting and realistic epidemiological scenarios. This will be carried out in our future study.

Appendix

The basic reproduction number is calculated using the next generation matrix approach outlined in [26]. Let F(x) represent the rate of appearance of new infections, and let V(x) represent the net transition rates out of the corresponding compartments. We form the Jacobian matrices of F(x) and V(x), denoted \({\mathbf {F}}=\left [\frac {\partial F}{\partial \mathbf {x}_{j}}\right ]\) and \(\textbf {V}=\left [\frac {\partial V}{\partial \mathbf {x}_{j}}\right ]\), evaluated at the disease-free equilibrium point \(x^{*}\), at which \(S_{1}=N_{1}\), \(S_{2}=N_{2}\) and all other compartments are zero.

$$ \mathbf{F} = \left[ \begin{array}{cccccccc} 0 & a_{1} p_{11} \frac{S_{1}}{N_{1}} & a_{1} p_{11} \frac{S_{1} \delta}{N_{1}} & a_{1} p_{11}\frac{S_{1} \sigma}{N_{1}} & 0 & a_{1} p_{12} \frac{S_{1}}{N_{2}} & a_{1} p_{12} \frac{S_{1} \delta}{N_{2}} & a_{1} p_{12}\frac{S_{1} \sigma}{N_{2}}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & a_{2} p_{21} \frac{S_{2}}{N_{1}} & a_{2} p_{21} \frac{S_{2} \delta}{N_{1}} & a_{2} p_{21}\frac{S_{2} \sigma}{N_{1}} & 0 & a_{2} p_{22} \frac{S_{2}}{N_{2}} & a_{2} p_{22} \frac{S_{2} \delta}{N_{2}} & a_{2} p_{22}\frac{S_{2} \sigma}{N_{2}}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} \right]. $$

In F, \(a_{1}p_{12}\frac {S_{1}}{N_{2}}\) can be rewritten using the balance relation \(\frac {a_{2} p_{1}}{N_{1}}=\frac {a_{1} p_{2}}{N_{2}}\), which gives

$$ a_{1}((1-\pi_{1})p_{2})\frac{S_{1}}{N_{2}}=a_{2}((1-\pi_{1})p_{1})\frac{S_{1}}{N_{1}}. $$

$$\mathbf{V} = \left[ \begin{array}{cccccccc} \kappa_{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ -p\kappa_{1} & \alpha_{1}+u_{1} & 0 & 0 & 0 & 0 & 0 & 0\\ -(1-p)\kappa_{1} & 0 & \eta_{1} & 0 & 0 & 0 & 0 & 0\\ 0 & -u_{1} & 0 & \alpha_{T,1} & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & \kappa_{2} & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & -p\kappa_{2} & \alpha_{2}+u_{2} & 0 & 0\\ 0 & 0 & 0 & 0 & -(1-p)\kappa_{2} & 0 & \eta_{2} & 0\\ 0 & 0 & 0 & 0 & 0 & -u_{2} & 0 & \alpha_{T,2} \end{array} \right]. $$

The matrix \(\mathbf{F}\mathbf{V}^{-1}\) has six zero eigenvalues, and the remaining two eigenvalues are the roots of the following quadratic equation:

$$\lambda^{2}-(p_{11}a_{1}\Gamma_{1}+p_{22}a_{2}\Gamma_{2})\lambda+(p_{11}p_{22}-p_{12}p_{21})a_{1}a_{2}\Gamma_{1}\Gamma_{2}=0. $$

The controlled reproduction number \(\mathcal {R}_{c}\) is the larger of these two eigenvalues, which is

$$ \mathcal{R}_{c}=\frac{1}{2}\left(a_{1}p_{11}\Gamma_{1}+a_{2}p_{22}\Gamma_{2}+\sqrt{(a_{1}p_{11} \Gamma_{1}-a_{2}p_{22}\Gamma_{2})^{2}+4a_{1}a_{2}p_{12}p_{21}\Gamma_{1}\Gamma_{2}} \right), $$

where \(\Gamma _{1}=\left (\frac {\delta (1-p)}{\eta _{1}}+\frac {p(\alpha _{T,1}+\sigma u_{1})}{\alpha _{T,1}(\alpha _{1}+u_{1})} \right)\) and \(\Gamma _{2}=\left (\frac {\delta (1-p)}{\eta _{2}}+\frac {p(\alpha _{T,2}+\sigma u_{2})}{\alpha _{T,2}(\alpha _{2}+u_{2})} \right)\). The basic reproduction number \(\mathcal {R}_{0}\) is \(\mathcal {R}_{c}\) with \(u_{1}=u_{2}=0\):

$$\mathcal{R}_{0}=\frac{1}{2}\left(a_{1}p_{11}\Phi_{1}+a_{2}p_{22}\Phi_{2}+\sqrt{(a_{1}p_{11}\Phi_{1}-a_{2}p_{22}\Phi_{2})^{2}+4a_{1}a_{2}p_{12}p_{21}\Phi_{1}\Phi_{2}} \right), $$

where \(\Phi_{i}\) denotes \(\Gamma_{i}\) evaluated at \(u_{i}=0\).

Next, we compute the final size relation, introducing the notation \(g(\infty)\) for \({\lim }_{t\rightarrow \infty }g(t)\) and \(\hat {g}\) for \(\int _{0}^{\infty }g(t)dt\), assuming g is a nonnegative integrable function defined for \(0\leq t<\infty\). Then

$$\begin{aligned} &L_{1}(\infty)=0, \ \ I_{1}(\infty)=0, \ \ A_{1}(\infty)=0, \ \ T_{1}(\infty)=0,\\ &L_{2}(\infty)=0, \ \ I_{2}(\infty)=0, \ \ A_{2}(\infty)=0, \ \ T_{2}(\infty)=0,\\ &S_{1}(0)+L_{1}(0)-S_{1}(\infty)=N_{1}(0)-S_{1}(\infty)=\kappa_{1}\hat{L_{1}},\\ &S_{2}(0)+L_{2}(0)-S_{2}(\infty)=N_{2}(0)-S_{2}(\infty)=\kappa_{2}\hat{L_{2}}. \end{aligned} $$

Also, we have the following:

$$\begin{aligned} u_{1}\hat{I_{1}}&=\alpha_{T,1}\hat{T_{1}}, \ \ u_{2}\hat{I_{2}}=\alpha_{T,2}\hat{T_{2}},\\ (1-p)\kappa_{1}\hat{L_{1}}&=\eta_{1}\hat{A_{1}}, \ \ (1-p)\kappa_{2}\hat{L_{2}}=\eta_{2}\hat{A_{2}}, \\ p\kappa_{1}\hat{L_{1}}&=(\alpha_{1}+u_{1})\hat{I_{1}}, \ \ p\kappa_{2}\hat{L_{2}}=(\alpha_{2}+u_{2})\hat{I_{2}}. \end{aligned} $$

Using the first equation in (1), we have

$$ -\frac{S_{1}'}{S_{1}}=\frac{a_{1} p_{11}}{N_{1}}(I_{1}+\sigma T_{1}+\delta A_{1})+\frac{a_{1} p_{12}}{N_{2}}(I_{2}+\sigma T_{2}+\delta A_{2}). $$

Integrating Eq. (9),

$$ \begin{aligned} \ln\frac{S_{1}(0)}{S_{1}(\infty)}&=\frac{a_{1}p_{11}}{N_{1}}\left(\hat{I_{1}}+\sigma \hat{T_{1}}+\delta\hat{A_{1}}\right)+\frac{a_{1} p_{12}}{N_{2}}\left(\hat{I_{2}}+\sigma \hat{T_{2}}+\delta \hat{A_{2}}\right)\\ &= a_{1} p_{11}\left(\frac{p(\alpha_{T,1}+\sigma u_{1})}{\alpha_{T,1}(\alpha_{1}+ u_{1})}+\frac{(1-p)\delta}{\eta_{1}}\right)\left(1-\frac{S_{1}(\infty)}{N_{1}} \right)\\ &\quad+a_{1}p_{12}\left(\frac{p(\alpha_{T,2}+\sigma u_{2})}{\alpha_{T,2}(\alpha_{2}+ u_{2})}+\frac{(1-p)\delta}{\eta_{2}}\right)\left(1-\frac{S_{2}(\infty)}{N_{2}}\right)\\ \therefore \ln\frac{S_{1}(0)}{S_{1}(\infty)}&= a_{1} p_{11}\Gamma_{1}\left(1-\frac{S_{1}(\infty)}{N_{1}} \right)+a_{1} p_{12}\Gamma_{2}\left(1-\frac{S_{2}(\infty)}{N_{2}}\right). \end{aligned} $$

Similarly,

$$ -\frac{S_{2}'}{S_{2}}= \frac{a_{2} p_{21}}{N_{1}}(I_{1}+\sigma T_{1}+\delta A_{1})+\frac{a_{2} p_{22}}{N_{2}}(I_{2}+\sigma T_{2}+\delta A_{2}). $$
Also, integrating Eq. (11),

$$ \begin{aligned} \ln\frac{S_{2}(0)}{S_{2}(\infty)}&=\frac{a_{2} p_{21}}{N_{1}}\left(\hat{I_{1}}+\sigma \hat{T_{1}}+\delta \hat{A_{1}}\right)+\frac{a_{2} p_{22}}{N_{2}}\left(\hat{I_{2}}+\sigma \hat{T_{2}}+\delta \hat{A_{2}}\right)\\ &= a_{2} p_{21}\left(\frac{p(\alpha_{T,1}+\sigma u_{1})}{\alpha_{T,1}(\alpha_{1}+ u_{1})}+\frac{(1-p)\delta}{\eta_{1}}\right)\left(1-\frac{S_{1}(\infty)}{N_{1}} \right)\\ &\quad +a_{2}p_{22}\left(\frac{p(\alpha_{T,2}+\sigma u_{2})}{\alpha_{T,2}(\alpha_{2}+ u_{2})}+\frac{(1-p)\delta}{\eta_{2}}\right)\left(1-\frac{S_{2}(\infty)}{N_{2}}\right),\\ \therefore \ln\frac{S_{2}(0)}{S_{2}(\infty)}&= a_{2} p_{21}\Gamma_{1}\left(1-\frac{S_{1}(\infty)}{N_{1}} \right)+a_{2} p_{22}\Gamma_{2}\left(1-\frac{S_{2}(\infty)}{N_{2}}\right). \end{aligned} $$

The final size relation in the absence of treatment (\(u_{1}=u_{2}=0\)) can then be written as

$$ \left[ \begin{array}{c} \ln\frac{S_{1}(0)}{S_{1}(\infty)} \\ \ln\frac{S_{2}(0)}{S_{2}(\infty)} \end{array} \right] = \left[ \begin{array}{c} a_{1} p_{11}\Phi_{1}\left(1-\frac{S_{1}(\infty)}{N_{1}(0)} \right)+a_{1} p_{12}\Phi_{2}\left(1-\frac{S_{2}(\infty)}{N_{2}(0)}\right) \\ a_{2} p_{21}\Phi_{1}\left(1-\frac{S_{1}(\infty)}{N_{1}(0)} \right)+a_{2} p_{22}\Phi_{2}\left(1-\frac{S_{2}(\infty)}{N_{2}(0)}\right) \end{array} \right]. $$

The optimal control problem for the two-group influenza model is formulated to minimize the number of infected individuals over a finite time interval at minimal cost. We define the objective functional as

$$ \mathcal{F}\left(u_{1}(t),u_{2}(t)\right) ={\int_{0}^{T}} \left(C_{1}I_{1}(t)+C_{2}I_{2}(t)+\frac{W_{1}}{2}{u_{1}^{2}}(t)+\frac{W_{2}}{2}{u_{2}^{2}}(t) \right) dt. $$

We then seek an optimal pair \((U^{*},X^{*})\) such that

$$ \mathcal{F}(u_{1}^{*}(t), u_{2}^{*}(t)) = \min_{\Omega} \mathcal {F}(u_{1}(t),u_{2}(t)), $$

where \(\Omega=\{(u_{1}(t),u_{2}(t))\in (L^{1}(0,T))^{2} : 0\leq u_{1}(t),u_{2}(t)\leq b, \ t\in [0,T]\}\), subject to the state equations given by (1) with the initial conditions.
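For any fixed control pair and the corresponding trajectories of system (1), the objective functional \(\mathcal {F}\) is a one-dimensional integral and can be approximated by quadrature. A minimal sketch, with toy curves standing in for actual model output (every number below is a hypothetical placeholder):

```python
import numpy as np

# Toy stand-ins for I_1, I_2, u_1, u_2 on a grid over [0, T]; in practice
# these come from integrating system (1) under the chosen controls.
C1, C2, W1, W2 = 1.0, 1.0, 50.0, 50.0           # hypothetical cost weights
t = np.linspace(0.0, 100.0, 1001)
I1 = 300.0 * np.exp(-((t - 30.0) / 12.0) ** 2)  # toy incidence curves
I2 = 120.0 * np.exp(-((t - 40.0) / 15.0) ** 2)
u1 = np.clip(0.6 - 0.004 * t, 0.0, 0.8)         # toy control paths in [0, b]
u2 = np.clip(0.4 - 0.003 * t, 0.0, 0.8)

# Integrand C1*I1 + C2*I2 + (W1/2)u1^2 + (W2/2)u2^2, trapezoidal rule
f = C1 * I1 + C2 * I2 + 0.5 * W1 * u1**2 + 0.5 * W2 * u2**2
objective = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))
print(round(objective, 1))
```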
The existence of optimal controls is guaranteed by standard results in optimal control theory [36], and the necessary conditions for optimal solutions are derived from Pontryagin's Maximum Principle [37]. This principle converts systems (1), (6), (7) into the problem of minimizing the Hamiltonian H given by

$$ \begin{aligned} H&= C_{1}I_{1}(t)+C_{2}I_{2}(t)+\frac{W_{1}}{2}{u_{1}^{2}}(t)+\frac{W_{2}}{2}{u_{2}^{2}}(t)\\ &\quad +\lambda_{S_{1}}\left(-a_{1}\left[p_{11}\frac{S_{1}(I_{1}+\sigma T_{1}+\delta A_{1})}{N_{1}}+p_{12}\frac{S_{1}(I_{2}+\sigma T_{2}+\delta A_{2})}{N_{2}}\right]\right)\\ &\quad +\lambda_{S_{2}}\left(-a_{2}\left[p_{21}\frac{S_{2}(I_{1}+\sigma T_{1}+\delta A_{1})}{N_{1}}+p_{22}\frac{S_{2}(I_{2}+\sigma T_{2}+\delta A_{2})}{N_{2}}\right]\right)\\ &\quad +\lambda_{L_{1}}\left(a_{1}\left[p_{11}\frac{S_{1}(I_{1}+\sigma T_{1}+\delta A_{1})}{N_{1}}+p_{12}\frac{S_{1}(I_{2}+\sigma T_{2}+\delta A_{2})}{N_{2}}\right]-\kappa_{1}L_{1}\right)\\ &\quad +\lambda_{L_{2}}\left(a_{2}\left[p_{21}\frac{S_{2}(I_{1}+\sigma T_{1}+\delta A_{1})}{N_{1}}+p_{22}\frac{S_{2}(I_{2}+\sigma T_{2}+\delta A_{2})}{N_{2}}\right] -\kappa_{2}L_{2}\right)\\ &\quad +\lambda_{I_{1}}\left(p\kappa_{1}L_{1}-(\alpha_{1}+u_{1}(t))I_{1}\right) +\lambda_{I_{2}}\left(p\kappa_{2}L_{2}-(\alpha_{2}+u_{2}(t))I_{2}\right)\\ &\quad +\lambda_{A_{1}}\left((1-p)\kappa_{1}L_{1}-\eta_{1}A_{1}\right) +\lambda_{A_{2}}\left((1-p)\kappa_{2}L_{2}-\eta_{2}A_{2}\right)\\ &\quad +\lambda_{T_{1}}\left(u_{1}(t)I_{1}-\alpha_{T,1}T_{1}\right) +\lambda_{T_{2}}\left(u_{2}(t)I_{2}-\alpha_{T,2}T_{2}\right). \end{aligned} $$

From this Hamiltonian and Pontryagin's Maximum Principle [37], we obtain the following theorem.

Theorem. There exist optimal controls \(u_{1}^{*}(t), u_{2}^{*}(t)\) and corresponding solutions \(S^{*}_{i}\), \(L^{*}_{i}\), \(I^{*}_{i}\), \(A^{*}_{i}\) and \(T^{*}_{i}\) that minimize \(\mathcal {F}(u_{1}(t), u_{2}(t))\) over the domain Ω. For the above statement to hold, it is necessary that there exist continuous functions \(\lambda _{S_{i}}(t)\), \(\lambda _{L_{i}}(t)\), \(\lambda _{I_{i}}(t)\), \(\lambda _{A_{i}}(t)\) and \(\lambda _{T_{i}}(t)\) for \(i=1,2\) such that

$$ \begin{aligned} \dot \lambda_{S_{i}}(t)&= a_{i}\left\{ p_{i1}\frac{(I_{1}+\sigma T_{1}+\delta A_{1})}{N_{1}}+p_{i2}\frac{(I_{2}+\sigma T_{2}+\delta A_{2})}{N_{2}}\right\} (\lambda_{S_{i}}-\lambda_{L_{i}}),\\ \dot \lambda_{L_{i}}(t)&= \kappa_{i}\left\{ (\lambda_{L_{i}}-\lambda_{A_{i}})-p(\lambda_{I_{i}}-\lambda_{A_{i}}) \right\},\\ \dot\lambda_{I_{i}}(t)&= -C_{i}+a_{1}p_{1i}\frac{S_{1}}{N_{i}}(\lambda_{S_{1}}-\lambda_{L_{1}})+a_{2}p_{2i} \frac{S_{2}}{N_{i}}(\lambda_{S_{2}}-\lambda_{L_{2}})+(\alpha_{i}+u_{i})\lambda_{I_{i}}-u_{i}\lambda_{T_{i}},\\ \dot \lambda_{A_{i}}(t)&= a_{1}p_{1i}\frac{S_{1}\delta}{N_{i}}(\lambda_{S_{1}}-\lambda_{L_{1}})+a_{2}p_{2i} \frac{S_{2}\delta}{N_{i}}(\lambda_{S_{2}}-\lambda_{L_{2}})+\eta_{i} \lambda_{A_{i}},\\ \dot \lambda_{T_{i}}(t)&= a_{1}p_{1i}\frac{S_{1}\sigma}{N_{i}}(\lambda_{S_{1}}-\lambda_{L_{1}})+a_{2}p_{2i} \frac{S_{2}\sigma}{N_{i}}(\lambda_{S_{2}}-\lambda_{L_{2}})+\alpha_{T,i}\lambda_{T_{i}}, \end{aligned} $$

with the transversality conditions

$$ \lambda_{S_{i}}(T)=\lambda_{L_{i}}(T)=\lambda_{I_{i}}(T)=\lambda_{A_{i}}(T)=\lambda_{T_{i}}(T)=0, \ \ i = 1,2. $$

Furthermore, the optimal controls are characterized by

$$ \begin{aligned} u_{1}^{*}(t) &= \min\left\{ \max\left\{0, \frac{I_{1}}{W_{1}}(\lambda_{I_{1}} - \lambda_{T_{1}})\right\}, b \right\},\\ u_{2}^{*}(t) &= \min\left\{ \max\left\{0, \frac{I_{2}}{W_{2}}(\lambda_{I_{2}} - \lambda_{T_{2}})\right\}, b \right\}. \end{aligned} $$

Proof.
The existence of optimal controls follows from Corollary 4.1 of [36], since the integrand of \(\mathcal{F}\) is a convex function of U(t) and the state system satisfies the Lipschitz property with respect to the state variables. The following can be derived from Pontryagin's Maximum Principle [37]:

$$\begin{aligned} \lambda_{S_{1}}'=-\frac{\partial H}{\partial S_{1}} & =\left(\lambda_{S_{1}}-\lambda_{L_{1}}\right) \left[a_{1}\left(p_{11}\frac{(I_{1}+\sigma T_{1}+\delta A_{1})}{N_{1}}+p_{12}\frac{(I_{2}+\sigma T_{2}+\delta A_{2})}{N_{2}}\right)\right],\\ \lambda_{L_{1}}'=-\frac{\partial H}{\partial L_{1}} &=\kappa_{1}\left(\lambda_{L_{1}}-\lambda_{A_{1}}\right) +p\kappa_{1}\left(\lambda_{A_{1}}-\lambda_{I_{1}}\right),\\ \lambda_{I_{1}}'=-\frac{\partial H}{\partial I_{1}} &=-C_{1}+\left(\lambda_{S_{1}}-\lambda_{L_{1}}\right) \left(a_{1}p_{11}\frac{S_{1}}{N_{1}}\right) +\left(\lambda_{S_{2}}-\lambda_{L_{2}}\right) \left(a_{2}p_{21}\frac{S_{2}}{N_{1}}\right)\\ &\quad+\lambda_{I_{1}}\left(\alpha_{1}+u_{1}\right)-\lambda_{T_{1}}u_{1}, \end{aligned} $$

$$\begin{aligned} \lambda_{A_{1}}'=-\frac{\partial H}{\partial A_{1}} & =(\lambda_{S_{1}}-\lambda_{L_{1}})\left(a_{1}p_{11}\frac{\delta S_{1}}{N_{1}}\right) +(\lambda_{S_{2}}-\lambda_{L_{2}}) \left(a_{2}p_{21}\frac{\delta S_{2}}{N_{1}}\right)+\lambda_{A_{1}}\eta_{1}, \end{aligned} $$

$$\begin{aligned} \lambda_{T_{1}}'=-\frac{\partial H}{\partial T_{1}} & =(\lambda_{S_{1}}-\lambda_{L_{1}}) \left(a_{1}p_{11}\frac{\sigma S_{1}}{N_{1}}\right) +(\lambda_{S_{2}}-\lambda_{L_{2}})\left(a_{2}p_{21}\frac{\sigma S_{2}}{N_{1}}\right)+\lambda_{T_{1}}\alpha_{T,1}, \end{aligned} $$

$$\begin{aligned} \lambda_{S_{2}}'=-\frac{\partial H}{\partial S_{2}} &=\left(\lambda_{S_{2}}-\lambda_{L_{2}}\right) \left[a_{2}\left(p_{21}\frac{I_{1}+\sigma T_{1}+\delta A_{1}}{N_{1}}+p_{22}\frac{I_{2}+\sigma T_{2}+\delta A_{2}}{N_{2}}\right)\right], \\ \lambda_{L_{2}}'=-\frac{\partial H}{\partial L_{2}} & =\kappa_{2}\left(\lambda_{L_{2}}-\lambda_{A_{2}}\right)+p\kappa_{2}(\lambda_{A_{2}}-\lambda_{I_{2}}), \\ \lambda_{I_{2}}'=-\frac{\partial H}{\partial I_{2}} & =-C_{2}+(\lambda_{S_{1}}-\lambda_{L_{1}})\left(a_{1}p_{12}\frac{S_{1}}{N_{2}}\right) +(\lambda_{S_{2}}-\lambda_{L_{2}})\left(a_{2}p_{22}\frac{S_{2}}{N_{2}}\right)\\ &\quad +\lambda_{I_{2}}(\alpha_{2}+u_{2})-\lambda_{T_{2}}u_{2}, \end{aligned} $$

$$\begin{aligned} \lambda_{A_{2}}'=-\frac{\partial H}{\partial A_{2}} &=(\lambda_{S_{1}}-\lambda_{L_{1}}) \left(a_{1}p_{12}\frac{\delta S_{1}}{N_{2}}\right) +(\lambda_{S_{2}}-\lambda_{L_{2}}) \left(a_{2}p_{22}\frac{\delta S_{2}}{N_{2}}\right)+\lambda_{A_{2}}\eta_{2}, \end{aligned} $$

$$\begin{aligned} \lambda_{T_{2}}'=-\frac{\partial H}{\partial T_{2}} &=(\lambda_{S_{1}}-\lambda_{L_{1}}) \left(a_{1}p_{12}\frac{\sigma S_{1}}{N_{2}}\right) +(\lambda_{S_{2}}-\lambda_{L_{2}}) \left(a_{2}p_{22}\frac{\sigma S_{2}}{N_{2}}\right)+\lambda_{T_{2}}\alpha_{T,2}, \end{aligned} $$

with \(\lambda _{S_{i}}\), \(\lambda _{L_{i}}\), \(\lambda _{I_{i}}\), \(\lambda _{A_{i}}\) and \(\lambda _{T_{i}}\) (\(i=1,2\)) evaluated at the optimal controls and the corresponding states, which results in the adjoint system (17).
The Hamiltonian H is minimized with respect to the controls at the optimal controls, so we differentiate H with respect to \(u_{1}\) and \(u_{2}\) on the set Ω, giving the following optimality conditions:

$$\begin{aligned} \frac{\partial H}{\partial u_{1}} = W_{1}u_{1}-\lambda_{I_{1}}I_{1}+\lambda_{T_{1}}I_{1} = 0,\\ \frac{\partial H}{\partial u_{2}} = W_{2}u_{2}-\lambda_{I_{2}}I_{2}+\lambda_{T_{2}}I_{2} = 0. \end{aligned} $$

Solving for \(u_{1}^{*}\) and \(u_{2}^{*}\) gives

$$\therefore u_{1}^{*}=\frac{I_{1}(\lambda_{I_{1}}-\lambda_{T_{1}})}{W_{1}},\quad u_{2}^{*}=\frac{I_{2}(\lambda_{I_{2}}-\lambda_{T_{2}})}{W_{2}}. $$

By the standard argument for the bounds \(0\leq u_{i}(t)\leq b\) for \(i=1,2\), we obtain the optimality conditions (15). □
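The proof characterizes the optimum but leaves the numerical scheme implicit. A standard choice for such two-point boundary value problems is the forward-backward sweep described by Lenhart and Workman [29]: integrate the states forward, integrate the adjoints backward from the transversality conditions, update the controls through the projection (15), and repeat. The sketch below applies this to a deliberately reduced one-group S-I-T caricature of system (1), so that the adjoint system fits in a few lines; the model reduction and every parameter value are hypothetical, not the paper's.

```python
import numpy as np

# Reduced one-group S-I-T toy model (hypothetical, for illustration only).
a_, alpha, alpha_T, N_, C, W, b = 0.5, 0.2, 0.25, 1000.0, 1.0, 10.0, 0.8

def state_rhs(y, u):
    S, I, T = y
    new_inf = a_ * S * I / N_
    return np.array([-new_inf, new_inf - (alpha + u) * I, u * I - alpha_T * T])

def adjoint_rhs(y, lam, u):
    S, I, _ = y
    lS, lI, lT = lam
    return np.array([(lS - lI) * a_ * I / N_,
                     -C + (lS - lI) * a_ * S / N_ + lI * (alpha + u) - lT * u,
                     lT * alpha_T])

n, T_end = 2001, 100.0
t = np.linspace(0.0, T_end, n)
h = t[1] - t[0]
u = np.zeros(n)
y = np.zeros((n, 3))
lam = np.zeros((n, 3))
y[0] = [N_ - 1.0, 1.0, 0.0]

for sweep in range(200):
    for k in range(n - 1):                    # states forward (explicit Euler)
        y[k + 1] = y[k] + h * state_rhs(y[k], u[k])
    lam[-1] = 0.0                             # transversality: lambda(T) = 0
    for k in range(n - 1, 0, -1):             # adjoints backward
        lam[k - 1] = lam[k] - h * adjoint_rhs(y[k], lam[k], u[k])
    u_new = np.clip(y[:, 1] * (lam[:, 1] - lam[:, 2]) / W, 0.0, b)  # projection
    if np.max(np.abs(u_new - u)) < 1e-4:
        break
    u = 0.5 * (u + u_new)                     # damped control update
print("peak control:", round(float(u.max()), 3), " S(T):", round(float(y[-1, 0]), 1))
```

The damped update of the control is the usual stabilization device in forward-backward sweeps; without it the iteration can oscillate between two control profiles.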
References

1. Anderson RM, May RM. Infectious Diseases of Humans: Dynamics and Control. Oxford: Oxford University Press; 1991.
2. Ferguson NM, Cummings DAT, Cauchemez S, Fraser C, Riley S, Meeyai A, et al. Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature. 2005;437(7056):209–14.
3. Diekmann O, Heesterbeek JAP. Mathematical Epidemiology of Infectious Diseases: Model Building, Analysis and Interpretation. New York: John Wiley & Sons; 2000.
4. Longini IM, Nizam A, Xu S, Ungchusak K, Hanshaoworakul W, Cummings DAT, et al. Containing pandemic influenza at the source. Science. 2005;309(5737):1083–7.
5. Arino J, Brauer F, van den Driessche P, Watmough J, Wu J. Simple models for containment of a pandemic. J R Soc Interface. 2006;3(8):453–7.
6. Arino J, Brauer F, van den Driessche P, Watmough J, Wu J. A model for influenza with vaccination and antiviral treatment. J Theor Biol. 2008;253(1):118–30.
7. Hethcote HW, Van Ark JW. Epidemiological models for heterogeneous populations: proportionate mixing, parameter estimation, and immunization programs. Math Biosci. 1987;84(1):85–118.
8. Mossong J, Hens N, Jit M, Beutels P, Auranen K, Mikolajczyk R, et al. Social contacts and mixing patterns relevant to the spread of infectious diseases. PLoS Med. 2008;5(3):e74.
9. Wallinga J, Teunis P, Kretzschmar M. Using data on social contacts to estimate age-specific transmission parameters for respiratory-spread infectious agents. Am J Epidemiol. 2006;164(10):936–44.
10. Del Valle SY, Hyman JM, Hethcote HW, Eubank SG. Mixing patterns between age groups in social networks. Soc Netw. 2007;29(4):539–54.
11. Bansal S, Grenfell BT, Meyers LA. When individual behaviour matters: homogeneous and network models in epidemiology. J R Soc Interface. 2007;4(16):879–91.
12. Volz EM, Miller JC, Galvani A, Meyers LA. Effects of heterogeneous and clustered contact patterns on infectious disease dynamics. PLoS Comput Biol. 2011;7(6):e1002042.
13. Yates A, Antia R, Regoes RR. How do pathogen evolution and host heterogeneity interact in disease emergence? Proc R Soc Lond B Biol Sci. 2006;273(1605):3075–83.
14. Apolloni A, Poletto C, Ramasco JJ, Jensen P, Colizza V. Metapopulation epidemic models with heterogeneous mixing and travel behaviour. Theor Biol Med Model. 2014;11(1):3.
15. Gani R, Hughes H, Fleming D, Griffin T, Medlock J, Leach S. Potential impact of antiviral drug use during influenza pandemic. Emerg Infect Dis. 2005;11(9):1355–62.
16. Longini IM, Halloran ME, Nizam A, Yang Y. Containing pandemic influenza with antiviral agents. Am J Epidemiol. 2004;159(7):623–33.
17. Jacquez JA, Simon CP, Koopman J, Sattenspiel L, Perry T. Modeling and analyzing HIV transmission: the effect of contact patterns. Math Biosci. 1988;92(2):119–99.
18. Nold A. Heterogeneity in disease-transmission modeling. Math Biosci. 1980;52(3):227–40.
19. Hyman JM, Li J. Behavior changes in SIS STD models with selective mixing. SIAM J Appl Math. 1997;57(4):1082–94.
20. Brauer F. Epidemic models with heterogeneous mixing and treatment. Bull Math Biol. 2008;70(7):1869–85.
21. Brauer F. Heterogeneous mixing in epidemic models. Can Appl Math Q. 2012;20(1):1–13.
22. Ma J, Earn DJD. Generality of the final size formula for an epidemic of a newly invading infectious disease. Bull Math Biol. 2006;68(3):679–702.
23. Nishiura H, Chowell G, Safan M, Castillo-Chavez C. Pros and cons of estimating the reproduction number from early epidemic growth rate of influenza A (H1N1) 2009. Theor Biol Med Model. 2010;7(1):1.
24. Nishiura H, Cook AR, Cowling BJ. Assortativity and the probability of epidemic extinction: a case study of pandemic influenza A (H1N1-2009). Interdiscip Perspect Infect Dis. 2011;2011:194507. doi:10.1155/2011/194507.
25. Diekmann O, Heesterbeek JAP, Roberts MG. The construction of next-generation matrices for compartmental epidemic models. J R Soc Interface. 2010;7(47):873–85.
26. Van den Driessche P, Watmough J. Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission. Math Biosci. 2002;180(1):29–48.
27. Arino J, Brauer F, van den Driessche P, Watmough J, Wu J. A final size relation for epidemic models. Math Biosci Eng. 2007;4(2):159.
28. Brauer F. The Kermack–McKendrick epidemic model revisited. Math Biosci. 2005;198(2):119–31.
29. Lenhart S, Workman JT. Optimal Control Applied to Biological Models. New York: Chapman & Hall/CRC; 2007.
30. Rowthorn RE, Laxminarayan R, Gilligan CA. Optimal control of epidemics in metapopulations. J R Soc Interface. 2009;6(41):1135–44.
31. González-Parra PA, Lee S, Velazquez L, Castillo-Chavez C. A note on the use of optimal control on a discrete time model of influenza dynamics. Math Biosci Eng. 2011;8:183–97.
32. Lee S, Chowell G, Castillo-Chavez C. Optimal control for pandemic influenza: the role of limited antiviral treatment and isolation. J Theor Biol. 2010;265(2):136–50.
33. Lee S, Morales R, Castillo-Chavez C. A note on the use of influenza vaccination strategies when supply is limited. Math Biosci Eng. 2011;8(1):171–82.
34. Lee S, Golinski M, Chowell G. Modeling optimal age-specific vaccination strategies against pandemic influenza. Bull Math Biol. 2012;74(4):958–80.
35. Lee J, Kim J, Kwon HD. Optimal control of an influenza model with seasonal forcing and age-dependent transmission rates. J Theor Biol. 2013;317:310–20.
36. Fleming WH, Rishel RW. Deterministic and Stochastic Optimal Control. New York: Springer-Verlag; 1975.
37. Pontryagin LS, Boltyanskii VG, Gamkrelidze RV. The Mathematical Theory of Optimal Processes. New Jersey: Wiley; 1962.

Acknowledgements

This work was supported by a grant from Kyung Hee University in 2013 (KHU-20130683).

Author information

Department of Mathematics, Graduate School, Kyung Hee University, Seoul, 02447, Korea: Seoyun Choe. Department of Applied Mathematics, Kyung Hee University, Yongin-si, 446-701, Korea: Sunmi Lee. Correspondence to Sunmi Lee.

Authors' contributions

Both authors contributed equally to the development of the model, the analysis of the results and the writing of the paper. Both authors read and approved the final manuscript.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Cite this article: Choe, S., Lee, S. Modeling optimal treatment strategies in a heterogeneous mixing model. Theor Biol Med Model 12, 28 (2015). https://doi.org/10.1186/s12976-015-0026-x

Keywords: Two-group influenza model; Heterogeneous mixing; Preferred mixing; Optimal control theory; Targeted treatment strategies
Monitoring European data with prospective space–time scan statistics: predicting and evaluating emerging clusters of COVID-19 in European countries

Mingjin Xue, Zhaowei Huang, Yudi Hu, Jinlin Du, Miao Gao, Ronglin Pan, Yuqian Mo, Jinlin Zhong & Zhigang Huang

Coronavirus disease 2019 (COVID-19) has become a pandemic infectious disease and a serious public health crisis. As the COVID-19 pandemic continues to spread, it is of vital importance to detect COVID-19 clusters in order to better distribute resources and optimize control measures. This study supports the surveillance of the COVID-19 pandemic and identifies major space–time clusters of reported cases in European countries. Prospective space–time scan statistics are particularly valuable because they detect active and emerging COVID-19 clusters. They can tell public health decision makers when and where to improve targeted interventions, testing locations and necessary isolation measures, and how to allocate medical resources to reduce further spread. Using the daily case data for each country provided by the European Centre for Disease Prevention and Control, we used SaTScan™ 9.6 to conduct a prospective space–time scan statistical analysis. We detected statistically significant space–time clusters of COVID-19 at the European country level between March 1 and October 2, 2020 and between March 1 and October 2, 2021. ArcGIS was used to draw the spatial distribution map of COVID-19 in Europe, showing the emerging clusters, detected by the Poisson prospective space–time scan statistics, that appeared at the end of our study period. The results show that, among the 49 countries studied, the regions with the largest numbers of reported COVID-19 cases are Western, Central and Eastern Europe. The country with the largest cumulative number of reported cases is the United Kingdom, followed by Russia, Turkey, France and Spain; the country (or region) with the lowest cumulative number of reported cases is the Faroe Islands. We discovered 9 emerging clusters, comprising 21 at-risk countries. These results can provide timely information to national public health decision makers: for example, that a country needs to improve the allocation of medical resources and epidemic detection points, strengthen entry and exit testing, or strengthen the implementation of protective isolation measures. As the data are updated daily, new data can be re-analyzed to achieve real-time monitoring of COVID-19 in Europe. This study uses Poisson prospective space–time scan statistics to monitor COVID-19 in Europe.

Coronavirus disease 2019 (COVID-19), which is caused by the highly pathogenic virus severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was first detected in Wuhan, China, in December 2019 and has since become a pandemic infectious disease and a serious public health crisis [1]. As recognized by the World Health Organization (WHO), the use of mathematical methods to establish dynamic transmission models in the early stages of an infectious disease epidemic plays a key role in providing decision makers with data-based evidence. At present, the COVID-19 pandemic has promoted the unprecedented development of infectious disease transmission dynamics models and their incorporation into policy formulation and public health practice [2].
These transmission dynamics models provide a scientific way to study the dynamics of disease spread and to derive long-term and short-term predictions. These predictions explicitly integrate assumptions about the epidemiological processes affecting disease transmission and surveillance. During the outbreak of the COVID-19 pandemic, transmission dynamics models are very valuable: they can identify possible trends in the development of the disease, evaluate the effectiveness of interventions, and predict the extent of disease spread [2]. Surveillance of space–time clusters of cases is one of the main ways to detect outbreaks of infectious diseases [3]. During the emergence of infectious diseases such as COVID-19, the implementation of space–time monitoring is crucial: it can predict emerging clusters in advance and support targeted interventions, early detection, and medical resource allocation. Space–time scan statistics (STSS) is a method proposed by Kulldorff [4] for rapidly monitoring disease clusters based on scan statistics and finding high-risk areas in advance. STSS is widely used in the monitoring of major infectious diseases. It can identify areas of high or low disease aggregation and, with an appropriate choice of data model, determine whether the observed space–time distribution of the disease could have arisen by chance. Put simply, it uses scan statistics to detect clusters of anomalous counts (e.g., counts exceeding a given baseline). The scan statistic uses a moving cylinder to scan the area, looking for potential space–time clusters of cases [4]. The base of the cylinder is the spatial scanning window, and its height reflects the temporal scanning window; the centre of the cylinder is placed at the geographic coordinates of the centre of each region. If, for example, the number of cases inside a scanned space–time cluster is disproportionately high relative to the population at risk, the area outside the scanning cylinder is correspondingly a low-risk area. For each scanning cylinder, the results show the location, size and duration of statistically significant case clusters. For the routine monitoring of an epidemic, the prospective space–time scan statistic [5] detects "active" or emerging disease clusters and can be used to monitor ongoing epidemics: it detects clusters that are still "active" at the end of the study period. The main reason for using prospective rather than retrospective scan statistics is to focus only on significant clusters that are "active", i.e. that exist at the time of analysis, ignoring clusters that may have existed before but are no longer a threat to public health [5]. For instance, prospective space–time scan statistics have been used to detect shigellosis [6] and measles [7], in syndromic surveillance [8], and recently for COVID-19 [9,10,11]. The results indicate that prospective scanning is a tool that low- and middle-income countries can use to detect emerging clusters and implement specific control policies and interventions to slow the spread of COVID-19 [12]. Since COVID-19 data are updated daily, prospective space–time scan statistics can help to monitor the pandemic in a timely manner; the focus of this study is on Europe. This study supports the surveillance of the COVID-19 pandemic and identifies major space–time clusters of reported cases in European countries.
Prospective space–time scan statistics are particularly valuable because they detect active and emerging COVID-19 clusters [13]. They can tell public health decision makers when and where to improve targeted interventions, testing locations and necessary isolation measures, and how to allocate medical resources to reduce further spread. To demonstrate the effectiveness of prospective space–time scan statistics, we report the results for two time periods: March 1 to October 2, 2020 and March 1 to October 2, 2021. We compare the prospective scan results for Europe in 2020 with the actual risk areas in Europe in 2021, evaluate the performance of the prospective space–time scan statistics, and report the emerging clusters that we discovered. Since COVID-19 is a highly infectious disease to which everyone is susceptible, we decided not to adjust for age. Infants, young children, the elderly and people with pre-existing conditions do account for the vast majority of COVID-19 deaths, which could be corrected for using an age-adjusted Bernoulli model, but this is beyond the scope of this study.

We collected COVID-19 case and population data from the European Centre for Disease Prevention and Control. These data are freely available (https://www.ecdc.europa.eu/en/cases-2019-ncov-eueea) and are currently updated daily; we use the data available between March 1 and October 2, 2020 and between March 1 and October 2, 2021. From a spatial perspective, COVID-19 clustering is examined at the national level, so the number of confirmed cases per day is used for the scan statistics. Using the spatial location information in the COVID-19 dataset and the geographic information we obtained from Google Maps, we matched the geographic location of each country to the case dataset. Our analysis covers 30 countries in the EU/EEA and 19 countries outside it, excluding some European island countries and territories with very small populations (for which no information is available). The COVID-19 dataset for the 49 countries (for each country code, see Table 1) reports the number of daily cases, so we can use the case data for each day directly (for missing values, the daily cumulative counts announced by the WHO can be queried; subtracting the previous day's cumulative count \(N_{k-1}\) from the current day's count \(N_{k}\) gives the number of new cases). The COVID-19 dataset reports the cumulative number of cases in each country from March 1 to October 2, 2021 (Fig. 1).

Table 1 Assignment table of each country code

Cumulative number of COVID-19 cases in European countries between March 1st and October 2nd, 2021 (used for the statistical analysis)

Poisson prospective space–time scan statistics

Space–time scan statistics is an extension of the spatial scan statistics proposed by Professor Kulldorff of Harvard Medical School in 1997; it adds a time dimension to the original spatial scan statistics, so that clusters can be detected in both time and space. In order to identify the space–time clusters that are still occurring, or "active", we use a Poisson prospective space–time scan statistic [5, 14, 15], implemented in SaTScan™ 9.6 (the parameters are shown in Table 2). Compared with the circular window of purely spatial scan statistics, the space–time scanning window becomes a cylinder.
The size of the base of the cylindrical scanning window corresponds to the spatial extent, and its height corresponds to time. The size and position of the cylindrical scanning window change continuously, so that the space–time scan statistic can determine both the time and the place of an epidemic, analyze the size and scale of the clusters in depth, and thereby enable early recognition of an outbreak.

Table 2 Parameters used for the Prospective STSS analysis

The process of space–time scan statistics involves the following four steps. First, a coordinate point in the study area is set as the centre of the base of the cylindrical scanning window. Second, the radius of the base and the height of the cylindrical scanning window are gradually increased until the spatial and temporal constraints of the maximum scanning window are reached, and the same scanning process is repeated for all positions of the cylindrical window in the study area. Third, the expected number of cases is calculated from the observed cases inside and outside the scanning cylinder and the population at risk, for the time period covered by the cylinder; the test statistic, the log likelihood ratio (LLR), compares the observed with the expected incidence and evaluates how anomalous the case count in a scanning window is, with larger values indicating a more anomalous count. Finally, a standard Monte Carlo simulation is used to evaluate the statistical significance of each scanned cylinder.

We assume that COVID-19 cases follow a Poisson distribution according to the population of the geographic area. Null hypothesis \(H_{0}\): the risk of COVID-19 inside the scanning cylinder is the same as that outside, and the intensity μ is proportional to the population at risk. Alternative hypothesis \(H_{1}\): the risk of COVID-19 inside the scanning cylinder is higher. The expected number of COVID-19 cases (μ) under the null hypothesis \(H_{0}\) is given by Eq. (1):

$$\mu =p*C/P$$

where p represents the population in the scanning cylinder, C the total number of cases, and P the total population. The likelihood ratio, whose logarithm is the LLR, is used to identify anomalous (high-risk) windows, and is defined in Eq. (2):

$$LLR=\frac{{L}_{Z}}{{L}_{0}}=\frac{{\left(\frac{{N}_{Z}}{{\upmu }_{Z}}\right)}^{{N}_{Z}}{\left(\frac{{N}_{T}-{N}_{Z}}{{\mu }_{T}-{\mu }_{Z}}\right)}^{{N}_{T}-{N}_{Z}}}{{\left(\frac{{N}_{T}}{{\mu }_{T}}\right)}^{{N}_{T}}}$$

where \(L_{Z}\) is the likelihood function of the scanning cylinder Z and \(L_{0}\) is the likelihood function under \(H_{0}\); \(\mu_{Z}\) is the expected number of cases in the scanning cylinder Z; \(\mu_{T}\) is the total expected number of cases over the entire space–time study range, \({\mu }_{T}=\sum {\mu }_{Z}\); \(N_{T}\) is the total number of COVID-19 cases observed in Europe during the study period; and \(N_{Z}\) is the number of COVID-19 cases observed in the scanning cylinder Z. The likelihood ratio exceeds 1 when the risk inside the cylinder is elevated, that is, when \(\frac{N_{Z}}{\mu_{Z}}>\frac{N_{T}-N_{Z}}{\mu_{T}-\mu_{Z}}\).
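As a concrete illustration of Eqs. (1) and (2), the short sketch below evaluates the expected count and the logarithm of the likelihood ratio for a single candidate cylinder, using made-up counts. SaTScan evaluates this quantity for every admissible cylinder, keeps the maximum, and ranks it against the maxima from 999 Monte Carlo replicates generated under \(H_{0}\); none of that search machinery is reproduced here.

```python
import numpy as np

def log_likelihood_ratio(n_z, mu_z, n_t, mu_t):
    """Logarithm of the ratio in Eq. (2), computed in log space for stability.
    Returns 0 when the cylinder shows no excess (only high-risk cylinders
    matter in the prospective analysis)."""
    if n_z <= mu_z:
        return 0.0
    inside = n_z * np.log(n_z / mu_z)
    outside = (n_t - n_z) * np.log((n_t - n_z) / (mu_t - mu_z))
    return inside + outside - n_t * np.log(n_t / mu_t)

# Eq. (1): expected cases proportional to the population at risk.
pop_z, pop_total = 8.0e6, 5.3e8          # hypothetical cylinder / total population
cases_total = 2.0e6                      # hypothetical total observed cases
mu_z = pop_z * cases_total / pop_total   # expected cases in the cylinder
mu_t = cases_total                       # under H0, total expected = total observed
n_z = 60000                              # hypothetical observed cases in cylinder
print("mu_Z =", round(mu_z),
      " log LR =", round(log_likelihood_ratio(n_z, mu_z, cases_total, mu_t), 1))
```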
To avoid the assumption that the relative risk of COVID-19 is homogeneous within a significant space–time cluster, we also report and visualize the relative risk of each country belonging to the cluster. The relative risk (RR) of each location in a cluster is obtained from Eq. (3):

$$RR=\frac{{\mathrm{N}}_{\mathrm{Z}}/{\upmu }_{\mathrm{Z}}}{({\mathrm{N}}_{\mathrm{T}}-{\mathrm{N}}_{\mathrm{Z}})/({\upmu }_{\mathrm{T}}-{\upmu }_{\mathrm{Z}})}$$

where \(N_{Z}\) is the total number of COVID-19 cases in a country, \(\mu_{Z}\) is the expected number of cases in that country, and \(N_{T}\) is the total number of cases observed in Europe. RR is the estimated risk within a location divided by the risk outside it (i.e., in all other locations). For example, if a country's RR is 3, the population of that country is three times as likely to be exposed to COVID-19. The reported clusters also have relative risks, derived in the same way as Eq. (3), except that the RR of a cluster is the estimated risk (observed/expected) within the cluster divided by the risk outside the cluster.

We scan in units of days in time and of countries in space. To avoid very large clusters, we used the 2020 data to try maximum spatial scanning windows of 5, 10, 15, 20, 25 and 50% of the population at risk. With 5%, 15%, 25% or 50% of the population at risk as the maximum scanning window [16], the number of countries covered by certain clusters exceeded 30% of the total number of countries (> 14 countries), which is unsuitable for disease surveillance [17]; in other words, the clusters found by the scan together covered 90% of the countries. Therefore, weighing the accuracy of the clustering against the practical requirements of disease surveillance, the maximum spatial scanning window is set to 20% of the population at risk for the 2020 analysis and to 10% of the population at risk for the 2021 analysis, with all other settings identical. The maximum temporal cluster duration is set to 50% of the total study duration, the minimum temporal cluster duration to two maximum incubation periods (14 days), the minimum number of cases to 5, and the number of Monte Carlo iterations to 999. The space–time scan analysis adopts the Poisson probability model: the LLR of each window is calculated according to the Poisson distribution, and the Monte Carlo method is used to evaluate the statistical significance of the space–time clusters. When P < 0.05, the difference between the relative risk of cases inside and outside the window is considered statistically significant. The area with the largest LLR value is regarded as the most likely cluster, and other areas with statistically significant LLR values are regarded as secondary clusters. ArcGIS™ 10.2 is used to visualize the results of the space–time scan.

Root mean square error

The expected values predicted by the model are compared with the actual values to judge the prediction performance; we use the root mean square error (RMSE) for this analysis. The RMSE represents the distance between the expected and true values: it is the square root of the mean of the squared deviations between the observed and true values over the N observations. In practice, the number of observations N is always finite, and the true value can only be replaced by the most reliable (best) available value (Eq. 4). RMSE is very sensitive to very large or very small errors within a set of measurements, so it is a good indicator of the precision of a measurement, which is why it is widely used and why we adopt it in this study.

$$RMSE=\sqrt{\frac{{\sum }_{i=1}^{N}{\left({x}_{i}-{\widehat{x}}_{i}\right)}^{2}}{N}}$$

where i indexes the observations; N is the number of non-missing data points; \(x_{i}\) is the actual observed time series; and \({\widehat{x}}_{i}\) is the estimated time series. An RMSE not exceeding 2 is considered reasonable.
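A minimal implementation of Eq. (4), dropping missing points before averaging; the observed and forecast series below are made-up placeholders:

```python
import numpy as np

def rmse(observed, predicted):
    """Root mean square error of Eq. (4), over non-missing (non-NaN) pairs."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mask = ~(np.isnan(observed) | np.isnan(predicted))   # keep complete pairs
    return float(np.sqrt(np.mean((observed[mask] - predicted[mask]) ** 2)))

obs = [120, 95, 143, 180, 160]     # hypothetical 2021 observations
pred = [110, 100, 150, 170, 158]   # hypothetical prospective predictions
print(rmse(obs, pred))             # -> about 7.46
```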
Results

The spatial distribution of the European population

Figure 2 shows the spatial distribution of the population of the 49 European countries studied over March 1 to October 2, 2021 (population data as of the end of 2019), divided into six levels. The three most populous countries (or regions) are Russia, Germany and Turkey, and the least populated region is Gibraltar. The population of Northern Europe is relatively small. Western Europe has the densest population distribution, followed by Central Europe; Eastern Europe, located at the junction of Europe and Asia, is also densely populated.

The spatial distribution map of the population of the countries in the European region between March 1st and October 2nd, 2021 (data as of the end of 2019)

The spatial distribution of the cumulative number of COVID-19 cases in Europe

Figure 3 shows the spatial distribution of the cumulative number of cases in the 49 European countries studied from March 1 to October 2, 2021. Among the 49 countries, the regions with the largest numbers of reported COVID-19 cases are Western, Central and Eastern Europe. The results in Fig. 1 show that the country with the largest cumulative number of reported cases is the United Kingdom (darkest shading), followed by Russia, Turkey, France and Spain; the country (or region) with the lowest cumulative number of reported cases is the Faroe Islands.

Spatial distribution map of COVID-19 cases in Europe between March 1st and October 2nd, 2021

Space–time scan statistics

Table 3 shows the statistically significant space–time clusters of the COVID-19 epidemic in European countries from March 1 to October 2, 2020 and from March 1 to October 2, 2021.

Table 3 Space–time scan statistics

Table 4 shows the relative risk values of the countries included in each COVID-19 space–time cluster from March 1 to October 2, 2020 and from March 1 to October 2, 2021.

Table 4 Location relative risk (RR = relative risk; LLR = log likelihood ratio; ID = country code)

The space–time clusters of COVID-19: March 1, 2020 to October 2, 2020

Cluster 1 is located in central and western Europe and contains 12 countries, with 431,383 observed cases and a cluster RR of 3.55. Ten of these countries have RR > 1, with Czechia having the largest RR of 4.79. Cluster 2 is located in Western Europe and includes 3 countries, among them Spain; it has 448,381 observed cases and an RR of 2.74. All three risk countries have RR > 1, with Spain having the highest RR of 2.73.
The spatial distribution of the European population
Figure 2 shows the spatial distribution of the population of the 49 countries studied in Europe from March 1, 2021 to October 2, 2021 (population data as of the end of 2019). There are 6 levels in total. The top 3 countries (or regions) in population are Russia, Germany, and Turkey, and the least populated region is Gibraltar. The population of Northern Europe is relatively small. Western Europe has the densest population distribution, followed by Central Europe. Eastern Europe, located at the junction of the Eurasian plates, also has a densely distributed population.
The spatial distribution map of the population of the countries in the European region between March 1st and October 2nd, 2021 (data as of the end of 2019)

The spatial distribution of the cumulative number of COVID-19 cases in Europe
Figure 3 shows the spatial distribution of the cumulative number of cases in the 49 countries studied in Europe from March 1, 2021 to October 2, 2021. The results show that, among the 49 countries studied, the regions with the largest numbers of reported COVID-19 cases are Western Europe, Central Europe, and Eastern Europe. The results in Fig. 1 show that, among the 49 countries studied, the country with the largest cumulative number of reported cases is the United Kingdom (the darkest color), followed by Russia, Turkey, France, and Spain. The country (or region) with the lowest cumulative number of reported cases is the Faroe Islands.
Spatial distribution map of COVID-19 cases in Europe between March 1st and October 2nd, 2021

Space–time scan statistics
Table 3 shows the statistically significant space–time clusters of COVID-19 epidemics in European countries from March 1, 2020 to October 2, 2020 and from March 1, 2021 to October 2, 2021.
Table 3 Space–time scan statistics
Table 4 shows the relative risk values of the countries included in each COVID-19 space–time cluster from March 1, 2020 to October 2, 2020 and from March 1, 2021 to October 2, 2021.
Table 4 Location relative risk (RR = relative risk; LLR = log likelihood ratio; ID = country code)

The space–time clusters of COVID-19—March 1, 2020 to October 2, 2020
Cluster 1 is located in central and western Europe and contains 12 countries. A total of 431,383 cases were observed, and the cluster RR value is 3.55. Within the cluster, 10 countries have an RR > 1, and Czechia has the largest RR, at 4.79. Cluster 2 is located in Western Europe and includes 3 countries, among them Spain. The cluster has 448,381 observed cases, with an RR value of 2.74. The RRs of the three risk countries are all > 1; Spain has the highest RR, at 2.73. Cluster 3 includes 6 countries in Eastern Europe, with a cluster RR value of 1.99; 5 of its countries have RR > 1, and Ukraine has the highest RR, at 2.43. This cluster has a total of 296,155 observed cases. Cluster 4 is located in northwestern Europe. It reported 130,271 cases in 5 countries. The cluster RR value is 2.88; 3 of its countries have RR > 1, and the Netherlands has the highest RR, at 4.30.

Figure 4 shows the location and spatial distribution of the four statistically significant space–time clusters of the COVID-19 epidemic in Europe from March 1, 2020 to October 2, 2020, corresponding to the four clusters in the 2020 scan statistics in Table 3. Compared with Fig. 3, the results of the Poisson prospective space–time scan statistics are roughly the same as the actual course of COVID-19, which shows that this epidemiological statistical method is feasible. We therefore produced prospective results on the risk of COVID-19 outbreaks in Europe in 2021.
Spatial distribution of emerging space–time clusters of COVID-19 between March 1st and October 2nd, 2020

Forecast analysis results from March 1st, 2020 to October 2nd, 2020
Table 5 shows the predicted values obtained by the prospective Poisson space–time scan statistical analysis from March 1st, 2020 to October 2nd, 2020, compared with the observed values in the same period of 2021 using the RMSE method.
Table 5 Root mean square error results analysis

Cluster 1 includes only one country, Turkey in Eastern Europe. At the time of this study, Turkey's RR is 3.65, with 4,635,765 observed cases. Cluster 2 contains some countries in Eastern Europe, and the cluster RR value is 5.69. There are 5 countries in total, of which 4 show RR > 1: Serbia (RR = 13.20, the largest RR value in the cluster), Ukraine (RR = 6.93), Moldova (RR = 6.14), and Romania (RR = 1.63); 1,028,106 cases were observed. Cluster 3 reported only one country, the United Kingdom in northwestern Europe, with an RR of 1.74 and 3,162,801 observed cases. Cluster 4 is located in central Europe, with a cluster RR value of 3.87, and includes 3 risk countries: San Marino (RR = 5.15), Switzerland (RR = 3.88), and Monaco (RR = 3.87), with a total of 284,648 observed cases. Cluster 5 reported only one country, Belarus (RR = 3.18), located in Eastern Europe, with 263,274 observed cases. Cluster 6 also reported one country, Cyprus, located in the northeast of the Mediterranean (note: although Cyprus belongs to Asia geographically, it is part of Europe historically, culturally, and politically, and is one of the countries of the European Economic Area). The cluster's RR value is 1.93, and 46,026 cases were observed. Cluster 7 also contains only one country, Andorra, with an RR value of 7.82 and 3,885 observed cases. Cluster 8 is located in the central part of Europe and contains 7 countries. The cluster RR value is 1.31; 3 risk countries exhibit RR > 1, namely Bosnia and Herzegovina (RR = 10.46), Slovenia (RR = 1.82), and Croatia (RR = 1.35). Cluster 9 is located on a peninsula at the southern tip of western Europe; only the area of Gibraltar is reported, with an RR value of 1.56 and 1,196 observed cases.

Figure 5 shows the location and spatial distribution of the 9 statistically significant space–time clusters of the COVID-19 epidemic in Europe from March 1 to October 2, 2021, corresponding to the 9 clusters in the 2021 scan statistics in Table 3 (the specific numerical results are shown in Fig. 6), including the 21 risk countries shown in Table 4 (the corresponding RR values of each country/region are shown in Figs. 7 and 8).
The results show that the central and eastern regions of Europe are the center of the COVID-19 epidemic in Europe. Countries with large populations are more likely to become high-risk areas, such as Turkey, the United Kingdom, and Ukraine. The results in Fig. 5 also include some small countries or regions, indicating that the definition of high-risk areas can be understood as areas with a high incidence of COVID-19, that is, more than 10% (20% in 2020) of the total population of the country or region. It is worth noting that Serbia is the country with the highest expected risk (Fig. 8). As shown in Figs. 3, 7 and 8, Moldova, Romania, Bulgaria, Switzerland, Andorra, and Belarus, which originally had small cumulative case counts, are all expected to be high-risk countries, indicating that the incidence rates of these countries are at a high level.
2021 high-risk clusters of COVID-19 in Europe
Spatial distribution of emerging space–time clusters of each country of COVID-19 between March 1st and October 2nd, 2021
COVID-19 high-risk country distribution in Europe in 2021

In this study, we used Poisson prospective space–time scan statistics to conduct space–time monitoring of COVID-19 in Europe. During the study periods from March 1, 2020 to October 2, 2020 and from March 1, 2021 to October 2, 2021, we discovered emerging space–time clusters of COVID-19 at the national level in the European region. In the prospective space–time scan statistics used for 2020, we set the maximum spatial scan area to 20% of the population at risk. This is because the scan results at the other thresholds show that the number of countries included in some scan clusters is greater than 30% of the total number of countries studied (> 14 countries), which significantly reduces the feasibility and effectiveness of disease surveillance. Moreover, at the other cutoffs the countries included in the scan results amount to 90% or more of the countries studied, and repeated detection results even appear, indicating that the accuracy of the clustering is too low and does not meet the practical needs of disease surveillance. This also shows that setting the maximum spatial scan area to 20% of the population at risk gives the best fit. When we analyzed the 2021 data set, we used the same method but set the maximum spatial scan area to 10% of the population at risk. There are two main reasons. First, this setting was reached after trying the previous thresholds, with results similar to the analysis of the 2020 data set: the number of countries included in some clusters exceeded 30% of the total number of countries studied, or repeated scanning results appeared, so the accuracy of the results was low and did not conform to the actual situation. Second, we consider that the COVID-19 epidemic in 2020 was particularly serious and occurred more in the form of outbreaks. The number of COVID-19 cases in various countries increased suddenly, medical resources were scarce, there was no specific drug treatment, and human and material resources were insufficient to respond to the spread of COVID-19 at that time; no vaccine had yet been developed, so the 2020 data set analysis used 20% of the population at risk. By contrast, China and the United States had developed vaccines by the end of 2020.
Since then, people in most countries have been able to get vaccinated. Medical resources are more sufficient, human resources are increasing, and countries around the world have taken effective preventive measures, which has effectively blocked the spread of COVID-19. Therefore, comprehensively weighing these considerations, we finally set the maximum spatial scan area to 10% of the population at risk and obtained the best fitting effect. The accuracy of the clusters is relatively high, which is more in line with the actual situation of disease surveillance.

Combining the results of Figs. 3 and 4, the prospective space–time scan statistics from March 1, 2020 to October 2, 2020 reflect the actual incidence in 2021; in addition, the RMSE value shown in Table 5 is 1.6789, which shows that the method we use is feasible. Through the setting of the parameters, we predicted the actual situation of the COVID-19 epidemic in 2021 well, which also shows that our considerations are correct. It is precisely because of the good prediction performance for 2020 that we made a prediction of the COVID-19 epidemic in 2021. Figure 5 shows that the next wave of new COVID-19 epidemics will occur in central and eastern Europe; the country with the highest predicted relative risk is Serbia, followed by 21 countries including Bosnia and Herzegovina. This will provide public health decision-makers in the at-risk countries with information on the space–time development of disease outbreaks [18], allowing them to prepare in advance for the prevention and control of the COVID-19 epidemic, strengthen restrictions on crowd movement, complete effective measures such as isolation and protection, and stop the resurgence and spread of COVID-19.

Although our research makes some contributions, it still has several shortcomings, which also point to future work. First, the prospective space–time scan statistics we use have certain limitations. The underlying scanning window is circular or elliptical, so the window easily includes surrounding areas that are not actually at risk, which introduces error into the results. In study areas with obvious spatial heterogeneity, a circle may be a bad choice [19]. This matters here, because many of the clusters we detected include sea areas, which is obviously unrealistic. The solution to this problem is to replace the circular or elliptical scanning window with an arbitrarily shaped one. Flexibly shaped scan statistics [20] define the scanning window by connecting K-nearest neighbors to the focal area, which is especially suitable for detecting irregularly shaped clusters. Second, our data only include the population and the number of confirmed cases, and lack subsequent potential infections. The case counts are largely a product of testing effort and may not be a good representation of the real situation of the virus and its real space–time distribution; the only way to resolve this is large-scale testing. Third, when we apply the prospective space–time scan statistics method, repeated scan results appear. As with many statistical methods, false-positive results may eventually appear. However, SaTScan™ provides a recurrence interval measure, which quantifies the likelihood of observing a cluster by chance. We checked the recurrence intervals of our analysis and found that they were closely related to the p-values we used to identify clusters.
Prospective space–time scan statistics have undeniable benefits for disease surveillance and are used by many public health agencies around the world [21]. Considering the recurrence interval of SaTScan™, the method is recommended [22]. Fourth, the RR values we predict for some clusters are not large, but there may be large risk differences within them; for example, in cluster 8 of 2021 in Table 3, the relative risks of the 7 countries span from 0.1 to 10. This indicates that some clusters contain both high-risk and low-risk countries. Local analysis of these countries or regions can provide a more accurate understanding of the counties or regions where a COVID-19 outbreak is a risk. Fifth, COVID-19 is more harmful to the elderly and to people with pre-existing diseases, and our study did not adjust for age and other related factors, so the results are not yet a good representation of the true situation of the overall population. Later studies can use an age-adjusted Bernoulli model to account for cases and deaths, while also adjusting for other related factors. Sixth, our research is aimed at national-level COVID-19 reports in the European region, so the spatial resolution needs to be improved. Furthermore, the set of countries we included does not cover the entire European continent well, so there are certain errors in the results.

We used open data from the European Centre for Disease Prevention and Control to detect emerging space–time clusters of COVID-19 in two different time periods in Europe. We suggest that countries in emerging clusters with high RR and LLR values strengthen the implementation of national grass-roots monitoring and the surveillance of imported cases from overseas, and take protective and quarantine measures in a timely manner to stop the spread of COVID-19. Poisson prospective space–time scan statistical methods can effectively detect emerging clusters of COVID-19 and can monitor disease outbreaks as new data become available. In addition, we emphasize the importance of data sharing. During the COVID-19 pandemic, shared data make it possible to monitor emerging and active clusters of cases with high accuracy, which is important for both regional decision-makers and researchers. This allows epidemiological knowledge to be used effectively to prevent and control the spread of the COVID-19 epidemic, provides sufficient space–time dynamic information and a theoretical basis for public health decision-makers, and supports the timely implementation of epidemic prevention and control measures.

The data are publicly available. The European Centre for Disease Prevention and Control is an open-access database; researchers can access the relevant data set at https://www.ecdc.europa.eu/en/cases-2019-ncov-eueea. Data are also available on request by email to the corresponding author.

References
1. Dror AA, Eisenbach N, Taiber S, et al. Vaccine hesitancy: the next challenge in the fight against COVID-19. Eur J Epidemiol. 2020;35(8):775–9. https://doi.org/10.1007/s10654-020-00671-y.
2. Becker AD, Grantz KH, Hegde ST, Bérubé S, Cummings DAT, Wesolowski A. Development and dissemination of infectious disease dynamic transmission models during the COVID-19 pandemic: what can we learn from other pathogens and how can we move forward? Lancet Digit Health. 2021;3(1):e41–50. https://doi.org/10.1016/s2589-7500(20)30268-5.
3. Ladoy A, Opota O, Carron PN, et al. Size and duration of COVID-19 clusters go along with a high SARS-CoV-2 viral load: a spatio-temporal investigation in Vaud state, Switzerland. Sci Total Environ. 2021;787:147483. https://doi.org/10.1016/j.scitotenv.2021.147483.
4. Kulldorff M. A spatial scan statistic. Commun Stat Theory Methods. 1997;26(6):1481–96.
5. Kulldorff M. Prospective time periodic geographical disease surveillance using a scan statistic. J R Stat Soc Ser A. 2001;164(1):61–72.
6. Jones RC, Liberatore M, Fernandez JR, Gerber SI. Use of a prospective space-time scan statistic to prioritize shigellosis case investigations in an urban jurisdiction. Public Health Rep. 2006;121(2):133–9.
7. Yin F, Li X, Ma J, Feng Z. The early warning system based on the prospective space-time permutation statistic. Wei Sheng Yan Jiu [Journal of Hygiene Research]. 2007;36(4):455–8.
8. Yih WK, Deshpande S, Fuller C, et al. Evaluating real-time syndromic surveillance signals from ambulatory care data in four states. Public Health Rep. 2010;125(1):111–20. https://doi.org/10.1177/003335491012500115.
9. Xu F, Beard K. A comparison of prospective space-time scan statistics and spatiotemporal event sequence based clustering for COVID-19 surveillance. PLoS ONE. 2021;16(6):e0252990. https://doi.org/10.1371/journal.pone.0252990.
10. Rosillo N, Del-Aguila-Mejia J, Rojas-Benedicto A, et al. Real time surveillance of COVID-19 space and time clusters during the summer 2020 in Spain. BMC Public Health. 2021;21(1):961. https://doi.org/10.1186/s12889-021-10961-z.
11. Hohl A, Delmelle EM, Desjardins MR, Lan Y. Daily surveillance of COVID-19 using the prospective space-time scan statistic in the United States. Spat Spatiotemporal Epidemiol. 2020;34:100354. https://doi.org/10.1016/j.sste.2020.100354.
12. Tyrovolas S, Gine-Vazquez I, Fernandez D, et al. Estimating the COVID-19 spread through real-time population mobility patterns: surveillance in low- and middle-income countries. J Med Internet Res. 2021;23(6):e22999. https://doi.org/10.2196/22999.
13. Desjardins MR, Hohl A, Delmelle EM. Rapid surveillance of COVID-19 in the United States using a prospective space-time scan statistic: detecting and evaluating emerging clusters. Appl Geogr. 2020;118:102202. https://doi.org/10.1016/j.apgeog.2020.102202.
14. Kulldorff M, Athas WF, Feurer EJ, Miller BA, Key CR. Evaluating cluster alarms: a space-time scan statistic and brain cancer in Los Alamos, New Mexico. Am J Public Health. 1998;88(9):1377–80.
15. Kulldorff M. A spatial scan statistic. Commun Stat Theory Methods. 2007;26(6):1481–96. https://doi.org/10.1080/03610929708831995.
16. Ma Q, Gao J, Zhang W, et al. Spatio-temporal distribution characteristics of COVID-19 in China: a city-level modeling study. BMC Infect Dis. 2021;21(1):816. https://doi.org/10.1186/s12879-021-06515-8.
17. Xu M, Cao C, Zhang X, et al. Fine-scale space-time cluster detection of COVID-19 in Mainland China using retrospective analysis. Int J Environ Res Public Health. 2021;18(7):3583.
18. Andrade LA, Gomes DS, Goes MAO, et al. Surveillance of the first cases of COVID-19 in Sergipe using a prospective spatiotemporal analysis: the spatial dispersion and its public health implications. Rev Soc Bras Med Trop. 2020;53:e20200287. https://doi.org/10.1590/0037-8682-0287-2020.
19. Takahashi K, Kulldorff M, Tango T, Yih K. A flexibly shaped space-time scan statistic for disease outbreak detection and monitoring. Int J Health Geogr. 2008;7:14. https://doi.org/10.1186/1476-072X-7-14.
20. Tango T, Takahashi K. A flexibly shaped spatial scan statistic for detecting clusters. Int J Health Geogr. 2005;4:11. https://doi.org/10.1186/1476-072X-4-11.
21. Greene SK, Peterson ER, Balan D, et al. Detecting COVID-19 clusters at high spatiotemporal resolution, New York City, New York, USA, June–July 2020. Emerg Infect Dis. 2021;27(5):1500–4.
22. Kulldorff M, Kleinman K. Comments on 'A critical look at prospective surveillance using a scan statistic' by T. Correa, M. Costa, and R. Assuncao. Stat Med. 2015;34(7):1094–5. https://doi.org/10.1002/sim.6430.

Acknowledgements
We are very grateful to the European Centre for Disease Prevention and Control and the World Health Organization for providing open data, which enabled our research to proceed smoothly.

Funding
This research was funded by the Special Project in Key Areas of Ordinary Colleges and Universities in Guangdong Province: Research on core technologies and prediction models for active monitoring and identification of major infectious disease epidemics (2020ZDZX3055) and the Guangdong Medical University Innovation Experimental Project Fund (SYDY004). This research was also funded by the special research project on the prevention and control of COVID-19 in general colleges and universities of the Guangdong Provincial Department of Education: Research on the prevention and control of COVID-19 among medical college students based on system dynamics and their willingness to prevent and control (2020KZDZX1106) and the Guangdong Educational Science "Thirteenth Five-Year Plan" project (2019GXJK226).

Author information
Guangdong Medical University, Zhanjiang, Guangdong Province, China: Mingjin Xue, Zhaowei Huang, Yudi Hu, Jinlin Du, Miao Gao, Ronglin Pan, Yuqian Mo, Jinlin Zhong & Zhigang Huang. Pension Industry Research Institute, Guangdong Medical University, Zhanjiang, Guangdong Province, China: Jinlin Du & Zhigang Huang.

Author contributions
Mingjin Xue was responsible for research design, data collection and processing, statistical analysis, drawing the results, and drafting the manuscript. Zhaowei Huang was responsible for the design and data statistics of the research. Yudi Hu, Miao Gao, Yuqian Mo and Jinlin Zhong were responsible for checking and sorting the research references. Zhigang Huang and Jinlin Du were responsible for revising, reviewing and proofreading the manuscript. All authors read and approved the final manuscript.

First author: Mingjin Xue, master's student; main research direction: epidemiology and statistics (disease prevention and control). E-mail: [email protected] Corresponding author: Dr. Zhigang Huang, professor, associate dean of the School of Public Health, Guangdong Medical University; main research direction: epidemiology and statistics. E-mail: [email protected] Correspondence to Zhigang Huang.

Ethics
The raw data used in this study do not require any administrative permissions, and all data were anonymized before acquisition (no information about any individual is involved). This study confirms that all methods were carried out in accordance with the relevant guidelines and regulations in the declaration.

Citation
Xue, M., Huang, Z., Hu, Y. et al. Monitoring European data with prospective space–time scan statistics: predicting and evaluating emerging clusters of COVID-19 in European countries.
BMC Public Health 22, 2183 (2022). https://doi.org/10.1186/s12889-022-14298-z
Biomedical Optics Express, Vol. 12, Issue 11, pp. 7139-7148. https://doi.org/10.1364/BOE.444144

Low-consumption photoacoustic method to measure liquid viscosity

Yingying Zhou,1,2,3,6 Chao Liu,1,6 Xiazi Huang,2,3,6 Xiang Qian,4 Lidai Wang,1,5,7 and Puxiang Lai2,3,8
1Department of Biomedical Engineering, City University of Hong Kong, Hong Kong
2Department of Biomedical Engineering, Hong Kong Polytechnic University, Hong Kong
3The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
4Tsinghua-Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
5City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
6These authors contributed equally to this work
7E-mail: [email protected]
8E-mail: [email protected]

Citation: Yingying Zhou, Chao Liu, Xiazi Huang, Xiang Qian, Lidai Wang, and Puxiang Lai, "Low-consumption photoacoustic method to measure liquid viscosity," Biomed. Opt. Express 12, 7139-7148 (2021).
Original manuscript: September 23, 2021; revised manuscript: October 17, 2021; manuscript accepted: October 19, 2021.

Viscosity measurement is important in many areas of biomedicine and industry. Traditional viscometers are usually time-consuming and require large sample volumes. Microfluidic viscometry may overcome the challenge of large sample consumption but suffers from long process times and complicated structure design and interaction. Here, we present a photoacoustic method that measures liquid viscosity in a simple microfluidic-based tube. This new viscosity measurement method embraces fast detection speed and low fluid consumption, offering a new tool for efficient and convenient liquid viscosity measurement in a broad range of applications.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction
Measurement of viscosity, an important thermophysical property, plays a key role in material characterization in many fields of biomedicine and industry [1,2]. For example, blood viscosity is an important clinical parameter due to its close tie with cardiovascular diseases [1,3,4], and oil viscosity is one of the most important parameters for evaluating the condition of automotive engine oil [2]. Various methods have been developed to determine liquid viscosity. A conventional viscometer usually takes a long time and consumes a large volume of liquid for an accurate measurement: a single measurement may require ∼1 mL of sample and take up to ∼1 hour, which is not ideal for frequent monitoring or when samples are precious or scarce [5–7].
To reduce sample consumption and increase efficiency, microfluidic methods have been developed in recent years, bringing the sample consumption down to hundreds of microliters. For example, Lan et al. designed a co-axial microfluidic device to measure Newtonian fluid viscosity, which allows a stable liquid/liquid annular co-laminar flow measurement based on the Navier-Stokes equations [8]. Kim et al. proposed delivering the sample and reference fluids separately into the two inlets of a Y-shaped microfluidic device, so that the sample viscosity can be measured from the interfacial width induced downstream of the Y-shaped device [9]. Kang et al. demonstrated a high-precision microfluidic viscometer with a microfluidic channel array composed of 100 indicating channels, using only ∼100 µL of sample [10]. These methods have significantly reduced sample consumption, but the supporting microstructure of the device needs to be specifically designed in advance, which is usually complex, costly, and time-consuming. Moreover, the compatibility relies heavily on the interaction between the microstructure and the test sample; that is, for different types of samples one may need to design different supporting microstructures, which further tightens the requirements and increases the design complexity and operating cost [11]. Therefore, to broaden the application of microfluidic-based viscosity measurement to more labs and to make blood viscometry available for frequent testing, there is an urgent need for a high-speed, low-consumption, and precise viscosity measurement method that can be used in a broad range of generic microfluidic-based channels.

Photoacoustic (PA) tomography is a new imaging technique that has been developed for a wide range of applications [12–22], including liquid viscosity measurement. For example, in 2010 Lou et al. proposed using the frequency spectrum of measured photoacoustic signals to distinguish different liquids [23]. Built upon that mechanism, Zhao et al. realized the detection of blood viscosity in vivo by analyzing the frequency spectrum of blood photoacoustic signals [24]. The measurement accuracy of these methods, however, is limited by the ultrasonic bandwidth and noise. Here, we present a new method to measure liquid viscosity in generic microfluidic-based tubes based on the PA Grueneisen relaxation effect [25]. We use a dual-pulse photoacoustic flowmetric method [26] to measure the liquid flow speed in a generic microfluidic-based tube. The average flow speed is inversely proportional to the viscosity; thus, after calibration, we can determine the viscosity from the flow speed. In the dual-pulse photoacoustic flow measurement, the flow alters the ratio between the two PA amplitudes via the Grueneisen relaxation effect. We establish a model between the dual-pulse PA signals and the viscosity in the microfluidic-based tube. Different viscous liquids and blood samples are tested, yielding results that are well consistent with the theoretical values. The experimental results indicate that the proposed method can potentially be a new tool for efficient and convenient liquid viscosity measurement, and may even be engineered in the future for daily monitoring of blood viscosity with only a small amount of blood.

2.1 System setup
Figure 1(a) shows the dual-pulse photoacoustic viscosity measurement platform.
A 532-nm laser (7-ns pulse width, VPFL-G-20, Spectra-Physics, USA) serves as the excitation source, and its output is separated into two paths by a polarizing beam splitter (PBS1, PBS051, Thorlabs Inc, USA). A half-wave plate (HWP1, GCL-060633, Daheng Optics, China) is placed in front of PBS1 to adjust the energy ratio between the two daughter laser beams. One of the laser beams travels in free space and the other is coupled into a 30-m single-mode fiber (SMF, HB450-SC, Fibercore, UK), which delays the laser pulse by ∼146 ns, as shown in Fig. 1(b). To suppress the stimulated Raman scattering effect in the SMF, another half-wave plate (HWP2, GCL-060633, Daheng Optics, China) is added before the fiber to adjust the polarization state. The pulse energy of each laser beam is ∼90 nanojoules. The two beams are recombined by another polarizing beam splitter (PBS2, PBS051, Thorlabs Inc, USA), coupled into a 2-m single-mode fiber (SM fiber, P1-460B-FC-2, Thorlabs Inc, USA), and then delivered to the scanning PA probe. Light from the 2-m fiber is focused by two achromatic doublets (AC064-013-A, Thorlabs Inc, USA) and excites the sample. The generated photoacoustic signal is received by a piezoelectric ultrasound transducer (UT, V214-BC-RM, Olympus-NDT). The excitation light and acoustic detection are confocally and coaxially aligned to optimize the sensitivity. Detailed information on the probe can be found in our previous publications [27,28]. Driven by a constant pressure produced by an air compressor (JUBA, Yuteng Hardware and Electrical Tech, China), the different viscous liquids flow in a transparent circular polyvinyl chloride microtube (0.25-mm inner diameter, TYGON S-54-HL, Norton Performance Plastics, China) at different speeds. Considering that the channels of the tube can affect the measurement results [29], we used identical tubes in all experiments. In addition, note that the flow speed differs across positions inside the tube, leading to different Grueneisen relaxation effects. Thus, in the experiments the laser beam was focused onto the same depth of the microtube for all measurements, and the same position on the A-line of the generated signals was chosen to extract the PA signal amplitudes for each measurement. The exact temporal position was kept constant throughout, ensuring that the PA signals came from the same depth of the microtube in the study. In other words, the location of the peak-to-peak shape in the A-line of the generated photoacoustic signal for the first sample was marked as the standard position, and we then adjusted the distance between the excitation laser beams and the tube to ensure that the photoacoustic signals from the subsequent targets appeared at the identical position.

Fig. 1. (a) Schematic of the dual-pulse photoacoustic viscosity measurement platform. HWP, half-wave plate; PBS, polarizing beam splitter; SMF, single-mode fiber; UL, ultrasound lens; UT, ultrasound transducer. (b) 146-ns time delay between the two optical pulses.
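As a quick consistency check on the quoted delay (assuming a group index of roughly 1.46 for fused silica at 532 nm, our assumed value rather than one stated in the paper), a 30-m fiber yields

$$\Delta t = \frac{n_g L}{c} \approx \frac{1.46 \times 30\ \mathrm{m}}{3.0\times10^{8}\ \mathrm{m/s}} \approx 146\ \mathrm{ns},$$

in good agreement with the measured trace in Fig. 1(b).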
2.2 Sample preparation
Different viscous liquids are obtained by mixing ink solutions with different contents of surfactant solutions (20% Tween 20 aqueous solutions, Scientific Phygene, China). Tween 20 is a polyoxyethylene sorbitan ester containing 20 units of ethylene oxide per unit of sorbitol, widely used in microfluidic studies as an oil-in-water (O/W) emulsifier. Blood samples are bovine whole blood (3.2% sodium citrate added, Hongquan Bio Inc, Guangzhou, China). To mimic anemia and polycythemia conditions, blood samples of different viscosities are obtained by mixing different concentrations of plasma and hemocytes [11]. In brief, normal whole blood (∼40% haematocrit) is centrifuged at 1400 rpm for 3 minutes to separate the plasma and hemocytes. The upper half is extracted as the plasma and the remaining half is kept as blood with ∼80% haematocrit. Then, 75% and 25% plasma volumes are mixed with the corresponding amounts of the ∼80% haematocrit fraction to mimic the anemia (∼20% haematocrit, since 0.25 × 80% = 20%) and polycythemia (∼60% haematocrit, since 0.75 × 80% = 60%) conditions, respectively. Whole bovine blood is used as the normal group. Note that the PA viscosity measurement method is based on optical absorption of the target, so it is not directly suitable for clear solutions at the 532 nm wavelength. If the viscosity of a transparent solution is to be measured, optical absorbers should be added and stirred into the solution, provided they do not alter the viscous behavior of the solution considerably. Alternatively, if the wavelength of the excitation laser can be modified to match the absorption spectrum of the transparent solution, no additional absorbers are needed.

2.3 Working principle
Based on Poiseuille's law [30], the flow rate $Q$ of a liquid in a cylindrical tube is
$$Q = \frac{(P_2 - P_1)\pi r^4}{8\xi l} \tag{1}$$
where $P_2$ and $P_1$ are the pressures at the inlet and outlet ends of the tube, $r$ and $l$ are the inner radius and the length of the tube, and $\xi$ is the liquid viscosity. As the flow rate $Q$ is directly related to the flow speed $v$ through $Q = \pi r^2 v$, combining this with Eq. (1), the liquid viscosity $\xi$ can be expressed as
$$\xi = \frac{(P_2 - P_1)\pi r^4}{8lQ} = \frac{\Delta p\, r^2}{8lv} \tag{2}$$
The pressure difference $\Delta p$ is controlled by the air compressor, and the tube parameters $r$ and $l$ are constant. Thus, the liquid viscosity $\xi$ can be calculated from the flow speed $v$. The flow speed is obtained with the dual-pulse photoacoustic platform described above (Fig. 1(a)). In the linear range, the two sequentially excited photoacoustic signals, $PA_1$ and $PA_2$, can be written as [31,32]
$$PA_1 = k\Gamma_0 \eta F_1 \mu_a \tag{3}$$
$$PA_2 = k(\Gamma_0 + \Delta\Gamma)\eta F_2 \mu_a \tag{4}$$
where $k$ is the detection sensitivity, $\Gamma_0$ is the Grueneisen parameter at the baseline temperature, $\Delta\Gamma$ is the change in the Grueneisen parameter due to the heating by the first pulse, $\eta$ is the light-to-heat conversion coefficient, $F_1$ and $F_2$ are the optical fluences, and $\mu_a$ is the optical absorption coefficient. The change in the Grueneisen parameter $\Delta\Gamma$ is proportional to the local temperature rise, which can be modeled as [25,33–35]
$$\Delta\Gamma = aF_1\mu_a e^{-(\tau_a + bv)\delta t} \tag{5}$$
where $a$ and $b$ are constant coefficients, $\tau_a$ is a constant related to thermal conduction, $v$ is the flow speed, and $\delta t$ is the known time delay between the two pulse excitations. As mentioned above, the optical fluences $F_1$ and $F_2$ are the same in our experiment.
Thus, the ratio between the second and the first photoacoustic signals simplifies to
$$Ratio = \frac{PA_2}{PA_1} = \frac{k(\Gamma_0 + \Delta\Gamma)\eta F_2\mu_a}{k\Gamma_0\eta F_1\mu_a} = 1 + \frac{\Delta\Gamma}{\Gamma_0} = 1 + \frac{aF_1\mu_a e^{-(\tau_a + bv)\delta t}}{\Gamma_0} \tag{6}$$
Due to the different thermal properties of the absorbing materials, the parameter $a$ may be either positive or negative, which means $PA_2$ can be larger or smaller than $PA_1$. Taking the logarithm of both sides of Eq. (6), we obtain
$$\ln(|1 - Ratio|) = -(\tau_a + bv)\delta t + \ln\frac{|a|F_1\mu_a}{\Gamma_0} = mv + n \tag{7}$$
where $m = -b\,\delta t$ and $n = -\tau_a\,\delta t + \ln\frac{|a|F_1\mu_a}{\Gamma_0}$, both of which can be determined via system calibration. Therefore, the flow speed $v$ can be determined from the two PA signals, and the liquid viscosity can be calculated from
$$\xi = \frac{\Delta p\, r^2 m}{8l[\ln(|1 - Ratio|) - n]} \tag{8}$$

3.1 System calibration
To obtain the system parameters $m$ and $n$ in Eq. (7), a pure ink sample (no surfactant) was used to calibrate the relationship between the PA 'Ratio' and the preset flow speed. The air compressor was removed temporarily for this calibration, and the liquid flow speed was set by a syringe pump within a range from 1 to 25 mm/s. The values of $\ln(|1 - Ratio|)$ at different flow speeds are plotted in Fig. 2(a). As expected, the decay constant is approximately a linear function of the flow speed, with a determination coefficient (R²) of 0.97, indicating a strong dependence of the dual-pulse signal ratio on the flow speed. The fitted relation between the two parameters can be expressed as $\ln(|1 - Ratio|) = 0.05v - 3.68$, where 0.05 and −3.68 correspond to $m$ and $n$ in Eq. (7), respectively. With the calibration results, the viscosity can be computed from $\xi = \frac{0.05\Delta p r^2}{8l[\ln(|1 - Ratio|) + 3.68]}$.

Fig. 2. (a) Ratio between the two PA amplitudes versus the preset flow speed. Error bars are standard deviations based on 80 measurements. (b) Measured viscosity versus the theoretical viscosity. (c) Statistical differences between the measured viscosities and the theoretical values. G1–G5: Groups 1–5.

Because the pressure difference $\Delta p$ and the tube parameters $r$ and $l$ are constant, we can simplify the relationship between the liquid viscosity and the signal amplitude ratio to $\xi = \frac{c}{\ln(|1 - Ratio|) + 3.68}$, where $c = \frac{0.05\Delta p r^2}{8l}$ is a constant once one type of viscous solution has been used to calibrate the system.
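To make the calibration and inversion procedure concrete, here is a minimal Python sketch of Eqs. (7) and (8), assuming flow speeds are expressed in mm/s as in Fig. 2(a). The synthetic data below are generated from the reported fit (m = 0.05, n = −3.68) purely to check the round trip, and the function names are ours, not from the paper.

```python
import numpy as np

def fit_calibration(v_mm_s, ratios):
    """Fit Eq. (7), ln|1 - Ratio| = m*v + n, by linear least squares.
    v_mm_s: preset flow speeds in mm/s (cf. Fig. 2(a))."""
    y = np.log(np.abs(1.0 - np.asarray(ratios, dtype=float)))
    m, n = np.polyfit(np.asarray(v_mm_s, dtype=float), y, deg=1)
    return m, n

def viscosity_pa_s(ratio, m, n, delta_p, r, l):
    """Eq. (8): viscosity (Pa*s) from one dual-pulse amplitude ratio.
    delta_p: driving pressure difference (Pa); r, l: tube inner radius
    and length (m). The flow speed from Eq. (7) is in mm/s, hence the
    1e-3 conversion to m/s before applying Eq. (2)."""
    v_mm_s = (np.log(abs(1.0 - ratio)) - n) / m
    return delta_p * r**2 / (8.0 * l * v_mm_s * 1e-3)

# Round-trip check against the reported fit, m = 0.05, n = -3.68
# (here the PA2 > PA1 case, i.e., Ratio > 1):
v_cal = np.linspace(1.0, 25.0, 6)              # mm/s
ratio_cal = 1.0 + np.exp(0.05 * v_cal - 3.68)  # synthetic Eq. (7) data
print(fit_calibration(v_cal, ratio_cal))       # ~ (0.05, -3.68)
```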
3.2 Phantom viscosity measurement
Next, a 90% surfactant-containing solution (with a viscosity coefficient of 7.10 cP) was used as the calibration sample. The solution flowed in the microfluidic tube under a constant pressure produced by the air compressor. The sample was excited by two sequential laser pulses, and the ratio of the two PA signals was computed; the value of $c$ was calculated as 2.72 for this setup. Then, samples of various viscosities (10%, 20%, 30%, 50%, and 70% surfactant-containing solutions; around 50 µL usage for each sample) were tested under the same pressure. Note that establishing and calibrating the photoacoustic system can be time-consuming, and it also takes some time to prepare the samples. Once the system and samples are ready, however, since the repetition rate of the laser pulses was 8 kHz, the measurement itself can in principle be completed in one dual-pulse excitation. To increase the accuracy, each sample was excited 10,000 times for averaging, so the testing takes about 1.25 seconds for one sample flowing in the tube. Considering the injection and preparation time, the whole process can be finished in several minutes.

Based on the experimental results, the viscosity coefficients of these solutions were computed to be 1.22, 1.62, 2.10, 3.35, and 4.99 cP, respectively. These are compared with the theoretical solution viscosities in Fig. 2(b). The error bars represent the standard deviation based on 80 measurements. The two sets of data are linearly correlated with a determination coefficient (R²) of 0.92. We further calculated the statistical differences between the measured and theoretical values for the different groups using paired t-tests, as shown in Fig. 2(c). The p-values indicate no significant differences between the measured and theoretical values. These results demonstrate that the proposed Grueneisen-based dual-pulse PA method can precisely determine the liquid viscosity. Note that the larger error bars for more viscous liquids are associated with the nature of the logarithmic operation, which amplifies the divergence of the measured Grueneisen effect. This also sets a limitation of the proposed method: samples need to be measured several times to increase the accuracy.

3.3 Blood viscosity measurement
Since hematocrit is closely related to viscosity, bovine blood samples of different viscosities were prepared by mixing hemocytes with different concentrations of plasma, as described in Materials and Methods. Each blood sample had a volume of around 50 µL and was injected into the microfluidic tube. The blood then flowed under the same constant pressure as in the calibration. As shown in Fig. 3, the measured blood viscosities show a good linear relationship with the hematocrit concentrations and agree well with the literature, in which 20% hematocrit (anemia) corresponds to a blood viscosity of $\sim 2\times10^{-3}\ \mathrm{Pa\cdot s}$, normal human or bovine blood has a viscosity of $3\text{–}4\times10^{-3}\ \mathrm{Pa\cdot s}$, and 60% hematocrit (polycythemia) corresponds to a viscosity of $6\text{–}8\times10^{-3}\ \mathrm{Pa\cdot s}$ [36]. The results validate that the dual-pulse PA method can measure viscosity with low blood consumption. Another advantage is that the method is free from any chemical or labelling process; the blood sample can be recollected for retesting or other uses, which is meaningful for precious or scarce samples.

Fig. 3. Comparison of measured viscosity values for different groups of blood samples. The error bar represents the standard deviation based on 25 measurements.

In this study, a new method is proposed to measure liquid viscosity in a generic microtube using a dual-pulse PA platform. An inverse model is developed to quantify the liquid viscosity from two PA signals. We experimentally demonstrate that, with calibration, the viscosity can be measured using only one droplet of liquid or blood. Note that this dual-pulse PA measurement method uses only regular microtubes instead of specifically designed supporting structures, greatly reducing the complexity of the system design and the operational cost.
Moreover, a whole measurement process for one sample takes less than one minute, excluding the time for system and sample preparation, and can be further shortened if necessary. The platform requires only a tiny dose of the sample (around 50 µL), which is essential for long-term daily viscosity monitoring or for precious and scarce samples. Therefore, the proposed dual-pulse PA viscosity measurement method embraces fast detection speed, low consumption, and low cost in microtubes. However, at the current stage more viscous samples generate larger error bars, which is associated with the nature of the logarithmic operation that amplifies the divergence of the measured Grueneisen effect. The measurement accuracy can be ensured by increasing the number of measurements, and can be further improved with a higher pulse-energy laser, less scattering microtubes, and a more consistent pressure from the air compressor. Overall, this is a proof-of-concept study to demonstrate the principle and feasibility of the proposed PA method; in the future, the system will be further engineered to increase the accuracy until it can be directly compared with other advanced equipment or methods. From a long-term perspective, it opens a new avenue for efficient microfluidic-based liquid viscosity measurement, and this platform may potentially be engineered into a small point-of-care device with tiny lasers and fibers to test blood viscosity, which would benefit a large population of patients with abnormal blood viscosities.

The spectrum of the excitation beams
To ensure the same wavelength for the two excitation beams, the spectrum of the excitation beams was measured with a spectrometer; the result is shown in Fig. 4. Only one peak wavelength, at 532 nm, is observed, which indicates that the stimulated Raman scattering (SRS) effect is suppressed.

Fig. 4. The spectrum of the excitation beam (including both paths), showing a sole wavelength of 532 nm.

Regular PA signals from phantoms
A batch of well-mixed solutions of different viscosities is shown in Fig. 5(a). To investigate whether the regular (single-beam) PA signal significantly depends on the concentration of surfactant, the different solutions were measured and compared under the same conditions. All solutions were injected into the microfluidic tube without flowing and excited by one laser pulse. The pulse energy was ∼90 nanojoules, and the pulse repetition rate was 8 kHz. As shown in Fig. 5(b), the single-beam photoacoustic signal amplitude changes very slightly with the content of surfactant. The peak-to-peak amplitudes are extracted and shown in Fig. 5(c). Compared with the solution containing 10% surfactant, the single-beam PA signal amplitude of the 70% solution is reduced by only ∼2.78%, and the signal amplitudes of the 50%, 30%, and 20% solutions drop by merely about 2.59%, 1.89%, and 0.90%, respectively. The result suggests that differences in surfactant concentration and liquid viscosity induce very limited variation in the optical absorption coefficient of the solution, and hence in the single-beam PA signal strength. Moreover, the small divergences, if any, can easily be compensated based on the measured results.

Fig. 5. (a) Photograph of surfactant-ink solutions of different viscosities (containing 10%, 20%, 30%, 50% and 70% surfactant, respectively). (b) Normalized PA signals of different surfactant-ink solutions. (c) PA signal (peak-to-peak) amplitude as a function of surfactant concentration.
Dual-pulse PA signals and liquid viscosity
After confirming the small absorption fluctuations caused by different concentrations of surfactant, the solutions were injected in turn and flowed in the microfluidic tube under a constant pressure driven by the air compressor. Measurements were performed with the dual-pulse PA setting described above. For each group of solutions, dual-pulse PA signals, as illustrated in Fig. 6(a), were obtained. The two PA signals corresponding to the two pulses can be easily distinguished and separated. The peak-to-peak values of the second and first photoacoustic signals are extracted, and the ratio between them is defined as 'Ratio'. The values of $\ln(|1 - Ratio|)$ at different surfactant concentrations are plotted in Fig. 6(b). The measured logarithmic value shows an inverse proportional trend with the solution concentration. This is consistent with the fitted curve derived from Eqs. (7) and (2), which follows the relation $\ln(|1 - Ratio|) = \frac{24.26}{con} - 3.51$ with a determination coefficient (R²) of 0.97, where $con$ is the surfactant concentration. This result again confirms the feasibility of the proposed dual-pulse PA method for viscosity measurement.

Fig. 6. (a) A representative dual-pulse PA signal. (b) The value of $\ln(|1 - Ratio|)$ versus the surfactant concentration. Error bars are standard deviations based on 80 measurements.
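A small sketch of this inverse-proportional fit, in the same Python style as above; the synthetic points are generated from the reported relation (24.26, −3.51) only to verify the fitting step, and are not measured data.

```python
import numpy as np

def fit_inverse_model(con_percent, ratios):
    """Fit ln|1 - Ratio| = a/con + b (the trend in Fig. 6(b)) by
    linear least squares in the transformed variable 1/con."""
    y = np.log(np.abs(1.0 - np.asarray(ratios, dtype=float)))
    a, b = np.polyfit(1.0 / np.asarray(con_percent, dtype=float), y, deg=1)
    return a, b

# Synthetic check against the reported fit, a = 24.26, b = -3.51:
con = np.array([10.0, 20.0, 30.0, 50.0, 70.0])   # % surfactant
ratios = 1.0 + np.exp(24.26 / con - 3.51)
print(fit_inverse_model(con, ratios))            # ~ (24.26, -3.51)
```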
Funding
Hong Kong Innovation and Technology Commission (GHP/043/19SZ, GHP/044/19GD, ITS/022/18); Hong Kong Research Grants Council (11101618, 11103320, 11215817, 15217721, 21205016, 25204416, R5029-19); National Natural Science Foundation of China (81627805, 81671726, 81930048); Guangdong Science and Technology Commission (2019A1515011374, 2019BT02X105); Science, Technology and Innovation Commission of Shenzhen Municipality (JCYJ20160329150236426, JCYJ20170413140519030, JCYJ20170818104421564, SGDX20190917094601717).

Acknowledgments
The authors thank Jiyu Li of the City University of Hong Kong for offering the microfluidic system for detection. This work was partially supported by the National Natural Science Foundation of China (NSFC) (81930048, 81671726, 81627805), the Guangdong Science and Technology Commission (2019BT02X105, 2019A1515011374), the Hong Kong Innovation and Technology Commission (GHP/043/19SZ, GHP/044/19GD, ITS/022/18), the Hong Kong Research Grants Council (15217721, R5029-19, 25204416, 21205016, 11215817, 11101618, 11103320), and the Shenzhen Science and Technology Innovation Commission (SGDX20190917094601717, JCYJ20170818104421564, JCYJ20160329150236426, JCYJ20170413140519030).

Disclosures
L.W. has a financial interest in PATech Limited, which, however, did not support this work.

Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request within 5 years after the publication date.

References
1. R. Rosencranz and S. A. Bogen, "Clinical laboratory measurement of serum, plasma, and blood viscosity," Am. J. Clin. Pathol. 125(suppl_1), S78–S86 (2006).
2. B. Jakoby, M. Scherer, M. Buskies, and H. Eisenschmid, "An automotive engine oil viscosity sensor," IEEE Sens. J. 3(5), 562–568 (2003).
3. G. D. O. Lowe, M. M. Drummond, A. R. Lorimer, I. Hutton, C. D. Forbes, C. R. M. Prentice, and J. C. Barbenel, "Relation between extent of coronary-artery disease and blood-viscosity," Brit. Med. J. 280(6215), 673–674 (1980).
4. G. D. O. Lowe, F. G. R. Fowkes, J. Dawes, P. T. Donnan, S. E. Lennie, and E. Housley, "Blood-viscosity, fibrinogen, and activation of coagulation and leukocytes in peripheral arterial-disease and the normal population in the Edinburgh artery study," Circulation 87(6), 1915–1920 (1993).
5. S. Shin and D. Y. Keum, "Measurement of blood viscosity using mass-detecting sensor," Biosens. Bioelectron. 17(5), 383–388 (2002).
6. Y. J. Kang, J. Ryu, and S. J. Lee, "Label-free viscosity measurement of complex fluids using reversal flow switching manipulation in a microfluidic channel," Biomicrofluidics 7(4), 044106 (2013).
7. H. Kim, Y. I. Cho, D. H. Lee, C. M. Park, H. W. Moon, M. Hur, J. Q. Kim, and Y. M. Yun, "Analytical performance evaluation of the scanning capillary tube viscometer for measurement of whole blood viscosity," Clin. Biochem. 46(1-2), 139–142 (2013).
8. W. J. Lan, S. W. Li, J. H. Xu, and G. S. Luo, "Rapid measurement of fluid viscosity using co-flowing in a co-axial microfluidic device," Microfluid. Nanofluid. 8(5), 687–693 (2010).
9. S. Kim, K. C. Kim, and E. Yeom, "Microfluidic method for measuring viscosity using images from smartphone," Opt. Laser Eng. 104, 237–243 (2018).
10. Y. J. Kang, S. Y. Yoon, K. H. Lee, and S. Yang, "A highly accurate and consistent microfluidic viscometer for continuous blood viscosity measurement," Artif. Organs 34(11), 944–949 (2010).
11. L. Liu, D. Hu, and R. H. W. Lam, "Microfluidic viscometer using a suspending micromembrane for measurement of biosamples," Micromachines 11(10), 934 (2020).
12. L. H. V. Wang and S. Hu, "Photoacoustic tomography: in vivo imaging from organelles to organs," Science 335(6075), 1458–1462 (2012).
13. Y. Zhou, J. Chen, C. Liu, C. Liu, P. Lai, and L. Wang, "Single-shot linear dichroism optical-resolution photoacoustic microscopy," Photoacoustics 16, 100148 (2019).
14. P. Lai, L. Wang, J. W. Tay, and L. V. Wang, "Photoacoustically guided wavefront shaping for enhanced optical focusing in scattering media," Nat. Photonics 9(2), 126–132 (2015).
15. X. Huang, W. Shang, H. Deng, Y. Zhou, F. Cao, C. Fang, P. Lai, and J. Tian, "Clothing spiny nanoprobes against the mononuclear phagocyte system clearance in vivo: photoacoustic diagnosis and photothermal treatment of early stage liver cancer with erythrocyte membrane-camouflaged gold nanostars," Appl. Mater. Today 18, 100484 (2020).
16. H. Li, F. Cao, Y. Zhou, Z. Yu, and P. Lai, "Interferometry-free noncontact photoacoustic detection method based on speckle correlation change," Opt. Lett. 44(22), 5481–5484 (2019).
17. Z. P. Yu, H. H. Li, and P. X. Lai, "Wavefront shaping and its application to enhance photoacoustic imaging," Appl. Sci. 7(12), 1320 (2017).
18. F. Cao, Z. Qiu, H. Li, and P. Lai, "Photoacoustic imaging in oxygen detection," Appl. Sci. 7(12), 1262 (2017).
19. Y. Wang, Y. Zhan, M. Tiao, and J. Xia, "Review of methods to improve the performance of linear array-based photoacoustic tomography," J. Innovative Opt. Health Sci. 13(02), 2030003 (2019).
20. C. Liu, J. B. Chen, Y. C. Zhang, J. Y. Zhu, and L. D. Wang, "Five-wavelength optical-resolution photoacoustic microscopy of blood and lymphatic vessels," Adv. Photonics 3(01), 016002 (2021).
21. X. Chen, W. Qi, and L. Xi, "Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy," Vis. Comput. Ind. Biomed. Art 2(1), 12 (2019).
Xi, "Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy," Vis Comput Ind Biomed Art 2(1), 12 (2019). [CrossRef] 22. Y. Ma, C. Lu, K. Xiong, W. Zhang, and S. Yang, "Spatial weight matrix in dimensionality reduction reconstruction for micro-electromechanical system-based photoacoustic microscopy," Vis. Comput. Ind. Biomed. Art 3(1), 22 (2020). [CrossRef] 23. C. G. Lou and D. Xing, "Photoacoustic measurement of liquid viscosity," Appl. Phys. Lett. 96(21), 211102 (2010). [CrossRef] 24. Y. Zhao, S. Z. Yang, Y. T. Wang, Z. Yuan, J. L. Qu, and L. W. Liu, "In vivo blood viscosity characterization based on frequency-resolved photoacoustic measurement," Appl. Phys. Lett. 113(14), 143703 (2018). [CrossRef] 25. L. Wang, C. Zhang, and L. V. Wang, "Grueneisen relaxation photoacoustic microscopy," Phys. Rev. Lett. 113(17), 174301 (2014). [CrossRef] 26. C. Liu, Y. Liang, and L. Wang, "Single-shot photoacoustic microscopy of hemoglobin concentration, oxygen saturation, and blood flow in sub-microseconds," Photoacoustics 17, 100156 (2020). [CrossRef] 27. Y. Zhou, S. Liang, M. Li, C. Liu, P. Lai, and L. Wang, "Optical-resolution photoacoustic microscopy with ultrafast dual-wavelength excitation," J. Biophotonics 13(6), e201960229 (2020). [CrossRef] 28. L. D. Wang, K. Maslov, J. J. Yao, B. Rao, and L. H. V. Wang, "Fast voice-coil scanning optical-resolution photoacoustic microscopy," Opt. Lett. 36(2), 139–141 (2011). [CrossRef] 29. J. Zhou and I. Papautsky, "Viscoelastic microfluidics: progress and challenges," Microsyst. Nanoeng. 6(1), 113 (2020). [CrossRef] 30. J. Pfitzner, "Poiseuille and his law," Anaesthesia 31, 273–275 (1976). 31. M. H. Xu and L. H. V. Wang, "Photoacoustic imaging in biomedicine," Rev. Sci. Instrum. 77(4), 041101 (2006). [CrossRef] 32. W. H. L. V. Wang, "Biomedical optics: principles and imaging," in Biomedical Optics, 362 (Wiley, 2007), pp. 283–321. 33. L. D. Wang, J. J. Yao, K. I. Maslov, W. X. Xing, and L. H. V. Wang, "Ultrasound-heated photoacoustic flowmetry," J. Biomed. Opt 18(11), 117003 (2013). [CrossRef] 34. A. Sheinfeld and A. Eyal, "Photoacoustic thermal diffusion flowmetry," Biomed. Opt. Express 3(4), 800–813 (2012). [CrossRef] 35. W. Liu, B. X. Lan, L. Hu, R. M. Chen, Q. F. Zhou, and J. J. Yao, "Photoacoustic thermal flowmetry with a single light source," J Biomed Opt 22(9), 096001 (2017). [CrossRef] 36. M. S. Litwin and K. Chapman, "Physical factors affecting human blood viscosity," J. Surg. Res. 10(9), 433–436 (1970). [CrossRef] Article Order R. Rosencranz and S. A. Bogen, "Clinical laboratory measurement of serum, plasma, and blood viscosity," Am J Clin Pathol 125(suppl_1), S78–S86 (2006). [Crossref] B. Jakoby, M. Scherer, M. Buskies, and H. Eisenschmid, "An automotive engine oil viscosity sensor," Ieee Sens J 3(5), 562–568 (2003). G. D. O. Lowe, M. M. Drummond, A. R. Lorimer, I. Hutton, C. D. Forbes, C. R. M. Prentice, and J. C. Barbenel, "Relation between extent of coronary-artery disease and blood-viscosity," Brit Med J 280(6215), 673–674 (1980). G. D. O. Lowe, F. G. R. Fowkes, J. Dawes, P. T. Donnan, S. E. Lennie, and E. Housley, "Blood-viscosity, fibrinogen, and activation of coagulation and leukocytes in peripheral arterial-disease and the normal population in the edinburgh artery study," Circulation 87(6), 1915–1920 (1993). S. Shin and D. Y. Keum, "Measurement of blood viscosity using mass-detecting sensor," Biosens Bioelectron 17(5), 383–388 (2002). Y. J. Kang, J. Ryu, and S. J. 
Lee, "Label-free viscosity measurement of complex fluids using reversal flow switching manipulation in a microfluidic channel," Biomicrofluidics 7(4), 044106 (2013). H. Kim, Y. I. Cho, D. H. Lee, C. M. Park, H. W. Moon, M. Hur, J. Q. Kim, and Y. M. Yun, "Analytical performance evaluation of the scanning capillary tube viscometer for measurement of whole blood viscosity," Clin Biochem 46(1-2), 139–142 (2013). W. J. Lan, S. W. Li, J. H. Xu, and G. S. Luo, "Rapid measurement of fluid viscosity using co-flowing in a co-axial microfluidic device," Microfluid Nanofluid 8(5), 687–693 (2010). S. Kim, K. C. Kim, and E. Yeom, "Microfluidic method for measuring viscosity using images from smartphone," Opt Laser Eng 104, 237–243 (2018). Y. J. Kang, S. Y. Yoon, K. H. Lee, and S. Yang, "A highly accurate and consistent microfluidic viscometer for continuous blood viscosity measurement," Artif Organs 34(11), 944–949 (2010). L. Liu, D. Hu, and R. H. W. Lam, "Microfluidic viscometer using a suspending micromembrane for measurement of biosamples," Micromachines (Basel) 11(10), 934 (2020). L. H. V. Wang and S. Hu, "Photoacoustic tomography: in vivo imaging from organelles to organs," Science 335(6075), 1458–1462 (2012). Y. Zhou, J. Chen, C. Liu, C. Liu, P. Lai, and L. Wang, "Single-shot linear dichroism optical-resolution photoacoustic microscopy," Photoacoustics 16, 100148 (2019). P. Lai, L. Wang, J. W. Tay, and L. V. Wang, "Photoacoustically guided wavefront shaping for enhanced optical focusing in scattering media," Nat Photonics 9(2), 126–132 (2015). X. Huang, W. Shang, H. Deng, Y. Zhou, F. Cao, C. Fang, P. Lai, and J. Tian, "Clothing spiny nanoprobes against the mononuclear phagocyte system clearance in vivo: Photoacoustic diagnosis and photothermal treatment of early stage liver cancer with erythrocyte membrane-camouflaged gold nanostars," Applied Materials Today 18, 100484 (2020). H. Li, F. Cao, Y. zhou, Z. yu, and P. Lai, "Interferometry-free noncontact photoacoustic detection method based on speckle correlation change," Opt. Lett. 44(22), 5481–5484 (2019). Z. P. Yu, H. H. Li, and P. X. Lai, "Wavefront shaping and its application to enhance photoacoustic imaging," Appl Sci-Basel 7(12), 1320 (2017). F. Cao, Z. Qiu, H. Li, and P. Lai, "Photoacoustic imaging in oxygen detection," Appl. Sci. 7(12), 1262 (2017). Y. Wang, Y. Zhan, M. Tiao, and J. Xia, "Review of methods to improve the performance of linear array-based photoacoustic tomography," J. Innovative Opt. Health Sci. 13(02), 2030003 (2019). C. Liu, J. B. Chen, Y. C. Zhang, J. Y. Zhu, and L. D. Wang, "Five-wavelength optical-resolution photoacoustic microscopy of blood and lymphatic vessels," Adv Photonics 3(01), 016002 (2021). X. Chen, W. Qi, and L. Xi, "Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy," Vis Comput Ind Biomed Art 2(1), 12 (2019). Y. Ma, C. Lu, K. Xiong, W. Zhang, and S. Yang, "Spatial weight matrix in dimensionality reduction reconstruction for micro-electromechanical system-based photoacoustic microscopy," Vis. Comput. Ind. Biomed. Art 3(1), 22 (2020). C. G. Lou and D. Xing, "Photoacoustic measurement of liquid viscosity," Appl. Phys. Lett. 96(21), 211102 (2010). Y. Zhao, S. Z. Yang, Y. T. Wang, Z. Yuan, J. L. Qu, and L. W. Liu, "In vivo blood viscosity characterization based on frequency-resolved photoacoustic measurement," Appl. Phys. Lett. 113(14), 143703 (2018). L. Wang, C. Zhang, and L. V. Wang, "Grueneisen relaxation photoacoustic microscopy," Phys. Rev. Lett. 
113(17), 174301 (2014). C. Liu, Y. Liang, and L. Wang, "Single-shot photoacoustic microscopy of hemoglobin concentration, oxygen saturation, and blood flow in sub-microseconds," Photoacoustics 17, 100156 (2020). Y. Zhou, S. Liang, M. Li, C. Liu, P. Lai, and L. Wang, "Optical-resolution photoacoustic microscopy with ultrafast dual-wavelength excitation," J. Biophotonics 13(6), e201960229 (2020). L. D. Wang, K. Maslov, J. J. Yao, B. Rao, and L. H. V. Wang, "Fast voice-coil scanning optical-resolution photoacoustic microscopy," Opt. Lett. 36(2), 139–141 (2011). J. Zhou and I. Papautsky, "Viscoelastic microfluidics: progress and challenges," Microsyst. Nanoeng. 6(1), 113 (2020). J. Pfitzner, "Poiseuille and his law," Anaesthesia 31, 273–275 (1976). M. H. Xu and L. H. V. Wang, "Photoacoustic imaging in biomedicine," Rev. Sci. Instrum. 77(4), 041101 (2006). W. H. L. V. Wang, "Biomedical optics: principles and imaging," in Biomedical Optics, 362 (Wiley, 2007), pp. 283–321. L. D. Wang, J. J. Yao, K. I. Maslov, W. X. Xing, and L. H. V. Wang, "Ultrasound-heated photoacoustic flowmetry," J. Biomed. Opt 18(11), 117003 (2013). A. Sheinfeld and A. Eyal, "Photoacoustic thermal diffusion flowmetry," Biomed. Opt. Express 3(4), 800–813 (2012). W. Liu, B. X. Lan, L. Hu, R. M. Chen, Q. F. Zhou, and J. J. Yao, "Photoacoustic thermal flowmetry with a single light source," J Biomed Opt 22(9), 096001 (2017). M. S. Litwin and K. Chapman, "Physical factors affecting human blood viscosity," J. Surg. Res. 10(9), 433–436 (1970). Barbenel, J. C. Bogen, S. A. Buskies, M. Cao, F. Chapman, K. Chen, J. Chen, J. B. Chen, R. M. Cho, Y. I. Dawes, J. Deng, H. Donnan, P. T. Drummond, M. M. Eisenschmid, H. Eyal, A. Fang, C. Forbes, C. D. Fowkes, F. G. R. Housley, E. Hu, D. Hu, L. Hu, S. Huang, X. Hur, M. Hutton, I. Jakoby, B. Kang, Y. J. Keum, D. Y. Kim, J. Q. Kim, K. C. Lai, P. Lai, P. X. Lam, R. H. W. Lan, B. X. Lan, W. J. Lee, D. H. Lee, K. H. Lee, S. J. Lennie, S. E. Li, H. Li, H. H. Li, M. Li, S. W. Liang, S. Liang, Y. Litwin, M. S. Liu, L. Liu, L. W. Liu, W. Lorimer, A. R. Lou, C. G. Lowe, G. D. O. Lu, C. Luo, G. S. Ma, Y. Maslov, K. Maslov, K. I. Moon, H. W. Papautsky, I. Park, C. M. Pfitzner, J. Prentice, C. R. M. Qi, W. Qiu, Z. Qu, J. L. Rao, B. Rosencranz, R. Ryu, J. Scherer, M. Shang, W. Sheinfeld, A. Shin, S. Tay, J. W. Tian, J. Tiao, M. Wang, L. D. Wang, L. H. V. Wang, L. V. Wang, W. H. L. V. Wang, Y. T. Xi, L. Xia, J. Xing, D. Xing, W. X. Xiong, K. Xu, J. H. Xu, M. H. Yang, S. Z. Yao, J. J. Yeom, E. Yoon, S. Y. Yu, Z. P. Yuan, Z. Yun, Y. M. Zhan, Y. Zhang, C. Zhang, Y. C. Zhao, Y. Zhou, J. Zhou, Q. F. Zhou, Y. Zhu, J. Y. Adv Photonics (1) Am J Clin Pathol (1) Anaesthesia (1) Appl Sci-Basel (1) Appl. Phys. Lett. (2) Appl. Sci. (1) Applied Materials Today (1) Artif Organs (1) Biomed. Opt. Express (1) Biomicrofluidics (1) Biosens Bioelectron (1) Brit Med J (1) Clin Biochem (1) Ieee Sens J (1) J Biomed Opt (1) J. Biomed. Opt (1) J. Biophotonics (1) J. Innovative Opt. Health Sci. (1) J. Surg. Res. (1) Microfluid Nanofluid (1) Micromachines (Basel) (1) Microsyst. Nanoeng. (1) Nat Photonics (1) Opt Laser Eng (1) Opt. Lett. (2) Photoacoustics (2) Phys. Rev. Lett. (1) Rev. Sci. Instrum. (1) Vis Comput Ind Biomed Art (1) Vis. Comput. Ind. Biomed. Art (1) Optica participates in Crossref's Cited-By Linking service. Citing articles from Optica Publishing Group journals and other participating publishers are listed here. Alert me when this article is cited. 
Equations:

(1) $Q = \frac{(P_2 - P_1)\pi r^4}{8 \xi l}$

(2) $\xi = \frac{(P_2 - P_1)\pi r^4}{8 l Q} = \frac{\Delta p\, r^2}{8 l v}$

(3) $PA_1 = k \Gamma_0 \eta F_1 \mu_a$

(4) $PA_2 = k (\Gamma_0 + \Delta\Gamma)\, \eta F_2 \mu_a$

(5) $\Delta\Gamma = a F_1 \mu_a\, e^{-(\tau_a + b v)\,\delta t}$

(6) $\mathrm{Ratio} = \frac{PA_2}{PA_1} = \frac{k(\Gamma_0 + \Delta\Gamma)\eta F_2 \mu_a}{k \Gamma_0 \eta F_1 \mu_a} = 1 + \frac{\Delta\Gamma}{\Gamma_0} = 1 + \frac{a F_1 \mu_a\, e^{-(\tau_a + b v)\,\delta t}}{\Gamma_0}$

(7) $\ln(|1 - \mathrm{Ratio}|) = -(\tau_a + b v)\,\delta t + \ln\frac{|a| F_1 \mu_a}{\Gamma_0} = m v + n$

(8) $\xi = \frac{\Delta p\, r^2 m}{8 l\,[\ln(|1 - \mathrm{Ratio}|) - n]}$
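Equation (2) reduces the viscosity measurement to Poiseuille's law plus a measured mean flow velocity. As a minimal numerical sketch of Eq. (2) in R, with every number invented purely for illustration and not taken from the paper:

delta_p <- 200    # pressure difference along the vessel (Pa); assumed value
r       <- 1e-4   # vessel radius (m); assumed value
l       <- 5e-3   # vessel length (m); assumed value
v       <- 1e-3   # mean flow velocity (m/s); assumed value

# Eq. (2): xi = delta_p * r^2 / (8 * l * v), using Q = pi * r^2 * v
xi <- delta_p * r^2 / (8 * l * v)
xi   # dynamic viscosity in Pa.s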
Functional Equation For Gamma Function

The derivative of the logarithm of the Gamma function is called the digamma function; higher derivatives are the polygamma functions. We will then examine how the psi function proves to be useful in the computation of infinite rational sums. Note: the GAMMA function is new in Excel 2013 and so is not available in earlier versions of Excel; this article describes the formula syntax and usage of the GAMMA function in Microsoft Excel, and version markers indicate the version of Excel in which a function was introduced. The given differential equation is named after the German mathematician and astronomer Friedrich Wilhelm Bessel, who studied this equation in detail and showed (in 1824) that its solutions are expressed in terms of a special class of functions called cylinder functions, or Bessel functions. Several types of quadratures are discussed and compared for different classes of wavelets, in terms of the Gauss hypergeometric function and well-known transformation formulae.

Here we will concentrate on the problem for real variables x and y. Get answers to your questions about special functions with interactive calculators: Wolfram|Alpha can compute values for multiple variants of zeta functions, as well as help you explore other functionality such as visualization and series expansion. The following figures give a first idea of what the Hadamard Gamma-function looks like. The lower incomplete gamma and the upper incomplete gamma function, as defined above for real positive s and x, can be developed into holomorphic functions, with respect both to x and s, defined for almost all combinations of complex x and s. You may consult any library for more information on this function. In particular, expand implements the functional equations of the exponential function and the logarithm, of the gamma function and the polygamma function, and the addition theorems for the trigonometric and hyperbolic functions, which makes it possible to solve an equation that includes Gamma. Equations and functions are not the same thing, but they can be related in several ways.

A functional equation is an equation in which the unknown is a function; often, the equation relates the value of a function (or functions) at some point with its values at other points. If a functional equation involves a function f(x) which has R or an interval I ⊆ R as its domain, then there are several things that we may be able to do, for example use induction to find the values of the function on Z. The contribution deals with functional equations which characterize known elementary functions; these properties are stated in the form of functional equations whose continuous solutions are exactly the functions in question. Linear equations have infinite sets of ordered pairs that satisfy their equation. We shall assume that h(x) and g(x) are defined and continuous on the interval a ≤ x ≤ b, and that the kernel is defined and continuous on a ≤ x ≤ b and a ≤ y ≤ b. The fact that the functional equation for Riemann's zeta function drives the greatest open problem in all of mathematics underscores the importance of the present topic even more emphatically. However, the methods used to solve functional equations can be quite different from the methods for isolating a traditional variable. Here we recall some of the gamma function's analytic properties.

The Gamma function is the generalization of the factorial function to nonintegral values, introduced by the Swiss mathematician Leonhard Euler in the 18th century. An easy consequence of the reflection formula is Γ(1/2) = √π. The Gamma function is defined as Γ(n) = (n − 1)! (the factorial function with its argument shifted down by 1) if n is a positive integer; for complex numbers and non-integers, the gamma function is defined for x > 0 in integral form by the improper integral known as Euler's integral of the second kind. Using integration by parts, one can easily prove the fundamental formula Γ(x + 1) = xΓ(x). The functions gamma and lgamma in R return the gamma function Γ(x) and the natural logarithm of the absolute value of the gamma function; because these numbers are not symbolic objects, you get floating-point results. To use these functions, choose Calc > Calculator. This means that both x and a can be variables when the function is used inside a model equation. Bessel functions of the first kind (sometimes called ordinary Bessel functions) are denoted by J_n(x), where n is the order; a second linearly independent solution of Bessel's equation can be found via reduction of order (Frobenius' method). Most of the time, the functions I have in mind are real-valued functions of a single real variable: a function assigns exactly one output to each input of a specified type, the graph of a function represents all points (x, f(x)), and the graph of a cubic function is an example of a cubic curve. We also derive corresponding rational expansions for Dirichlet L-functions and multiple log gamma functions in terms of higher-order Bernoulli polynomials, and the aim of this paper is to derive functional equations and differential equations using novel generating functions for the Bernstein polynomials. Everything is organized into eight folders: calc (single variable calculus), mv (multivariable calculus and optimization), lin (linear algebra), de (differential equations), pr (probability), quad (Gaussian quadrature), sp (special functions) and gnrl (general stuff); available functions include airy, elliptic, bessel, gamma, beta, hypergeometric, parabolic cylinder, mathieu, spheroidal wave, struve, and kelvin. The functional equation reflects an interplay between symmetries of the Weyl group and reciprocities of the combinatorial object, and the origin of the symmetric form of the functional equation for the Eulerian Zeta and for the alternating Zeta is connected with odd numbers. We also study a generalized stability problem for Cauchy and Jensen functional equations satisfied for all pairs of vectors x, y from a linear space such that γ(x) = γ(y) or γ(x + y) = γ(x − y), with a given function γ. For a supplied number, the Excel GAMMA function returns the corresponding gamma function value; GAMMA uses the following equation: Γ(N + 1) = N × Γ(N).
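Since the recursion Γ(N + 1) = N × Γ(N) and R's gamma() and lgamma() both come up above, a short numerical check is easy; this is a generic illustration, not code taken from any of the sources quoted here.

x <- c(0.5, 1.7, 3.2, 6)
gamma(x + 1) / (x * gamma(x))          # should all be 1
lgamma(x + 1) - (log(x) + lgamma(x))   # should all be 0 on the log scale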
The relation Γ(x + 1) = xΓ(x) is the important functional equation. In the standard uniqueness argument, the difference of two suitable solutions satisfying f(x + 1) = x f(x) is the zero function. We are going to solve Bessel's equation using a power series method developed in the nineteenth century by the German mathematician Ferdinand Frobenius. The PDF function is evaluated at the value x. Use inverse trigonometric functions to find the solutions of the equation that are in the interval [0, 2π). Values of the gamma function at rational points are of broad interest. Functional equations also include composite equations, equations with several unknown functions of several variables, vector and matrix equations, and more. It is common to name a function either f(x) or g(x) instead of y. A solution of the modified Bessel's equation of order ν is called a modified Bessel function of order ν. Dirac delta function: consider the function f_ε(t) defined by f_ε(t) = 1/ε for 0 ≤ t ≤ ε and f_ε(t) = 0 for t > ε, where ε > 0. To any lattice Λ, define Θ(t) = ∑_{x∈Λ} e^{−πt x·x}, where t > 0, t ∈ ℝ. The standard form of the gamma probability density is f(x) = x^{γ−1} e^{−x}/Γ(γ) for x ≥ 0 and γ > 0; since the general form of probability functions can be expressed in terms of the standard distribution, all subsequent formulas in this section are given for the standard form of the function. Since n! is a special case of the gamma function, any distribution which uses the combination function C(n, p) is essentially using the gamma function. It is widely encountered in physics and engineering.

The existence and uniqueness of Γ_n(x) follows from [8], which actually produces a Weierstrass product expansion for Γ_n(x + 1)^{−1} from these conditions and shows it to be an entire function of order n. You can change the language of Google Sheets functions between English and 21 other languages. We will see the functional equations of twisted L-functions and Hecke L-functions and try to understand the idea of the converse theorem. I believe that theorem is for ordinary differential equations though, but it makes it seem unlikely that there is a simple-looking differential equation that gives the Gamma function as a solution. Such functions do exist in other settings: for example, the functions V(z) = e^{−γz} and V(z) = e^{+γz} each satisfy the transmission line wave equation (insert these into the differential equation and see for yourself!). The logarithmic derivative of gamma is implemented by the digamma function psi. Since logarithms are nothing more than exponents, you can use the rules of exponents with logarithms. It is shown that Weng's zeta functions associated with arbitrary semisimple algebraic groups defined over the rational number field, and their maximal parabolic subgroups, satisfy the functional equations. Functional equations are equations where the unknowns are functions rather than a traditional variable, that is, problems where you are supposed to determine a particular function. A natural question is to determine whether the gamma function is the only solution of the functional equation. While the gamma function behaves like a factorial for natural numbers (a discrete set), its extension to the positive real numbers (a continuous set) makes it useful for modeling situations involving continuous change, with important applications to calculus, differential equations, complex analysis, and statistics. The MATLAB gammainc function uses the definition of the lower incomplete gamma function, gammainc(z, nu) = 1 − igamma(nu, z)/gamma(nu). For t ∈ ℝ with t > 0 and z ∈ ℂ, define t^z := e^{z log t}, where log t is the ordinary real logarithm. The main purpose and merits of the book are the many solved, unsolved and partially solved problems and hints about several particular functional equations (Henry Ricardo, MAA Reviews). The PDF function for the gamma distribution returns the probability density function of a gamma distribution with a given shape parameter and scale parameter. These functions are important in math as well as in the physical sciences (physics and engineering).

Tate's thesis gives a nice explanation of the prime factors and the Gamma function in the functional equation for Dedekind zeta functions; several of the answers were from the perspective of Tate's thesis, which I don't really have the background to appreciate yet, so I'm asking for another perspective. Actually, if I remember correctly, the newest edition of Arfken has a full chapter dedicated to the gamma function and other functions related to it (actually the book has everything related to math for physics). Motivated essentially by the success of the applications of the Mittag-Leffler functions in many areas of science and engineering, the authors present, in a unified manner, a detailed account, or rather a brief survey, of the Mittag-Leffler function, generalized Mittag-Leffler functions, Mittag-Leffler type functions, and their interesting and useful properties. Most American books focus only on the assignment rule (formula), but this makes a mess later on in abstract algebra, linear algebra, computer programming etc. Many important functions in applied sciences are defined via improper integrals. For arguments outside the range of the table, the values of the gamma function are calculated by the recursion formula and, when necessary, linear interpolation. The duplication formula is a special case of the multiplication theorem. The properties of linear digital filters, defined as transformations of discrete functions, are described in the following. The factorial function can be extended to include non-integer arguments through the use of Euler's second integral, given as z! = ∫₀^∞ e^{−t} t^z dt. The classical gamma function Γ(s) was introduced by Euler (Euler's second integral): Γ(s) = ∫₀^∞ e^{−x} x^{s−1} dx for Re s > 0. Bessel functions form a class of the so-called special functions. You are probably familiar with the gamma function Γ(s), which plays a key role in the functional equation of not only the Riemann zeta function but many of the more general zeta functions and L-series we wish to consider. It is very easy to check whether something solves an equation: plug it in and see whether you get a true identity (2 is a solution of x³ − 3 = 5, but 7 is not). Further, we prove some properties of gamma and beta functions of complex variables. The Gamma Function Calculator is an online statistics and probability tool programmed to compute this special kind of factorial, which is used in various probability distribution functions; as such, it is applicable in probability and statistics as well as combinatorics. Using the definition (1), we find that ∂J_ν(z)/∂ν at ν = n equals J_n(z) ln(z/2) − (z/2)^n ∑_{k=0}^∞ [(−1)^k ψ(n + k + 1)/((n + k)! k!)] (z/2)^{2k} (see M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions).
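Euler's second integral above can be checked numerically against R's built-in gamma(); this sketch is purely illustrative, and integrate() may warn about accuracy for s close to 0, where the integrand is singular at the origin.

euler_gamma <- function(s) {
  # Gamma(s) = integral from 0 to Inf of exp(-x) * x^(s - 1) dx, Re(s) > 0
  integrate(function(x) exp(-x) * x^(s - 1), lower = 0, upper = Inf)$value
}
s <- c(0.5, 2.5, 5)
sapply(s, euler_gamma)   # numerical values of the integral
gamma(s)                 # built-in reference values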
Ultimately, we will provide definitions for the psi function (also known as the digamma function) as well as the polygamma functions. What in the world is a function? Although it may seem at first like a function is some foreign creature in Algebra land, a function is really just an equation with a fancy name and fancy notation. When appropriately normalized, the second solution of Bessel's equation is denoted by Y_p(x) (or N_p(x)) and is called the Bessel function of the second kind of order p. For integer values the functional equation becomes Γ(n + 1) = n!, which is why the gamma function can be seen as an extension of the factorial function to real, non-null positive numbers. In this video, I introduce the Gamma Function (the generalized factorial), prove some of its properties (including a property which allows you to find 1/2 factorial), and apply the Gamma Function; this video is relevant to students undertaking the Year 12 subject of Mathematical Methods CAS Units 3 and 4 in the State of Victoria, Australia. If x is a floating-point value, then gamma returns a floating-point value. For a positive whole number n, the factorial (written as n!) is defined by n! = 1 × 2 × 3 × ⋯ × (n − 1) × n. The functional equations for gamma lead to various identities for lngamma which can be applied via expand. Bessel functions arise when the method of separation of variables is applied to the Laplace or Helmholtz equation in cylindrical or spherical coordinates. Then I put in f(y) = z, and f(z) = z + 1 immediately follows. The multiple gamma function Γ_n is increasing and satisfies Γ_n(x + 1) = Γ_n(x)/Γ_{n−1}(x) and Γ_n(1) = 1. His formula contained a constant which had a value between 1/100 and 1/30; Karatsuba described the function which determines the value of this constant. The motivation to study properties of generalized k-Gamma and k-Beta functions is the fact that (x)_{n,k} appears in the combinatorics of creation and annihilation operators [3 and references therein]. We prove that the Riemann zeta function and the Euler gamma function cannot satisfy a class of algebraic differential equations with functional coefficients that are connected to the zeros of the zeta function. More gamma factors (real shifts) are now available in Pari/gp: g = gammamellininvinit([0, 0, 1, 1, 1]); gammamellininv(g, x). For any L-function it is easy to produce such equations satisfied by the Dirichlet series, with terms decaying exponentially. This brief monograph on the gamma function was designed by the author to fill what he perceived as a gap in the literature of mathematics, which often treated the gamma function in a manner he described as both sketchy and overly complicated. We describe gamma-related functions in the subsections to follow, as well as important identities.

The gamma function is a continuous extension of the factorial function, which by itself is only defined for the nonnegative integers. We also derive identities corresponding to the degree elevation and subdivision formulas for Bézier curves. Following Qiaochu Yuan, the gamma function shows up in Riemann's functional equation for the zeta function as the factor in the Euler product corresponding to the "prime at infinity", and it occurs there as the Mellin transform of some Gaussian function. When considering the graph of the Gamma Function, one might be led to consider questions about the integral of Γ(x). In order to investigate the fundamental properties of q-Bernstein basis functions, we give generating functions for these basis functions and their functional and differential equations. This is why we thought it would be a good idea to have a page on this function with its basic properties. We highly encourage the reader to try these examples on their own before reading the solutions; they are good practice problems! We present a new approach to the static finite-temperature correlation functions of the Heisenberg chain based on functional equations. We will show that many properties which Γ(z) enjoys extend in a natural way to the function Γ(x, z); every time, Γ-factors will turn up. We give a sufficient criterion for generic local functional equations for submodule zeta functions associated to nilpotent algebras of endomorphisms defined over number fields. A common trick is replacing the desired function with something perhaps easier to use, and then working backwards later. The gamma function has an infinite set of singular points: simple poles at the nonpositive integers, with residue (−1)^k/k! at x = −k. Taking the L-function in the numerator and switching the summation, we can relate it to a similar double Dirichlet series with the arguments and the twisting characters interchanged, with modified correction factors. The book contains many classical results as well as important, more recent results. The gamma function uses some calculus in its definition, as well as the number e; unlike more familiar functions such as polynomials or trigonometric functions, the gamma function is defined as the improper integral of another function.

This paper derives the Bessel functions through use of a series solution to a differential equation, develops the different kinds of Bessel functions, and explores the topic of zeroes. Equation (18) is often taken as a definition for the gamma function Γ(z). Hadamard (1894) found a function that is entire and analytic and coincides with the factorial values (n − 1)! at the positive integers n. Maybe the most famous among the functions defined via improper integrals is the Gamma Function. Airy functions are solutions to the differential equation f″(x) − x f(x) = 0. A quadratic function has the form y = ax² + bx + c, where a ≠ 0. The functional equation shows that the Riemann zeta function has zeros at −2, −4, …. While the gamma function can be used in many domains, it is most often used when looking at rates. What you should know: the Riemann functional equation, the Legendre duplication formula for the gamma function, and Euler's reflection formula. Some references: E. Artin, The Gamma Function, Holt, Rinehart and Winston, 1964 (English translation by M. Butler of "Einführung in die Theorie der Gammafunktion", Hamburger Mathematische Einzelschriften 11, Verlag B. G. Teubner, 1931); Julian Havil, Gamma: Exploring Euler's Constant, Princeton University Press, 2003; V. V. Bazhanov and S. M. Sergeev, "Elliptic gamma-function and multi-spin solutions of the Yang-Baxter equation" (Australian National University); "Structural and recurrence relations for hypergeometric-type functions by Nikiforov-Uvarov method", Electronic Transactions on Numerical Analysis; B. Klopsch and C. Voll, "Igusa-type functions associated to finite formed spaces and their functional equations", Transactions of the American Mathematical Society, 2009. For numerical x, the functional equation is used to shift the argument to the range 0 < x < 1; the tabulated values of the gamma function then lie on a fixed interval with a fixed increment, with linear interpolation in between.
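The argument-shifting idea just mentioned can be sketched in a few lines of R: reduce x to a core interval using Γ(x) = Γ(x + 1)/x and Γ(x + 1) = xΓ(x), and only evaluate the "core" there. Here the core is R's own gamma(), so the function merely illustrates the reduction, not a real implementation.

gamma_shifted <- function(x) {
  stopifnot(x > 0)
  f <- 1
  while (x > 2) {   # shift large arguments down: Gamma(x) = (x - 1) * Gamma(x - 1)
    x <- x - 1
    f <- f * x
  }
  while (x < 1) {   # shift small arguments up: Gamma(x) = Gamma(x + 1) / x
    f <- f / x
    x <- x + 1
  }
  f * gamma(x)      # x now lies in [1, 2]
}
gamma_shifted(7.3)  # should agree with gamma(7.3)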
A modification of the first Kummer matrix function including two complex variables was introduced in [7]. For each non-negative constant p, the associated Bessel equation is x²y″ + xy′ + (x² − p²)y = 0. One very convenient and instructive way to introduce Bessel functions is via the generating function; the most useful ones are defined for any integer n by the corresponding series. On the other hand, we show other properties which are sufficient for robustness. (Chapter 6 is about the gamma function.) A very vague question: what is the derivative of the gamma function? Here is what I have got, using differentiation under the integral. The variable F represents the graph of a function. Logarithmic functions are the inverse of exponential functions. The functional equation in question for the Riemann zeta function takes the simple form Z(s) = Z(1 − s), where Z(s) is ζ(s) multiplied by a gamma-factor involving the gamma function. LaTeX has many of these defined as commands: trigonometric functions (sin, cos, tan), logarithms and exponentials (log, exp), limits (lim), as well as trace and determinant (tr, det). The Gaussian function, or the Gaussian probability distribution, is one of the most fundamental functions. Another important functional equation for the gamma function is Euler's reflection formula. Wolfram|Alpha knows a lot about special functions such as Airy functions, Bessel functions, elliptic functions, hypergeometric functions; the list goes on and on. Clicking the "hypergeometric functions" link, one gets basically Wolfram|Alpha examples on Gamma, Beta, error and Legendre functions, etc. Classical L-functions for GL(2) involve Hecke operators and Euler products, the simplest Rankin-Selberg L-functions, and the meromorphic continuation and functional equation of the relevant Eisenstein series. The main feature of the scipy.special package is the definition of numerous special functions of mathematical physics. Here Γ is the gamma function defined above and Γ_x(a) is the incomplete gamma function. Keywords: generalizations of the incomplete gamma function; Kampé de Fériet functions; integrals of Bessel functions (Miller, Allen R.).

In Stan, the log of the inverse-gamma complementary cumulative distribution function of y is parameterized by shape alpha and scale beta, and inv_gamma_rng(reals alpha, reals beta) generates an inverse-gamma variate with shape alpha and scale beta (it may only be used in the generated quantities block). This includes the binomial distribution. The equation (1 − x²)y″ − 2xy′ + ν(ν + 1)y = 0, with ν real, is Legendre's equation. Similarly, it is very easy to check whether a function is a solution to a functional equation: you plug the function into the equation and check whether you have a true identity or not. The reciprocal of the gamma function is an entire function. For example, the gamma function satisfies the functional equations Γ(x + 1) = xΓ(x) and Euler's reflection formula. In Maxima, bffac(expr, n) is the bigfloat version of the factorial (shifted gamma) function. Define F : X × X → ℝ by F(x, y) = v(x) − u(y) for x, y ∈ X.
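As a footnote to the "derivative of the gamma function" question above, the digamma function ψ(x) = d/dx ln Γ(x) can be checked against a central difference of lgamma(); the values below are arbitrary.

x <- 3.7
h <- 1e-5
(lgamma(x + h) - lgamma(x - h)) / (2 * h)   # numerical derivative of log-gamma
digamma(x)                                  # built-in digamma for comparison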
Arne Pommerening's Webblog on Forest Biometrics and Quantitative Ecology

This is a site for facilitating an exchange on current issues in Forest Biometrics, Quantitative Ecology, Plant Growth Analysis, Woodland Structure and Theoretical Forest Science.

Weibull distribution for characterising stem-diameter structure

The Weibull density distribution is known as

[latex]f_w(dbh) = \frac{\gamma}{\beta}\left(\frac{dbh - \alpha}{\beta}\right)^{\gamma - 1} e^{-\left(\frac{dbh - \alpha}{\beta}\right)^\gamma}[/latex],

where [latex]\alpha[/latex] is the location, [latex]\beta[/latex] the scale and [latex]\gamma[/latex] the shape parameter, i.e. the parameters of the Weibull distribution are interpretable, which is always a good property of models. The cumulative distribution function, i.e. the integral of the density function, is much simpler:

[latex]F_w(dbh) = 1 - e^{-\left(\frac{dbh - \alpha}{\beta}\right)^{\gamma}}[/latex]

The Weibull density distribution allows characterising tree stem-diameter distributions by providing trend curves, but more importantly by summarising stem diameters by means of three parameters that can be interpreted. The shape parameter of the Weibull distribution, [latex]\gamma[/latex], is of particular interest in this context and the following interpretation aid can be used (Burkhart and Tomé, 2012, p. 198):

[latex]1.0 < \gamma < 3.6[/latex]: skewed to the right
[latex]\gamma > 3.6[/latex]: skewed to the left
[latex]\gamma \le 1.0[/latex]: negative exponential, reverse j-shaped

When [latex]\gamma[/latex] is less than 1, the distribution is reverse j-shaped, as found in uneven-aged forest stands, and when [latex]\gamma[/latex] equals 1, a negative exponential distribution results. If [latex]\gamma = 3.6[/latex], the Weibull distribution approximates a normal distribution, and this value divides left- and right-skewed curves. In general, [latex]\gamma > 1[/latex] gives bell shapes typical of even-aged forest stands. The location parameter is directly related to the minimum diameter in a stand (Burkhart and Tomé, 2012, p. 265). In the context of diameter distributions, all model parameters must be positive.

Figure: Histogram of example dbh data overlaid by Weibull density functions with different shape parameters but the same location and scale parameters.

The Weibull distribution is named after the Swedish mathematician Waloddi Weibull, who described it in detail in 1951, although it was first identified by Fréchet (1927) and first applied by Rosin and Rammler (1933) to describe a particle size distribution. The Weibull distribution is one of the most flexible and most commonly applied models for tree stem-diameters. The model parameters are comparatively easy to estimate and the distribution is easy to apply. How can it be estimated? Robinson and Hamann (2010, p. 164ff.) described in detail how the three parameters of the Weibull distribution can be estimated using R and the maximum-likelihood method.
The grey box below gives an adaptation of that method:

dweibull3 <- function(x, gamma, beta, alpha) {
  # density of the three-parameter Weibull distribution
  (gamma / beta) * ((x - alpha) / beta)^(gamma - 1) *
    exp(-((x - alpha) / beta)^gamma)
}

# log-likelihood of the data given parameter vector p = (gamma, beta, alpha)
loss.w3 <- function(p, data)
  sum(log(dweibull3(data, p[1], p[2], p[3])))

# maximise the log-likelihood (fnscale = -1 turns optim into a maximiser)
mle.w3.nm <- optim(c(gamma = 1, beta = 5, alpha = 10),
                   loss.w3, data = myData$dbh, hessian = TRUE,
                   control = list(fnscale = -1))
mle.w3.nm$par # model parameters

# Check whether the curve looks OK
xx <- seq(10, 60, 1)
hist(myData$dbh, freq = F, breaks = 50, xlim = c(10, 60))
lines(xx, dweibull3(xx, mle.w3.nm$par[1], mle.w3.nm$par[2], mle.w3.nm$par[3]),
      lty = 1, col = "red")

Since R only provides implementations for the two-parameter version, the code starts with a new function dweibull3() implementing the three-parameter version. This is followed by a maximum-likelihood loss function using the previously defined function dweibull3(). This loss function in turn now forms one of the arguments of the optim() function used for carrying out the regression. myData is a data frame that includes a vector of stem diameters that can be addressed by myData$dbh.

An alternative to nonlinear maximum-likelihood based regression are percentile estimations. The idea of this approach is to estimate the three parameters of the Weibull distribution from selected points of the distribution, e.g. the 63rd or 95th percentile. The theory of percentile estimators is well explained in Clutter et al. (1983, p. 127ff.). However, percentile estimations can be very valuable where maximum-likelihood regression does not produce any or unsuitable results. Some methods of percentile estimation have also been linked to straightforward sampling methods, so that the parameters of the Weibull function can almost be directly sampled in the field without much effort. Also, percentile methods can be used to identify starting values for nonlinear regression. Below one example method of percentile estimation is provided, followed by a short R sketch.

According to Wenk et al. (1990, p. 198f.) and Burkhart and Tomé (2012, p. 265), [latex]\hat{\beta} = d_{63\%} - d_{min}[/latex]. [latex]d_{63\%} = \alpha + \beta[/latex] and can be interpreted as the diameter where approximately 63% of all trees are smaller in diameter. [latex]d_{min} = \hat{\alpha}[/latex], i.e. the minimum diameter in a tree population or forest stand. Finally, Gerold (1988) suggested that [latex]\gamma[/latex] can be estimated from [latex]d_{\min}[/latex] and the diameter where approximately 95% of all trees are smaller in diameter:

[latex]\hat{\gamma} = \frac{ln(-ln(1-0.95))}{ln \left(\frac{d_{95\%}-d_{\min}}{d_{63\%}-d_{\min}}\right)} = \frac{1.09719}{ln \left(\frac{d_{95\%}-d_{\min}}{d_{63\%}-d_{\min}}\right)}[/latex]

[latex]d_{\min}[/latex], [latex]d_{63\%}[/latex] and [latex]d_{95\%}[/latex] can be estimated from any empirical diameter distribution but also by employing a simple sampling procedure. Based on a systematic sampling grid, approximately ten sample points need to be identified in every forest stand along with the first twelve tree neighbours nearest to each sample point. Römisch (1983) found that [latex]d_{63\%}[/latex] can be estimated from the diameter of the fifth largest tree out of twelve sample trees and that [latex]d_{95\%}[/latex] can be estimated from the largest diameter tree out of ten sample trees nearest to the sample point. [latex]d_{63\%}[/latex] and [latex]d_{95\%}[/latex] are then calculated as the arithmetic means of all ten samples. [latex]d_{\min}[/latex] is the smallest diameter of all sample trees (Gerold, 1988).
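A minimal R sketch of these percentile estimators, computing the percentiles directly from an empirical diameter vector rather than from the field sampling protocol, is given below; it assumes the same myData$dbh vector as the grey box above.

dbh   <- myData$dbh                      # stem diameters
d.min <- min(dbh)                        # alpha-hat: minimum diameter
d63   <- quantile(dbh, 0.63, names = FALSE)
d95   <- quantile(dbh, 0.95, names = FALSE)

alpha.hat <- d.min
beta.hat  <- d63 - d.min                 # beta-hat = d63% - dmin
gamma.hat <- log(-log(1 - 0.95)) / log((d95 - d.min) / (d63 - d.min))
c(gamma = gamma.hat, beta = beta.hat, alpha = alpha.hat)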
References

Burkhart, H. and Tomé, M., 2012. Modeling forest trees and stands. Springer, Dordrecht.
Clutter, J. L., Fortson, J. C., Pienaar, L. V., Brister, G. H. and Bailey, R. L., 1983. Timber management. A quantitative approach. John Wiley & Sons, New York.
Fréchet, M., 1927. Sur la loi de probabilité de l'écart maximum. Annales de la Société Polonaise de Mathematique 6: 93-116.
Gerold, D., 1988. Describing stem diameter structure and its development by using the Weibull distribution. Wissenschaftliche Zeitschrift der Technischen Universität Dresden 37: 221-224.
Rosin, P. and Rammler, E., 1933. The laws governing the fineness of powdered coal. Journal of the Institute of Fuel 7: 29-36.
Robinson, A. P. and Hamann, J. D., 2010. Forest analytics with R. An introduction. Use R! Springer, New York, 339p.
Römisch, K., 1983. A mathematical model for simulating growth and thinnings of even-aged pure stands. PhD thesis, Technical University Dresden. Dresden, 197p.
Wenk, G., Antanaitis, V. and Šmelko, Š., 1990. Forest growth and yield science. Deutscher Landwirtschaftsverlag, Berlin, 448p.
Weibull, W., 1951. A statistical distribution function of wide application. Journal of Applied Mechanics 18: 293-297.

Published 3 February 2020 by Arne Pommerening

Elias says:
Hi, I tried this code on my dataset, but had this error, which I have no idea how to fix:

> dweibull3 <- function(x, gamma, beta, alpha)
+   (gamma/beta) * ((x - alpha)/beta)^(gamma - 1) * (exp(-((x - alpha)/beta)^gamma))
> loss.w3 <- function(dia, p3)
...
> mle.w3.nm <- optim(c(gamma = 1, beta = 5, alpha = 10),
+                    loss.w3, data = p4$dia, hessian = TRUE,
+                    control = list(fnscale = -1))
Error in fn(par, ...) : unused argument (data = p4$dia)

Adrian Goodwin says:
Goodwin (2021) shows that the optimum (lower) bound (location parameter "a") of the Weibull is not "directly" associated with the smallest diameter (D1) but rather can be estimated as a function of D1, the mean diameter (D), the sample size N, and the skewness or shape parameter (c), such that a = D1 - (D - D1)/(N^(1/c) - 1), where the shape parameter "c" is solely a function of skewness. Consequently, the optimum location parameter "a" can be negative when skewness is negative, and the loss of fit suffered by constraining a >= 0 in such situations can be surprisingly large: the Kolmogorov-Smirnov score can be 50% larger.
I call this phenomenon "constraint shock" because it is a large effect that will come as a surprise to forest biometricians. The problem can be resolved with left-hand truncation, which is of course more complicated, but it is a lot more accurate than the standard approach. There hasn't been much work on trends in diameter skewness. Most Australian plantations are negatively skewed until around age 20 or until first thinning. However, before assuming that a Weibull PDF with a positive bound is a good model, it's a good idea to test for negative skewness.

Goodwin, A. N., 2021. A blind spot in the use of the Weibull function for modeling diameter distributions. Forest Science 67(2): 125-132.
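For readers who want to see the size of the effect, here is a one-off evaluation of Goodwin's bound from the comment above in R; all input values are invented purely for illustration.

D1      <- 8.2    # smallest diameter in the sample (cm); assumed
D       <- 21.5   # mean diameter (cm); assumed
N       <- 400    # sample size; assumed
c_shape <- 2.4    # shape parameter, a function of skewness; assumed
a <- D1 - (D - D1) / (N^(1 / c_shape) - 1)
a   # estimated lower bound of the Weibull distribution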
How eddy current brakes function

Take the following example, where a rectangular sheet of metal is entering a constant magnetic field at $v~\mathrm{m/s}$. Due to Faraday's law of induction plus Lenz's law, we can state that an eddy current will be generated to oppose the increase of magnetic flux through the sheet of metal, so as to produce a magnetic field coming out of the page (represented by the red dots). Intuitively, I believe that this induced magnetic field should act as a 'brake' on the metal plate, as Lenz's law implies that the induced current should always in some way act against the motion, but I don't see how to calculate this 'retarding' force that would act to reduce the plate's speed? (Joshua Lin)

I will use a very crude model to find the maximum restoring force, whereas the real force will be smaller. Imagine the disk to be made of rectangular loops, such that their horizontal parts are superimposed at the upper and lower edges of the plate, while their vertical parts are aligned consecutive to each other, each having "width" $dx$. Then the EMF along each rectangular loop will be $Blv$, where $l$ is the vertical length of the plate and $v$ the velocity at which the disk is moving into the magnetic field. The current is then $I = Blv/R = Bvt\sigma\,dx$, where $t$ is the thickness of the plate and $\sigma$ is the conductivity of the material. The restoring force due to each loop is thus $dF = lB^2 vt\sigma\,dx$, which implies that the maximum force is $F_{max} = \mathcal{V}vB^2 \sigma$, where $\mathcal{V}$ is the volume of the plate within the magnetic field. This crude model implies that the largest possible force occurs right before the disk is completely within the magnetic field region, and abruptly vanishes upon entering said region. (Ali Moh)

- How did you get the force from the current? (Joshua Lin)
- The force exerted on a current-carrying wire by a magnetic field is $d\vec{F} = I\,d\vec{l}\times\vec{B}$. (Ali Moh)
- But don't eddy currents go in a loop? So shouldn't the $IlB$ force on the right part of the loop cancel with the $IlB$ force on the left part of the loop? (Joshua Lin)
- But the left part of the loop is outside the region where we have a magnetic field, at the very left. (Ali Moh)
- @AliMoh I'm using this formula for my calculations; what would be the unit of conductivity? (Pupil)

I think that the most intuitive way of getting a grasp of the phenomenon is by means of energetic considerations. The induced current is due to the relative motion, and it dissipates energy at a rate of $RI^2$, where $R$ is the resistance of the material and $I$ is the induced current. Hence the moving object is losing energy at this rate, and when it comes to a stop there will be no more eddy currents. (Phoenix87)

- I suppose you could use this to calculate the rate of slowing, but the velocity does get smaller, and so there is an acceleration, and thus a force, and so I guess my real question is: what exactly is the force that retards the motion, and how to calculate it? (Joshua Lin)

Part of the issue you're forgetting is the dynamics of the situation. The surface fields are not perpendicular because of the movement and the physical delay in setting up the induced eddy current to produce a counter field. If the velocity is very low or static then the forces cancel, which is why eddy braking doesn't work well at low speeds.
With regards to calculating the braking force, this is very complicated. The force that you get from magnets moving near copper or aluminum structures depends on many factors, including:

- the strength of the magnetic field in the metal and the magnitude of the change in field strength, which is affected by the size and strength of the magnet;
- the position of the magnet(s) relative to the metal part, which relates to the field strength;
- the shape, thickness and geometry of the metal; and
- the velocity of the magnet/metal, as faster speeds yield more force (up to a point).

Most measurements are done empirically, but estimations of the force can be done with three-dimensional FEA (Finite Element Analysis).

I had a fundamental misunderstanding of eddy currents. I believed that eddy currents were formed simply in the part of the metal that was already submerged in the magnetic field, but in reality it is actually something like the figure from boredofstudies.org, where only half the eddy current is actually in the field. If this is the case, then you can just use $F = qv \times B$, or $F = IL \times B$, probably with some integration, and you can find the force. So the retarding force is just a variation on the Lorentz force.
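For a rough sense of scale, the crude upper bound $F_{max} = \sigma\mathcal{V}vB^2$ from the first answer can be evaluated for a small copper plate in R; every number below is invented for illustration only.

sigma <- 5.8e7                  # conductivity of copper (S/m)
vol   <- 0.10 * 0.05 * 0.002    # plate volume inside the field (m^3); assumed
v     <- 1.0                    # plate speed (m/s); assumed
B     <- 0.5                    # magnetic flux density (T); assumed
F_max <- sigma * vol * v * B^2
F_max                           # braking force in newtons (about 145 N here)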
Smart Learning Environments

Exploring knowledge graphs for the identification of concept prerequisites

Rubén Manrique (ORCID: orcid.org/0000-0001-8742-2094), Bernardo Pereira & Olga Mariño. Smart Learning Environments, volume 6, Article number: 21 (2019).

Learning basic concepts before complex ones is a natural form of learning. Automated systems and instructional designers evaluate and order concepts' complexity to successfully generate and recommend or adapt learning paths. This paper addresses the specific challenge of accurately and adequately identifying concept prerequisites using semantic web technologies, in the context of learning: given a target concept c, the goals are to (a) find candidate concepts that serve as possible prerequisites for c; and (b) evaluate the prerequisite relation between the target and candidate concepts via a supervised learning model. Our four-step approach consists of (i) an exploration of knowledge graphs in order to identify possible candidate concepts; (ii) the creation of a set of potential concepts; (iii) deployment of a supervised learning model to evaluate a proposed list of prerequisite relationships regarding the target set; and (iv) validation of our approach using a ground truth of 80 concepts from different domains (with a precision varying between 76% and 96%).

The automatic identification of prerequisite relationships between concepts has been identified as one of the cornerstones for modern, large-scale online educational applications (Gasparetti et al. 2018; Talukdar and Cohen 2012; Pan et al. 2017). Prerequisite relations exist as a natural dependency between concepts in cognitive processes when people learn, organize, apply, and generate knowledge (Laurence and Margolis 1999). Recently, there has been a growing interest in automatic approaches for identifying prerequisites (Liang et al. 2015; Pan et al. 2017; Wang et al. 2016) and their applications in the organization of learning resources (Manrique et al. 2018) and automatic reading list generation (Fabbri et al. 2018). Most of these approaches take advantage of natural language processing techniques and machine learning strategies to extract latent connections among concepts in large document corpora in order to find prerequisite dependencies. However, the extraction of latent connections is not trivial when dealing with unstructured data. This research uses open knowledge graphs as the main source to identify prerequisite dependencies. The term "Knowledge Graph" (KG) has recently been used to refer to graph-based knowledge representations such as the one promoted by the Semantic Web community with the RDF standard. According to Paulheim (2017), the term "Knowledge Graph" was coined by Google in 2012 and is used to refer to Semantic Web knowledge bases such as DBpedia (footnote 1) or YAGO (footnote 2). The following characteristics define a knowledge graph: it describes real-world entities and their interrelations, organized in a graph; it defines possible classes and relations of entities in a schema; it allows for potentially interrelating arbitrary entities with each other; and it covers various topical domains. Freebase, Wikidata (footnote 3), DBpedia and YAGO are identified as the main open KGs on the Semantic Web. KGs have several advantages for the prerequisite identification problem: they are supported by schemas (usually an ontology), relationships among concepts have meaning (i.e.,
they go beyond a simple hyperlink or co-occurrence analysis), the degree of connection between concepts is higher, and they offer query languages like SPARQL that support and simplify the extraction process. Given the increasing number of KGs being constantly published, updated and interconnected, our hypothesis is that prerequisite identification can be more effective, automatic and generalizable to multiple domains by using KGs. Thus, given a target concept c (footnote 4), the goal is to identify its prerequisites in the KG's concept space. We first traverse the KG background knowledge to build a set of candidate prerequisite concepts via their semantic relations. Then, the candidate set is reduced using a pruning method, since computing the prerequisite relations between c and all related concepts in the KG space is computationally expensive and impractical. Finally, the resulting candidate concepts are evaluated via a supervised learning model. This paper is the continuation of an initial study conducted by the authors in Manrique et al. (2019). Our previous work presented only a preliminary validation of the proposed approach, using a small set of 15 concepts from a single domain. Additionally, an important disadvantage of the supervised method used in our initial study is the high computational cost of the feature extraction process, which makes it unsuitable for critical response systems or for those where it is not possible to decouple the prerequisite identification step. The current work significantly expands the set of concepts used, as well as the number of domains, to validate the proposed approach. Additionally, a more efficient and simpler supervised method is presented, using fewer features but with performance similar to state-of-the-art algorithms. The main contributions of this paper are fourfold: (i) a search strategy for candidate concepts in the conceptual space of a KG; (ii) a pruning method to reduce the set of candidate concepts to only the potential "prerequisite" concepts; (iii) the assembly of a supervised learning model as a final step of the process to evaluate the prerequisite relation between the selected candidates and the target concept; and, finally, (iv) a dataset of 80 concepts in different domains for which the complete exploratory process is carried out to identify their prerequisites. The paper is organized as follows: the "Background and related work" section reviews literature on the identification of prerequisite concepts and important KG definitions. The "Prerequisite identification approach" section overviews our proposed approach. The search strategy and the pruning method are presented in detail in the "Candidate search module" section. The supervised learning model used for prerequisite evaluation is presented in the "Prerequisite evaluation module" section. Next, we present the evaluation setup and the discussion of the results. The paper ends with conclusions and future work.

Background and related work From a pedagogical point of view, a prerequisite is a dependency relation that states which concepts a student has to learn before moving to the next (Liang et al. 2017; Pan et al. 2017; Gordon et al. 2016; Adorni et al. 2019). Prerequisites are a way of making sure that learners enter into a new topic/concept with the prior knowledge required for its understanding. This not only helps the learner to understand more easily, but also helps him or her to feel more comfortable and confident with the subject matter (Gasparetti et al.
2018). Although the lowest level of granularity for specifying prerequisites is between concepts, prerequisite relationships can be naturally extended to define precedence relations between elements of greater granularity in instructional and non-instructional learning processes (e.g., learning resources, units, and courses). The problem of identifying prerequisite relationships in an automatic way has been addressed recently by researchers in the natural language area (Talukdar and Cohen 2012; Wang et al. 2016; Liang et al. 2015; Pan et al. 2017). Most of them follow a supervised learning approach using binary classification techniques over a set of features extracted using Natural Language Processing (NLP) algorithms. Talukdar and Cohen (2012), for example, propose a supervised method for predicting prerequisite dependencies between Wikipedia pages. They exploit Wikipedia's link graph and the graph built upon the edit histories of each page. Features such as random walk with restart (RWR) and PageRank scores are extracted from these graphs. Then, a Maximum Entropy (MaxEnt) classifier is trained to predict whether a prerequisite relationship holds between two Wikipedia articles. Another supervised learning-based method is presented in Pan et al. (2017), which implements different binary classification algorithms on the basis of a set of features extracted from MOOCs. The algorithms consider novel contextual and structural features extracted from the course structure and from video information, such as the survival time of a concept in a video, the order in which the concept appears in the course structure, and how frequently a concept is mentioned in videos after its first occurrence. Three small datasets of Coursera courses were built for the evaluation of the proposed method, and according to the authors, state-of-the-art results were achieved. Liang et al. (2018) address the problem of the lack of large-scale labels for training through active learning. Active learning is a special case of semi-supervised machine learning in which the learning algorithm actively chooses which examples to label. Results show that active learning can be used to reduce the amount of training data required for concept prerequisite learning. More importantly, the authors also propose a novel set of graph-based features for representing concept pairs. Another NLP-oriented approach to extract features is presented by Changuel et al. (2015). They state that the problem of determining an effective learning path from a corpus of documents depends on the accurate identification of concepts that are learning objectives and of concepts that are prerequisites. For this purpose, an automatic method for concept annotation driven by supervised machine learning algorithms is proposed. The machine learning algorithms are applied to a set of contextual and local characteristics. For each concept identified in the text (the automatic annotation tool Text2Onto (Cimiano and Völker 2005) was used), a set of n-grams (i.e., sequences of n words) from the contextual neighborhood is extracted: Part-Of-Speech (POS) tags and stemmed word windows. A binary contextual feature is then built to indicate the absence or presence of each n-gram in a given contextual window.
Additionally, a set of local features is also calculated: (i) formatting features that indicate whether the concept is written in bold, italics, color, or a bigger size than the rest of the document text; (ii) a capitalization feature that indicates whether the concept has a capitalized first letter; and (iii) syntactic features that capture syntactic information about the concept (i.e., subject, direct object, compound noun, etc.). The authors state that experiments conducted on a dataset composed of 150 HTML documents give a precision greater than 70% in the categorization of the concepts. Gasparetti et al. (2018) use a machine learning approach to identify a prerequisite relationship between two learning resources; this research assesses prerequisite relationships between learning resources rather than between concepts. Given a pair of learning resources, a feature vector is built by considering both lexical and Wikipedia-based attributes extracted from their content. Then, a binary classifier is trained to recognize a prerequisite relationship. Each learning resource is annotated with the Wikipedia articles found in the text using the TAGME tool (Ferragina and Scaiella 2012). Based on the annotations, they calculate (i) a taxonomic distance between two resources using the common categories in the category hierarchy of Wikipedia, and (ii) the number of hyperlinks between the annotations of the two resources. The evaluation of prerequisite relationships was carried out in 14 different domains, and the results show a precision above 70%. Different from previous learning-based methods, which require a training set and an extensive feature extraction process, a simple reference distance, RefD, is proposed by Liang et al. (2015) to measure a prerequisite relation among concepts. This measure can be easily adapted to other contexts and measures of similarity or reference; however, RefD is less accurate than supervised proposals (Pan et al. 2017). Finally, Wang et al. (2016) propose a strategy to obtain concept maps from textbooks in which concepts are linked with prerequisite relationships. A set of objective functions is proposed, taking advantage of the order and frequency in which concepts appear in the textbook structure. This work does not produce a relation between two concepts but rather a domain concept map. Our approach differs from previous ones as it exploits semantic relations in KGs to identify concept prerequisites. Figure 1 shows an example of the type of knowledge that can be exploited in a KG. Let C be the concept space of a KG and c∈C a target concept for which we want to know its prerequisites. Our hypothesis is that it is possible to exploit the knowledge present in the KG about a given concept and its relationships to identify prerequisite concepts. A concept in a KG is basically a node connected to other concepts through predicates (edges) that describe semantic relationships. According to Damljanovic et al. (2012), it is possible to exploit two types of knowledge depending on the predicates that are considered. The first type is hierarchical knowledge, which is acquired through predicates that express membership and child-parent relationships; this type of knowledge describes the membership of the concept in a set of classes or categories. The second type of knowledge is established by transversal predicates that express a direct non-hierarchical semantic relationship between two concepts. Figure 1 shows both types of predicates for the "For Loop" concept.
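To make this concrete, the following minimal Python sketch (using the SPARQLWrapper library) retrieves both kinds of relations for the "For Loop" concept from the public DBpedia endpoint. The use of dct:subject for hierarchical membership follows the paper; the choice of dbo:wikiPageWikiLink as the transversal predicate is our illustrative assumption, since the paper does not fix a specific list of non-hierarchical predicates.

from SPARQLWrapper import SPARQLWrapper, JSON

# Fetch hierarchical (dct:subject) and transversal (here: dbo:wikiPageWikiLink)
# relations of dbr:For_loop from DBpedia. The transversal predicate is an
# assumption made for illustration purposes only.
sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
PREFIX dct: <http://purl.org/dc/terms/>
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?p ?o WHERE {
  { <http://dbpedia.org/resource/For_loop> dct:subject ?o .
    BIND(dct:subject AS ?p) }
  UNION
  { <http://dbpedia.org/resource/For_loop> dbo:wikiPageWikiLink ?o .
    BIND(dbo:wikiPageWikiLink AS ?p) }
} LIMIT 50
""")
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["p"]["value"], "->", row["o"]["value"])

The first group of results corresponds to the gray boxes (categories) in Figure 1, the second to the blue boxes (linked concepts).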
Figure 1: Fragment of hierarchical (gray boxes) and non-hierarchical (blue boxes) knowledge for the "For Loop" concept, extracted from the DBpedia KG.

Prerequisite identification approach The proposed process for the identification of prerequisites is presented in Fig. 2. The process is composed of two main modules: the candidate search module and the prerequisite evaluation module. The candidate search module is responsible for retrieving related concepts that may be potential prerequisites and are worth evaluating. Since in practice it is not computationally feasible to evaluate the prerequisite relationship between the target concept and all the concepts that belong to C (footnote 5), it is necessary to perform a search for potential prerequisite candidates. This candidate set is built in two steps: first, by exploiting the hierarchical and non-hierarchical knowledge that can be extracted from the target concept via linkage analysis, and, second, by using a pruning method to reduce the set of generated candidates. As a result of the search module, a final candidate set of concepts M is obtained.

Figure 2: Prerequisite concept identification process; target concept c in yellow.

The second module evaluates the prerequisite relation between all possible pairs formed by the concepts of the final candidate set and the target concept (i.e., {(c,cp)|∀cp∈M}). The evaluation is carried out by means of a supervised model and a set of features extracted from the KG and a scholarly paper corpus. The result of this two-module process is a list of prerequisites for the target concept c. The following sections explain in detail the candidate search module, the pruning method and the supervised model.

Candidate search module The candidate search module builds an initial list of candidates composed of the concepts sharing membership categories with the target concept and of concepts linked with the target concept through non-hierarchical properties: (i) Common memberships (CM): all concepts that share a common category with the target concept are retrieved. Figure 1 shows that all concepts belonging to "Category:Iteration in programming" are included as prerequisite candidates for the "For Loop" target concept. (ii) Linked Concepts (LC): concepts linked to the target through non-hierarchical paths of up to lmax hops are added to the candidate set. lmax is a configurable parameter that defines the maximum path length between the target and the farthest candidate concept to be considered. Figure 1 shows that the concepts "Control flow", "Iteration", "Do while loop" and "Control variable" are retrieved for lmax=1. Considering lmax=2, the concept "Variable (computer science)" is also included. It is important to note that we do not store all paths but only the discovered neighborhood, as a list of concepts \(LC_{c}=\left\{n_{c_{1}}, n_{c_{2}},\dots,n_{c_{m}}\right\}\).

Concept pruning via SemRefD Our concept pruning strategy is based on a simple measure that analyzes references between concepts. This measure, named RefD, was proposed by Liang et al. (2015) and was originally designed to evaluate the degree to which a concept ca requires a concept cb as a prerequisite. The main notion behind RefD is that if most related concepts of ca refer to cb but few related concepts of cb refer to ca, then cb is more likely to be a prerequisite of ca.
RefD is defined as: $$ RefD(c_{a},c_{b}) = \frac{\sum_{j=1}^{k} i(c_{j},c_{b})\, s(c_{j},c_{a})}{\sum_{j=1}^{k} s(c_{j},c_{a})} - \frac{\sum_{j=1}^{k} i(c_{j},c_{a})\, s(c_{j},c_{b})}{\sum_{j=1}^{k} s(c_{j},c_{b})} $$ where k is the size of the concept universe, i(cj,cb) is an indicator function signaling the existence of a reference between cj and cb, and s(cj,ca) weights the relationship between cj and ca; note that each term is normalized by its own weighting sum. The values of RefD(ca,cb) vary between -1 and 1: the closer to 1, the more likely it is that cb is a prerequisite of ca. Although it has been shown that supervised methods outperform RefD in the task of evaluating prerequisite relations (Liang et al. 2018), its simplicity makes it ideal as a pruning strategy. In our approach, RefD was slightly modified to be applied to KGs. The modified version of RefD, called SemRefD, takes into account semantic paths (footnote 6) in KGs to indicate the likelihood of one concept being a prerequisite of another. The weighting function s(cj,ca) takes into account the common neighbor concepts in the KG and the distance between categories in the KG hierarchy, whereas the indicator function i(cj,ca) takes into account the existence of a property path between the target and related concepts. The weighting and indicator functions are described as follows.

Hierarchical weighting (HW) The link weight between two concepts is measured in terms of the distance of their categories in the hierarchy structure of the KG. If A is the set of categories of the concept ci and B is the set of categories of the concept cj, their HW is computed as: $$\begin{array}{*{20}l} HW\left(c_{i},c_{j}\right)={\max}_{cat_{i}\in A,\, cat_{j}\in B}\, taxsim(cat_{i},cat_{j}) \\ taxsim(cat_{i},cat_{j})= \frac{\delta(root,cat_{lca})}{\delta\left(cat_{i},cat_{lca}\right)+\delta\left(cat_{j},cat_{lca}\right)+\delta(root,cat_{lca})} \end{array} $$ where δ(a,b) is the number of edges on the shortest path between a and b, and catlca is the Lowest Common Ancestor (LCA) of cati and catj. Given the hierarchy of categories T, the LCA of two categories cati and catj is the category of greatest depth in T that is a common ancestor of both cati and catj. taxsim expresses the similarity between two categories in terms of the distance to their LCA compared to the LCA depth.

Non-hierarchical weighting (NHW) The common neighbors and their distances are used as a weighting strategy. We first extract the linked concepts LC for ci and cj; then, NHW is calculated as: $$\begin{array}{*{20}l} NHW\left(c_{i},c_{j}\right)=\sum_{n_{c}\, \in\, (LC_{c_{i}} \cup LC_{c_{j}})}{\beta}^{l_{c_{i},n_{c}}}\,{\beta}^{l_{c_{j},n_{c}}} \end{array} $$ where β^l (with 0<β≤1) penalizes longer paths: the larger the path length, the larger the penalty. \(l_{c_{i},n_{c}}\) is the path length between the concept ci and the linked concept nc. A large NHW value is assigned if the concepts share many common neighboring concepts and these neighbors are as close as possible to them.

Joint weighting (JW) The joint similarity measure is expressed as the sum of HW and NHW: $$ JW\left(c_{i},c_{j}\right)=HW\left(c_{i},c_{j}\right)+NHW(c_{i},c_{j}) $$ As for the indicator function, the existence of a path between concepts is used: if there is a path of length l≤lmax that connects the concepts ci and cj, then necessarily each concept must be in the LC of the other concept with maximum path length l.
Therefore, the indicator function is calculated as: $$ i(c_{i},c_{j}) = \begin{cases} 0 & \quad\text{if}\ c_{j} \notin LC_{c_{i}} \\ 1 & \quad\text{if}\ c_{j} \in LC_{c_{i}} \end{cases} $$ The three weighting strategies result in three possible variations of SemRefD: SemRefDHW, SemRefDNHW, and SemRefDJW. These variations are used in our experiments, and the results are presented in what follows.

Prerequisite evaluation module This module focuses on determining prerequisite relationships between concepts using machine learning techniques, modeling the problem as binary classification in which the output labels are "Prerequisite" and "No prerequisite". Different binary classifiers are implemented based on a set of features extracted from the KG and a document corpus. For a given concept pair (ca,cb), two groups of features are calculated: corpus-based features and graph-based features.

Corpus features This set of features is designed to be calculated over a document corpus. The main motivation behind using corpus-based features is to analyze the co-occurrence of the concepts in documents. Given the concept pair (ca,cb), if cb also occurs in most of the documents where ca appears, but not vice versa, it is more likely that cb is a prerequisite of ca. We capture this principle with the following features: Pcorpus(ci) is the probability of finding a document that contains the concept ci in the corpus: \(P_{corpus}(c_{i})= \frac{\text{documents that contain } c_{i}}{\text{total documents in corpus}}\). Pcorpus(ci|cj) is an estimation of the conditional probability: we look for occurrences of ci in the documents where cj occurs. Pcorpus(ci,cj) is an estimation of the joint probability, Pcorpus(ci,cj)=Pcorpus(ci|cj)Pcorpus(cj). CCR(ci,cj) is the proportion of documents in which both concepts occur in the corpus: \(\frac{|\text{documents that contain } c_{i}\, \cap\, \text{documents that contain } c_{j}|}{\text{total documents considered}}\). It is important to mention that, given that a corpus usually contains millions of documents, we do not analyze the complete set of documents to calculate Pcorpus(ci|cj), Pcorpus(ci,cj) and CCR(ci,cj); only top documents are considered. We select top documents using the search API of the considered corpus. More details about the selected corpus are given in the next section. The final set of corpus-based features is composed of Pcorpus(ca), Pcorpus(cb), Pcorpus(ca|cb), Pcorpus(cb|ca), Pcorpus(ca,cb) and CCR(ca,cb).

Graph features The following characteristics are extracted from the KG; we take advantage of the interconnected RDF structure of the KG to extract a set of graph-based features. The Semantic Connectivity Score (SCS) proposed in Nunes et al. (2015): SCS measures latent connections between concept pairs in large graphs like KGs. SCS(ci,cj) is computed as: $$ SCS(c_{i},c_{j}) = 1 - \frac{1}{1+\left(\sum_{l=1}^{l_{max}} \beta^{l} |paths_{c_{i},c_{j}}^{< l>}| \right)} $$ \(|paths_{c_{i},c_{j}}^{< l>}|\) is the number of paths of length l between the target concept ci and the concept cj in the KG, lmax, as stated above, is the maximum path length considered, and β is a damping factor that penalizes longer paths. Pgraph(ci). The probability that a given concept appears in a statement in the RDF graph. Pgraph(ci,cj). The joint probability, calculated as the number of statements where both ci and cj appear divided by the total number of statements in the graph. Pgraph(ci|cj).
The conditional probability, calculated as \(P(c_{i}|c_{j})=\frac{P(c_{i},c_{j})}{P(c_{j})}\). PR(ci,cj). The difference between the PageRank scores of the concepts in the graph; the PageRank values were borrowed from Thalhammer and Rettinger (2016). RIO(ci). The ratio between the number of incoming links (In(ci)) and the number of outgoing links (Out(ci)) for ci. CN(ci,cj). The number of concepts that share a link with ci and cj (i.e., common neighbors). LP(ci,cj). Link proportion: \(LP=\frac{|In(c_{i}) \cap In(c_{j})|}{|In(c_{i})|}\). The final set of graph-based features is composed of SCS(ca,cb), Pgraph(ca), Pgraph(cb), Pgraph(ca,cb), Pgraph(ca|cb), Pgraph(cb|ca), PR(cb,ca), In(ca), In(cb), Out(ca), Out(cb), CN(ca,cb), DC(ca,cb), LP(ca,cb), and LP(cb,ca). The training of the supervised model is presented in the "Implementation" section.

Implementation The implementation of SemRefD and the extraction process of the set of features for the supervised learning method were performed using the following resources: DBpedia: DBpedia is the most appropriate open KG to construct the semantic representation; the calculation of the prerequisite relationship is therefore performed over DBpedia, since it is not possible to uncouple both processes. The hierarchical structure of a concept is drawn from categories in the DBpedia category system. Categories are extracted through the dct:subject predicate, but only categories in the hierarchical structure processed by Kapanipathi et al. (2014) were used, to avoid disconnected categories and cycles. For all our experiments, we set β=0.5 and lmax∈{1,2}, following previous experimental results presented in Manrique and Mariño (2017), which also follow this approach for penalizing property paths in DBpedia. Corpus of documents: the corpus-based features previously explained in the "Corpus features" section are calculated over the Core corpus (footnote 7). This corpus contains more than 131 million documents to date. Using the corpus' search APIs, relevant documents for each concept are retrieved and analyzed. For the calculation of Pcorpus(ci), it is assumed that the total number of documents returned by the Core search API using ci as query corresponds to the total number of documents that contain ci. Additionally, only the top 500 documents returned by the Core search engine are considered for the calculation of the conditional probability Pcorpus(ci|cj) and the common results CCR(ci,cj). We experimentally determined that considering the top 500 documents is computationally adequate; documents in lower ranking positions only contain sporadic mentions of the considered concepts.

Supervised model training process Two datasets were used for training and evaluation of the supervised learning model: the RefD2015 dataset (footnote 8) (Liang et al. 2015) and the university course dependencies (UCD) dataset (footnote 9) (Liang et al. 2017). For both datasets, a manual reconciliation between the labels of the concepts and their respective URIs in DBpedia 2016-10 was performed. For some concepts this reconciliation was not possible; as a result, some concept pairs were discarded. Details of the datasets are shown in Table 1. Table 1: Supervised model training and evaluation datasets. The RefD2015 and UCD datasets contain concept prerequisite pairs extracted from university course dependencies. Their authors developed a Web scraper to obtain the course dependencies from different university online course catalogs. Then, courses were linked to Wikipedia pages using automatic tools and manual pruning.
While RefD2015 contains concept pairs in the Computer Science (CS) and Math (MATH) domains, UCD only contains concept pairs in Computer Science (CS). To accurately label a given concept pair in the UCD dataset, each pair received labels from three different annotators, and a majority vote decided the final label assigned. In RefD2015, on the other hand, two experts verified all concept pairs. We employed XGBoost, one of the most powerful supervised learning algorithms to date (Chen and Guestrin 2016). The hyperparameters of XGBoost are selected using a grid search over a parameter grid, applying a 10-fold cross-validation technique and using 80% of the dataset (selected using stratified sampling). The classes in the datasets were balanced by oversampling the minority class via the SMOTE technique (Chawla et al. 2002) during cross-validation. The values used were: (i) max depth: 40, (ii) min child weight: 1, and (iii) learning rate: 0.14. We tested the resulting XGBoost model by employing the remaining 20% of the dataset. The results obtained are reported in Table 2. For all experiments, we report the average Accuracy (A), Precision (P), Recall (R) and F1-score (F1). Table 2: Supervised model evaluation. Although the results are slightly worse than those reported in Manrique et al. (2018), a smaller number of features was employed here, still obtaining a precision greater than 90%. This supervised model is used as the core component in the prerequisite evaluation module (see Fig. 2). Using the trained XGBoost classifier, we perform a feature importance analysis via the "gain" method, which calculates each feature's importance as the average gain over the splits (across all trees) that include that feature (footnote 10). The top-ranked features are Pcorpus(cb|ca), LP(ca,cb), CN(ca,cb), and SCS(ca,cb). The most important feature according to this analysis is Pcorpus(cb|ca), which indicates that the co-occurrence of concepts in a corpus is a strong indicator of the prerequisite relationship. On the other hand, LP(ca,cb), CN(ca,cb) and SCS(ca,cb) measure the interconnection of the concepts in the graph in different ways: LP relates the concepts that have an outgoing link to ca and cb, CN relates the concepts that share a link with ca and cb, and, finally, SCS weighs the relationship between the concepts ca and cb in the graph based on the paths that join them and their lengths. In general, based on the results obtained here and in related studies, the analysis of the interconnections of the concepts in the graph is a good indicator of a prerequisite relationship.

Evaluation and discussion To evaluate the complete prerequisite identification approach (Fig. 2) we selected 80 target concepts in the domains of Computer Science (CS), Math (MATH), Physics (PHY) and Biology (BIO) (25 CS concepts, 25 MATH concepts, 15 PHY concepts and 15 BIO concepts). For each target concept the entire process was carried out and a list of prerequisites was obtained. Table 3 shows, as an example, the size of the initial set of candidates before pruning, for each search strategy, for two concepts. Notice the steep increase in the number of concepts when lmax=2. Considering that the diameter of the DBpedia graph is 6.27 (footnote 11), values of lmax greater than 2 could lead to an explosion in the number of concepts retrieved. It is also important to note the lack of homogeneity in the sizes of the candidate set when CM is used as a search strategy.
While the "Machine Learning" concept is related to three different categories to which 456 different concepts belong, "Deep Learning" is related to only one category to which only 10 concepts belong. This can be attributed, to the fact that, there is no guarantee that there are a homogeneous number of categories associated with the concepts, and that relatively recent concepts are not so well described in the DBpedia version used (2016-10). Table 3 Resulting initial candidate size per search strategy for "Machine learning" and "Deep Learning" target concepts On the initial candidate set the pruning method using the SemRefD function is applied. Table 4 shows the final candidate set size (i.e., |M|) after applying the different proposed versions of SemRefD for the "Machine Learning" concept. For a concept in the initial candidate set to be included in M the value of SemRefD must exceed a threshold value θ. We select three different values of θ={0.1,0.2,0.3}, as a result, for each target concept, 27 different M sets are built corresponding to the different combinations of the searching and pruning functions. Theta values greater than 0.3 are not practical since, in some cases, the entire initial set of candidates is empty. Considering that the original paper used a maximum value of 0.05 for theta (Liang et al. 2015), we are being much more strict. As shown in Table 4, the number of concepts drops drastically by applying SemRefD in particular for θ=0.3. Table 4 Final candidate set size per search / pruning strategy for "Machine Learning" target concept It is also clear that despite the fact that the initial set of candidates built using LC with lmax=2 is at least 50 times the size using lmax=1 (see Table 3), after pruning, their size is only at most 2.2 times the size of LC with lmax=1. Considering that the concepts that are in the set constructed with LC with lmax=1 must also be found in LC with lmax=2, we can affirm that a large part of the concepts in M can be found through a direct relationship with the concept (i.e. a path of length one). In the final step all the concepts in M are evaluated via a supervised model to assess the prerequisites relations with the target concept. Considering the complete set of target concepts and all possible M sets, a total of 2812 different concepts pairs were assessed by the supervised model. Those candidate concepts identified as prerequisites constitute the output of the complete process. It was necessary to build a ground truth to evaluate how precise is our proposed process. To accurately label a given concept pair, we rely on human expert knowledge. We recruited 5 adjunct professors, 1 PhD student and 4 master students with backgrounds in the different domains involved. Three master students and one adjunct professor had CS and MATH backgrounds, two adjunct professors and the PhD student had PHY background whereas two professors and one master student had BIO background. For each candidate pair, at least three annotators decided whether the candidate concept is a prerequisite of the target concept or not. A majority vote rule was used to assign the final label. Due to the low and fixed number of participants per domain, the Fleiss' Kappa inter-rater reliability measure of agreement were used. We obtained a value of κ=0.42 for CS, κ=0.56 for MATH, κ=0.31 for PHY and κ=0.22 for BIO. According to (Landis and Koch 1977), this indicates a moderate agreement for CS and MATH, and a fair agreement for PHY and BIO. 
Tables 5 and 6 present the global results of the evaluation in terms of precision (P), true positives (TP) and false positives (FP), calculated using the previously constructed ground truth. While Table 5 presents the results obtained for CS and MATH, Table 6 presents the results for PHY and BIO. False positives should be interpreted as those concepts in M that were incorrectly identified as prerequisites. The true positives are those concepts that were correctly identified as prerequisites, and precision is defined as \( P = \frac {TP}{TP + FP}\). In both tables the highest values are shown in bold, by pruning function and search method. Table 5: Results of the complete process for CS and MATH target concepts, using as evaluation metrics P (precision), TP (true positives), and FP (false positives). Table 6: Results of the complete process for PHY and BIO target concepts, using as evaluation metrics P (precision), TP (true positives), and FP (false positives). As can be observed, the use of SemRefDHW as the pruning function leads in most cases to a high number of FP and, consequently, to the lowest precision values. In contrast, SemRefDNHW led to the lowest FP values and the highest TP and, consequently, the highest precision values. This can be attributed to the fact that common neighbors are a better indicator of a strong link between concepts than the sharing of common (or nearby) categories. The taxonomy of KG categories may not be descriptive enough to identify a strong link between the concepts. For example, the concepts "Artificial Intelligence" and "Neuroscience", despite being highly related and sharing many common concepts in DBpedia, do not share any common category, and the distance between their categories in the taxonomy is not small enough to indicate a strong relationship. The above applies to all analyzed domains. Regarding θ, it is clear that increasing it raises the precision at the expense of a reduced number of identified prerequisites. An increase of theta from 0.1 to 0.3 implies an average reduction of 87% in TP (across all domains). An appropriate trade-off between the precision and the number of correct prerequisite concepts identified is achieved when the theta value is 0.2. We further explored the performance of the search strategies. Initially we expected that the final set M obtained by using LC(lmax=2) would be much larger compared to the rest of the strategies, because the initial candidate set built with this strategy was on average 27 times larger than the one built with LC(lmax=1) and 35 times larger than the one built with CM. However, the final set M was on average only about 1.5 times larger. This means that between 27 and 35 times more SemRefD computations were made to increase the concepts in the final set by a factor of less than two. This is clearly not practical, and new ways of exploring linked concepts with lmax=2 should be proposed. One possible solution is to reduce the number of predicates that are considered in the traversing step. Regarding CM, we discovered that it is not an appropriate strategy when: (a) concept categories are far from the concept's main domain, and (b) the categories have a very small number of members. Consider the target concept "Dijkstra's algorithm" and two of its associated categories: "Dutch inventions" and "1959 in computer science". The category "Dutch inventions" is an example of case (a), since most of the member concepts are associated with unrelated domains.
For its part, the category "1959 in computer science" contains only one member, which is precisely the concept "Dijkstra's algorithm"; therefore, this category does not bring any new concept to the candidate set (case (b)). Given the above evidence, the best search strategy is LC(lmax=1), since the concepts linked directly by a single predicate represent in most cases a significant semantic relationship (Manrique and Mariño 2017). Additionally, the number of SemRefD calculations remains small while the number of TP still remains significant. Table 7 shows the comparison of the results across the different domains using LC(lmax=1) as the search strategy, SemRefDNHW and θ=0.2. We observe that the set of candidate concepts |M| is smaller for the BIO and PHY domains; on average, a concept in these domains has a candidate set half the size of that of a concept in MATH or CS. This eventually results in a much smaller set of prerequisites. The above is a clear indication that the BIO/PHY concepts have a smaller number of interconnections between them in the KG or, equivalently, that they are not as well described. As our whole proposal is based on analyzing the connections between the concepts in the KG (i.e., SemRefD and the features of the supervised learning model), a lower performance was expected in comparison with the other domains considered. Table 7: Results comparison per domain using LC(lmax=1), SemRefDNHW and θ=0.2. It should also be noted that our supervised model was trained using concepts in the MATH and CS domains, so its predictions may be less accurate in other domains. Building broader training sets for the task of identifying prerequisites remains a subject of recent research (Liang et al. 2018). Regarding the prerequisite identification process, our approach presents the following limitations. The first limitation is that both the target concepts and the prerequisite concepts must exist in the KG concept space, in our case DBpedia. Although the knowledge represented in DBpedia is comprehensive, some concepts may be poorly modeled or even not yet represented, affecting the performance of both the supervised method that detects the prerequisite relationship and the pruning strategies. However, KGs are evolving rapidly: there is a constant increase in the number of concepts and relationships included in existing knowledge graphs such as DBpedia, Wikidata and YAGO. With the development of more complete KGs, a better performance of the strategies proposed in this article is to be expected. A second limitation is the lack of prerequisite-annotated datasets in domains different from the ones used in this research to train the supervised model. Datasets that include a larger set of knowledge domains will allow building more generalizable models. Finally, we want to enrich the whole process by taking into consideration competencies and learner profiles. Our approach is intended as one-size-fits-all since, for now, we are not considering learner profiles to determine core aspects such as learning styles, context, and motivations.

Conclusion and future work This paper presents a process for the identification of concept prerequisites that mainly exploits the semantic relations that can be found in a KG. With different search strategies and pruning functions, we are able to reduce the KG concept space to a candidate set of potential prerequisite concepts. The results show that LC(lmax=1) and SemRefDNHW produced the best candidate sets to be used as input for the supervised model in the evaluation step.
The final precision obtained using the above functions varies between 76% and 96% and depends mostly on the configurable parameter θ of the pruning method. Our future work will be oriented toward strategies that consider the type of predicate in the candidate search, as well as the use of KGs other than DBpedia and combinations of them. Instead of managing KGs independently, we are interested in strategies that allow us to combine multiple KGs. This is a complex task, as it is necessary to develop methods to identify vocabulary equivalences and perform a linkage process. Furthermore, as discussed above, we also want to build datasets that cover other knowledge domains for the construction of prerequisite identification supervised models. We are interested in evaluating our strategies in domains such as History, Geography, Politics and Economics. This, of course, is not a trivial task insofar as, in most cases, expert knowledge is required to build the ground truth. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Footnotes: 1. http://dbpedia.org/ 2. http://www.yago-knowledge.org/ 3. https://www.wikidata.org/ 4. The target concept can be seen as a learning goal. 5. The number of concepts in a KG is usually in the order of millions. 6. https://www.w3.org/TR/sparql11-property-paths/ 7. https://core.ac.uk/ 8. https://github.com/harrylclc/RefD-dataset 9. https://github.com/harrylclc/eaai17-cpr-recover 10. https://xgboost.readthedocs.io/en/latest/python/python_api.html 11. See http://konect.uni-koblenz.de/networks/dbpedia-all

References: Adorni, G., Alzetta, C., Koceva, F., Passalacqua, S., Torre, I. (2019). Artificial Intelligence in Education. In: Isotani, S., Millán, E., Ogan, A., Hastings, P., McLaren, B., Luckin, R. (Eds.) Springer, Cham, (pp. 1–13). Changuel, S., Labroche, N., Bouchon-Meunier, B. (2015). Resources sequencing using automatic prerequisite–outcome annotation. ACM Trans. Intell. Syst. Technol., 6(1), 6:1–6:30. https://doi.org/10.1145/2505349. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P. (2002). SMOTE: Synthetic minority over-sampling technique. J. Artif. Int. Res., 16(1), 321–357. Chen, T., & Guestrin, C. (2016). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16. https://doi.org/10.1145/2939672.2939785. ACM, New York, (pp. 785–794). Cimiano, P., & Völker, J. (2005). In Montoyo, A., Muñoz, R., Métais, E. (Eds.), Text2Onto, (pp. 227–238). Berlin, Heidelberg: Springer. Damljanovic, D., Stankovic, M., Laublet, P. (2012). Linked data-based concept recommendation: Comparison of different methods in open innovation scenario. In: Simperl, E., Cimiano, P., Polleres, A., Corcho, O., Presutti, V. (Eds.) The Semantic Web: Research and Applications. Springer, Berlin, Heidelberg, (pp. 24–38). Fabbri, A., Li, I., Trairatvorakul, P., He, Y., Ting, W., Tung, R., Westerfield, C., Radev, D. (2018). TutorialBank: A manually-collected corpus for prerequisite chains, survey extraction and resource recommendation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). https://www.aclweb.org/anthology/P18-1057. https://doi.org/10.18653/v1/P18-1057. Association for Computational Linguistics, Melbourne, (pp. 611–620). Ferragina, P., & Scaiella, U. (2012). Fast and accurate annotation of short texts with Wikipedia pages. IEEE Software, 29(1), 70–75.
https://doi.org/10.1109/MS.2011.122. Gasparetti, F., Medio, C.D., Limongelli, C., Sciarrone, F., Temperini, M. (2018). Prerequisites between learning objects: Automatic extraction based on a machine learning approach. Telematics Inform., 35(5), 595–610. http://www.sciencedirect.com/science/article/pii/S0736585316304890. https://doi.org/10.1016/j.tele.2017.05.007. Gordon, J., Zhu, L., Galstyan, A., Natarajan, P., Burns, G. (2016). Modeling Concept Dependencies in a Scientific Corpus. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). https://www.aclweb.org/anthology/P16-1082. https://doi.org/10.18653/v1/P16-1082. Association for Computational Linguistics, Berlin, (pp. 866–875). Kapanipathi, P., Jain, P., Venkataramani, C., Sheth, A. (2014). Hierarchical Interest Graph from Tweets. In Proceedings of the 23rd International Conference on World Wide Web. https://doi.org/10.1145/2567948.2577353. ACM, New York, (pp. 311–312). Landis, J., & Koch, G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174. https://doi.org/10.2307/2529310. Laurence, S., & Margolis, E. (1999). Concepts and cognitive science. In: Margolis, E., & Laurence, S. (Eds.) Concepts: Core Readings. MIT Press, USA, (pp. 3–81). Liang, C., Wu, Z., Huang, W., Giles, C.L. (2015). Measuring prerequisite relations among concepts. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Lisbon, Portugal. Liang, C., Ye, J., Wu, Z., Pursel, B., Giles, C.L. (2017). Recovering concept prerequisite relations from university course dependencies. In The 7th Symposium on Educational Advances in Artificial Intelligence, (pp. 4786–4791). Liang, C., Ye, J., Wang, S., Pursel, B., Giles, C.L. (2018). Investigating active learning for concept prerequisite learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA, February 2-7, 2018. Manrique, R., Sosa, J., Marino, O., Nunes, B.P., Cardozo, N. (2018). Investigating learning resources precedence relations via concept prerequisite learning. In 2018 IEEE/WIC/ACM International Conference on Web Intelligence (WI). https://doi.org/10.1109/WI.2018.00-89, (pp. 198–205). Manrique, R., & Mariño, O. (2017). In Różewski, P., & Lange, C. (Eds.), Diversified semantic query reformulation, (pp. 23–37). Cham: Springer. Manrique, R., Pereira, B., Marino, O., Cardozo, N., Wolfgand, S. (2019). Towards the Identification of Concept Prerequisites Via Knowledge Graphs. In 2019 IEEE 19th International Conference on Advanced Learning Technologies (ICALT). https://doi.org/10.1109/ICALT.2019.00101, (pp. 332–336). Nunes, B.P., Fetahu, B., Kawase, R., Dietze, S., Casanova, M.A., Maynard, D. (2015). In Tweedale, J.W., Jain, L.C., Watada, J., Howlett, R.J. (Eds.), Interlinking Documents Based on Semantic Graphs with an Application, (pp. 139–155). Cham: Springer. Pan, L., Li, C., Li, J., Tang, J. (2017). Prerequisite relation learning for concepts in MOOCs. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. https://doi.org/10.18653/v1/P17-1133, (pp. 1447–1456). Paulheim, H. (2017). Knowledge graph refinement: A survey of approaches and evaluation methods. Semantic Web, 8(3), 489–508. https://doi.org/10.3233/SW-160218. Talukdar, P., & Cohen, W. (2012).
Crowdsourced comprehension: Predicting prerequisite structure in Wikipedia. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP. https://www.aclweb.org/anthology/W12-2037. Association for Computational Linguistics, Montréal, (pp. 307–315). Thalhammer, A., & Rettinger, A. (2016). In Sack, H., Rizzo, G., Steinmetz, N., Mladenić, D., Auer, S., Lange, C. (Eds.), PageRank on Wikipedia: Towards general importance scores for entities, (pp. 227–240). Cham: Springer. Wang, S., Ororbia, A., Wu, Z., Williams, K., Liang, C., Pursel, B., Giles, C.L. (2016). Using prerequisites to extract concept maps from textbooks. In Proceedings of CIKM '16. https://doi.org/10.1145/2983323.2983725. ACM, New York, (pp. 317–326).

Acknowledgements Our sincere thanks to the professors and students of Sergio Arboleda University (Colombia) who participated as judges in the dataset construction process.

Author information Rubén Manrique: Systems and Computing Engineering Department, School of Engineering, Universidad de los Andes, Cra 1 No 18A - 12 (111711), Bogotá, Colombia. Bernardo Pereira: College of Engineering and Computer Science, Australian National University, 120 McCoy Circuit, Canberra, 2601, Australia. Olga Mariño: Systems and Computing Engineering Department, School of Engineering, Universidad de los Andes, Cra 1 No 18A - 12 (111711), Bogotá, Colombia.

Contributions RM drafted the initial manuscript and performed the experiment analysis. RM and OM led the dataset construction process. All authors reviewed the literature. BPN reviewed and revised the manuscript into its final shape. All authors read and approved the final manuscript. Correspondence to Rubén Manrique.

Cite as: Manrique, R., Pereira, B. & Mariño, O. Exploring knowledge graphs for the identification of concept prerequisites. Smart Learn. Environ. 6, 21 (2019). doi:10.1186/s40561-019-0104-3. Keywords: Concept prerequisite identification; Knowledge graphs.
The Annals of Probability, Volume 38, Number 2 (2010), 532–569.

Taylor expansions of solutions of stochastic partial differential equations with additive noise. Arnulf Jentzen and Peter Kloeden.

Abstract: The solution of a parabolic stochastic partial differential equation (SPDE) driven by an infinite-dimensional Brownian motion is in general no longer a semimartingale and in general does not satisfy an Itô formula like the solution of a finite-dimensional stochastic ordinary differential equation (SODE). In particular, it is not possible to derive stochastic Taylor expansions as for the solution of an SODE using an iterated application of the Itô formula. Consequently, until recently, only low-order numerical approximation results for such SPDEs have been available. Here, the fact that the solution of an SPDE driven by additive noise can be interpreted in the mild sense, with integrals involving the exponential of the dominant linear operator in the SPDE, provides an alternative approach for deriving stochastic Taylor expansions for the solution of such an SPDE. Essentially, the exponential factor has a mollifying effect and ensures that all integrals take values in the Hilbert space under consideration. The iteration of such integrals allows us to derive stochastic Taylor expansions of arbitrarily high order, which are robust in the sense that they also hold for other types of driving noise processes, such as fractional Brownian motion. Combinatorial concepts of trees and woods provide a compact formulation of the Taylor expansions.

First available in Project Euclid: 9 March 2010. Permanent link: https://projecteuclid.org/euclid.aop/1268143526. doi:10.1214/09-AOP500. Mathematical Reviews (MathSciNet): MR2642885. Subject classification: Primary 35K90 (abstract parabolic equations), 41A58 (series expansions, e.g. Taylor, Lidstone series, but not Fourier series), 65C30 (stochastic differential and integral equations), 65M99. Keywords: Taylor expansions; stochastic partial differential equations; SPDE; strong convergence; stochastic trees. Citation: Jentzen, Arnulf; Kloeden, Peter. Taylor expansions of solutions of stochastic partial differential equations with additive noise. Ann. Probab. 38 (2010), no. 2, 532–569. doi:10.1214/09-AOP500.
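For orientation, the mild-solution representation that the abstract alludes to can be written, for an SPDE of the form $dX_{t} = [AX_{t} + F(X_{t})]\,dt + B\,dW_{t}$ with dominant linear operator $A$, as the following standard identity (our paraphrase of a textbook formulation, not a quotation from the paper): $$ X_{t} = e^{At}X_{0} + \int_{0}^{t} e^{A(t-s)}F(X_{s})\,ds + \int_{0}^{t} e^{A(t-s)}B\,dW_{s}. $$ Iterating this identity, with the smoothing effect of the semigroup $e^{At}$ keeping every integral in the underlying Hilbert space, is what produces the arbitrarily high-order Taylor expansions described in the abstract.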
Introducing Categories
Posted on April 24, 2013 by j2kun

For a list of all the posts on Category Theory, see the Main Content page.

It is time for us to formally define what a category is, and to see a wealth of examples. In our next post we'll see how the definitions laid out here translate to programming constructs. As we've said in our soft motivational post on categories, the point of category theory is to organize mathematical structures across various disciplines into a unified language. As such, most of this post will be devoted to laying down the definition of a category and the associated notation. We will be as clear as possible to avoid a notational barrier for newcomers, so if anything is unclear we will clarify it in the comments.

Definition of a Category

Let's recall some examples of categories we've seen on this blog that serve to motivate the abstract definition of a category. We expect the reader to be comfortable with sets, and to absorb or glaze over the other examples as comfort dictates. The reader who is uncomfortable with sets and functions on sets should stop here. Instead, visit our primers on proof techniques, which doubles as a primer on set theory (or our terser primer on set theory from two years ago).

The go-to example of a category is that of sets: sets together with functions between sets form a category. We will state exactly what this means momentarily, but first some examples of categories of "sets with structure" and "structure-preserving maps."

Groups together with group homomorphisms form a category, as do rings and fields with their respective kinds of homomorphisms. Topological spaces together with continuous functions form a category, and metric spaces with distance-nonincreasing maps ("short" functions) form a sub-category. Vector spaces and linear maps, smooth manifolds and smooth maps, and algebraic varieties with rational maps all form categories. We could continue but the essential idea is clear: a category is some way to specify a collection of objects and "structure-preserving" mappings between those objects. There are three main features common to all of these examples:

1. Composition of structure-preserving maps produces structure-preserving maps.
2. Composition is associative.
3. There is an identity map for each object.

The main abstraction is that forgetting what the objects and mappings are and only considering how they behave allows one to deviate from the examples above in useful ways. For instance, once we see the formal definition below, it will become clear that mathematical (say, first-order logical) statements, together with proofs of implication, form a category. Even though a "proof" isn't strictly a structure-preserving map, it still fits with the roughly stated axioms above. One can compose proofs by laying the implications out one after another, this composition is trivially associative, and there is an identity proof. Thus, proofs provide a way to "transform" true statements into true statements, preserving the structure of boolean-valued truth.

Another example is the category of ML types and computable functions in ML. Computable functions can be quite wild in their behavior, but they are nevertheless composable, associative, and equipped with an identity.

And so the definition of a category seems to come as a natural consequence to think of all of these examples as special cases of one concept.
Before we state the definition we should note that, for abstruse technical reasons, we cannot phrase the definition of a category as a "set" of objects and mappings between objects. This is already impossible for the category of sets, because there is no "set of all sets." Somehow (as illustrated by Russell's paradox) there are "too many" sets to do this. Likewise, there is no "set" of all groups, topological spaces, vector spaces, etc.

This apparent difficulty requires some sidestepping. One possibility is to define a universe of non-paradoxical sets, and define categories by way of a universe. Another is to define a class, which bypasses set theory in another way. We won't deliberate on the differences between these methods of avoiding Russell's paradox. The reader need only know that it can be done. For our official definition, we will use the terminology of classes.

Definition: A category $\textbf{C}$ consists of the following data:

- A class of objects, denoted $\textup{Obj}(\mathbf{C})$.
- For each pair of objects $A,B$, a set $\textup{Hom}(A,B)$ of morphisms. Sometimes these are called hom-sets.

The morphisms satisfy the following conditions:

- For all objects $A,B,C$ and morphisms $f \in \textup{Hom}(A,B), g \in \textup{Hom}(B,C)$ there is a composition operation $\circ$ and $g \circ f \in \textup{Hom}(A,C)$ is a morphism. We will henceforth drop the $\circ$ and write $gf$.
- For all objects $A$, there is an identity morphism $1_A \in \textup{Hom}(A,A)$. For all $A,B$, and $f \in \textup{Hom}(A,B)$, we have $f 1_A = f$ and $1_B f = f$.

Some additional notation and terminology: we denote a morphism $f \in \textup{Hom}(A,B)$ in three ways. Aside from "as an element of a set," the most general way is as a diagram,

$\displaystyle A \xrightarrow{\hspace{.5cm} f \hspace{.5cm}} B$

although we will often shorten a single morphism to the standard function notation $f: A \to B$. Given a morphism $f: A \to B$, we call $A$ the source of $f$, and $B$ the target of $f$.

We will often name our categories, and we will do so in bold. So the category of sets with set-functions is denoted $\mathbf{Set}$. When working with multiple categories, we will give subscripts on the hom-sets to avoid ambiguity, as in $\textup{Hom}_{\mathbf{Set}}(A,B)$. A morphism from an object to itself is called an endomorphism, and the set of all endomorphisms of an object $A$ is denoted $\textup{End}_{\mathbf{C}}(A)$.

Note that in the definition above we require that morphisms form a set. This is important because hom-sets will have additional algebraic structure (e.g., for certain categories $\textup{End}_{\mathbf{C}}(A)$ will form a ring).

Examples of Categories

Let's now formally define some of our simple examples of categories. Defining a category amounts to specifying the objects and morphisms, and verifying the conditions in the definition hold.

Define the category $\mathbf{Set}$ whose objects are all sets, and whose morphisms are functions on sets. By now it should hopefully be clear that sets form a category, but let us go through the motions explicitly. Every set $A$ has an identity function $1_A(x) = x$, and as we already know, functions on sets compose associatively. To verify this in complete detail would amount to writing down a general function as a relation, and using the definitions from elementary set theory. We have more pressing matters, but a reader who has not seen this before should consult our set theory primer.
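The next post promises to realize this definition as a type in ML; as a small preview (a minimal sketch in OCaml, an ML dialect, with names of our own choosing rather than code from that post), here is the content of the $\mathbf{Set}$ verification above: functions compose associatively and every "object" has an identity.

```ocaml
(* Morphisms of Set modeled by functions: 1_A and g ∘ f. *)
let identity x = x
let compose g f x = g (f x)

(* Spot-check the category axioms at a sample point. *)
let () =
  let f x = x + 1 and g x = 2 * x and h x = x - 3 in
  assert (compose (compose h g) f 10 = compose h (compose g f) 10); (* associativity *)
  assert (compose f identity 10 = f 10);                            (* f ∘ 1_A = f *)
  assert (compose identity f 10 = f 10)                             (* 1_B ∘ f = f *)
```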
One can also define the category of finite sets $\mathbf{FinSet}$ whose objects are finite sets and whose morphisms are again set-functions. As every object and morphism of $\mathbf{FinSet}$ is one of $\mathbf{Set}$, we call the former a subcategory of the latter.

Finite categories

The most trivial possible categories are those with only finitely many objects. For instance, define the category $\mathbf{1}$ to have a single object $\ast$ and a single morphism $1_{\ast}$, which must of course be the identity. The composition $1_{\ast} 1_{\ast}$ is forced to be $1_{\ast}$, and so this is a (rather useless) category. One can also imagine a category $\mathbf{2}$ which has one non-identity morphism $A \to B$, as well as examples of categories with any finite number of objects. Nothing interesting is going on here; we just completely specify the structure of the category object by object and morphism by morphism. Sometimes they come in handy, though.

Poset Categories

Here is an elementary example of a category that is nonetheless fundamental to modern discussions in topology and algebraic geometry. It will show up again in our work with persistent homology. Let $X$ be any set, and define the category $\mathbf{X}_\subset$ as follows. The objects of $\mathbf{X}_\subset$ are all the subsets of $X$. If $A \subset B$ are subsets of $X$, then the set $\textup{Hom}(A,B)$ is defined to be the (unique, unnamed) singleton $A \to B$. Otherwise, $\textup{Hom}(A,B)$ is the empty set. Identities exist since every set is a subset of itself. The property of being a subset is transitive, so composition of morphisms makes sense and associativity is trivial since there is at most one morphism between any two objects. If the reader doesn't believe this, we state what composition is rigorously: define each morphism as a pair $(A,B)$ and define the composition operation as a set-function $\textup{Hom}(A,B) \times \textup{Hom}(B,C) \to \textup{Hom}(A,C)$, which sends $((A,B), (B,C)) \mapsto (A,C)$. Because this operation is so trivial, it seems more appropriate to state it with a diagram:

[Diagram: $A \to B \to C$ together with the composite arrow $A \to C$.]

We say this diagram commutes if all ways to compose morphisms (travel along arrows) have equal results. That is, in the diagram above, we assert that the morphism $A \to B$ composed with the morphism $B \to C$ is exactly the one morphism $A \to C$. Usually one must prove that a diagram commutes, but in this case we are defining the composition operation so that commutativity holds. The reader can now directly verify that composition is associative:

$(a,b)((b,c)(c,d)) = (a,b)(b,d) = (a,d) = (a,c)(c,d) = ((a,b)(b,c))(c,d)$

More generally, it is not hard to see how any transitive reflexive relation on a set (including partial orders) can be used to form a category: objects are elements of the set, and morphisms are unique arrows which exist when the objects are (asymmetrically) related. The subset category above is a special case where the set in question is the power set of $X$, and the relation is $\subset$. The familiar reader should note that the most prominent example of this in higher mathematics is to have $X$ be the topology of a topological space (the set of open subsets).

Next, define the category $\mathbf{Grp}$ whose objects are groups and whose morphisms are group homomorphisms. Recall briefly that a group is a set $G$ endowed with a sensible (associative) binary operation denoted by multiplication, which has an identity and with respect to which every element has an inverse.
For the uninitiated reader, just replace any abstract "group" by the set of all nonzero rational numbers with usual multiplication. It will suffice for this example. A group homomorphism is a set-function $f: A \to B$ which satisfies $f(xy) = f(x)f(y)$ (here the binary operations on the left and right side of the equal sign are the operations in the two respective groups). Being set-functions, group homomorphisms are composable as functions and associatively so. Given any homomorphism $g: B \to C$, we need to show that the composite is again a homomorphism:

$\displaystyle gf(xy) = g(f(xy)) = g(f(x)f(y)) = g(f(x))g(f(y)) = gf(x) gf(y)$

Note that there are three multiplication operations floating around in this equation. Groups (as all sets) have identity maps, and the identity map is a perfectly good group homomorphism. This verifies that $\mathbf{Grp}$ is indeed a category. While we could have stated all of these equalities via commutative diagrams, the pictures are quite large and messy so we will avoid them until later.

A similar derivation will prove that rings form a category, as do vector spaces, topological spaces, and fields. We are unlikely to use these categories in great detail in this series, so we refrain from giving them names for now. One special example of a category of groups is the category $\mathbf{Ab}$ of abelian groups (for which the multiplication operation is commutative). This category shows up as the prototypical example of a so-called "abelian category," which means it has enough structure to do homology.

In a more discrete domain, define the category $\mathbf{Graph}$ as follows. The objects in this category are triples $(V, E, \varphi)$ where $\varphi: E \to V \times V$ represents edge adjacency. We will usually suppress $\varphi$ by saying vertices $v,w$ are adjacent instead of $\varphi(e) = (v,w)$. The morphisms in this category are graph homomorphisms. That is, if $G = (V,E,\varphi)$ and $G' = (V', E',\varphi')$, then $f \in \textup{Hom}_{\mathbf{Graph}}(G,G')$ is a pair of set functions $f_V: V \to V', f_E: E \to E'$, satisfying the following commutative diagram.

[Diagram: the square asserting $\varphi' f_E = (f,f)\varphi$ as maps $E \to V' \times V'$.]

Here we denote by $(f,f)$ the map which sends $(u,v) \to (f(u), f(v))$. This diagram is quite a mouthful, but in words it requires that whenever $v,w$ are adjacent in $G$, then $f(v), f(w)$ are adjacent in $G'$. Rewriting the diagram as an equation more explicitly, we are saying that if $e \in E$ is an edge with $\varphi(e) = (v,w)$, then it must be the case that $(f_V(v), f_V(w)) = \varphi'(f_E(e))$. This is how one "preserves" the structure of a graph.

To prove this is a category, we can observe that composition makes sense: given two pairs

$\displaystyle f_V: V \to V' \qquad f_E: E \to E'$
$\displaystyle g_{V'}: V' \to V'' \qquad g_{E'}: E' \to E''$

we can compose each morphism individually, getting $gf_V: V \to V''$ and $gf_E: E \to E''$, and the following hefty-looking commutative diagram.

[Diagram: the two squares for $f$ and $g$ placed side by side, giving an outer rectangle from $E$ to $V'' \times V''$.]

Let's verify commutativity together: we already know the two squares on the left and right commute (by hypothesis, they are morphisms in this category). So we have two things left to check, that the three ways to get from $E$ to $V'' \times V''$ are all the same. This amounts to verifying the two equalities

$\displaystyle (g_{V'}, g_{V'}) (f_V, f_V) \varphi = (g_{V'}, g_{V'}) \varphi' f_E = \varphi'' g_{E'} f_E$

But indeed, going left to right in the above equation, each transition from one expression to another only swaps two morphisms within one of the commutative squares.
In other words, the first equality is already enforced by the commutativity of the left-hand square, and the second by the right. We are literally only substituting what we already know to be equal! If it feels like we didn't actually do any work there (aside from unravelling exactly what the diagrams mean), then you're already starting to see the benefits of category theory. It can often feel like a cycle: commutative diagrams make it easy to argue about other commutative diagrams, and one can easily get lost in the wilderness of arrows. But more often than not, devoting a relatively small amount of time up front to show one diagram commutes will make a wealth of facts and theorems follow with no extra work. This is part of the reason category theory is affectionately referred to as "abstract nonsense."

Diagram Categories

Speaking of abstract nonsense: next up is a very abstract example, but thinking about it will reinforce the ideas we've put forth in this post while giving a sneak peek to our treatment of universal properties. Fix an object $A$ in an arbitrary category $\mathbf{C}$. Define the category $\mathbf{C}_A$ whose objects are morphisms with source $A$. That is, an object of $\mathbf{C}_A$ looks like

[Diagram: an arrow $A \to B$.]

where $B$ ranges over all objects of $\mathbf{C}$. The morphisms of $\mathbf{C}_A$ are commutative diagrams of the form

[Diagram: a triangle with legs $f: A \to B$ and $f': A \to C$ and connecting morphism $g: B \to C$.]

where as we said earlier the stipulation asserted by the diagram is that $f' = gf$. Let's verify that the axioms for a category hold. Suppose we have two such commutative diagrams with matching target and source, say,

[Diagram: two such triangles, the first with legs $f, g$ and connecting morphism $\beta$, the second with legs $g, h$ and connecting morphism $\gamma$.]

Note that the arrows $g: A \to C$ must match up in both diagrams, or else composition does not make sense! Then we can combine them into a single commutative diagram:

[Diagram: the combined triangle with legs $f$ and $h$ and connecting morphism $\gamma\beta$.]

If it is not obvious that this diagram commutes, we need only write down the argument explicitly: by the first diagram $\beta f = g$ and $\gamma g = h$, and piecing these together we have $\gamma \beta f = \gamma g = h$. Associativity of this "piecing together" follows from the associativity of the morphisms in $\mathbf{C}$. The identity morphism is a diagram whose two "legs" are both $f: A \to B$ and whose connecting morphism is just $1_B$.

This kind of "diagram" category is immensely important, and we will revisit it and many variants of it in the future. A quick sneak peek: this category is closely related to the universal properties of polynomial rings, free objects, and limits.

As a slight generalization, you can define a category whose objects consist of pairs of morphisms with a shared source (or target), i.e.,

[Diagram: a pair of arrows $B \leftarrow A \to C$ with shared source $A$.]

We challenge the reader to write down what a morphism of this category would look like, and prove the axioms for a category hold (it's not hard, but a bit messy). We will revisit categories like this one in our post on universal properties; this particular one is intimately related to products.

Next time we'll jump straight into some code, and realize the definition of a category as a type in ML. We'll see how some of our examples of categories here can be implemented using the code, and inspect the pros and cons of the computational version of our definition.

This entry was posted in Category Theory, Primers and tagged categories. Bookmark the permalink.

13 thoughts on "Introducing Categories"

colinbul
Awesome posts, really looking forward to the rest of this series.
Marcelo de Almeida
Reblogged this on Being simple and commented: A good introduction, especially related to the graphs of categories, to understand interpolation theory of Banach spaces.

"The set of morphisms from an object to itself is called an endomorphism, and the set of all endomorphisms of an object A is denoted \textup{End}_{\mathbf{C}}(A)." Do you mean a morphism from an object to itself is called an endomorphism? Or the set of morphisms?

"As every object and morphism of \mathbf{FinSet} is one of \mathbf{Set}, we call the latter a subcategory of the former." Do you mean the former a subcategory of the latter?

j2kun
Yes, definitely.

This may sound like a silly question, but nothing ventured, nothing gained. How do you draw categorical diagrams in your blog posts? I want to learn how to do this.

I use a package in LaTeX called tikz (and an online editor called writeLatex). The process ends up looking like this: http://imgur.com/WIvTslh

"The set of morphisms from an object to itself is called an endomorphism" — that sounds like an endomorphism is the same thing as the identity morphism, but here the word "object" means a mathematical "type" (if I can use that word). Like a map from a vector in a space, V, to another (possibly the same, possibly different) vector, also in V. Am I correct?

It sounds like you are correct, so I'll just reiterate and give examples. There can be many non-identity endomorphisms. For example, in the category of sets with functions as morphisms, an endomorphism is just a function from a set to itself. For vector spaces, every square matrix is an endomorphism. For types and programs, it would be a program that accepts as input, say, an integer, and produces as output an integer. There are no other restrictions, other than that they have to be morphisms as defined by the category.

Thanks for your fast reply. Your examples are very clear, but I have a remaining question: In the example where you said that "an endomorphism is just a function from a set to itself", this sounds like it *is* the identity morphism since it recovers exactly the same set as was input. Is this true, or am I misunderstanding what you mean by a function operating on a set?

It's the latter: a "function on a set" means that it takes as input elements of the set, and produces as output elements in the set. So, for example, a function on {0,1,2,3} might be f(x) = 3-x.

OK, I was misunderstanding/misusing the terminology. Many thanks for clearing things up for me!

awesoham
You're following Aluffi's treatment *very* closely, I see. That's nice, as I'm studying that book at the moment, so I can get a little help from your discussions!

pengujn
Small nitpick, but the last exercise may be confusing to the unfamiliar reader. It doesn't specify what is fixed, and based off the previous example one might think Z is fixed instead of B, C.
Evolution Equations & Control Theory, September 2020, 9(3): 693-719. doi: 10.3934/eect.2020029

Pointwise control of the linearized Gear-Grimshaw system

Roberto de A. Capistrano-Filho (Departamento de Matemática, Universidade Federal de Pernambuco, Recife, Pernambuco 50740-545, Brazil), Vilmos Komornik (College of Mathematics and Computational Science, Shenzhen University, Shenzhen 518060, China; Département de Mathématique, Université de Strasbourg, 7 rue René Descartes, 67084 Strasbourg Cedex, France), and Ademir F. Pazoto (Institute of Mathematics, Federal University of Rio de Janeiro, P.O. Box 68530, CEP 21941-909, Rio de Janeiro, RJ, Brazil)

Received June 2019; revised September 2019; early access December 2019; published September 2020.

Abstract: In this paper we consider the problem of controlling pointwise, by means of a time-dependent Dirac measure supported at a given point, a coupled system of two Korteweg-de Vries equations on the unit circle. More precisely, by means of spectral analysis and Fourier expansion we prove, under general assumptions on the physical parameters of the system, a pointwise observability inequality which leads to pointwise controllability using two control functions. In addition, with a uniqueness property proved for the linearized system without control, we are also able to show pointwise controllability when only one control function acts internally. In both cases we can find, under some assumptions on the coefficients of the system, the sharp time of controllability.

Keywords: Coupled KdV equation, Gear-Grimshaw system, pointwise observability, pointwise controllability, feedback stabilization, nonharmonic analysis.
Mathematics Subject Classification: Primary: 35Q53, 93B07, 93B05; Secondary: 93B52.

Citation: Roberto de A. Capistrano-Filho, Vilmos Komornik, Ademir F. Pazoto. Pointwise control of the linearized Gear-Grimshaw system. Evolution Equations & Control Theory, 2020, 9 (3): 693-719. doi: 10.3934/eect.2020029
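For orientation (this recalls the standard form of the model from earlier work on the Gear-Grimshaw system, as background rather than a quotation from this article): the underlying nonlinear system is usually written as

$$u_t + uu_x + u_{xxx} + a_3 v_{xxx} + a_1 vv_x + a_2 (uv)_x = 0,$$
$$b_1 v_t + rv_x + vv_x + v_{xxx} + b_2 a_3 u_{xxx} + b_2 a_2 uu_x + b_2 a_1 (uv)_x = 0,$$

where $a_1, a_2, a_3, b_1, b_2, r$ are physical constants with $b_1, b_2 > 0$; the paper concerns its linearization around the origin (the system with the quadratic terms dropped) on the unit circle. A pointwise observability inequality at an observation point $\xi$ then has, schematically, the form

$$\|(u_0, v_0)\|^2 \le C \int_0^T \left( |u(t,\xi)|^2 + |v(t,\xi)|^2 \right) \mathrm{d}t$$

for $T$ beyond the sharp control time; by the standard duality between observability and controllability, such an estimate yields the pointwise controllability asserted in the abstract.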
Aad G (2014) Measurement of the top quark pair production charge asymmetry in proton-proton collisions at $\sqrt{s} = 7$ TeV using the ATLAS detector, in Journal of High Energy Physics
Aad G (2015) Search for heavy Majorana neutrinos with the ATLAS detector in pp collisions at $\sqrt{s} = 8$ TeV, in Journal of High Energy Physics
Aad G (2015) Measurement of charged-particle spectra in Pb+Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV with the ATLAS detector at the LHC, in Journal of High Energy Physics
Aad G (2015) Measurement of the production of neighbouring jets in lead-lead collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV with the ATLAS detector, in Physics Letters B
Aad G (2014) Search for Higgs boson decays to a photon and a Z boson in pp collisions at $\sqrt{s} = 7$ and 8 TeV, in Physics Letters B
Aad G (2015) Two-particle Bose-Einstein correlations in collisions at $\sqrt{s} = 0.9$ and 7 TeV measured with the ATLAS detector, in The European Physical Journal C, Particles and Fields
Aad G (2015) Measurement of transverse energy-energy correlations in multi-jet events in pp collisions at $\sqrt{s} = 7$ TeV using the ATLAS detector and determination of the strong coupling constant $\alpha_s(m_Z)$, in Physics Letters B
Aad G (2015) Measurement of the transverse polarization of $\Lambda$ and $\bar{\Lambda}$ hyperons produced in proton-proton collisions at $\sqrt{s} = 7$ TeV using the ATLAS detector, in Physical Review D
Aad G (2015) Searches for heavy long-lived charged particles with the ATLAS detector in proton-proton collisions at $\sqrt{s} = 8$ TeV, in Journal of High Energy Physics
Aad G (2014) The differential production cross section of the $\phi(1020)$ meson in $\sqrt{s} = 7$ TeV pp collisions measured with the ATLAS detector, in The European Physical Journal C
OSTI.GOV Journal Article: The Broad Absorption Line Tidal Disruption Event iPTF15af: Optical and Ultraviolet Evolution

Abstract: We present multiwavelength observations of the tidal disruption event (TDE) iPTF15af, discovered by the intermediate Palomar Transient Factory survey at redshift z = 0.07897. The optical and ultraviolet (UV) light curves of the transient show a slow decay over 5 months, in agreement with previous optically discovered TDEs. It also has a comparable blackbody peak luminosity of $L_{\mathrm{peak}} \approx 1.5 \times 10^{44}\ \mathrm{erg\,s^{-1}}$. The inferred temperature from the optical and UV data shows a value of $(3\text{-}5) \times 10^{4}$ K. The transient is not detected in X-rays up to $L_{\mathrm{X}} < 3 \times 10^{42}\ \mathrm{erg\,s^{-1}}$ within the first 5 months after discovery. The optical spectra exhibit two distinct broad emission lines in the He II region, and at later times also Hα emission. Additionally, emission from [N III] and [O III] is detected, likely produced by the Bowen fluorescence effect. UV spectra reveal broad emission and absorption lines associated with high-ionization states of N V, C IV, Si IV, and possibly P V. These features, analogous to those of broad absorption line quasars (BAL QSOs), require an absorber with column densities $N_{\mathrm{H}} > 10^{23}$ cm$^{-2}$. Here, this optically thick gas would also explain the nondetection in soft X-rays. The profile of the absorption lines, with the highest column density material at the largest velocity, is opposite that of BAL QSOs. We suggest that radiation pressure generated by the TDE flare at early times could have provided the initial acceleration mechanism for this gas. Spectral UV line monitoring of future TDEs could test this proposal.

Authors: Blagorodnova, N.; Cenko, S. B.; Kulkarni, S. R.; Arcavi, I.; Bloom, J. S.; Duggan, G.; Filippenko, A. V.; Fremling, C.; Horesh, A.; Hosseinzadeh, G.; Karamehmetoglu, E.; Levan, A.; Masci, F. J.; Nugent, P. E.; Pasham, D. R.; Veilleux, S.; Walters, R.; Yan, L.; Zheng, W.

Affiliations: California Inst. of Technology (CalTech); Radboud Univ., Nijmegen; Univ. of Maryland, College Park; NASA Goddard Space Flight Center (GSFC); Univ. of California, Santa Barbara; Las Cumbres Observatory, Goleta; Univ. of California, Berkeley; The Hebrew Univ. of Jerusalem; Harvard-Smithsonian Center for Astrophysics; Stockholm Univ.; Univ. of Warwick; Lawrence Berkeley National Lab. (LBNL); Massachusetts Inst. of Technology (MIT); Univ. of Maryland, College Park.

Funding: USDOE Office of Science (SC); DOE Contract Number: AC02-05CH11231.

Journal: The Astrophysical Journal (Online), Volume 873, Issue 1 (2019); ISSN 1538-4357; publisher: Institute of Physics (IOP). DOI: 10.3847/1538-4357/ab04b0.

Subject: 79 Astronomy and Astrophysics. Keywords: accretion; accretion disks; black hole physics; galaxies: nuclei; stars: individual (iPTF15af).

Citation: Blagorodnova, N., Cenko, S. B., Kulkarni, S. R., et al. The Broad Absorption Line Tidal Disruption Event iPTF15af: Optical and Ultraviolet Evolution. The Astrophysical Journal 873, no. 1 (2019). doi:10.3847/1538-4357/ab04b0.
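For context (textbook tidal-disruption relations, not taken from this record): a star of mass $M_*$ and radius $R_*$ is disrupted when its pericenter falls inside the tidal radius, and the blackbody fit quoted in the abstract ties luminosity and temperature to an emitting radius:

$$r_t \approx R_* \left( \frac{M_{\mathrm{BH}}}{M_*} \right)^{1/3}, \qquad L = 4\pi R^2 \sigma_{\mathrm{SB}} T^4.$$

Plugging the quoted $L_{\mathrm{peak}} \approx 1.5 \times 10^{44}\ \mathrm{erg\,s^{-1}}$ and $T \approx (3\text{-}5) \times 10^4$ K into the second relation gives a blackbody radius of order a few $\times 10^{14}$ cm, much larger than the tidal radius of a Sun-like star around a $\sim 10^6\,M_\odot$ black hole.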
Journal Article — Hung, T.; Gezari, S.; Blagorodnova, N.; et al., The Astrophysical Journal (Online). We report the discovery by the intermediate Palomar Transient Factory (iPTF) of a candidate tidal disruption event (TDE), iPTF16axa, at z = 0.108, and present its broadband photometric and spectroscopic evolution from three months of follow-up observations with ground-based telescopes and Swift. The light curve is well fitted with a $t^{-5/3}$ decay, and we constrain the rise time to peak to be <49 rest-frame days after disruption, which is roughly consistent with the fallback timescale expected for the ~$5 \times 10^{6}\,M_{\odot}$ black hole inferred from the stellar velocity dispersion of the host galaxy. The UV and optical spectral energy distribution is well described by a constant blackbody temperature of $T \sim 3 \times 10^{4}$ K over the monitoring period, with an observed peak luminosity of $1.1 \times 10^{44}$ erg s$^{-1}$. The optical spectra are characterized by a strong blue continuum and broad He II and Hα lines, which are characteristic of TDEs. We compare the photometric and spectroscopic signatures of iPTF16axa with 11 TDE candidates in the literature with well-sampled optical light curves. Based on a single-temperature fit to the optical and near-UV photometry, most of these TDE candidates have peak luminosities confined between log(L [erg s$^{-1}$]) = 43.4–44.4, with constant temperatures of a few $\times 10^{4}$ K during their power-law declines, implying blackbody radii on the order of 10 times the tidal disruption radius that decrease monotonically with time. For TDE candidates with hydrogen and helium emission, the high helium-to-hydrogen ratios suggest that the emission arises from high-density gas, where nebular arguments break down. In conclusion, we find no correlation between the peak luminosity and the black hole mass, contrary to the expectation for TDEs that $\dot{M} \propto M_{\mathrm{BH}}^{-1/2}$.

Journal Article — Blagorodnova, N.; Gezari, S.; Hung, T.; et al., The Astrophysical Journal (Online). We present ground-based and Swift observations of iPTF16fnl, a likely tidal disruption event (TDE) discovered by the intermediate Palomar Transient Factory (iPTF) survey at 66.6 Mpc. The light curve of the object peaked at an absolute magnitude $M_{g} = -17.2$. The maximum bolometric luminosity (from optical and UV) was $L_{p} \simeq (1.0 \pm 0.15) \times 10^{43}$ erg s$^{-1}$, an order of magnitude fainter than that of any other optical TDE discovered so far.
The luminosity in the first 60 days is consistent with an exponential decay, $L \propto e^{-(t-t_{0})/\tau}$, where $t_{0} = 57631.0$ (MJD) and $\tau \simeq 15$ days. The X-ray shows a marginal detection at $L_{X} = 2.4^{+1.9}_{-1.1} \times 10^{39}$ erg s$^{-1}$ (Swift X-ray Telescope). No radio counterpart was detected down to 3σ, providing upper limits on the monochromatic radio luminosity of $\nu L_{\nu} < 2.3 \times 10^{36}$ erg s$^{-1}$ and $\nu L_{\nu} < 1.7 \times 10^{37}$ erg s$^{-1}$ (Very Large Array, 6.1 and 22 GHz). The blackbody temperature, obtained from combined Swift UV and optical photometry, shows a constant value of 19,000 K. The transient spectrum at peak is characterized by broad He II and Hα emission lines, with FWHMs of about 14,000 km s$^{-1}$ and 10,000 km s$^{-1}$, respectively. He I lines are also detected at λλ5875 and 6678. The spectrum of the host is dominated by strong Balmer absorption lines, which are consistent with a post-starburst (E+A) galaxy with an age of ~650 Myr and solar metallicity. The characteristics of iPTF16fnl make it an outlier in both luminosity and decay timescale compared with other optically selected TDEs. In conclusion, the discovery of such a faint optical event suggests a higher rate of tidal disruptions, as low-luminosity events may have gone unnoticed in previous searches.

Journal Article — Brown, J. S.; Kochanek, C. S.; Holoien, T. W.-S.; et al., Monthly Notices of the Royal Astronomical Society. In this work, we present the ultraviolet (UV) spectroscopic evolution of a tidal disruption event (TDE) for the first time. After the discovery of the nearby TDE iPTF16fnl, we obtained a series of observations with the Space Telescope Imaging Spectrograph (STIS) onboard the Hubble Space Telescope (HST).
The dominant emission features closely resemble those seen in the UV spectra of the TDE ASASSN-14li and are also similar to those of N-rich quasars. There is evolution in the shape and central wavelength of the dominant emission features over the course of our observations, such that at early times the lines tend to be broad and redshifted, while at later times they are narrower and peak near the wavelengths of their atomic transitions. Like ASASSN-14li, but unlike N-rich quasars, iPTF16fnl shows neither Mg II 2798 Å nor C III] 1909 Å emission features. We also present optical photometry and spectroscopy, which suggest that the complex He II profiles observed in the optical spectra of many TDEs are in part due to the presence of N III and C III Wolf–Rayet features, which can potentially serve as probes of the far-UV when space-based observations are not possible. We further use Swift X-ray Telescope and Ultraviolet/Optical Telescope (UVOT) observations to place strong limits on the X-ray emission and to determine the characteristic temperature, radius and luminosity of the emitting material. Lastly, we find that iPTF16fnl is subluminous and evolves more rapidly than other optically discovered TDEs.
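As an illustrative aside (my own, not taken from any of the papers above), the two decline laws quoted in these abstracts — the canonical $t^{-5/3}$ fallback decay fitted to iPTF16axa and the exponential decline with $\tau \simeq 15$ days reported for iPTF16fnl — can be compared numerically. The sketch below uses normalized luminosities and an arbitrary time grid.

```python
# Illustrative comparison (my own, not from any of the papers above) of the two
# decline laws quoted in these abstracts: the canonical t^(-5/3) fallback decay
# fitted to iPTF16axa, and the exponential decline with tau ~ 15 d reported for
# iPTF16fnl. Luminosities are normalized; the time grid is arbitrary.
import numpy as np

t = np.linspace(1.0, 120.0, 120)      # days since peak
L_fallback = t ** (-5.0 / 3.0)        # power-law fallback decay
L_exp = np.exp(-t / 15.0)             # exponential decay, tau = 15 d

for day in (10, 30, 60, 120):
    i = np.argmin(np.abs(t - day))
    print(f"day {day:3d}: t^-5/3 = {L_fallback[i]:.3e}, exp = {L_exp[i]:.3e}")
```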
Discrete & Continuous Dynamical Systems - S, February 2014, 7(1): 161-176. doi: 10.3934/dcdss.2014.7.161

Brownian point vortices and DD-model
Takashi Suzuki, Division of Mathematical Science, Department of System Innovation, Graduate School of Engineering Science, Osaka University, 1-3 Machikane-yama, Toyonaka, Osaka, 560-8531
Received January 2012; Revised August 2012; Published July 2013

Abstract: We study the kinetic mean field equation of two-dimensional Brownian vortices: its derivation, its similarity to the DD-model, and the existence and non-existence of global-in-time solutions.

Keywords: chemotaxis, Brownian point vortex, DD model.
Mathematics Subject Classification: Primary: 35K57, 35Q82; Secondary: 35Q9.
Citation: Takashi Suzuki. Brownian point vortices and DD-model. Discrete & Continuous Dynamical Systems - S, 2014, 7(1): 161-176. doi: 10.3934/dcdss.2014.7.161
The Math – The Mathematics study online

The Inclusion Chart for Special Types of Integral Domains
Below is an inclusion chart showing the various inclusions of the algebraic structures of: commutative rings, integral domains, unique factorization domains, principal ideal domains, Euclidean domains, and … [Read more...]

Unique Factorization Domains (UFDs)
Definition: Let $(R, +, \cdot)$ be an integral domain. Then $R$ is a Unique Factorization Domain if the following properties are satisfied: 1) Every element $a \in R$ that is nonzero and not a unit can be expressed as a product of irreducible elements in $R$. 2) … [Read more...]

Irreducible Elements in a Commutative Ring
Recall from The Greatest Common Divisor of Elements in a Commutative Ring page that if $(R, +, \cdot)$ is a commutative ring and $a_1, a_2, ..., a_n \in R$, then a greatest common divisor of these elements is an element $d \in R$ which satisfies the following … [Read more...]

The Greatest Common Divisor of Elements in a Commutative Ring
Recall from the Divisors and Associates of Commutative Rings page that if $(R, +, \cdot)$ is a commutative ring, then for $a, b \in R$ we said that $b$ is a divisor of $a$, written $b | a$, if there exists an element $q \in … [Read more...]

Associates of Elements in Commutative Rings
Definition: Let $(R, +, \cdot)$ be a commutative ring and let $a, b \in R$. Then $a$ is said to be an Associate of $b$, denoted $a \sim b$, if there exists a unit $u \in R$ such that $a = bu$. Theorem 1: If $(R, +, \cdot)$ is an integral domain and $a … [Read more...]

Divisors of Elements in Commutative Rings
So far we have discussed the term "divisor" with regard to integers and polynomials over a field $F$. We will now extend the notion of a divisor to a general commutative ring. Definition: Let $(R, +, \cdot)$ be a commutative ring and let $a, b \in R$. Then … [Read more...]

The Set of Units of a Ring forms a Group under *
Recall from the Units (Multiplicatively Invertible Elements) in Rings page that if $(R, +, *)$ is a ring with multiplicative identity $1$, then an element $a \in R$ is said to be a unit (or a multiplicatively invertible element) in $R$ if there … [Read more...]

The Set of Units in Mnn
Recall from the Units (Multiplicatively Invertible Elements) in Rings page that if $(R, +, *)$ is a ring, then an element $a \in R$ is said to be a unit or a multiplicatively invertible element if there exists an element $b \in R$ such that $a * b = 1$ and $b * a = 1$, where we denote $b = … [Read more...]
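A tiny worked example of the associates definition excerpted above (my own illustration, not from the blog):

```python
# Tiny illustration (my own, not from the blog): in the ring of integers the
# only units are 1 and -1, so the associates of a are exactly a and -a.
def associates_in_Z(a):
    return {a * u for u in (1, -1)}

print(associates_in_Z(6))  # {6, -6}
```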
Units (Multiplicatively Invertible Elements) in Rings
Definition: Let $(R, +, *)$ be a ring with identity $1$. An element $a \in R$ is said to be a Unit or Multiplicatively Invertible Element if there exists an element $b \in R$ such that $a * b = 1$ and $b * a = 1$. If such a $b$ … [Read more...]

Every Euclidean Domain is a Principal Ideal Domain
Recall from the Euclidean Domains (EDs) page that if $(R, +, *)$ is an integral domain, then $R$ is said to be a Euclidean domain if there exists a function $\delta : R \setminus \{ 0 \} \to \mathbb{N} \cup \{ 0 \}$ that satisfies the following … [Read more...]
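As a small worked example of the unit-group fact excerpted above (my own illustration, not part of the blog posts): in the ring $\mathbb{Z}/n\mathbb{Z}$ the units are exactly the residues coprime to $n$, and they are closed under multiplication mod $n$.

```python
# Small worked example (my own illustration, not from the blog posts above):
# in Z/nZ the units are exactly the residues coprime to n, and they are closed
# under multiplication mod n, as the group-of-units theorem asserts.
from math import gcd

def units(n):
    return [a for a in range(1, n) if gcd(a, n) == 1]

U = units(12)
print(U)  # [1, 5, 7, 11]
assert all((a * b) % 12 in U for a in U for b in U)  # closure check
```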
Discrete & Continuous Dynamical Systems - A, October 2014, 34(10): 4127-4137. doi: 10.3934/dcds.2014.34.4127

Asymptotic behaviour for prey-predator systems and logistic equations with unbounded time-dependent coefficients
Juan C. Jara, Departamento de Ecuaciones Diferenciales y Análisis Numérico, Facultad de Matemáticas, Universidad de Sevilla, Calle Tarfia s/n, 41012-Seville, Spain; Felipe Rivero, Departamento de Matemática y Mecánica, I.I.M.A.S - U.N.A.M., Apdo. Postal 20-726, 01000 México D. F., Mexico
Received September 2013; Revised October 2013; Published April 2014

Abstract: In this work we study the asymptotic behaviour of the following prey-predator system
\begin{equation*} \left\{ \begin{split} &A'=\alpha f(t)A-\beta g(t)A^2-\gamma AP\\ &P'=\delta h(t)P-\lambda m(t)P^2+\mu AP, \end{split} \right. \end{equation*}
where the functions $f,g:\mathbb{R}\rightarrow\mathbb{R}$ are not necessarily bounded above. We also prove the existence of the pullback attractor and the permanence of solutions for any positive initial data and initial time, after a preliminary study of a logistic equation with unbounded terms, one of which can be negative on a bounded interval of time. The analysis of a non-autonomous logistic equation with unbounded coefficients is also needed to ensure the permanence of the model.

Keywords: non-autonomous prey-predator system, unbounded time-dependent coefficients, evolution processes, pullback attractor, permanence of solutions, non-autonomous logistic equation, non-autonomous dynamical systems.
Mathematics Subject Classification: 92D15, 35B40, 37N25, 92D25, 34E9.
Citation: Juan C. Jara, Felipe Rivero. Asymptotic behaviour for prey-predator systems and logistic equations with unbounded time-dependent coefficients. Discrete & Continuous Dynamical Systems - A, 2014, 34(10): 4127-4137. doi: 10.3934/dcds.2014.34.4127
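The system in the abstract is straightforward to explore numerically. The sketch below is my own illustration with assumed coefficient functions and parameter values (not from the paper), choosing $f(t) = g(t) = 1 + t$ so that the coefficients are unbounded above, as the theorem allows.

```python
# Illustrative numerical sketch (assumed coefficient functions and parameters,
# not from the paper): integrate the prey-predator system above with an
# unbounded time-dependent coefficient f(t) = g(t) = 1 + t.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma = 1.0, 1.0, 0.5
delta, lam, mu = 0.8, 1.0, 0.3
f = g = lambda t: 1.0 + t   # unbounded above, as the theorem allows
h = m = lambda t: 1.0

def rhs(t, y):
    A, P = y
    dA = alpha * f(t) * A - beta * g(t) * A**2 - gamma * A * P
    dP = delta * h(t) * P - lam * m(t) * P**2 + mu * A * P
    return [dA, dP]

sol = solve_ivp(rhs, (0.0, 20.0), [0.5, 0.5])
print("A(20), P(20) =", sol.y[:, -1])  # solutions stay positive in this run
```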
Impacts of Element Diffusion on Bond Breakage of Cemented Carbide Cutter
Minli Zheng* | Siyuan Gao | Jinguo Chen | Wei Zhang | Jiannan Li | Baoliang Chen
College of Mechanical and Power Engineering, Harbin University of Science and Technology, Harbin 150080, China
School of Electrical and Mechanical Engineering, Putian University, Putian 351100, China
[email protected]

To disclose the impacts of element diffusion on the bond breakage of cemented carbide cutters, this paper establishes a rational microscale model of cemented carbide and 2.25Cr1Mo0.25V in light of the working conditions of the cutting process. Drawing on molecular dynamics (MD) theory, an MD simulation was carried out to set up a model of the deterioration layer of the cemented carbide cutter. The author then computed the bonding energy and interaction parameters between WC and other elements (e.g. Co, Fe, Cr, Mo and V). The results show that element diffusion occurs between cemented carbide and 2.25Cr1Mo0.25V within the temperature range of bond failure; that, in this temperature range, WC can form a stable alloy with proper concentrations of Co and of the elements in 2.25Cr1Mo0.25V; and that Fe is the leading factor influencing the bonding ability of the WC particles in the cemented carbide deterioration layer.

Keywords: cemented carbide, 2.25Cr1Mo0.25V, molecular dynamics (MD), element diffusion, bond breakage, bonding energy

1. Introduction
2.25Cr1Mo0.25V is an H2S-resistant steel widely used in fields such as the petroleum and chemical industries, thanks to its excellent high-temperature physical and chemical properties. During the machining of 2.25Cr1Mo0.25V heavy forging parts, carbide cutting tools are prone to breakage. There are many causes of such failure, among which bond breakage is particularly remarkable: the continuous cutting carries material away from the tool rake face, resulting in tool failure. In fact, bond breakage is commonplace in the machining of many popular iron-carbon alloys, nickel-based alloys and titanium alloys. After cutting stainless steel and other materials with cemented carbide tools, some scholars pointed out that the diffusion of congener elements bonds the chips to the rake face, and that the loss of tungsten reduces the hardness of the tool material, leading to bond breakage [1-4]; they also conducted a series of diffusion experiments, established a theoretical model of element diffusion in the tool, and determined the damage threshold. Through the cutting of 508III steel with cemented carbide tools, Cheng et al. [5] identified the bonding layer formed on the tool-chip interface, under the action of the coupled force-thermal field, as the fundamental cause of bond breakage, and concluded that the bond breakage of cemented carbide tools is a shared problem in the cutting of various chip materials. Eiji [6] explored the role of pressure in the adhesion phenomenon, suggesting that a large normal pressure can bring two solid materials to within atomic-scale distance and thus bind the two interfaces. Focusing on the high-speed machining of Ti-6Al-4V, Zhang et al. [7] attributed the element diffusion to the high temperature and concentration gradient on the tool-chip interface, and advised extending tool life by controlling the diffusion of chip elements towards the tool. Targeting low-temperature nitriding iron-based alloys, Tong et al.
[8] argued that the grain boundaries of a material serve as fast diffusion channels for atoms, and that element diffusion may produce compounds of varied compositions or phases under certain conditions. Shenouda et al. [9] found that the Ni2Si coating and the Si (100) substrate interdiffused through the grain boundary at low temperature (453~473 K), forming a NiSi phase. This finding reveals that element diffusion can create new compounds or phases, which differ from the original compounds or phases in physical and chemical properties. Christensen et al. [10] carried out a first-principles study on the bonding strength of cemented carbide materials, and drew the following conclusion: WC-Co enjoys a better bonding strength than TiC-Co because of its edge in bonding energy, as high bonding energy implies strong load-transfer ability, high hardness and good strength. This conclusion sheds new light on the microscopic research of bond breakage [11]. Despite these fruitful results, previous studies on the bond breakage, bonding energy and element diffusion of the tool-chip surface concentrate only on the macroscopic analysis of dynamic cutting under the continuum hypothesis, failing to examine the cutting process from the microscopic perspective. As a matter of fact, a micro mechanism is involved in the dynamic process of diffusion and in the property changes of tool materials after diffusion. On the microscale, atoms and molecules are discrete, and their diffusion cannot be fully explained by existing macroscopic theories. This calls for new theories that can accurately describe element diffusion and bonding energy on the microscale [12] and associate the descriptions with the dynamic cutting process. In light of element diffusion, this paper sets up a rational microscale model of cemented carbide and 2.25Cr1Mo0.25V based on the specific working conditions of the cutting process. Inspired by molecular dynamics (MD) theory, the deterioration layer model of cemented carbide was obtained through MD simulation; the bonding energy and interaction parameters between WC and Co, Fe, Cr, Mo, V and other elements were calculated, and the influence of element diffusion on bond breakage was discussed from the microscopic perspective under the specific working conditions.

2. MD Simulation
2.1 Establishment of simulation model
(1) Microscale model of 2.25Cr1Mo0.25V
The microscale model of 2.25Cr1Mo0.25V was constructed by Haile's element method [13]. Specifically, the pure iron cell consists of two Fe atoms with lattice parameters of 2.8644×2.8644×2.8644 Å and a space group of Im-3m. Based on the lattice parameters of pure iron, the Fe cell was established and subjected to proportional element replacement, forming a 5×5×5 Fe supercell. The mass fractions of the main elements in 2.25Cr1Mo0.25V are shown in Table 1. The atomic ratio of each main element in the supercell was obtained from the molar mass of each main element's atom, and atoms in the original pure iron system were replaced according to these atomic ratios. The element replacement upset the equilibrium of the original pure iron system and, by itself, could not reproduce the random distribution of the elements in the 2.25Cr1Mo0.25V alloy. Therefore, the replaced Fe supercell was heated to 3,000 K, so that the alloy was fully melted.

Table 1. Mass fractions (wt %) of the main elements in 2.25Cr1Mo0.25V
Figure 1. Establishment of the microscale model of 2.25Cr1Mo0.25V

With the aid of a Nosé–Hoover thermostat, the simulation was carried out in a universal force field (UFF), and the temperature (3,000 K) was maintained for 50 ps. As shown in Figure 1, under the action of the high temperature each atom of the supercell left its original position and entered into irregular motion. Finally, the system temperature was reduced to 298 K. After the simulation, the author obtained a microscale model of 2.25Cr1Mo0.25V with randomly arranged metal atoms. During the cooling process, the density of the 2.25Cr1Mo0.25V gradually increased, while the system energy declined gradually before reaching an equilibrium.

(2) Microscale model of cemented carbide
Cemented carbide is produced from a hard phase (e.g. WC, TiC and TaC) and a binder phase (Co and Mo) by powder metallurgy. Our research targets the K20 cemented carbide tool, which consists of 91 % WC, 1 % NbC, and 8 % Co. For simplicity, this paper mainly discusses the WC in the K20 cemented carbide tool, taking Co as the binder. The carbon and tungsten atoms in WC form a close-packed hexagonal structure with a space group of P-6m2. The C atom occupies the interior site with fractional coordinates (1/3, 2/3, 1/2), and the lattice parameters are a = b = 2.906 Å and c = 2.837 Å. Thus, a WC cell was created based on the said space group and lattice parameters, and a WC supercell was established through proportional element replacement. Then, the atomic ratio of WC-Co was determined according to the mass fraction of Co in the cemented carbide and the molar masses of the W, Co and C atoms, and Co atoms were doped into the WC supercell. Since the system became unstable after the addition of the Co atoms, it was necessary to determine the optimal occupancy of the Co atoms in the WC supercell (Figure 2). Before exploring element diffusion, a cemented carbide diffusion model suitable for MD simulation had to be available. Limited by the production technology, many vacancy defects may occur in the materials of a cemented carbide tool. To disclose how these defects affect the diffusion of cemented carbide, a cemented carbide model with a vacancy concentration of 0.2 was created based on the optimized cemented carbide supercell. However, the diffusion layer could not be established directly, because WC has a hexagonal structure while the microscale model of 2.25Cr1Mo0.25V is cubic. Hence, the cemented carbide crystal was cut along the (-1 1 0) direction to create a cleaved surface. The resulting microscale model of cemented carbide is illustrated in Figure 3.

Figure 2. Microscale model of the original cemented carbide
Figure 3. Microscale model of cemented carbide

(3) Microscale model of diffusion layer
1) Model establishment. The microscale model of the diffusion layer was established from the microscale models of the 2.25Cr1Mo0.25V and of the cemented carbide with a cleaved surface, under the following assumptions: the 2.25Cr1Mo0.25V and the cemented carbide share the same lattice with periodic boundary conditions during the MD simulation; the total volume of the diffusion layer remains constant in the diffusion process; the system and the external environment exchange energy but not material [14]. The established microscale model of the diffusion layer is presented in Figure 4.

Figure 4. Microscale model of the diffusion layer
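To make the element-replacement step above concrete, here is a minimal sketch of how such a substitutional supercell could be built. This is a hypothetical illustration using the open-source ASE library rather than the authors' Materials Studio workflow, and the mass fractions are the nominal values implied by the grade name 2.25Cr1Mo0.25V (2.25 wt% Cr, 1 wt% Mo, 0.25 wt% V), assumed here in place of Table 1.

```python
# Hypothetical sketch of the element-replacement step (my illustration using the
# open-source ASE library, not the authors' Materials Studio workflow). The mass
# fractions are the nominal values implied by the grade name 2.25Cr1Mo0.25V
# (2.25 wt% Cr, 1 wt% Mo, 0.25 wt% V), assumed here in place of Table 1.
import random
from ase.build import bulk

wt = {'Cr': 0.0225, 'Mo': 0.0100, 'V': 0.0025}            # nominal mass fractions
M = {'Fe': 55.85, 'Cr': 52.00, 'Mo': 95.95, 'V': 50.94}   # molar masses, g/mol

# Convert mass fractions to atomic fractions (Fe is the balance).
moles = {el: w / M[el] for el, w in wt.items()}
moles['Fe'] = (1.0 - sum(wt.values())) / M['Fe']
total = sum(moles.values())
at_frac = {el: x / total for el, x in moles.items()}

# Conventional 2-atom bcc Fe cell (a = 2.8644 A), expanded to a 5x5x5 supercell.
atoms = bulk('Fe', 'bcc', a=2.8644, cubic=True) * (5, 5, 5)

# Randomly substitute Fe -> Cr/Mo/V according to the atomic fractions.
indices = list(range(len(atoms)))
random.shuffle(indices)
for el in ('Cr', 'Mo', 'V'):
    for _ in range(round(at_frac[el] * len(atoms))):
        atoms[indices.pop()].symbol = el

print({el: list(atoms.symbols).count(el) for el in set(atoms.symbols)})
```

In ASE the melt-and-quench stage described above would then be run with a thermostatted dynamics driver on this object; that part is omitted here since it depends on the chosen force field.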
2) Temperature boundary condition. A cutting experiment was carried out on a CA6140 ordinary lathe to determine the temperature range of bond breakage, that is, the temperature boundary condition of the MD simulation. The temperature of the tool-chip interface was collected in real time by a ThermoVision A40M infrared thermal imager, and the experimental process was filmed continuously by a high-speed camera at 1,000 fps. When bond breakage took place, the broken blade was removed, the breakage profile of the tool was observed with a super-depth microscope and a scanning electron microscope (SEM), and the tool-chip interface was analyzed with an energy spectrum analyzer. The experiment platform is illustrated in Figure 5.

Figure 5. Experiment platform

After a long period of cutting, obvious bond breakage was observed on the tool-chip interface. The super-depth microscope was then used to observe the bond breakage morphology of the blade after cutting. The observed morphology is displayed in Figure 6.

Figure 6. Bond breakage morphology on the tool-chip interface

The experiment was carried out by single-factor analysis: the cutting speed was kept at v = 60 m/min and the feed was set to f = 0.15 mm/r, while the back-cutting depth was taken as the variable for studying, with the infrared thermal imager, the cutting temperature under different cutting parameters. The temperature curves under different back-cutting depths are shown in Figure 7.

Figure 7. Temperature curves under different back-cutting depths

2.2 MD Simulation
(1) Diffusion simulation
MD simulation is a numerical integration method based on atomic potential functions. It has been widely adopted to study the deformation behavior of materials under various actions (e.g. tension, compression, bending, twisting and nanofabrication) [15-18]. The method is hailed as a bridge between the macroscale and the microscale, because of its ability to accurately describe microscale features and to reproduce the macroscale performance of complex condensed-matter systems. Popular MD software includes LAMMPS and the Forcite module of the Materials Studio package; the latter was adopted for our MD simulation. According to Fick's law of diffusion and previous studies [7], the diffusion of solids is mainly affected by temperature and concentration gradients. Therefore, in our simulation, the effect of cutting temperature on element diffusion was studied with temperature as the main variable. Three temperatures, namely 800 K, 1,000 K and 1,286 K, were selected from the measured temperature range for the simulation. The simulation lasted 50 ps in a UFF with a time step of 1 fs, and the temperature was controlled by a Nosé–Hoover thermostat. The diffusion model is shown in Figure 8 and the diffusion directions are recorded in Figure 9. It can be seen that different degrees of diffusion occurred between the cemented carbide and the 2.25Cr1Mo0.25V under different temperatures. Some Fe, Mo, V and Cr atoms entered the interior of the cemented carbide crystal, while the cemented carbide lost some of its W and Co atoms to the 2.25Cr1Mo0.25V. The diffusion trends agree well with those in the energy spectrum image (Figure 10).

Figure 8. MD simulation model
Figure 9. Schematic diagram of the diffusion directions
Figure 10. Microscale diffusion and energy spectrum of atoms on the tool-chip interface

(2) Bonding energy simulation
The bonding strength of a crystal depends on the interactions between its particles, i.e. the bonding force of the chemical bonds between atoms.
The intensity of this interaction can be measured by the bonding energy. In essence, the bonding energy is the energy released by free microscopic particles when they combine into a crystal, which equals the energy required to break the crystal apart. Previous studies [11] have demonstrated the major impact of bonding energy on the mechanical properties of a material. Through the cutting experiment and the MD simulation, it was learned that, at the bond breakage temperature, element diffusion occurred on the interface between the 2.25Cr1Mo0.25V and the cemented carbide; some Fe, Cr, Mo and V atoms entered the cemented carbide. In the diffusion layer, some W atoms were lost, and the bonding form between the atoms in the layer changed. Considering the above, the author decided to study the post-diffusion interactions between the particles in the deterioration layer of the cemented carbide from the perspective of bonding energy. The grains of each element were simplified into atomic groups, creating atomic groups of the Fe, Co, Cr, Mo and V elements with a diameter of 10 Å (Figure 11).

Figure 11. Atomic groups

The bonding energies and interaction parameters between particles were calculated with the Blends atomistic simulation module, which is based on the Flory-Huggins model [19]. In this module, the bonding energy is computed using the configurations of the paired data points, the relative positions, and the excluded-volume constraint method; the results are Boltzmann-averaged before the temperature-dependent interaction parameters are determined. During the simulation, the WC atomic group was defined as the base, and the other atomic groups were defined as screens. The other simulation parameters are as follows: the force field was set to UFF, the number of energy samples was 10^7, the energy bin spacing was 0.02 kcal/mol, and the temperature range was 300~1,286 K.

3. Simulation Results and Discussion
3.1 Discussion of diffusion features
Wang et al. [20] suggested that diffusion is the only transport mechanism in solid materials. When two chemically intimate solids (e.g. iron and carbon) are put together and heated to a certain temperature, their atoms mutually diffuse to form a new alloy. Once solid diffusion occurs, the atoms move in a direction that reduces the concentration difference, that is, down the energy gradient [21]. This is because the atoms, under the action of various factors, tend to leave their initial equilibrium positions in search of new equilibrium positions. The degree of diffusion must be quantified to accurately measure the effect of temperature on diffusion. Thus, the diffusion coefficient (D) was introduced, which describes how fast solute atoms diffuse in the solvent. The mean square displacement (MSD) of each atom in the system was computed by the Forcite module. Taking the MSD as an intermediate variable, the D of each atom can be derived by the Einstein method [22]. The MSD is calculated as:

$\operatorname{MSD}(t)=\frac{1}{n} \sum_{i=1}^{n}\left\langle\left|r_{i}(t)-r_{i}(0)\right|^{2}\right\rangle$ (1)

where $r_i(t)$ is the position of atom $i$ at time $t$ and $n$ is the number of atoms in the system. The D can then be expressed as:

$D=\frac{1}{6} \frac{\partial}{\partial t}\left[\frac{1}{n} \sum_{i=1}^{n}\left\langle\left|r_{i}(t)-r_{i}(0)\right|^{2}\right\rangle\right]$ (2)

For a linear MSD-time curve this is usually simplified to:

$D=\frac{1}{6}\,\frac{\operatorname{MSD}(t)}{t}$ (3)
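A minimal sketch of how equations (1)-(3) are applied in practice (assumed data layout; the authors performed the fit in Matlab, as the next paragraph notes): fit a straight line to the diffusive part of an MSD(t) trace and take one sixth of the slope.

```python
# Minimal sketch of the Einstein-relation fit behind equations (1)-(3) (assumed
# data layout; the authors performed the fit in Matlab). D is one sixth of the
# slope of the MSD-time curve, with the early non-Einstein region discarded.
import numpy as np

def diffusion_coefficient(t_ps, msd_A2, skip=0.2):
    """t_ps: time (ps); msd_A2: MSD (A^2); skip: fraction of the early
    (ballistic, non-Einstein) trajectory to exclude from the linear fit."""
    i0 = int(len(t_ps) * skip)
    slope, _ = np.polyfit(t_ps[i0:], msd_A2[i0:], 1)  # slope in A^2/ps
    return slope / 6.0                                # D in A^2/ps

# Synthetic check: a trace generated with D = 0.05 A^2/ps is recovered.
t = np.linspace(0.0, 50.0, 500)
msd = 6.0 * 0.05 * t + np.random.normal(0.0, 0.1, t.size)
print(f"D = {diffusion_coefficient(t, msd):.3f} A^2/ps")
```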
Figure 12. MSD-time curves of the main elements

The atom positions at each time point were fitted in Matlab, yielding the MSD-time curves of the main elements in the cemented carbide and the 2.25Cr1Mo0.25V (Figure 12). To ensure the accuracy of D, the non-Einstein diffusion region was removed during the fitting of the MSD curve. The fitted MSD value of each element was substituted into equation (3) to obtain the D value of each main element at the corresponding temperature. The specific values are listed in Table 2.

Table 2. The D value of each main element (Å²/ps) at each temperature T (K)

As shown in Table 2, the D value is positively correlated with the diffusion ability of an atom. At each temperature, the 2.25Cr1Mo0.25V atoms moving towards the cemented carbide had greater D values than the cemented carbide atoms moving towards the 2.25Cr1Mo0.25V. This means each 2.25Cr1Mo0.25V species has greater diffusion ability than its counterpart in the cemented carbide. From Figure 12 and Table 2, it can be seen that the degree of diffusion increased with temperature and peaked at 1,286 K. This trend is consistent with Fick's law of diffusion.

Figure 13. Tool-chip interface

Eiji explained that the role of pressure in the diffusion process is to press the two materials to within the range of atomic forces [6]. As shown in Figure 13, the interface between the cemented carbide and the 2.25Cr1Mo0.25V is rough during machining. On the microscale, protrusions and gullies spread across the surfaces of the two metals. Under ordinary conditions, it is difficult for atoms on either side of the interface to enter the range of atomic forces; in other words, the conditions for diffusion are hard to achieve. During the cutting process, however, the cutting force is sufficiently large that the two metal surfaces come into close contact, and many atoms can enter the range of atomic forces. From the microscopic angle, the high pressure of the cutting process only ensures the close contact of the two atomic layers. In the simulation, the distance between the two materials was on the angstrom level, which is well within the range of atomic forces. Thus, pressure is not an important influencing factor of solid diffusion, and it is neglected in our discussion. As mentioned above, temperature and concentration gradients are the two leading factors affecting diffusion. In our research, the MD simulation lasted only 50 ps, and the concentration of opposite-side atoms was zero in each of the microscale models of the cemented carbide and the 2.25Cr1Mo0.25V; thus there existed a concentration gradient to initiate the diffusion. As a result, the effect of temperature on diffusion is discussed in detail below. The expression for D can be rewritten as [23]:

$D=\alpha^{2} P \Gamma$ (4)

where $\alpha^2$ is the square of the atomic jump distance, $P$ is the probability of atomic diffusion, and $\Gamma$ is the frequency of atomic jumps. Unlike diffusion in liquids and gases, the diffusion of elements in solids has to overcome the large bonding energy, i.e. the energy barrier; that is why solid diffusion is very slow under normal conditions. Fortunately, the high temperature of the cutting process creates favorable conditions for solid-state element diffusion. In the cutting environment, the atomic diffusion distance hinges on the temperature and on the microscopic mechanism of atomic diffusion. Regarding the effect of temperature, the total energy of the system increases with temperature, making it easier to reach the activation energy for the diffusion of each element. Besides, the vacancy concentration is high and the thermal motion of atoms is vigorous at high temperatures; an atom can then easily enter the interaction range of other atoms through crystal gaps and vacancies, which promotes its diffusion. Diffusion is slow if the temperature is below the recrystallization temperature of the material, and much faster otherwise. For an industrially pure metal, the recrystallization temperature is 35 %~45 % of its melting point; for an alloy, it is 40 %~90 % of the melting point. As the main component of cemented carbide, WC starts to melt at 3,143 K, so the recrystallization temperature of WC falls in the range 1,421~2,856 K. The temperature measured in the cutting experiment was 800~1,286 K, which fails to reach the recrystallization temperature of WC but falls within the recrystallization temperature range of 2.25Cr1Mo0.25V. As a result, the 2.25Cr1Mo0.25V atoms diffused much faster than the W and Co atoms in the cemented carbide.
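The steep temperature dependence described here can be illustrated with the standard Arrhenius form of the diffusion coefficient. The sketch below uses assumed, order-of-magnitude values for the pre-exponential factor and activation energy, not values from the paper.

```python
# Illustrative sketch of the steep temperature dependence discussed above, using
# the standard Arrhenius form D = D0 * exp(-Q/(R*T)). D0 and Q are assumed,
# order-of-magnitude placeholders, not values from the paper.
import math

R = 8.314        # gas constant, J/(mol K)
D0 = 1.0e-5      # assumed pre-exponential factor, m^2/s
Q = 250.0e3      # assumed activation energy for substitutional diffusion, J/mol

for T in (800, 1000, 1286):  # the three simulated temperatures
    D = D0 * math.exp(-Q / (R * T))
    print(f"T = {T:4d} K  ->  D ~ {D:.2e} m^2/s")
```

With these placeholder parameters, D grows by roughly five orders of magnitude between 800 K and 1,286 K, which is the qualitative behavior the paragraph above attributes to the cutting temperature.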
Besides, the vacancy concentration is high and the thermal motion of atoms is vigorous at high temperatures. In this case, an atom can easily enter the range of interaction of other atoms along crystal gaps and vacancies. The diffusion behavior of the atom is thus promoted. The diffusion speed is slow if the temperature is below the atomic recrystallization temperature, and much faster otherwise. For industrially pure metals, the recrystallization temperature is 35 %~45 % of the melting point. For alloys, the recrystallization temperature is 40 %~90 % of the melting point. As the main component of cemented carbide, WC starts to melt at about 3,143 K. Thus, the recrystallization temperature of WC falls in 1,421~2,856 K. The temperature measured in the cutting experiment was 800~1,286 K, failing to reach the recrystallization temperature of WC. However, the measured temperature fell in the recrystallization temperature range of 2.25Cr1Mo0.25V. As a result, the 2.25Cr1Mo0.25V atoms diffused much faster than the W atoms and Co atoms in the cemented carbide. The microscopic mechanism of atomic diffusion directly bears on the energy needed to activate atomic diffusion. The atoms at vacancies are the easiest to diffuse because they have the highest potential energy. Vacancy diffusion is the most common diffusion mechanism, followed by interstitial diffusion and quasi-interstitial diffusion. By contrast, the atoms at lattice positions have the lowest potential energy. Thus, a huge amount of energy is needed to activate translocation diffusion. As mentioned above, a vacancy concentration of 0.2 was set for the cemented carbide diffusion model. Hence, the atomic diffusion of 2.25Cr1Mo0.25V mainly followed the vacancy diffusion mechanism. With the growth of temperature, the atoms were more likely to surpass the energy barrier, and other diffusion mechanisms gradually emerged. As the cutting progressed, numerous Fe, Cr, Mo and V atoms entered the diffusion deterioration layer, and a large amount of W atoms were lost, leading to changes in the tool material composition. Through this process, the tool material deteriorated and its mechanical strength declined gradually. The bond breakage occurred when the strength dropped to a certain threshold (Figure 14). Figure 14. Diffusion deterioration layer 3.2 Discussion of Blends features The mixing and separation between binary systems in the Blends module can be calculated by the Flory-Huggins model [19]: $\frac{\Delta G}{R T}=\frac{\phi_{b}}{n_{b}} \ln \phi_{b}+\frac{\phi_{s}}{n_{s}} \ln \phi_{s}+\chi \phi_{b} \phi_{s}$ (5) where, $\Delta G$ is the free energy of mixing per mole of microscopic particles; $\phi_i$ is the volume fraction of the $i$-th component; $n_i$ is the degree of polymerization (chain length) of the $i$-th component; $\chi$ (chi) is the interaction parameter. The first two terms represent the mixing entropy and the last term reflects the interaction free energy. In the diffusion deterioration layer of cemented carbide, WC and such elemental metals as Co, Fe and Cr appear as a solid solution. It is necessary to evaluate the solid solubility of each element before determining the structural stability of the diffusion deterioration layer. Here, the solid solubility is explored using the interaction parameter $\chi$ below: $\chi=\frac{E_{\operatorname{mix}}}{R T}$ (6) where, $E_{\rm mix}$ is the mixing energy; $R$ is the gas constant; $T$ is the absolute temperature. If $\chi$ is negative or small, the two particles have good miscibility at the current temperature. Otherwise, the two particles tend to separate.
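As a minimal numerical sketch of the criterion in equation (6), the Python snippet below evaluates $\chi=E_{\rm mix}/RT$ over the cutting-temperature range and applies a miscibility threshold. The mixing energy is a made-up placeholder, not a value from Table 3, and the critical value $\chi_c=2$ used in the comment is the Flory-Huggins result for $n_b=n_s=1$ in equation (5); the true threshold depends on those parameters.

```python
R_KCAL = 1.987e-3  # gas constant in kcal/(mol K), consistent with kcal/mol energies

def chi(e_mix, temperature):
    """Flory-Huggins interaction parameter, equation (6): chi = E_mix / (R T)."""
    return e_mix / (R_KCAL * temperature)

# Hypothetical mixing energy for one base-screen pair (kcal/mol); not from Table 3.
e_mix = 2.5
# For n_b = n_s = 1 in equation (5) the critical value is chi_c = 2; chi below
# chi_c (or negative) indicates miscibility, above it phase separation.
chi_c = 2.0
for T in (298.0, 800.0, 1286.0):
    c = chi(e_mix, T)
    verdict = "miscible" if c < chi_c else "tends to separate"
    print(f"T = {T:6.0f} K   chi = {c:6.3f}   {verdict}")
```

Run over increasing temperatures, χ decreases toward zero, reproducing qualitatively the downward trend of the χ curves discussed below.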
If $\chi$ is sufficiently large, the interaction free energy will overcome the mixing entropy and split the mixture into two separate phases. As shown in Figure 15 below, when the cutting temperature was on the rise, the $\chi$ values of all atomic groups exhibited a downward trend and eventually tended to zero. This trend shows that the mutual solid solubility of WC (base) and Co, Cr, Fe, Mo and V (screens) increases with the cutting temperature. Within the range of the cutting temperature, WC can form a stable solid solution alloy with proper concentrations of screen elements. With the decline in the cutting temperature, the mutual solubility of WC and all screen elements was reduced, and the particles were more likely to separate, adding to the difficulty of forming an alloy. It can be seen from Figure 15(b) and Table 3 that, at the normal temperature of 298 K, the $\chi$ values of WC and Mo were lower than those of the other elements, and decreased with the increase of temperature. In terms of solid solubility, WC has similar solubilities in Cr, V and Co, and an extremely high solubility in Mo. Thus, WC and Co can easily form a stable solid solution. Considering the low presence of Mo in the 2.25Cr1Mo0.25V alloy, the diffusion of the Mo, Cr and V elements was not viewed as the main cause of bond breakage. This simulation result is consistent with the findings of previous research: when the cemented carbide is sintered, WC is less soluble in Fe than in Co, and the mushy zone (the range of C content) is narrow in the WC-Fe system, making it difficult to obtain a solid solution two-phase alloy of WC-Fe [24]. Comparing Figure 15(c) and Table 3, it is observed that the $\chi$ curve of WC-Co was only slightly higher than that of WC-Fe. Co and Fe are congeners. With rising temperature, WC becomes more and more soluble in Co than in Fe. Thus, it can be concluded that WC-Fe and WC-Co have similar ranges of bond breakage temperature, and the small difference is negatively correlated with temperature. Under element diffusion, WC is partially combined with Fe into a WC-Fe alloy. However, this alloy may produce the η phase ($\rm Fe_xW_xC_x$) due to carbon deficiency during metallurgy. The η phase exists as a malignant defect in the structure of cemented carbide, and seriously undermines the physical and chemical properties of the alloy. The η phase is often suppressed by carbon black, which is not available in the random diffusion process of our simulation. Thus, the physical and chemical properties of the tool material continued to worsen in the cutting process. When the deterioration reached a certain threshold, the tool material broke down and was carried away from the tool surface with the chips. Figure 15. Interaction parameter χ Table 3. Results of Blends simulation: χ (298 K), Emix (298 K), Ebs avg (298 K) 3.3 Discussion of bonding energy The strength of the bonding ability between crystals can be described by the bonding energy, which is defined as the difference between the total energy of a crystal in its steady state and that of its N free atoms. The bonding energy can also be understood as the energy released when a free atom (ion or molecule) is combined into a crystal, or the energy required to split a free atom off a crystal. The bonding energy between the atomic group of WC and that of any screen element helps to judge the fracture of cemented carbide under the cutting force and cutting vibration. Bonding energy can quantify the strength of interface bonding.
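This interface bonding strength reduces to one arithmetic step once the total energies of the interface, of the two free surfaces, and the interface area are known, as formalized in equation (7) at the start of the next paragraph. A hedged Python sketch, with made-up energy values since the paper's raw energies are not listed:

```python
def work_of_adhesion(e_interface, e_surface_a, e_surface_b, area):
    """Interface bonding strength per unit area, cf. equation (7) below:
    W = (E_interface - E_a - E_b) / A.
    A negative W means that forming the interface releases energy; its
    magnitude measures how strongly the interface holds together."""
    return (e_interface - e_surface_a - e_surface_b) / area

# Illustrative (made-up) energies in kcal/mol and an interface area in A^2.
W = work_of_adhesion(e_interface=-1250.0, e_surface_a=-600.0,
                     e_surface_b=-610.0, area=100.0)
print(f"W = {W:.2f} kcal/(mol*A^2)")   # -0.40 here: bonding releases energy
```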
The higher the bonding energy, the more energy is needed to destroy the interface. The strength of interface bonding can be expressed as: $W=\left(E_{\text {interface }}-E_{\mathrm{a}}-E_{\mathrm{b}}\right) / A$ (7) where, $E_{\rm interface}$ is the total energy of the interface; $E_{\rm a}$ is the energy of surface a; $E_{\rm b}$ is the energy of surface b; $A$ is the area of the interface. The bonding energies were calculated by the MS software. Note that the negative sign means the bonding between the two particles releases energy, and the absolute value stands for the level of bonding energy. Table 3 shows that the bonding between WC and Mo was stronger than that between WC and any other screen element. As stated above, given the low content of Mo in the 2.25Cr1Mo0.25V, it is not an important cause of bond breakage. The bonding energy of WC-Co was slightly higher than that of WC-Fe. From the simulation results and the χ value analysis, it is known that Fe is the key element in the diffusion of cemented carbide. With the lowest bonding energy at normal temperature, the WC-Fe interface was the weakest and the most likely to break down in our research. Under the same concentration, the bonding strength of WC-Co alloy is 1.5~2 times that of WC-Fe alloy. During the smelting of cemented carbide, a small amount of Cu is often added to enhance the performance of WC-Fe-based alloys. Even so, the bonding strength of WC-Fe-based alloys is only 60 %~70 % of that of WC-Co alloys. The diffusion deterioration layer of cemented carbide had even weaker mechanical properties and bonding ability than WC-Fe bonding, as multiple elements combined and multiple phases formed under the random diffusion in the cutting process. As a result, the material on the rake face of the cutting tool witnessed a decline in mechanical properties. As shown in Figure 16, unstable intermetallic compounds dissolved and formed in the range of the cutting temperature. If the cutting is interrupted, the changing temperature will break the WC-Fe and other elemental bonds, creating local microcracks in the unstable solid solutions. The microcracks will develop between the tool materials, and eventually break the bonds between WC and the other elements in the deterioration layer. The broken parts will leave the rake face of the cutting tool with the chips. Figure 16. Diffusion deterioration layer of the bonding interface This paper carries out a cutting experiment and an MD simulation of the cutting of 2.25Cr1Mo0.25V H2S-resistant steel with a carbide cutter, aiming to identify the causes of the bond breakage on the rake face of the cutter. Through the results analysis, the following conclusions were put forward: (1) The occurrence of bond breakage was confirmed by the cutting experiment. It was also confirmed that element diffusion will happen between cemented carbide and 2.25Cr1Mo0.25V in the temperature range of the bond failure. With a relatively low diffusion activation energy, 2.25Cr1Mo0.25V diffuses into the cemented carbide more readily than the reverse. As the W element is gradually lost in the cutting process, the mechanical properties of the cemented carbide continue to decline. The bond breakage will occur when the decline reaches a certain threshold. (2) Through the Blends molecular simulation, it is concluded that the solid solubilities of WC with Co, Fe, Cr, Mo and V all increase gradually with rising temperature. In the temperature range of the bond failure, WC can form a stable alloy with proper concentrations of Co and other elements in the 2.25Cr1Mo0.25V.
However, it is difficult to form a stable solid solution with Fe. (3) In the temperature range of the bond failure, the bonding energy between Fe atoms and WC is weaker than that between the other elements and WC. As a key element in 2.25Cr1Mo0.25V, Fe is the leading influencing factor on the bonding ability of WC particles in the cemented carbide deterioration layer. With the increased entry of Fe into the deterioration layer and the gradual loss of the W element, the bonding ability of the material in the cemented carbide diffusion deterioration layer continues to decrease, leading to poor mechanical properties. In this case, the tool material is prone to bond breakage under the strong cutting force, temperature changes and vibration impacts of heavy-duty intermittent cutting. This work was supported by the National Natural Science Foundation of China (Grant No. 51575146) and the 2017 open fund project of the CAD/CAM University Engineering Research Center in Fujian Province, China (Grant No. K201709). The authors are grateful to everyone who contributed to this research, including the technicians who helped to implement the various experiments and analyses.
[1] Zheng, M.L., Chen, J.G., Li, Z., Zhang, W., Li, P.F., Xie, H.H. (2018). Experimental study on elements diffusion of carbide tool rake face in turning stainless steel. Journal of Advanced Mechanical Design, Systems, and Manufacturing, 12(4): 1-12. https://doi.org/10.1299/jamdsm.2018jamdsm0085
[2] Chen, J.G., Zheng, M.L., Zhang, W., Sun, Y.S., Tang, Q.H. (2018). Research on the theoretical model of the rake face wear of carbide cutting tool. The International Journal of Advanced Manufacturing Technology, 98(1-4): 421-429. https://doi.org/10.1007/s00170-018-2303-4
[3] Murugan, S.S. (2018). Processing and characterisation of LM30 alloy + graphite reinforced composite through gravity and centrifugal casting. Annales de Chimie - Science des Matériaux, 42(4): 555-564. https://doi.org/10.3166/ACSM.42.555-564
[4] Chen, J.G., Zheng, M.L., Li, P.F., Feng, J., Sun, Y.S. (2017). Experimental study and simulation on the chip sticking–welding of the carbide cutter's rake face. International Journal on Interactive Design and Manufacturing, 12(4): 1309-1319. https://doi.org/10.1007/s12008-017-0403-2
[5] Cheng, Y.N., Liu, L., Lu, Z.Z., Guan, R., Wang, T. (2015). Study on the adhering failure mechanism of cemented carbide inserts and element diffusion model during the heavy-duty cutting of water chamber head. The International Journal of Advanced Manufacturing Technology, 80: 1833-1842. https://doi.org/10.1007/s00170-015-7166-3
[6] Eiji, U. (1982). The study of cutting and grinding. Mechanical Industry Press, Beijing.
[7] Zhang, S., Li, J.F., Deng, J.X., Li, Y.S. (2009). Investigation on diffusion wear during high-speed machining Ti-6Al-4V alloy with straight tungsten carbide tools. The International Journal of Advanced Manufacturing Technology, 44(1-2): 17-25. https://doi.org/10.1007/s00170-008-1803-z
[8] Tong, W.P., Tao, N.R., Wang, Z.B., Lu, J., Lu, K. (2003). Nitriding iron at lower temperatures. Science, 299(5607): 686-688. https://doi.org/10.1126/science.1080216
[9] Shenouda, S.S., Langer, G.A., Katona, G.L., Daróczi, L., Csik, A., Beke, D.L. (2014). Production of NiSi phase by grain boundary diffusion induced solid state reaction between Ni2Si and Si(100) substrate. Applied Surface Science, 320: 627-633. https://doi.org/10.1016/j.apsusc.2014.09.071
[10] Christensen, M., Dudiy, S., Wahnström, G. (2002). First-principles simulations of metal-ceramic interface adhesion: Co/WC versus Co/TiC. Physical Review B, 65(4): 045408. https://doi.org/10.1103/PhysRevB.65.045408
[11] Feest, E.A. (1994). Interfacial phenomena in metal-matrix composites. Composites, 25(2): 75-86. https://doi.org/10.1016/0010-4361(94)90001-9
[12] Han, X.S. (2007). Investigation atomic level micromechanism about tool wear in the case of nanometric cutting using molecular dynamics method. Chinese Journal of Mechanical Engineering, 43(9): 107-112. http://dx.chinadoi.cn/10.3321/j.issn:0577-6686.2007.09.022
[13] Haile, J.M., Johnston, I., Mallinckrodt, A.J., McKay, S. (1993). Molecular dynamics simulation: elementary methods. Computers in Physics, 7(6): 625. https://doi.org/10.1063/1.4823234
[14] Jian, X. (2016). Atomistic simulation of the interaction of helium with dislocation in nickel. Master's thesis, University of Chinese Academy of Sciences, Beijing.
[15] Liu, B., Liu, G., Xiao, B., Yan, J. (2018). Molecularly imprinted electrochemical sensor for the determination of sulfamethoxazole. Journal of New Materials for Electrochemical Systems, 21(2): 77-80.
[16] Chen, S.D., Ke, F.J., Bai, Y.L. (2007). Atomistic investigation of the effects of temperature and surface roughness on diffusion bonding between Cu and Al. Acta Materialia, 55(9): 3169-3175. https://doi.org/10.1016/j.actamat.2006.12.040
[17] Saraev, D., Miller, R.E. (2006). Atomic-scale simulation of nanoindentation-induced plasticity in copper crystals with nanometer-sized nickel coatings. Acta Materialia, 54(1): 33-45. https://doi.org/10.1016/j.actamat.2005.08.030
[18] Silva, E.Z.D., Silva, A.J.R.D., Fazzio, A. (2004). Breaking of gold nanowires. Computational Materials Science, 30(1-2): 73-76. https://doi.org/10.1016/j.commatsci.2004.01.011
[19] Flory, P.J. (1953). Principles of Polymer Chemistry. Cornell University Press, Ithaca.
[20] Wang, Z.B., Tao, N.R., Tong, W.P., Lu, J., Lu, K. (2003). Diffusion of chromium in nanocrystalline iron produced by means of surface mechanical attrition treatment. Acta Materialia, 51(14): 4319-4329. https://doi.org/10.1016/s1359-6454(03)00260-x
[21] Xu, Y.Q. (2017). Research on Silicon Diffusion in Preparing Silicon Steel by CVD Method. East China University of Science and Technology, Shanghai.
[22] Allen, M.P., Tildesley, D.J. (1987). Computer Simulation of Liquids. Clarendon Press, Oxford.
[23] Shi, D.K. (1999). Foundation of Materials Science. Mechanical Industry Press, Beijing.
[24] Zhao, X.J. (2004). Microstructures and Properties of Fusion Welding Interface Between Cemented Carbides and Steels. Dalian Jiaotong University, Dalian.
The MAGPI survey: Science goals, design, observing strategy, early results and theoretical framework C. Foster, J. T. Mendel, C. D. P. Lagos, E. Wisnioski, T. Yuan, F. D'Eugenio, T. M. Barone, K. E. Harborne, S. P. Vaughan, F. Schulze, R.-S. Remus, A. Gupta, F. Collacchioni, D. J. Khim, P. Taylor, R. Bassett, S. M. Croom, R. M. McDermid, A. Poci, A. J. Battisti, J. Bland-Hawthorn, S. Bellstedt, M. Colless, L. J. M. Davies, C. Derkenne, S. Driver, A. Ferré-Mateu, D. B. Fisher, E. Gjergo, E. J. Johnston, A. Khalid, C. Kobayashi, S. Oh, Y. Peng, A. S. G. Robotham, P. Sharda, S. M. Sweet, E. N. Taylor, K.-V. H. Tran, J. W. Trayford, J. van de Sande, S. K. Yi, L. Zanisi Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021 Published online by Cambridge University Press: 26 July 2021, e031 We present an overview of the Middle Ages Galaxy Properties with Integral Field Spectroscopy (MAGPI) survey, a Large Program on the European Southern Observatory Very Large Telescope. MAGPI is designed to study the physical drivers of galaxy transformation at a lookback time of 3–4 Gyr, during which the dynamical, morphological, and chemical properties of galaxies are predicted to evolve significantly. The survey uses new medium-deep adaptive optics aided Multi-Unit Spectroscopic Explorer (MUSE) observations of fields selected from the Galaxy and Mass Assembly (GAMA) survey, providing a wealth of publicly available ancillary multi-wavelength data. With these data, MAGPI will map the kinematic and chemical properties of stars and ionised gas for a sample of 60 massive (${>}7 \times 10^{10} {\mathrm{M}}_\odot$) central galaxies at $0.25 < z < 0.35$ in a representative range of environments (isolated, groups and clusters). The spatial resolution delivered by MUSE with Ground Layer Adaptive Optics ($0.6-0.8$ arcsec FWHM) will facilitate a direct comparison with Integral Field Spectroscopy surveys of the nearby Universe, such as SAMI and MaNGA, and at higher redshifts using adaptive optics, for example, SINS. In addition to the primary (central) galaxy sample, MAGPI will deliver resolved and unresolved spectra for as many as 150 satellite galaxies at $0.25 < z < 0.35$, as well as hundreds of emission-line sources at $z < 6$. This paper outlines the science goals, survey design, and observing strategy of MAGPI. We also present a first look at the MAGPI data, and the theoretical framework to which MAGPI data will be compared using the current generation of cosmological hydrodynamical simulations including EAGLE, Magneticum, HORIZON-AGN, and Illustris-TNG. Our results show that cosmological hydrodynamical simulations make discrepant predictions in the spatially resolved properties of galaxies at $z\approx 0.3$. MAGPI observations will place new constraints and allow for tangible improvements in galaxy formation theory.

Farmers' Adoption Path of Precision Agriculture Technology N. J. Miller, T. W. Griffin, J. Bergtold, I. A. Ciampitti, A. Sharda Journal: Advances in Animal Biosciences / Volume 8 / Issue 2 / July 2017 Published online by Cambridge University Press: 01 June 2017, pp. 708-712 Print publication: July 2017 Precision agriculture technologies have been adopted individually and in bundles.
A sample of 348 Kansas Farm Management Association farm-level observations provides insight into technology adoption patterns of precision agriculture technologies. Estimated transition probabilities shed light on how adoption paths lead to bundling of technologies. Three information intensive technologies were assigned to one of eight possible bundles, and the sequence of adoption was examined using Markov transition processes. The probabilities that farms remain with the same bundle or transition to a different bundle by the next time period are reported. Farms with the complete bundle of all three technologies were likely to persist with their current technology.

Diamond deposition on Ni/Ni-diamond coated stainless steel substrate A. K. Sikder, T. Sharda, D. S. Misra, D. Chandrasekaram, P. Veluchamy, H. Minoura, P. Selvam Journal: Journal of Materials Research / Volume 14 / Issue 3 / March 1999 Published online by Cambridge University Press: 31 January 2011, pp. 1148-1152 Print publication: March 1999 Electrodeposited Ni and Ni-diamond composite layers were used as diffusion barriers for Fe to facilitate diamond growth on stainless steel substrates. Raman spectroscopy and scanning electron microscopy show the formation of good quality diamond crystallites by chemical vapor deposition. X-ray diffraction results indicate that expansion of the Ni unit cell has taken place due to the formation of the Ni–C solid solution. This observation is also well supported by X-ray photoelectron spectroscopy studies. The lattice constant of the expanded Ni unit cell matches closely with that of diamond, and this may be helpful in explaining the epitaxial growth of diamond on single-crystal Ni observed by others.
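As a toy illustration of the Markov transition idea in the precision-agriculture abstract above, the Python sketch below simulates bundle-to-bundle transitions among the 2^3 = 8 possible bundles of three technologies. The transition matrix is randomly generated for illustration only; it is not the estimated Kansas Farm Management Association matrix from the paper.

```python
import numpy as np

# Each bundle is a subset of three technologies {A, B, C}, giving 2**3 = 8
# states. The transition matrix P is made up for illustration; it is not the
# matrix estimated from the KFMA sample in the paper.
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(8), size=8)        # row-stochastic 8x8 matrix

state = 0                                    # start with no technology adopted
visits = np.zeros(8)
for _ in range(10_000):                      # simulate many yearly transitions
    state = rng.choice(8, p=P[state])
    visits[state] += 1

print("P(stay in same bundle), per state:", np.diag(P).round(2))
print("long-run occupancy estimate:", (visits / visits.sum()).round(3))
```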
Proving a function has a discontinuity at 0 but continuous elsewhere

Let $f(x) = \left\{\begin{array}{ll} x + 2 & -3 < x < -2 \\ -x -2 & -2 \leq x < 0 \\ x + 2 & 0 \leq x < 1 \end{array}\right.$

I want to show that $f$ has a discontinuity at $x=0$ but is continuous at all other points in $(-3,1)$.

Attempt: If $f$ were continuous at $0$ then $\lim_{x \to 0}f(x) = f(0)$. Now $f(0) = 2$. Not sure how to then proceed. Do I just say $\lim_{x \to 0}f(x) = x+2$ (since $x = 0$)? Hence it is discontinuous at $x = 0$.

For continuity at all other points: let $\epsilon > 0$ be given. Then there exists $\delta > 0$ such that $|x - y| < \delta \Rightarrow |(x+2) - (y+2)| = |x - y| < \epsilon$ (let $\delta = \epsilon$). Also there exists $\delta > 0$ such that $|x - y| < \delta \Rightarrow |-x -2 - (-y -2)| = |y - x| = |x - y| < \epsilon$. Hence $f$ is continuous at all other points in $(-3,1)$.

real-analysis sequences-and-series continuity solution-verification real-numbers learningmathematics

Writing, e.g., $\lim_{x\to 0}f(x)=x+2$ makes no sense: the left-hand side is not a function of $x$. – lulu

As a suggestion: consider the two one-sided limits. In order for $f(x)$ to be continuous at $0$, both of those limits must exist and they must coincide.

For $x<0$, $f(x)=-x-2$ and $$l_{x<0}=\lim_{x\rightarrow0^-}f(x)=-2.$$ For $x\geq0$, $f(x)=x+2$ and $$l_{x\geq0}=\lim_{x\rightarrow0^+}f(x)=2.$$ Since $$l_{x<0}\neq l_{x\geq0},$$ the function has a discontinuity at $0$. Vasile

Since $f(x)=-x-2$ when $x\in[-2,0)$, you have $$\lim_{x\to0^-}f(x)=-2\ne f(0),$$ and therefore $f$ is discontinuous at $0$. The only other point of $(-3,1)$ at which $f$ could be discontinuous is $-2$. But it is continuous at $-2$ since $$\lim_{x\to-2^-}f(x)=\lim_{x\to-2^+}f(x)=0=f(-2).$$ José Carlos Santos
How can infinite density exist?

Thinking about it, infinite density is where there is no space between particles, correct me here if I'm wrong. Inside a black hole, at the core. So suppose there are 2 particles touching each other. Then let there be a scale where we can adjust the microscope's optics to see the size of quarks. Then we see there is space between the 2 "touching" particles: zoomed to the quark level, the particles are touching but not the quarks. So let's make the quarks touch each other, right? But then we can zoom even more to see that quarks are made up of something and technically they are not touching? What I mean is: can 2 particles ever touch each other without having space between them? I'm new here, correct me if I'm wrong. particle-physics density singularities quarks weegee

What is the "size" of a quark? What does "touch" mean when you are talking about quarks? You say, "...we can...see that quarks are made up of something." Is that true now?

P.S.: The mathematical model that predicted the existence of black holes says that there must be a point with infinite density, a singularity, at the center of a Schwarzschild black hole, but nobody's ever gone into a black hole and then come back out to tell us whether the model is correct, or whether it is only an approximation that "breaks down" at some extreme degree of spacetime curvature.

When you talk about "0 as the denominator," you are talking about an equation of general relativity. But nobody knows whether general relativity is a true model of reality. We used to think that Newton's model was true, but then we learned that it was only an approximation that "breaks down" (i.e., gives wrong answers) under extreme conditions. So what are we to think of general relativity? Is it the truth? Or is it merely a better approximation that only "breaks down" under conditions that are even more extreme, more extreme than any conditions we have been able to observe so far?

String theories are a whole other question. But it'll probably get voted down if the question doesn't go any deeper than "what do you think of them, eh?" If you want to do some research on your own though, here's a question to start your quest: has any string theory told us anything new? Or are they all just new ways of modelling phenomena that the Standard Model previously predicted?

I think the problem with your proposal is that you are modeling fundamental particles as if they are tiny, discrete 'balls', which at some level of magnification must be 'touching', with space in between. We know from Quantum Mechanics that that description is wrong, in the case of fundamental particles. – Time4Tea

History of physics shows that quantum mechanics explains away infinities, by its probabilistic nature. Quantum mechanics gives probabilities where classical mechanics and electrodynamics give infinities. For example, the 1/r infinity that arises when positive is attracted to negative in classical electromagnetism is replaced by Heisenberg's uncertainty principle, and/or by solutions of quantum mechanical equations from which probabilities can be calculated. The spectrum of the hydrogen atom has been calculated with the 1/r potential in Schrödinger's equation, and there is no "touching" because there are no orbits, just energy levels and orbitals. Thus the classical density has no meaning at the underlying quantum mechanical level.
In the Big Bang cosmological model, effective quantizations have been assumed, to avoid the infinite densities of classical calculations. Once gravity has been successfully quantized, the singularity at the center of a black hole will also become a probability locus where the classical term "density" has no meaning. (At the moment string theories quantize gravity and can embed the standard model of elementary particles, but no unique model has been found yet.) anna v

That's actually one good reason to think that "infinite density" doesn't exist. One needs to make very clear the epistemological status of what we know and do not regarding black holes. For one, we don't know that "infinite density singularities" actually exist within them. General relativity predicts this, but there is no way that we know of to observe anything beyond a black hole's event horizon and thereby validate or invalidate this prediction, and moreover, there are good reasons to suspect general relativity fails at some point within the black hole. This may be near the expected singularity, but could also, potentially, be right at the horizon; because we can't observe past it, any such scenario, or anything in between, could be entirely consistent with our existing corpus of experimental and observational data. In fact, there are some theories about what black holes "really" are which do, indeed, predict just such a failure at the horizon, such as the "fuzzball" concept from string theory. So the answer to the question is "quite possibly, it doesn't, and here's a reason why". Though maybe it does - maybe things cease being made of separate particles and thus can go to size zero, that's logically imaginable - but there's no reason to believe that over the alternative. The ultimate answer will depend on being able to develop a theory of quantum gravity and black holes that we can test empirically in some fashion, and even then, unless it also gives us a loophole by which we can peer past the horizon, technically no truly direct confirmation of its predictions for the inner structure of black holes will be possible. Nonetheless, if it's confirmed in every other spot, and it makes at least some sort of variant prediction for black holes that we also can test and confirm, then we could at least be pretty confident in what it says about the interior. After all, no science is certain knowledge. Yet that does not prevent us from becoming arbitrarily close to certainty, which is why well-established scientific knowledge cannot and should not be dismissed lightly, and why, in practical, common-sense, rubber-hits-the-road terms, well-established scientific knowledge might as well be a certainty. The_Sympathizer

Infinite density singularities don't exist, and even cosmology says as much. The singularity functions in mathematical modeling as a placeholder, TBD at a later date. That's all. David Thurman

First of all, I would like to clarify some of the aspects of General Relativity discussed in the comments above: General Relativity is one of the strongest and most complete theories we have for our universe at large scales. There is no doubt about the theory itself with respect to any experiment, and you can read about its experimental successes in normal and extreme conditions.
The reasons some people say it is an incomplete theory are: 1) It predicts the existence of dark matter: Physicists have a hard time searching for dark matter and making theories of it. For now it is, at least, confirmed that some sort of non-visible (non-baryonic) matter, aka dark matter, does exist, and general relativity with dark matter works perfectly well. (For ref. you can watch this video: https://www.youtube.com/watch?v=iu7LDGhSi1A&index=15&list=LLgnMAGzIJCwNsEWIjmn8jig&t=0s) 2) It doesn't work at quantum scales: As general relativity is for large scales, quantum mechanics is for small scales. Both work perfectly well in their own domains, but we don't know how to combine these two theories to give a quantum gravity (QG) theory. Note: this doesn't mean that general relativity or quantum mechanics are incomplete theories of our universe. They are incomplete in the sense that we need to combine these two theories to give a scale-independent theory, aka QG. Answering the main question: (1) General relativity does predict the existence of infinitely dense singularities at the centers of black holes (and also at the big bang point). Questions like what happens at the singularity can't ever be fully answered with general relativity alone; we need to include quantum effects, we need a QG theory, which we don't have. So don't waste time asking such questions; instead, try to develop a quantum gravity theory yourself if you are interested! (2) We don't know anything about what a singularity in spacetime really is, or what the state of matter in it is. We need a QG theory to know such answers. But we do know that it might not be quarks, because we already have quark stars in which all of the matter is in the form of an "ocean of quarks". We believe, from the standard model, that quarks are fundamental particles. But if string theory is true then maybe these singularities contain not any type of particle but strings themselves, who knows! Aman pawar

Before, the density is finite; after, the density is infinite. But there is a gap between the largest real number and the smallest infinite number, and there is no definitive mechanism to join the two. Surreal numbers? This would be their first use in physics! In my opinion, we are saved from infinities/infinitesimals by the Planck length/area/volume, specifically the volume, which I call the 'planckon'. After all, a BH has finite mass, and a region of infinite density would have to be infinitesimal. Planckons have a diameter of about a Planck length, as close to infinitesimal as we're allowed to get. Are 'planckons' somehow immune to the effects of relativity? I think so -- the Penrose-Terrell effect preserves the apparent disc (conformality) of the planckons if they could be observed under relativistic conditions. My answer: infinite density cannot exist. george lastrapes

Proof: 1. Infinite density $\Rightarrow$ infinite energy-momentum $\Rightarrow$ singularity. Quarks and the standard model do not work at the scale of a singularity. (Also, a quark is more like a structure of interaction, with size tested to date to be pretty much $0$.) 2. Also, notice $\lambda=\frac{h}{p}$: with infinite density, $p$ goes to infinity, which results in $\lambda=0$, a measure $0$ in space. Thus infinite density would have measure $0$ in position space. (A $\delta$ function might be a good way to think about it, but $\int f(x)\delta(x)=f(0)$ is still a finite number.)
However, if you assume the continuity of density in space with finite density, the above result implies your infinity either could not exist, was not analytic, or was a "fake infinity". Thus, from either GR or quantum mechanics, you would not get it. 3. Notice the $\Delta E\Delta t$ relationship: $\Delta E=\infty\Rightarrow$ no restriction on $\Delta t\Rightarrow$ the cross section could be $0\Rightarrow$ not only is the continuity of space broken but the space is no longer Hausdorff, and you pretty much lose the capability to define a particle. ShoutOutAndCalculate

According to particle physics, an elementary particle or fundamental particle is a subatomic particle with no substructure, i.e., a point particle. They have no volume and theoretically can fill any volume with infinite density, provided their associated forces don't oppose that action. For example, if we can somehow cause the energies of the elementary particles to merge together instead of repelling each other, we can create a volume of infinite density, with all the point particles merged together and their combined energies forming an infinite energy shell holding them together! Therefore, we end up having an object of infinite density and energy.

I'd like to give another perspective here. As many others pointed out, "touching" or becoming infinitesimally close is neither an allowed nor a proper concept at quantum scales, because of the uncertainty principle and the wave nature of particles. At that level, something more fundamental than matter density governs the system: information. And there is an upper limit to the information that can be contained within a given region of space. This upper limit is called the Bekenstein bound. The density of information is always finite everywhere. Since even information, given that it is more fundamental, cannot be infinite in a finite region, infinite density of anything seems impossible. highly oscillatory integrand

There's a lot of speculation here, and some pretty interesting discussion. But I think the closest you can come to a 'true' answer is simply: we don't know. We're just not sure how physics operates at the centre of a black hole - it might be some completely new concept. Quantum and GR don't seem to want to play nice together, so while quantising space might be a tempting solution it comes with its own host of problems. A few things that you might find interesting: The idea of infinite density occurs because any particle inside the event horizon will reach the centre of the black hole in some finite proper time. In "normal" physics it's perfectly acceptable to have particles "on top" of each other, spatially - even fermions, provided that they have some other factor distinguishing their quantum states. My personal pet hypothesis: all particles' probability density will end up centred around a point in space, with the probability density function of each particle constantly reducing in width. This may imply ever increasing, but forever finite, density. I'm no cosmologist but that's my understanding of it. V.L. Proud

Infinite density can't exist because infinity anything generally implies a breakdown of the theory. In fact, it's for this reason that the Romanian philosopher and physicist, Bescovitch (I may have misspelt his name as I can't find him on Wikipedia, but he is cited by Paul Davies on exactly this issue), deduced that there must be such a thing as intermolecular repulsion.
Mozibur Ullah
Recent questions tagged inflection
Rewrite \(\cos 3 \mathrm{~A}\) in terms of \(\cos \mathrm{A}\)
Rewrite \(\sin 3 \alpha\) in terms of \(\sin \alpha\)
If \(\lim _{x \rightarrow 1} \frac{x+x^2+x^3+\ldots . .+x^n-n}{x-1}=820,(n \in N)\) then the value of \(\mathrm{n}\) is equal to
Evaluate \(\int \frac{x+1}{x^2+4 x+8} d x\).
Evaluate \(\int \frac{x^3}{(x-2)(x+3)} d x\).
Rewrite \(\int \frac{x^3}{(x-2)(x+3)} d x\) in terms of an integral with a numerator that has degree less than 2.
Find \(\int \frac{x^3}{(3-2 x)^5} d x\).
Evaluate \(\int \sqrt{4-9 x^2} d x\)
Evaluate \(\int \sqrt{1-x^2} d x\).
What is a vertical asymptote?
Evaluate the following limits.
Let \(\lim _{x \rightarrow 2} f(x)=2, \quad \lim _{x \rightarrow 2} g(x)=3 \quad\) and \(\quad p(x)=3 x^2-5 x+7 .\) Find the following limits:
Describe three situations where \(\lim _{x \rightarrow c} f(x)\) does not exist.
Integral part of \((\sqrt{2}+1)^{6}\) is
Suppose \(f^{\prime}\) is continuous on \([a, b]\) and \(\varepsilon>0\). Prove that there exists \(\delta>0\) such that \[ \left|\frac{f(t)-f(x)}{t-x}-f^{\prime}(x)\right|<\varepsilon \]
Suppose \(f^{\prime}(x)\) and \(g^{\prime}(x)\) exist, \(g^{\prime}(x) \neq 0\), and \(f(x)=g(x)=0\). Prove that \[ \lim _{t \rightarrow x} \frac{f(t)}{g(t)}=\frac{f^{\prime}(x)}{g^{\prime}(x)} . \]
Suppose \(f\) is a real, continuously differentiable function on \([a, b]\), \(f(a)=f(b)=0\), and \[ \int_{a}^{b} f^{2}(x) d x=1 \] Prove that
If \(f(x)=0\) for all irrational \(x, f(x)=1\) for all rational \(x\), prove that \(f \notin \mathcal{R}\) on \([a, b]\) for any \(a<b\).
Suppose \(f \geq 0, f\) is continuous on \([a, b]\), and \(\int_{a}^{b} f(x) d x=0\). Prove that \(f(x)=0\) for all \(x \in[a, b]\).
Suppose \(\alpha\) increases on \([a, b], a \leq x_{0} \leq b, \alpha\) is continuous at \(x_{0}\), \(f\left(x_{0}\right)=1\), and \(f(x)=0\) if \(x \neq x_{0}\). Prove that \(f \in \mathcal{R}(\alpha)\) and that \(\int f d \alpha=0\).
Prove a pointwise version of Fejér's Theorem: If \(f \in \mathcal{R}\) and \(f(x+), f(x-)\) exist for some \(x\), then \[ \lim _{N \rightarrow \infty} \sigma_{N}(f ; x)=\frac{1}{2}[f(x+)+f(x-)] . \]
Suppose \(f \in \mathcal{R}\) on \([0, A]\) for all \(A<\infty\), and \(f(x) \rightarrow 1\) as \(x \rightarrow+\infty\). Prove that \[ \lim _{t \rightarrow 0} \int_{0}^{\infty} e^{-t x} f(x) d x=1 \quad(t>0) \]
For \(i=1,2,3, \ldots\), let \(\varphi_{i} \in \mathcal{C}\left(R^{1}\right)\) have support in \(\left(2^{-i}, 2^{1-i}\right)\), such that \(\int \varphi_{i}=1\). Put
Suppose \(f \in L^{2}(\mu), g \in L^{2}(\mu)\). Prove that \[ \left|\int f \bar{g} d \mu\right|^{2}=\int|f|^{2} d \mu \int|g|^{2} d \mu \] if and only if there is a constant \(c\) such that \(g(x)=c f(x)\) almost everywhere.
If \(f \in \mathcal{R}\) on \([a, b]\) and if \(F(x)=\int_{a}^{x} f(t) d t\), prove that \(F^{\prime}(x)=f(x)\) almost everywhere on \([a, b]\).
If \(f \geq 0\) and \(\int_{E} f d \mu=0\), prove that \(f(x)=0\) almost everywhere on \(E\).
Evaluate \(f(x)=\int_{0}^{x} \frac{1}{\sqrt{1+t^{2}}} d t\)
Solve the integral equation \[ \int_{0}^{x}\left((x-y)^{2}-2\right) f(y) d y=-4 x \] applying differentiation and then solving the resulting differential equation.
Solve \[ \int_{0}^{x} e^{-x} f(s) d s=e^{-x}+x-1 \] applying differentiation.
Evaluate the integral \(f(x)=\int_{0}^{x} \frac{1}{\sqrt{1+t^{2}}} d t\)
Solve \(x^2+49=0\)
Evaluate \(\int \frac{1}{\sqrt[4]{x}+3} d x\)
Calculate \(\int 5 x^{4} d x\)
Calculate \(\int \frac{1}{\sqrt[3]{x}} d x\)
State the definition of an inflection point of a function \(f\).
What is the general antiderivative of \(6 x^{2}+2 x+5 ?\)
Sketch the graph of \(f(x)=\frac{\ln x}{x}\), showing all extrema.
Find the general form for the following antiderivative: \(\int \frac{z}{z^{2}+9} d z\)
The function $f(x)=\dfrac{x^{3}-6 x}{x^{2}-4}$ has ...
Evaluate the integral: $\int \dfrac{3 x+2}{\sqrt{4-3 x^{2}}} d x$
Evaluate the integral: $\int_{0}^{\pi} 2 x \cos 2 x d x$
Evaluate $\int \dfrac{\ln z}{(1+z)^{2}} d z$
Evaluate the following integral: $\int \frac{t}{\sqrt{t^{4}-1}} d t$.
Evaluate the integral $\int_{1}^{2} \dfrac{2 s-2}{\sqrt{-s^{2}+2 s+3}} d s$
Evaluate $\int_{-1}^{2} \dfrac{x}{\sqrt{10+2 x+x^{2}}} d x$
Search Results: 1 - 10 of 867 matches for " Ryohei Fukuda "

Effects of Triple-$α$ and $^{12}\rm C(α,γ)^{16}O$ Reaction Rates on the Supernova Nucleosynthesis in a Massive Star of 25 $M_{\odot}$ Yukihiro Kikuchi, Masa-aki Hashimoto, Masaomi Ono, Ryohei Fukuda Abstract: We investigate effects of triple-$\alpha$ and $^{12}\rm C(\alpha,\gamma)^{16}O$ reaction rates on the production of supernova yields for a massive star of 25 $M_{\odot}$. We combine the reaction rates to examine the rate dependence, where the rates are considered to cover the possible variation of the rates based on experiments on the Earth and theories. We adopt four combinations of the reaction rates from two triple-$\alpha$ reaction rates and two $^{12}\rm C(\alpha,\gamma)^{16}O$ ones. First, we examine the evolution of massive stars of 20 and 25 $M_{\odot}$ whose helium cores correspond to helium stars of 6 and 8 $M_{\odot}$, respectively. While the 25 $M_{\odot}$ stars evolve to the presupernova stages for all combinations of the reaction rates, the evolutionary paths of the 20 $M_{\odot}$ stars proceed in significantly different ways for some combinations, which are unacceptable for progenitors of supernovae. Second, we perform calculations of supernova explosions within the limitation of spherical symmetry and compare the calculated abundance ratios with the solar system abundances. We can deduce some constraints on the reaction rates. As the results, a conventional rate is adequate for the triple-$\alpha$ reaction rate, and a rather higher value of the reaction rate within the upper limit of the experimental uncertainties is favorable for the $^{12}\rm C(\alpha,\gamma)^{16}O$ rate.

Purification and properties of S-hydroxymethylglutathione dehydrogenase of Paecilomyces variotii no. 5, a formaldehyde-degrading fungus Ryohei Fukuda, Kazuhiro Nagahama, Kohsai Fukuda, Keisuke Ekino, Takuji Oka, Yoshiyuki Nomura AMB Express, 2012, DOI: 10.1186/2191-0855-2-32

r-Process Nucleosynthesis in MHD Jet Explosions of Core-Collapse Supernovae Motoaki Saruwatari, Masa-aki Hashimoto, Ryohei Fukuda, Shin-ichiro Fujimoto Journal of Astrophysics, 2013, DOI: 10.1155/2013/506146 Abstract: We investigate the r-process nucleosynthesis during the magnetohydrodynamical (MHD) explosion of a supernova in a helium star of 3.3 $M_{\odot}$, where effects of neutrinos are taken into account using the leakage scheme in the two-dimensional (2D) hydrodynamic code. A jet-like explosion due to the combined effects of differential rotation and magnetic field is able to erode the lower electron fraction matter from the inner layers. We find that the ejected material of low electron fraction responsible for the r-process comes out from just outside the neutrino sphere deep inside the Fe-core. It is found that heavy element nucleosynthesis depends on the initial conditions of rotational and magnetic fields. In particular, the third peak of the distribution is significantly overproduced relative to the solar system abundances, which would indicate a possible r-process site owing to MHD jets in supernovae. 1. Introduction Study of the r-process has developed considerably, keeping pace with terrestrial nuclear physics experiments far from the stability line of nuclides [1]. In particular, among the three peaks in the abundance pattern of the solar system r-elements, the transition from the second to the third peak elements has been stressed by nuclear physicists [2].
Although supernovae could be one of the astrophysical sites of the r-process [2, 3], the explosion mechanism is still not completely resolved; here, supernova explosions originate from the gravitational collapse of massive stars [4, 5]. However, it is unclear whether neutron-rich elements could be ejected or not during the shock wave propagation. As far as one-dimensional calculations are concerned, almost all realistic numerical simulations of collapse-driven supernovae have failed to explode the outer layers above the Fe-core due to stalling of the energetic shock wave propagation [6, 7]. Although there exist calculations in which 8 and 11 $M_{\odot}$ stars explode, the explosion energies are very weak [8–15]. Therefore, a plausible site/mechanism of the r-process has not yet been clarified. On the other hand, models of magnetorotational explosion (MRE) for core-collapse supernovae have been presented as a supernova mechanism [16–19], since both rapid rotation and/or strong magnetic fields could result for neutron stars after the explosions. Furthermore, MRE with a realistic magnetic field configuration has been investigated [20–22]. In their series of papers, it has been shown that magnetorotational instability plays a critical role concerning the

The Effects of Opening Trade on Regional Inequality in a Model of Scale-Invariant Growth and Foot-Loose Capital [PDF] Katsufumi Fukuda Theoretical Economics Letters (TEL), 2012, DOI: 10.4236/tel.2012.25078 Abstract: We consider a semi-endogenous R & D growth model with international trade, foot-loose capital, and local and international knowledge spillovers in a closed economy, and also international knowledge spillovers in an open economy. We show that by opening trade two regions diverge (converge) with (not) sufficiently high intertemporal knowledge spillover in the R & D sector and elasticity of substitution between modern goods, and not sufficiently high (sufficiently high) richer country A's share of firms owned.

Periodontal Disease Bacteria Specific to Tonsil in IgA Nephropathy Patients Predicts the Remission by the Treatment Yasuyuki Nagasawa, Kenichiro Iio, Shinji Fukuda, Yasuhiro Date, Hirotsugu Iwatani, Ryohei Yamamoto, Arata Horii, Hidenori Inohara, Enyu Imai, Takeshi Nakanishi, Hiroshi Ohno, Hiromi Rakugi, Yoshitaka Isaka PLOS ONE, 2014, DOI: 10.1371/journal.pone.0081636 Abstract: Background Immunoglobulin (Ig)A nephropathy (IgAN) is the most common form of primary glomerulonephritis in the world. Some bacteria were reported to be candidates for the antigen or the pathogenesis of IgAN, but a systematic analysis of the bacterial flora of the tonsils in IgAN has not been reported. Moreover, the bacteria specific to IgAN might be candidate indicators to predict the remission of IgAN treated by the combination of tonsillectomy and steroid pulse. Methods and Findings We made a comprehensive analysis of the tonsil flora in 68 IgAN patients and 28 control patients using denaturing gradient gel electrophoresis methods. We also analyzed the relationship between several bacteria specific to IgAN and the prognosis of IgAN. Treponema sp. were identified in 24% of IgAN patients and in 7% of control patients (P = 0.062). Haemophilus segnis was detected in 53% of IgAN patients and in 25% of control patients (P = 0.012). Campylobacter rectus was identified in 49% of IgAN patients and in 14% of control patients (P = 0.002). Multiple Cox proportional-hazards models revealed that Treponema sp.
or Campylobacter rectus are significant for the remission of proteinuria (Hazard ratio 2.35, p = 0.019). There was a significant difference in remission rates between IgAN patients with Treponema sp. and those without the bacterium (p = 0.046), and in remission rates between IgAN patients with Campylobacter rectus and those without the bacterium (p = 0.037) by Kaplan-Meier analysis. These bacteria are well known to be related to periodontal disease. Periodontal bacteria are known to cause immune reactions and many diseases, and might also cause IgA nephropathy. Conclusion This insight into IgAN might be useful for the diagnosis of IgAN patients and for treatment decisions.

Error Correction of Enumerative Induction of Deterministic Context-free L-system Grammar Ryohei Nakano IAENG International Journal of Computer Science, 2013,

Khovanov homology and Rasmussen's s-invariants for pretzel knots Ryohei Suzuki Mathematics, 2006, Abstract: We calculated the rational Khovanov homology of a class of pretzel knots, by using the spectral sequence constructed by P. Turner. Moreover, we determined the Rasmussen s-invariant of almost all pretzel knots with three pretzels.

Infinite Sparse Block Model with Text Using 2DCRP Ryohei Hisano Statistics, 2015, Abstract: A fundamental step in understanding the topology of a network is to uncover its latent block structure. To estimate the latent block structure with more accuracy, I propose an extension of the sparse block model, incorporating node textual information and an unbounded number of roles and interactions. The latter task is accomplished by extending the well-known Chinese restaurant process to two dimensions. Inference is based on collapsed Gibbs sampling, and the model is evaluated on both synthetic and real-world interfirm buyer-seller network datasets.

Heterostructure Solar Cells Based on Sol-Gel Deposited SnO2 and Electrochemically Deposited Cu2O [PDF] Akito Fukuda, Masaya Ichimura Materials Sciences and Applications (MSA), 2013, DOI: 10.4236/msa.2013.46A001 To fabricate a heterostructure solar cell using environmentally friendly materials and low cost techniques, tin oxide (SnO2) and cuprous oxide (Cu2O) were deposited by the sol-gel method and electrochemical deposition, respectively. The SnO2 films were deposited from a SnCl2 solution containing ethanol and acetic acid. The Cu2O films were deposited using a galvanostatic method from an aqueous bath containing CuSO4 and lactic acid at a temperature of 40°C. The Cu2O/SnO2 heterostructure solar cells showed rectification and photovoltaic properties, and the best cell showed a conversion efficiency of 6.6 × 10^{-2} % with an open-circuit voltage of 0.29 V, a short-circuit current of 0.58 mA/cm2, and a fill factor of 0.39.

Bioremediation of Bisphenol A by Glycosylation with Immobilized Marine Microalga Amphidinium crassum ——Bioremediation of Bisphenol A by Immobilized Cells [PDF] Kei Shimoda, Ryohei Yamamoto, Hiroki Hamada Advances in Chemical Engineering and Science (ACES), 2011, DOI: 10.4236/aces.2011.13015 Abstract: Glycosylation of bisphenol A, which is an endocrine disrupting chemical, was investigated using immobilized marine microalga and plant cells from the viewpoint of bioremediation of bisphenol A. Immobilized marine microalga of Amphidinium crassum glucosylated bisphenol A to the corresponding glucoside.
On the other hand, bisphenol A was glycosylated to its glucoside, diglycoside, gentiobioside, and gentiobiosylglucoside, which was a new compound, by immobilized plant cells of Catharanthus roseus.
The New Mode of Instability in Viscous High-Speed Boundary Layer Flows Jie Ren, Youcheng Xi & Song Fu 10.4208/aamm.OA-2017-0298 Adv. Appl. Math. Mech., 10 (2018), pp. 1057-1068. Abstract The new mode of instability found by Tunney et al. [24] is studied with viscous stability theory in this article. When the high-speed boundary layer is subject to certain values of favorable pressure gradient and wall heating, a new mode becomes unstable due to the appearance of the streamwise velocity overshoot ($U(y) > U_∞$) in the base flow. The present study shows that under practical Reynolds numbers, the new mode can hardly co-exist with the conventional first mode and Mack's second mode. Due to the requirement for additional wall heating, the new mode may only lead to laminar-turbulent transition under experimental (artificial) conditions.

A Galerkin Splitting Symplectic Method for the Two Dimensional Nonlinear Schrödinger Equation Zhenguo Mu, Haochen Li, Yushun Wang & Wenjun Cai In this paper, we propose a Galerkin splitting symplectic (GSS) method for solving the 2D nonlinear Schrödinger equation based on the weak formulation of the equation. First, the model equation is discretized by the Galerkin method in the spatial direction with a selected finite element method, and the semi-discrete system is rewritten as a finite-dimensional canonical Hamiltonian system. Then the resulting Hamiltonian system is split into a linear Hamiltonian subsystem and a nonlinear subsystem. The linear Hamiltonian subsystem is solved by the implicit midpoint method and the nonlinear subsystem is integrated exactly. By the Strang splitting method, we obtain a fully implicit scheme for the 2D nonlinear Schrödinger equation (NLS), which is symmetric and of order 2 in time. Furthermore, we apply the FFT technique to improve the computational efficiency of the new scheme. It is proven that our scheme preserves both mass conservation and symplecticity. Comprehensive numerical experiments are carried out to illustrate the accuracy of the scheme as well as its conservative properties.

The LMAPS Using Polynomial Basis Functions for Near-Singular Problems Zhengzhi Li, Daniel Watson, Ming Li & Thir Dangal In this paper, we combine the Local Method of Approximate Particular Solutions (LMAPS) with polynomial basis functions to solve near-singular problems. Due to the unique feature of the local approach, the LMAPS is capable of capturing the rapid variation of the solution. Polynomial basis functions can become very unstable when the order of the polynomial becomes large. However, since the LMAPS is a local method, the order does not need to be very high; an order of 5 can achieve sufficient accuracy. In order to show the effectiveness of the LMAPS using polynomial basis functions for solving near-singular problems, we compare the results with the LMAPS using radial basis functions (RBFs). The advantage of using polynomials as a basis rather than RBFs is that finding an appropriate shape parameter is not necessary.

Fully Finite Element Adaptive AMG Method for Time-Space Caputo-Riesz Fractional Diffusion Equations X. Q. Yue, W. P. Bu, S. Shu, M. H. Liu & S. Wang The paper aims to establish a fully discrete finite element (FE) scheme and provide cost-effective solutions for one-dimensional time-space Caputo-Riesz fractional diffusion equations on a bounded domain $Ω$.
Firstly, we construct a fully discrete scheme of the linear FE method in both the temporal and spatial directions, derive many characterizations of the coefficient matrix, and numerically verify that the fully discrete FE approximation possesses the saturation error order under the $L^2(\Omega)$ norm. Secondly, we theoretically prove the estimate $1+\mathcal{O}(\tau^{\alpha}h^{-2\beta})$ on the condition number of the coefficient matrix, in which $\tau$ and $h$ respectively denote the time and space step sizes. Finally, on the grounds of this estimate and the fast Fourier transform, we develop and analyze an adaptive algebraic multigrid (AMG) method with low algorithmic complexity, reveal a reference formula to measure the strength-of-connection tolerance, which severely affects the robustness of AMG methods in handling fractional diffusion equations, and illustrate the robustness and high efficiency of the proposed algorithm compared with the classical AMG, conjugate gradient and Jacobi iterative methods.

The Plane Wave Methods Combined with Local Spectral Finite Elements for Wave Propagation in Anisotropic Media
Long Yuan & Qiya Hu
The plane wave least-squares method combined with local spectral finite elements has been used effectively to solve time-harmonic acoustic and electromagnetic wave propagation with complex wavenumbers. We develop the plane wave least-squares method and the ultra weak variational formulation for the nonhomogeneous case of electromagnetic wave propagation in anisotropic media. We derive error estimates of the approximate solutions generated by these methods in one special case of TE mode scattering. Numerical results indicate that the approximate solutions generated by these two methods possess high accuracy and verify the validity of the theoretical results.

Structural Deformation-Based Computational Method for Static Aeroelasticity of a High-Aspect-Ratio Wing Model in a Pressurized Wind Tunnel
Tongqing Guo, Ennan Shen, Zhiliang Lu & Li Ding
In this study, the structural model of a high-aspect-ratio wing is unknown, but its structural deformation is measured at some angles of attack in a pressurized wind tunnel. To implement the static aeroelastic computation at an arbitrary state, an inversion method is proposed to derive the structural stiffness from the known deformation. The wing is simplified into a single-beam model, and its bending and torsional flexibility distributions are each expressed as a linear combination of several selected basis functions. The bending deformation can then be expressed as a linear combination of the bending deformations of the models structurally characterized by each basis function, which are evaluated step by step by applying the aerodynamic loads computed at the chosen design state. Based on the measured deformation, the bending stiffness distribution is ultimately fitted by a least-squares method. The torsional stiffness distribution is solved in the same way. As a result, a structural deformation-based computational method for the static aeroelasticity of a high-aspect-ratio wing model is achieved by combining the structural stiffness inversion method with a coupled computational fluid dynamics (CFD)-computational structural dynamics (CSD) algorithm. The present method is applied to the design and validation states, and the numerical results agree well with the experimental data.

Comparative Study of Implicit and Explicit Immersed Boundary-Lattice Boltzmann Methods for Simulation of Fluid-Structure Interactions
J. Y. Shao & N. Y. Liu
This work aims to provide guidance for the selection between the implicit and explicit immersed boundary methods (IBMs). The implicit method ensures satisfaction of the boundary condition in one correction step (Wu and Shu, J. Comput. Phys., 228 (2009), pp. 1963-1979). However, it requires the formation of a relatively complex matrix. The computational time of this method for moving boundary problems is also a concern, because matrix operations are required at each time step. On the other hand, the explicit multi-direct forcing (MDF) methods (Luo et al., Phys. Rev. E, 76 (2007), 066709) are more straightforward. Nevertheless, a detailed comparison between the two methods is lacking in the literature. In this work, the implicit method, the MDF method, and the MDF method with under/over-relaxation parameters are compared in detail through simulation of stationary and moving boundary problems. It is found that the implicit method generally outperforms the explicit methods in terms of maintaining both accuracy and computational efficiency. The present study suggests that it can be worthwhile to form the matrix and solve the equation system implicitly in IBM for both stationary and moving boundary problems.

Decoupled Scheme for Non-Stationary Viscoelastic Fluid Flow
Md. Abdullah Al Mahbub, Shahid Hussain, Nasrin Jahan Nasu & Haibiao Zheng
In this paper, we present a decoupled finite element scheme for two-dimensional time-dependent viscoelastic fluid flow obeying an Oldroyd-B constitutive equation. The key idea of our decoupled scheme is to divide the full problem into two subproblems: one is the constitutive equation, which is stabilized using a discontinuous Galerkin (DG) approximation, and the other is a Stokes-like problem; the two can be computed in parallel. The decoupled scheme reduces the computational cost of the numerical simulation and is easy to implement. We compute the velocity $u$ and the pressure $p$ from the Stokes-like problem, and the remaining unknown, the stress $\sigma$, from the constitutive equation. The approximations of stress, velocity and pressure are, respectively, $P_1$-discontinuous, $P_2$-continuous, and $P_1$-continuous finite elements. The well-posedness of the finite element scheme is established, and the stability analysis of the decoupled algorithm is derived. We obtain the desired error bounds and demonstrate the order of convergence, the stability, and the flow behavior with the support of two numerical experiments, which reveal that the decoupled scheme is more efficient than the coupled one.

A New Error Analysis of Nonconforming $EQ^{rot}_1$ FEM for Nonlinear BBM Equation
Yanhua Shi, Yanmin Zhao, Fenling Wang & Dongyang Shi
The nonconforming $EQ^{rot}_1$ element is applied to a kind of nonlinear Benjamin-Bona-Mahony (BBM) equation, for both space-discrete and fully discrete schemes. A new important estimate is proved, which improves the results of previous works by requiring the exact solution $u$ to belong to $H^2(\Omega)$ instead of $H^3(\Omega)$. Supercloseness and global superconvergence estimates in the broken $H^1$ norm are then obtained for the space-discrete scheme. Further, superclose estimates are deduced for the backward Euler and Crank-Nicolson schemes. To confirm our theoretical analysis, numerical experiments for the backward Euler scheme are executed. It seems that the results presented herein have not previously appeared for nonconforming FEMs in the existing literature.
Polynomial Particular Solutions for the Solution of PDEs with Variable Coefficients
Jun Lu, Hao Yu, Ji Lin & Thir Dangal
Closed-form particular solutions with polynomial basis functions for general partial differential equations (PDEs) with constant coefficients have been derived and applied to various kinds of problems in the context of the method of approximate particular solutions (MAPS). In this paper, we propose to extend the above-mentioned method to PDEs with variable coefficients through a substituting and adding-back technique. Since the linear system derived from the polynomial particular solutions is notoriously ill-conditioned, the multiple scale method is applied to alleviate this difficulty. To validate the proposed method, four numerical examples are considered and compared with results obtained by the MAPS using radial basis functions.

Air Chemical Non-Equilibrium Effects on the Hypersonic Combustion Flow of RCS with Gaseous Ethylene Fuel
Faming Zhao, Jiangfeng Wang, Xiaofeng Fan & Tianpeng Yang
In this paper, air chemical non-equilibrium effects on the shock-induced combustion flow are numerically investigated for a reaction control system (RCS) with gaseous ethylene fuel by solving the multi-component Navier-Stokes (N-S) equations. An integrated numerical method is developed that considers two different chemical reaction mechanisms: high-temperature air chemical non-equilibrium reactions and ethylene-oxygen combustion reactions. The method is independently validated on two types of reacting flow: the hypersonic air chemical non-equilibrium flow over a sphere and the supersonic ethylene-oxygen combustion flow in a dual combustion chamber. Furthermore, the mixed reacting flow over a blunt cone with a transverse multi-component gaseous jet is analyzed in detail. Numerical results indicate that air chemical non-equilibrium effects can lead to a reduction of the shock detachment distance, a decrease of the temperature behind the shock wave, and a reduction of the combustion products.

Mixed Finite Element Methods for Elastodynamics Problems in the Symmetric Formulation
Yan Yang & Shiquan Zhang
In this paper, we analyze semi-discrete and fully discrete mixed finite element methods for linear elastodynamics problems in the symmetric formulation. For a large class of conforming mixed finite element methods, the error estimates for each scheme are derived, including the energy norm and $L^2$ norm for the stress, and the $L^2$ norm for the velocity. All the error estimates are robust for nearly incompressible materials, in the sense that the constant bound and convergence order are independent of the Lamé constant $\lambda$. The stress approximation in both norms, as well as the velocity approximation in the $L^2$ norm, have optimal convergence order. Finally, numerical experiments are provided to confirm the theoretical analysis.
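Since the GSS abstract above leans on Strang splitting, a toy illustration may help. The sketch below is not the authors' 2D Galerkin scheme: it is a minimal 1D pseudo-spectral split-step for the cubic NLS $iu_t + u_{xx} + |u|^2u = 0$ on a periodic domain (our own choice of model problem), showing how the linear subflow is advanced exactly in Fourier space and the nonlinear subflow exactly pointwise.

```python
import numpy as np

def strang_step(u, dt, k):
    """One Strang step for i*u_t + u_xx + |u|^2*u = 0: half a nonlinear
    substep, a full linear substep in Fourier space, half a nonlinear
    substep. Both substeps are exact flows of their subproblems."""
    u = u * np.exp(0.5j * dt * np.abs(u) ** 2)                  # nonlinear: |u| is invariant, so exact
    u = np.fft.ifft(np.exp(-1j * dt * k ** 2) * np.fft.fft(u))  # linear: diagonal in Fourier space
    return u * np.exp(0.5j * dt * np.abs(u) ** 2)

# Soliton test: u(x,0) = sqrt(2)/cosh(x) is an exact solution up to phase.
L, n, dt = 40.0, 256, 1e-3
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
u = np.sqrt(2.0) / np.cosh(x)
mass0 = np.sum(np.abs(u) ** 2)
for _ in range(2000):
    u = strang_step(u, dt, k)
print("relative mass drift:", abs(np.sum(np.abs(u) ** 2) - mass0) / mass0)
```

Both substeps preserve the discrete mass exactly (the nonlinear one pointwise, the linear one by Parseval), mirroring the mass-conservation property claimed for the GSS scheme; the second-order accuracy in time comes from the symmetric composition.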
Received: 05 August 2021; Revised: 16 September 2021; Accepted: 05 October 2021
Study of Battery State-of-charge Estimation with kNN Machine Learning Method
Teressa Talluri, Hee Tae Chung, Kyoojae Shin (Department of Electronics and Robotics Engineering, Busan University of Foreign Studies, Busan, Korea; [email protected], {htchung, kyoojae}@bufs.ac.kr)
* Corresponding Author: Kyoojae Shin
Electric vehicles are in high demand due to their eco-friendly nature. From this point of view, lithium batteries have gained great attention in recent days due to their high efficiency and long lifetime. Hence, it is of the utmost importance to evaluate battery characteristics such as the state of charge (SOC), depth of discharge (DOD), and remaining life of a battery to ensure battery safety. These parameters were derived in order to estimate the battery lifetime before degradation. This estimation is essential for making decisions about future battery usage. In this study, the SOC of a lithium polymer battery was evaluated in a real-time experiment. Charging and discharging cycles were performed, and we obtained voltage, current, and time data from the experimental results. This experimental data was used to train machine learning methods such as kNN (k-Nearest Neighbor) to estimate the SOC more precisely. After training the model, a test was done. The proposed estimator was calibrated with experimental data. The results are satisfactory, with an accuracy of 98% and a mean absolute error (MAE) as low as 0.74%.
Keywords: Electric vehicle, Battery characteristics, State of charge, Depth of discharge, Machine learning algorithms and kNN method
1. Introduction
High environmental pollution and energy crises make natural, high-capacity energy storage devices very necessary in this modern era [1]. Out of all high-energy storage devices, lithium batteries have gained much attention due to their long cycle life and eco-friendly characteristics [2]. These batteries are extensively used in transportation, electric vehicles, bicycles, vacuum cleaners, and smart phones [3-5]. Lithium batteries have excellent heat resistance compared to lead batteries, but it has been found that a battery can be damaged and may even explode [6]. Hence, accurate measurements and SOC estimation are required to improve the efficiency and ensure safety when using the battery [7]. The SOC is an important parameter in a Battery Management System (BMS). Many researchers have focused their research on the safety of lithium batteries, with SOC estimation before complete degradation. However, it is not easy to estimate the SOC accurately with limited computational power [8]. Research comparing different types of batteries is very limited. Many methods have been implemented for SOC estimation. Mostly, they use battery cell estimations, but comparisons between two different batteries for SOC estimation are scarce. Estimating the SOC of a battery results in high efficiency and safety of the battery [9]. In this study, the SOC estimation of cells in lithium battery packs using kNN machine learning is proposed. The kNN algorithm is one of the most used machine learning algorithms due to its advantages compared to other methods, such as linear regression, support vector machines, etc. The main advantages of kNN are: 1) kNN makes predictions very quickly by calculating the similarity between the input samples in each training step, 2) it requires low calculation time, and 3) it is easy to interpret the output.
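Before the derivations that follow, a minimal sketch of distance-weighted kNN regression for SOC may be useful. The feature values are hypothetical and the implementation is ours, not the authors'; in practice, voltage and time features would also need scaling.

```python
import numpy as np

def knn_predict_soc(x_test, X_train, y_train, k=5, eps=1e-9):
    """Distance-weighted kNN regression: predict SOC as the
    inverse-distance-weighted average of the k nearest training samples."""
    d = np.sqrt(np.sum((X_train - x_test) ** 2, axis=1))  # Euclidean distances
    idx = np.argsort(d)[:k]                               # k nearest neighbors
    w = 1.0 / (d[idx] + eps)                              # inverse-distance weights
    return np.sum(w * y_train[idx]) / np.sum(w)

# hypothetical features: [voltage in V, elapsed time in min]; labels: SOC in percent
X_train = np.array([[4.0, 0.0], [3.8, 20.0], [3.5, 50.0], [3.0, 80.0]])
y_train = np.array([100.0, 80.0, 50.0, 15.0])
print(knn_predict_soc(np.array([3.6, 45.0]), X_train, y_train, k=2))
```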
The relation between voltage and SOC is evaluated after charging and discharging battery cells at room temperature. The SOC of a battery pack is estimated by learning from the collected sensor data. In this study, SOC estimation was done and compared between the experimental and kNN methods. The error difference was calculated and compared during a charging and discharging process. In section 2, we explain the SOC estimation process in an electrochemical and an equivalent circuit model. In section 3, we explain the training of the kNN model to predict SOC. Section 4 covers the experiment process and explains the results.
2. Analysis of Battery Model for Characteristic Evaluation
Battery characteristics such as voltage, current, and temperature are the main parameters to be monitored to ensure proper functioning and safety of the battery before degradation. In this study, these parameters were evaluated to understand the behavior of each battery with an electrochemical and an equivalent circuit model.
2.1 Battery SOC Estimation with Electro-Chemical Model
The battery characteristics can be measured with a physical model and also with equivalent circuit models. In a physical model, the lithium-ion battery mainly consists of three domains: a negative electrode, a separator, and a positive electrode. The structure of a battery and the corresponding governing equations of the physical model are shown in Fig. 1. The major physical equations describing the physics of a battery are given below, with the x-axis starting from the negative and running to the positive current collector, and the r-axis along the radial direction of a solid electrode particle. The transport of Li ions is modeled by the diffusion equations shown in Eqs. (1) and (2), which represent lithium-ion transport in the solid and electrolyte phases, respectively [10]:
$$\frac{\partial C_{s}(x,r,t)}{\partial t}=\frac{1}{r^{2}}\frac{\partial}{\partial r}\left(D_{s}\,r^{2}\frac{\partial C_{s}(x,r,t)}{\partial r}\right) \quad (1)$$
$$\frac{\partial C_{e}(x,t)}{\partial t}=\frac{\partial}{\partial x}\left(D_{e}\frac{\partial C_{e}(x,t)}{\partial x}+\frac{1}{F\varepsilon_{e}}\,t_{a}^{0}\,i_{e}(x,t)\right) \quad (2)$$
where $C_{s}$ is the solid-phase concentration, $D_{s}$ the diffusion coefficient, $C_{e}$ the lithium concentration in the electrolyte, $D_{e}$ the effective diffusion coefficient, $\varepsilon_{e}$ the effective volume fraction of the electrolyte, $F$ Faraday's constant, and $t_{a}^{0}$ the transference number of the anion.
The electrochemical characteristics are important parameters in estimating battery capacity. The electrochemical models determine the battery state at its degradation level. Degradation can be caused by side reactions, which are driven by overpotential in the battery. Overpotential means the difference between the theoretical voltage and the actual voltage measured under different operating conditions. If one can estimate this overpotential, then the SOC of a battery is easily estimated. The SOC estimation based on lithium concentration is given in Eq. (3):
$$SOC(t)=\frac{3}{LD^{3}}\int_{0}^{L}\int_{0}^{D}r^{2}\,\frac{C_{s}(x,r,t)}{C_{s,\max}}\,dr\,dx \quad (3)$$
where $C_{s}(x,r,t)$ is the phase concentration in the $x$ and $r$ directions, $C_{s,\max}$ the maximum concentration in the solid phase, $L$ the thickness of the electrode, and $D$ the radius of the electrode particle.
The basic difference between a lithium-ion and a lithium-polymer battery is only the separator.
As shown in Fig. 1, a separator is arranged between the negative and positive electrodes. In a lithium-ion battery, the separator is a porous material, and in a lithium-polymer battery, the separator is a polymer material. A comparison can be made between these two types of batteries based on the temperature and operation of the battery. The remaining parameters are all the same.
Fig. 1. Model of a Lithium Battery.
2.2 Battery SOC Estimation with Equivalent Circuit Model
The battery SOC can be estimated with an equivalent circuit model. In this case, we considered the basic Thevenin equivalent circuit model shown in Fig. 2. This model has three components: (i) an equilibrium potential E; (ii) an internal resistance Ri with two components R$_{1}$ and R$_{2}$; (iii) an effective capacitance that characterizes the transient response of the charge currents i$_{0}$ and i$_{1}$. Discharging of the battery follows the constant-current method [11]. The equilibrium potential, terminal voltage, and degradation of the battery are given in Eqs. (4) to (6):
$$E[i(t),T(t),t]=V[i(t),T(t),t]-R\,i(t) \quad (4)$$
$$V[i(t),T(t),t]=\sum_{k=0}^{n}C_{k}\,DoD^{k}[i(t),T(t),t]+E \quad (5)$$
$$\mathrm{Degradation}=\frac{1}{C_{q}}\int_{0}^{t}\alpha[i(t)]\,\beta[T(t)]\,i(t)\,dt \quad (6)$$
where $C_{k}$ is the coefficient of the $k$th-order term in the polynomial, $E$ the equilibrium potential, $\beta(T)$ the temperature factor, $\alpha(i)$ the discharge-rate factor, $R$ the internal resistance difference $(R_{2}-R_{1})$, $V$ the terminal voltage difference $(V_{0}-V_{1})$, and $i(t)$ the discharged capacity difference $(d_{2}-d_{1})$. The cycling capacity degradation is expressed by these general equations [12].
The DOD of a battery is also an important parameter for estimating the life of a battery, with the SOC referenced to the 100% (fully charged) state. SOC and DOD are the key parameters for checking the battery state, and DOD is inversely related to SOC: as DOD increases, SOC decreases. The SOC of a battery is expressed with the Coulomb count as:
$$SOC(t)=SOC(t-1)+\int_{0}^{t}\frac{I(t)}{C_{q}}\,dt \quad (7)$$
where $Q_{t}$ is the Coulomb count, expressed as:
$$Q_{t}=\int_{0}^{t}I(t)\,dt \quad (8)$$
With $SOC(t-1)=1$ being the condition of a battery in the fully charged state, from Eqs. (7) and (8) the SOC based on the accumulated Coulomb count is given as:
$$SOC(t)=1-\frac{Q_{t}}{C_{q}} \quad (9)$$
where $Q_{t}$ is the Coulomb count and $C_{q}$ the total capacity of the battery.
Battery life depends strongly on the SOC and DOD parameters. The degradation of the battery after repeated charge and discharge is called aging. Hence, the aging of the battery is based on the following parameters:
$$\mathrm{Battery\ Life}=f(SOC,\,DOD) \quad (10)$$
If the DOD is large, the capacity of a battery will degrade very fast. High charge and discharge rates will rapidly increase degradation. For example, to calculate the remaining battery life, the number of charge (SOC) and discharge (DOD) cycles that the battery has undergone must be known. The battery DOD is identified from the SOC condition. The electrochemical reactions in the battery are given as current, electron, and resistance variables with respect to time.
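As a concrete illustration of Eqs. (7)-(9), here is a minimal Coulomb-counting sketch; the sign convention, taking discharge current as positive, is an assumption on our part and is not stated in the text.

```python
import numpy as np

def soc_coulomb_counting(current_a, dt_s, capacity_ah, soc0=1.0):
    """SOC(t) = SOC(0) - Q(t)/C_q, with Q(t) the integrated discharge
    current (Eq. (9)); discharge current is assumed positive here."""
    q_ah = np.cumsum(current_a) * dt_s / 3600.0  # Coulomb count in Ah
    return soc0 - q_ah / capacity_ah

# hypothetical 0.5C constant-current discharge of a 16 Ah pack
current = np.full(3600, 8.0)  # 8 A for one hour, sampled every second
soc = soc_coulomb_counting(current, dt_s=1.0, capacity_ah=16.0)
print(soc[-1])                # ~0.5 after discharging half the capacity
```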
Based on these electrochemical reactions, heat is also generated by the battery, and according to its thermal state, a battery must be arranged for charging and discharging to conduct experiments.
Fig. 2. Basic Thevenin Equivalent Circuit Model.
2.3 Temperature Effects in the Battery
When a cell is charged or discharged, its temperature varies, and accordingly, ion diffusion in the solid is affected. The temperature of a cell, from the energy equation under isothermal conditions, is [13]:
$$\rho C_{p}\frac{\partial T}{\partial t}=Q_{gen}-q \quad (11)$$
where $\rho$ is the density of the cell, $C_{p}$ the heat capacity of the cell, and $Q_{gen}$ the heat generation rate per unit volume. The heat flux between a cell and its surroundings is expressed as:
$$q=\frac{h}{d}\left(T-T_{a}\right) \quad (12)$$
where $h$ is the heat transfer coefficient, $d$ the thickness of the cell, and $T_{a}$ the ambient temperature.
A battery generates heat during charge and discharge operations, and this heat is calculated from a voltage difference such as the one between the OCV (open-circuit voltage) and the terminal voltage. In general, heat generation is expressed as the sum of reversible and irreversible terms: the irreversible heat source term involves the difference between the terminal voltage and the OCV, while the reversible heat source term involves the change of the OCV with temperature (an entropic effect):
$$Q_{gen}=\frac{1}{V}\left(U_{OCV}-V_{T}\right)-\frac{1}{V}\left(T\cdot\frac{\partial U_{OCV}}{\partial T}\right) \quad (13)$$
where $U_{OCV}$ is the open-circuit voltage, $V_{T}$ the terminal voltage, and $T$ the temperature of the battery. Even for a small change in the battery, the temperature increases, and the effects will be observed in future use.
Table 1. Characteristics of the Li-Polymer Battery: pack type 3S1P; columns: Current [A], Rated Voltage [V], Voltage [V], Power [W].
Fig. 3. Realization of the Battery Pack Model.
Fig. 4. Neural Network Structure.
3. kNN Algorithm Design for SOC and DOD Model Training
Machine learning methods are used to build a predictive model, and one must be selected based on the requirements and the error percentage deviation. In this study, the kNN regression algorithm was used to predict the SOC of a battery based on the voltage degradation parameter. In this method, the target is predicted based on the Euclidean distance: the Euclidean distance is computed to find the nearest neighbor points, and the weighted average of the nearest neighbors is then used as the final prediction. The number of neighbors K must be chosen based on an accuracy comparison between trained and tested data. We choose the K nearest neighbors using the calculated distances; among the obtained K neighbors, we count the data points and take the category with the maximum number of neighbors. Finally, a predicted value is obtained as output. This study was carried out with a lithium-polymer pouch battery with a nominal voltage of 3.7 V and a rated current of 16 A. The pack is made of 3 cells arranged in series (3S1P) with a maximum power of 177 W. The battery SOC is then calculated as given in Eq. (14) [15]:
$$SOC_{\mathrm{Estimation}}=\frac{C_{\mathrm{capacity}}}{C_{\mathrm{rated}}}\times 100\% \quad (14)$$
where $C_{\mathrm{capacity}}$ is the capacity of the battery currently available and $C_{\mathrm{rated}}$ is the rated (initial) capacity of the battery. In this study, the equation was used to calculate the time for SOC estimation and the SOC origin of the lithium battery pack. This calculated SOC is sent to the kNN classifier for the prediction of SOC.
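A small numeric sketch of Eq. (14), together with the DOD relation of Eq. (15) given next (the capacity values are hypothetical):

```python
def soc_and_dod(c_available_ah, c_rated_ah):
    """Eq. (14): SOC as the percentage of rated capacity still available;
    Eq. (15): DOD as its complement (returned here in percent as well)."""
    soc = c_available_ah / c_rated_ah * 100.0
    return soc, 100.0 - soc

# hypothetical 16 Ah pack with 12 Ah still available
print(soc_and_dod(12.0, 16.0))  # -> (75.0, 25.0)
```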
The DOD is also an important parameter for estimating the life of a battery; a high DOD will decrease the battery life rapidly. DOD is expressed as:
$$DOD=1-\frac{C_{\mathrm{capacity}}}{C_{\mathrm{rated}}} \quad (15)$$
The battery life cycle decreases over time due to many factors and interconnected degradation mechanisms. Hence, the operating cycle conditions play a vital role in identifying parameters. The number of cycles is therefore considered a common parameter, and a C-rate of 0.5C and ambient temperature (25°C) are the most important variables [14]. The experimental results were sent to the machine learning algorithm and used for training to obtain the SOC. We then compared this predicted result with the experimental result. The accuracy of the predicted output is calculated from the error between the predicted and actual SOC values, given as the mean absolute error (MAE), i.e., the mean absolute difference between the actual and predicted values [16]:
$$MAE=\frac{1}{n}\sum_{i=1}^{n}\left|T_{i}-A_{i}\right| \quad (16)$$
where $n$ is the number of samples, $T_{i}$ is the predicted value, and $A_{i}$ is the actual value. The distance used is the Euclidean distance given in Eq. (17):
$$Y=\sqrt{\sum_{i=1}^{N}w_{n}\left(x_{t}-x_{i}\right)^{2}} \quad (17)$$
where $N$ is the number of dependent parameters, $x_{t}$ is the parameter of a test point, $x_{i}$ is the parameter of a training point, and $w_{n}$ is the weight of the parameter, given in Eq. (18):
$$w_{n}=\frac{\left(1/y_{t,i}\right)^{t}}{\sum_{i=1}^{k}\left(1/y_{t,i}\right)^{t}} \quad (18)$$
where $y_{t,i}$ is the distance between the nearest test and neighbor points. A sample calculation of the Euclidean distance based on the experimental results is as follows. Substituting the theoretical SOC into Eqs. (17) and (18) ($x_{t}=88$, $x_{i}=89$, which are SOC values with respect to time), we can easily calculate the distance between nearest and neighboring points from these coordinates. We obtained $y_{t,i}=1.280$ and $w_{n}=0.8194$, and the Euclidean distance calculated by this method was $y=1.31$ mm. In this way, we can calculate the shortest distance for each value based on the kNN layer number, yielding a predicted SOC that closely matches the experimental SOC.
Fig. 5. Network Structure for the Training Model.
Fig. 6. kNN Evaluation Model.
4. Experiment Process and Results
The experimental setup consists of charger and discharger units. The battery is subjected to the constant-current discharge method with a discharge rate of 0.5C. In the experimental procedure, the battery initially charges to 3.9 V, rests for one hour, and then discharges to 2.7 V at constant current. The experiment process is explained in the steps below:
[Step 1] The lithium battery pack, consisting of 3 cells, is fully charged with a constant current of 8 A; this state is called 100% SOC.
[Step 2] After completion of charging, the entire battery pack rests for one hour.
[Step 3] The lithium battery pack is discharged to the cutoff voltage.
[Step 4] Voltage and current are measured during discharge and sent to a computer via sensors.
[Step 5] Steps (1) to (4) are repeated for every cycle, and data is collected during the process. If temperatures increase rapidly, the corresponding temperatures are also noted.
[Step 6] The recorded measurement data is sent to the kNN model for the training process.
[Step 7] The data is learned using the kNN model.
[Step 8] The actual SOC estimation result and the predicted (learned) SOC estimation result are compared.
[Step 9] The mean absolute error is used to calculate the error rate.
The experimental setup is shown in Fig. 7. A discharge unit, power supply, battery, and computer with a current sensor and voltage sensors were used for this experiment. After charging the battery up to the required maximum rated voltage, it rests for 30 to 60 minutes and then starts to discharge. We then collect the charge and discharge voltage and current with respect to time, and they are stored in a computer. The voltage values over 10 cycles were the same; hence, averages over every 10, 50, 100, and 150 cycles were collected. We calculated the theoretical SOC of the battery and compared it with the proposed kNN algorithm. The experimental result and the SOC predicted with the machine learning model were obtained. The deviation between the measured and predicted SOC was evaluated with the MAE method. The details of the measured and predicted results, with the error between these models, are explained in the next section.
Fig. 7. Experimental Setup of the Battery Pack.
Fig. 8. Comparison of SOC between Experiment and kNN Model during Discharge: (a) Experiment Result, (b) 10 Cycles, (c) 50 Cycles, (d) 100 Cycles and (e) 150 Cycles.
4.1 Result of Voltage and SOC in the Li-polymer Battery during Discharge
The experimental results shown in Fig. 8 compare the theoretical SOC estimation and the predicted SOC (Figs. 8(b) to (e)). The MAE was calculated and shows a deviation of 0.74% from the originally calculated SOC value, which was the lowest value. The yellow line indicates experimental data, and the blue dotted line shows the kNN prediction of the state of charge. The results are satisfactory. From the experimental result, we observed that the voltage gradually degrades as the battery undergoes consecutive discharge cycles. For 10 cycles, the constant voltage is reduced, and a complete discharge of the battery takes 90 minutes. After 50 cycles, however, a more rapid decrease in the voltage is observed, and the battery discharges completely within 80 minutes. This result is compared with the SOC predicted by the kNN. The predicted and experimental results have an MAE of 0.74%; hence, the predicted SOC is very close to the measured or calculated SOC value. The MAEs between the kNN and experimental results were calculated, and the lowest value obtained was 0.74%. MAE graphs are plotted for 10, 50, 100, and 150 cycles in Figs. 8(a) to (d), showing the error value on the y-axis at the corresponding time on the x-axis. Similarly, for discharge, the MAE deviation remains below 0.5. The experiment and the MAE are satisfactory.
4.2 Result of Voltage and SOC in the Li-polymer Battery during Charging
The same experiment was repeated for charging, using the constant-current method with a charge rate of 0.5C. We observed that the voltage gradually increases as the battery undergoes consecutive charge cycles.
For 10 cycles, the voltage increases from 2.7 V to 4.0 V under constant current, and a complete charge of the battery takes 90 minutes. After 50 cycles, however, a more rapid increase in the voltage was observed, and the battery charged completely within 80 minutes. This result was compared with the SOC predicted by kNN machine learning. The predicted and experimental results have an MAE of 0.74%; hence, the predicted SOC is very close to the measured or calculated SOC value. The results from the experiment are indicated in Fig. 9(a), and the comparison of predicted and experimental results with the MAE is presented in the graphs in Figs. 9(b) to (e). Tables 2 and 3 compare the temperatures at the ends of cycles during charge and discharge of the battery based on the operating time and voltage. It is observed that as the operating time increases, the battery temperature also gradually increases.
Table 2. Charging Experiment Results: Time [min] and Temperature [°C] per cycle group (10 cycles onward).
Table 3. Discharge Experiment Results.
Fig. 9. Comparison of SOC between Experiment and kNN Model during Charge: (a) Experiment Result, (b) 10 Cycles, (c) 50 Cycles, (d) 100 Cycles, (e) 150 Cycles.
5. Conclusion
In this paper, we proposed an SOC estimation method for a battery using the kNN machine learning method. The following conclusions were drawn from this study:
· Experiments were conducted at room temperature. The proposed method was used to estimate SOC in real time.
· A lithium-polymer battery rated at 4 V and 16 A was used in this study.
· The battery was charged and discharged with the constant-current method at a rate of 0.5C.
· During the experiment, voltage variations in the battery after 10, 50, 100, and 150 cycles were observed.
· Up to 10 cycles, the battery performance was good; it gradually decreased after 150 cycles.
· The experimental results were compared with the kNN machine learning method for instant prediction of SOC.
· Two datasets were used for prediction: one for voltage degradation during discharge at different cycles and one for voltage during charging.
· After training and prediction, we evaluated the results and compared the experiment and the kNN method.
· The kNN algorithm had an MAE of 0.74% and achieved 98% accuracy for the two different types of batteries.
This model has the advantage of avoiding complex numerical and analytical solvers. In future work, different machine learning algorithms will be investigated, observing temperature changes in the battery with a linear regression algorithm.
Acknowledgment
This work was supported by the Korea Ministry of Trade, Industry and Energy under the grant of "The development of high strength lightweight aluminum battery package and PCM-BTMS for high safety and battery efficiency improvement of electrical vehicle 2021."
References
[1] Zhang Wenqiang, et al., 2018, A durable and safe solid-state lithium battery with a hybrid electrolyte membrane, Nano Energy, Vol. 45, pp. 413-419.
[2] Song Xiangbao, et al., 2019, Combined CNN-LSTM network for state-of-charge estimation of lithium-ion batteries, IEEE Access, Vol. 7, pp. 88894-88902.
[3] Chen C., Xiong R., Yang R., Shen W., Sun F., 2019, State-of-charge estimation of lithium-ion battery using an improved neural network model and extended Kalman filter, J. Clean. Prod., Vol. 234, pp. 1153-1164.
[4] Huang Deyang, et al.
, 2019, A model-based state-of-charge estimation method for series-connected lithium-ion battery pack considering fast-varying cell temperature, Energy, Vol. 185, pp. 847-861.
[5] Abbas G., Nawaz M., Kamran F., 2019, Performance comparison of NARX & RNN-LSTM neural networks for LiFePO4 battery state of charge estimation, in 2019 16th International Bhurban Conference on Applied Sciences and Technology (IBCAST), IEEE, pp. 463-468.
[6] Lu Languang, et al., 2013, A review on the key issues for lithium-ion battery management in electric vehicles, Journal of Power Sources, Vol. 226, pp. 272-288.
[7] Meng Jinhao, et al., 2017, An overview and comparison of online implementable SOC estimation methods for lithium-ion battery, IEEE Transactions on Industry Applications, Vol. 54, No. 2, pp. 1583-1591.
[8] Kim Jonghoon, et al., 2011, Stable configuration of a Li-ion series battery pack based on a screening process for improved voltage/SOC balancing, IEEE Transactions on Power Electronics, Vol. 27, No. 1, pp. 411-424.
[9] Lee P. Y., et al., 2017, Screening method using the cell deviation for Li-ion battery pack of the high power application, Proceedings of the KIPE Conference, The Korean Institute of Power Electronics.
[10] He Wei, et al., 2018, A physics-based electrochemical model for lithium-ion battery state-of-charge estimation solved by an optimised projection-based method and moving-window filtering, Energies, Vol. 11, No. 8, pp. 2120.
[11] You Hyun Woo, et al., 2018, Analysis of equivalent circuit models in lithium-ion batteries, AIP Advances, Vol. 8, No. 12, pp. 125101.
[12] Birkl Christoph R., et al., 2017, Degradation diagnostics for lithium ion cells, Journal of Power Sources, Vol. 341, pp. 373-386.
[13] Li Xueyan, Meng Xiao, Song-Yul Choe, 2013, Reduced order model (ROM) of a pouch type lithium polymer battery based on electrochemical thermal principles for real time applications, Electrochimica Acta, Vol. 97, pp. 66-78.
[14] Gao Lijun, Shengyi Liu, Roger A. Dougal, 2002, Dynamic lithium-ion battery model for system simulation, IEEE Transactions on Components and Packaging Technologies, Vol. 25, No. 3, pp. 495-505.
[15] Chang Wen-Yeau, 2013, The state of charge estimating methods for battery: A review, International Scholarly Research Notices 2013.
[16] Sidhu Manjot S., Deepak Ronanki, Sheldon Williamson, 2019, Hybrid state of charge estimation approach for lithium-ion batteries using k-nearest neighbour and gaussian filter-based error cancellation, 2019 IEEE 28th International Symposium on Industrial Electronics (ISIE), IEEE.
Teressa Talluri is a Ph.D. student at Busan University of Foreign Studies. She is a professor in the Mechanical Engineering Department at KL University, India, where she received the best faculty award. She received a master's of technology degree from Jawaharlal Nehru Technological University, Kakinada, India, where she was the university topper in her master's course, and a bachelor's degree from Jawaharlal Nehru Technological University, Hyderabad, India. She has also researched battery thermal management with phase-change materials, obtaining good results. Her research interests are thermal management of electric vehicle batteries, heat transfer, artificial intelligence, and robotics.
Hee Tae Chung has been a professor in the Department of Electronic and Robot Engineering at Busan University of Foreign Studies, Busan, Korea, since 1997. He received his M.S. and Ph.D.
degrees in electronic engineering from Kyungpook National University, Daegu, Korea, in 1988 and 1996, respectively. Between 1996 and 1997, he worked as a Patent Examiner in the Korean Industrial Property Office. His current research areas include the application of intelligent control to robot systems, adaptive control, and deep learning with neural networks.
Kyoo Jae Shin is a professor of Intelligence Robot Science at Busan University of Foreign Studies (BUFS), Busan, South Korea. He is the director of the Future Creative Science Research Institute at BUFS. He received his B.S. degree in electronics engineering in 1985 and an M.S. degree in electrical engineering from Cheonbuk National University (CNU) in 1988, and he received his Ph.D. degree in electrical science from Pusan National University (PNU) in 2009. Dr. Shin was a professor at the Navy Technical Education School and a main director for research associates on a dynamic stabilization system at the Dusan Defense Weapon Research Institute. He has also researched and developed a fish robot, a submarine robot, an automatic bug-spray robot for glass rooms, an automatic milking robot using a manipulator, a personal electric vehicle, a smart accumulated aquarium using a heat pump, a solar tracking system, a 3D hologram system, and a gun/turret stabilization system. He is interested in intelligent robots, image signal processing application systems, and smart farms and aquariums using new energy.
Facial biotype classification for orthodontic treatment planning using an alternative learning algorithm for tree augmented Naive Bayes
Gonzalo A. Ruz, Pamela Araya-Díaz & Pablo A. Henríquez
BMC Medical Informatics and Decision Making, volume 22, Article number: 316 (2022)
Background: When designing a treatment in orthodontics, especially for children and teenagers, it is crucial to be aware of the changes that occur throughout facial growth, because the rate and direction of growth can greatly affect the necessity of using different treatment mechanics. This paper presents a Bayesian network approach for facial biotype classification, classifying patients' biotypes into Dolichofacial (long and narrow face), Brachyfacial (short and wide face), and an intermediate kind called Mesofacial; we develop a novel learning technique for tree augmented Naive Bayes (TAN) for this purpose.
Results: The proposed method, on average, outperformed all the other models based on accuracy, precision, recall, $F_{1}$-score, and kappa for the particular dataset analyzed. Moreover, the proposed method presented the lowest dispersion, making the model more stable and robust against different runs.
Conclusions: The proposed method obtained high accuracy values compared to other competitive classifiers. When analyzing a resulting Bayesian network, many of the interactions shown in the network had an orthodontic interpretation. For orthodontists, the Bayesian network classifier can be a helpful decision-making tool.
Background
In recent years, there has been a rise in the use of machine learning-based tools in medical treatments to aid decision-making for treatment planning. In particular, the output of these models can be used as a support tool for the health personnel who ultimately make the decisions. Given the implications of these decisions for patients, the machine learning technique used should be interpretable. An interesting machine learning technique for this purpose is Bayesian networks (BN) [1], which combine graph theory with probability theory. In the field of dentistry, BN have been applied in diverse areas. For example, in [2], BN were employed to describe certain tooth color parameters prior to and during the application of a certain orthodontic procedure. To better understand the underlying data structure of the patterns of dental caries in the population, the prevalence of dental caries in the primary dentition of 352 Myanmar schoolchildren was examined at the tooth level using BN in [3]. The effectiveness of BN in the assessment of dental age-related evidence obtained using a geometrical approximation approach of the pulp chamber volume was examined in [4]. BN are used in [5] for age estimation and classification based on dental evidence, in particular the development of third molars. A BN clinical decision support system was designed in [6] to assist general practitioners in determining whether patients with permanent dentition need orthodontic treatment. A Dental Caries Clinical Decision Support System, which uses a BN to provide suggestions and represent clinical patterns, is evaluated in [7]; the outcomes demonstrated the Bayesian network's accuracy in various cases. In [8], a minimally invasive method for elevating the lateral maxillary sinus was described, and BN were used to determine the links between the parameters involved. The use of BN on MR images to identify temporomandibular disorders was investigated in [9].
The goal was to ascertain how temporomandibular disorders were diagnosed, concentrating on how each finding affected the others. The results demonstrated that the BN path-condition method was more than 99% accurate when employing resubstitution validation and 10-fold cross-validation. The key benefit of utilizing BN, however, is their ability to express the causal links between various data and assign conditional probabilities, which might subsequently be utilized to interpret the course of temporomandibular disorders. In [10], BN are used to identify and depict the relationships between several Class III malocclusion maxillofacial features during growth and treatment. The authors demonstrate that, compared to individuals undergoing orthodontic treatment with rapid maxillary expansion and facemask therapy, untreated subjects exhibit different Class III craniofacial growth patterns. It is also important to point out that BN have been used for meta-analysis in several dental research topics [11,12,13,14,15,16].
BN are probabilistic graphical models representing discrete random variables and conditional dependencies via a directed acyclic graph (DAG). In classification (supervised learning) problems, when using a probabilistic approach, the difficulty is to compute efficiently the posterior probability of the class variable $Y_{k}$ (with $k=1,\dots,K$) given an n-dimensional input data point ${\mathbf {x}}=(x_{1},\dots ,x_{n})$. This can be carried out using the Bayes rule:
$$p(Y_{k}|{\mathbf {x}})=\frac{p(Y_{k})p({\mathbf {x}}|Y_{k})}{p({\mathbf {x}})}. \quad (1)$$
The numerator, which comprises the a priori probability of the class variable and the likelihood (the joint probability of the input features conditioned on the class variable), is what matters here. The calculation of the class variable's a priori probability is simple: it can be determined from the relative frequencies of the class values in the training set. However, there are numerous methods for calculating the likelihood; one of them is the use of Bayesian networks, giving rise to Bayesian network classifiers [17]. There are various Bayesian network classifiers [18,19,20,21,22,23], but the two most often used are the tree augmented Naive Bayes (TAN) classifier [17] and the Naive Bayesian network classifier, also known as Naive Bayes [24]. The Naive Bayes approach computes the likelihood in (1) by assuming conditional independence among the attributes given the class variable; as a result, there are no edges between the attributes. TAN, in contrast, begins by considering a fully connected network with weighted edges, using the conditional mutual information between pairs of attributes to generate these weights. Then, Kruskal's algorithm is applied to produce a tree structure, the maximum weighted spanning tree (MWST), leaving just $n-1$ edges. In this version of the Bayesian network classifier, each attribute has an incoming edge from another attribute, with the exception of the selected root attribute node. The TAN model relaxes the naive version's strong assumption of conditional independence, so in theory it ought to deliver better outcomes (accuracy) than Naive Bayes. However, TAN has significant drawbacks, one of which is the difficulty of estimating the conditional mutual information accurately.
Two direct difficulties when working with conditional mutual information are: (1) the computational complexity for $n$ nodes and $N$ training samples is ${\mathcal {O}}(n^{2}N)$ [25]; therefore, for datasets with many attributes the computation becomes very slow, needing more computational power; (2) the conditional mutual information estimate becomes unreliable when there are not enough training instances in each class to accurately estimate the joint probability distribution and the conditional distributions. This is significant because the conditional mutual information values are used as weights in the fully connected graph throughout TAN's tree construction technique. The obvious question is: can the network weights be learned from the data to achieve satisfactory classification results without estimating conditional mutual information?
When planning a treatment in orthodontics, especially for children and teenagers, it is crucial to be aware of the changes that take place throughout facial growth, because the rate and direction of growth can greatly affect the necessity of using different treatment mechanics. Ricketts' VERT index is one of the most widely used methods for identifying facial biotypes [26]. Based on the VERT index, the biotypes can be divided into Dolichofacial (long and narrow face), Brachyfacial (short and wide face), and an intermediate form known as Mesofacial.
In this paper, we propose a different approach for learning TAN classifiers without estimating conditional mutual information. Instead, we use an evolution strategy to learn the weights of the networks from the data. Using attributes that are unaffected by the sagittal position of the jaws, we apply the proposed method to automatically classify a patient's biotype, eliminating the inaccuracies observed with the VERT index. In particular, one of the measurements used to calculate the VERT index is the facial depth, which indicates the sagittal relationship between the jaws. When this sagittal relationship is altered, the VERT is also altered. Therefore, a higher VERT is obtained in individuals with a prominent jaw, diagnosing the patient as more Brachyfacial than it really is. Conversely, a patient with a mandible positioned further back will appear more Dolichofacial than it really is.
Results
The results are shown in Table 1. Overall, we notice that $(\mu ,\lambda )$-TAN, on average, outperforms all the other models for the particular dataset analyzed. Moreover, $(\mu ,\lambda )$-TAN presents the lowest dispersion, making this model more stable and robust against different runs. Table 2 shows that the results of $(\mu ,\lambda )$-TAN in terms of accuracy are statistically significantly different from the results obtained by the other methods. It is also worth highlighting that the results obtained are better than previously published results for the same dataset [27]. The best resulting network using $(\mu ,\lambda )$-TAN is shown in Fig. 1. For better visualization, we have omitted from this figure the node with the class variable and the edges from this node to all the other nodes. We used the importance function from the randomForest package in R [28] to create a smaller network. Based on the Gini importance, a metric used to assess node impurity during the tree inference process, this function calculates the importance of each attribute (in decision trees or random forests). The outcome is displayed in Fig. 2.
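To make the contrast concrete, the following is a minimal sketch of the classical CMI-plus-MWST TAN skeleton construction described in the Background, which the proposed method avoids. It uses a plug-in estimate on discrete data and is schematic only, not the authors' code.

```python
import numpy as np
from itertools import combinations
from scipy.sparse.csgraph import minimum_spanning_tree

def cond_mutual_info(xi, xj, y):
    """Empirical I(Xi; Xj | Y) for discrete arrays (plug-in estimate)."""
    cmi = 0.0
    for c in np.unique(y):
        m = y == c
        p_c = m.mean()
        for a in np.unique(xi[m]):
            for b in np.unique(xj[m]):
                p_ab = np.mean((xi[m] == a) & (xj[m] == b))
                p_a, p_b = np.mean(xi[m] == a), np.mean(xj[m] == b)
                if p_ab > 0:
                    cmi += p_c * p_ab * np.log(p_ab / (p_a * p_b))
    return cmi

def tan_tree(X, y):
    """Classical TAN skeleton: CMI edge weights plus a maximum weighted
    spanning tree; scipy only ships a *minimum* spanning tree, so the
    weights are negated (pairs with exactly zero CMI are treated as absent)."""
    n_feat = X.shape[1]
    W = np.zeros((n_feat, n_feat))
    for i, j in combinations(range(n_feat), 2):
        W[i, j] = cond_mutual_info(X[:, i], X[:, j], y)
    return minimum_spanning_tree(-W).toarray() != 0  # boolean edge matrix

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 5))  # 5 discretized attributes
y = rng.integers(0, 3, size=200)       # 3 classes, e.g., the facial biotypes
print(tan_tree(X, y))                  # adjacency of the 4-edge tree
```

The double loop over attribute pairs is exactly where the ${\mathcal {O}}(n^{2}N)$ cost and the small-sample estimation problem noted above arise.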
Table 1. Performance measures for each model.
Table 2. Statistical significance test for different simulations in terms of accuracy.
Fig. 1. The $(\mu ,\lambda )$-TAN classifier for the facial biotype dataset: the best $(\mu ,\lambda )$-TAN model obtained throughout the 20 runs.
Fig. 2. Attribute importance: attribute ranking based on the Gini importance measure.
Using the top four attributes from Fig. 2, the outcomes of our repeated experiments are displayed in Tables 3 and 4. We notice similar results as before, with slight improvements in the evaluation measures. The best resulting network in this case using $(\mu ,\lambda )$-TAN is shown in Fig. 3.
Table 3. Performance measures for each model (with four attributes).
Fig. 3. The $(\mu ,\lambda )$-TAN classifier for the facial biotype dataset (with the top 4 attributes): the best $(\mu ,\lambda )$-TAN model obtained throughout the 20 runs.
To evaluate the robustness of the proposed method, we tested $(\mu ,\lambda )$-TAN on high-dimensional datasets chosen from the UCI database [29]. Three datasets were considered, as described in Table 5. Table 6 shows the performance of $(\mu ,\lambda )$-TAN and RF. It can be noticed that, on all three datasets, our method achieves better performance on average.
Table 5. Information on the high-dimensional datasets.
Table 6. Performance measures for the proposed method using high-dimensional data.
Discussion
From Fig. 1 we notice that Mc7 (Lower anterior facial height) is the parent node of 3 variables: St1 (SNA angle), Ja4 (Lower Gonial angle), and Ri21 (Symphysis length). Mc7 is measured from a point (the anterior nasal spine) that is close to one of the points constituting the SNA angle (point A), and both points are part of the same structure (the maxilla), so the modification of the first could be accompanied by a modification of St1 as well. On the other hand, the relationship between Mc7, Ja4, and Ri21 is explained by the fact that all three correspond to vertical measurements, and the modification of one should be accompanied by the modification of the other two variables. Ja4 is the parent node of Ja9 (Cranial base and mandibular length ratio), a relationship for which we do not have a satisfactory biological explanation, since Ja4 is a vertical measurement and Ja9 a horizontal one. In turn, Ja9 is the parent node of Ja8 (Mandibular corpus length), which is explained by the fact that Ja8 is one of the measurements that make up Ja9. Ri21 is the parent node of Ri19 (Condylar height) and Ja5 (Anterior cranial base length), a relationship that does not have an acceptable biological explanation, except that, as they correspond to linear measurements, they are influenced by the volumetric proportionality that exists between the structures given the greater or lesser overall size of the skull. St1 is the parent node of Ja3 (Upper Gonial angle), a relationship that could be explained since both represent sagittal growth: St1 indicates the sagittal position of the maxilla with respect to the skull, and Ja3 the horizontal projection of the mandible; normally, both structures tend to grow proportionally in the sagittal direction. Ja3 is the parent node of Ri18 (Posterior height), which is explained by the fact that both measurements share a reference point (gonion).
Ri18 is the parent node of Ri10 (Maxillary depth angle) and Ja6 (Posterior cranial base length). There is no biological explanation for the relationship between Ri18 and Ri10, since one corresponds to a sagittal measurement and the other is vertical, and they are measured in different areas of the face. In the case of Ri18 and Ja6, they use different landmarks, but both measure the posterior height of the face, so a relationship between the two variables is clearly explained. Ri10 is the parent node of Ri13 (Anterior cranial length), a relationship that can be explained since both measurements share a reference point (Nasion). Ri13 is the parent node of Ri11 (Palatal plane angle) and Ja12 (Jarabak's ratio); Ri13 and Ja12 share a reference point (Nasion), and all three correspond to vertical measurements, so the relationship between them is justifiable. Ri11 is the parent node of Mc3 (Linear distance from point A to nasion perpendicular), which is explained because they share a reference point in the maxilla (Nasion), and the modification of this point would produce a change in both variables. Ja12 (Jarabak's ratio) is the parent node of 3 variables: Ri15 (Mandibular corpus axis), Mc6 (Maxillary length), and Ja1 (Saddle angle). Regarding this relationship, Ja12 and Ri15 correspond to measures indicative of the magnitude of vertical growth, and Ja1 is part of Ja12; the link with Mc6, on the other hand, cannot be explained biologically in a satisfactory way. Ja1 (Saddle angle) is the parent node of Ja2 (Articular angle), which is explained by the fact that both are contiguous angles that tend to compensate each other; that is, if one angle increases, the other tends to decrease in order to maintain the proportionality of the face. Ri15 (Mandibular corpus axis) is the parent node of Ja7 (Ramus height) and Ri17 (Mandibular ramus position), a relationship explained by the fact that Ri15 and Ri17 share a reference point (Xi), and that the three measurements correspond to vertical variables. Ri17 is the parent node of Ri12 (Cranial deflection), a relationship explained by the fact that both measurements contain the Porion-Orbitale line. Mc6 (Maxillary length) is the parent node of Ja10 (Posterior facial height), Ar5 (Nasolabial angle), and Ja11 (Anterior facial height), a relationship that does not have an acceptable biological explanation, except for the volumetric proportionality that exists between the structures containing the landmarks corresponding to Mc6, Ja10, and Ja11, given the greater or lesser overall size of the skull, and the fact that Ar5 can be influenced by Mc6 since the upper lip rests on the maxilla, although this relationship is not direct since it depends mainly on the sagittal position of the maxilla. Ar5 is the parent node of Ri9 (Maxillary height angle), which could be explained by the fact that, as in the previous case, the position of the upper lip can be modified by the position of the maxilla, although this relationship is not direct since Ri9 is a measure indicative of the vertical, not sagittal, position of the maxilla. Ri9 is in turn the parent node of Ja13 (Posterior cranial base to ramus height ratio), a relationship for which we do not have a satisfactory biological explanation since, although both are vertical measurements, they correspond to different areas of the face.
Ja10 (Posterior facial height) is the parent node of Ri16 (Articular cavity position: Porion to Ptv); however, there is no direct biological explanation for this relationship. Ri16 is in turn the parent node of Ri20 (Condylar neck length); although the condyle is related to the joint cavity, we did not find an explanation for the relationship between the sagittal position of the joint cavity (Ri16) and the length of the neck of the condyle. Ri20 is the parent node of Mc5 (Mandibular length), a relationship that could be explained by the fact that Mc5 has a reference point in the condyle, the condyle is related to the joint cavity, and both are sagittal measurements.
When analyzing the importance of each variable shown in Fig. 2, we notice that the four variables with the greatest discriminatory power are: Ja4 (Lower Gonial angle), Ja12 (Jarabak's ratio), Mc7 (Lower anterior facial height), and Mc3 (Linear distance from point A to nasion perpendicular). The first 3 variables are the measurements that account for the direction of vertical growth of the mandible, which is the main determinant of the pattern of facial growth, so it is logical that they appear as the most important. The Mc3 variable, on the other hand, is indicative of the sagittal position of the maxilla with respect to the skull, which is not considered a determinant of the pattern of facial growth; however, it could be related, since the rotation of the mandible is normally accompanied by a rotation of the maxilla in the same direction and of the same magnitude. In the case of Fig. 3, it is observed that Mc3 (Linear distance from point A to nasion perpendicular) is the parent node of Ja4 (Lower Gonial angle); however, there is no direct biological explanation for this relationship, since Mc3 is a sagittal measurement of the maxilla and Ja4 a vertical measurement of the mandible. In turn, Ja4 is the parent node of the variables Ja12 (Jarabak's ratio) and Mc7 (Lower anterior facial height), which can be explained because the three variables correspond to measures indicative of vertical growth, so that when one of them increases or decreases, the others also increase or decrease proportionally.
Conclusions
In this paper, we have presented an alternative learning method based on an evolution strategy to learn the weights for constructing the TAN classifier. We applied this method to the facial biotype classification problem, obtaining high accuracy values compared to other competitive classifiers. When analyzing a resulting BN from $(\mu ,\lambda )$-TAN, many of the interactions shown in the network had an orthodontic interpretation; nevertheless, there were a few that did not have a satisfactory biological explanation. Future research will consider more benchmark datasets as well as other medical applications.
Methods
Dataset description
We use the dataset of [27], which comprises 182 lateral teleradiographs taken from patients in Chile. For each one, 31 continuous attributes that describe the craniofacial morphology were computed using cephalometric analysis (see Table 7). Orthodontists have manually classified and validated each lateral teleradiograph into one of the three categories (Brachyfacial, Dolichofacial, and Mesofacial).
Table 7. A description of the attributes [27].
Alternative learning algorithm for tree augmented Naive Bayes
We propose an evolution strategy (ES) for learning TAN classifiers.
The standard versions of the ES are denoted by [30]
$$\begin{aligned} (\mu ,\lambda )\text{-ES} \quad \text {and} \quad (\mu +\lambda )\text{-ES}, \end{aligned}$$
where \(\lambda\) represents the number of offspring and \(\mu\) the number of parents. The parents are deterministically chosen (i.e., deterministic survivor selection) from the multi-set of either the offspring only, known as comma-selection (\(\mu <\lambda\) must hold), or both the parents and offspring, known as plus-selection. Selection is based on the ranking of the individuals' fitness, taking the \(\mu\) best individuals (also referred to as truncation selection). In this study, we use the deterministic \((\mu ,\lambda )\) survivor selection technique to generate weights for the TAN model that produce good facial biotype classification results without estimating the conditional mutual information. To do this, a candidate solution (an individual) is encoded as an m-dimensional vector that holds the m weight values of a network. For a network of n nodes we must define \(m=n(n-1)/2\) weights, so for \(n=31\) we must determine 465 weights (parameters). We proceed as follows to determine suitable values for these weights. We create an initial population of \(\mu\) individuals by drawing all weight values of each individual uniformly at random from the unit hypercube. The procedure is then repeated a specified number of times, and each iteration starts with a population-wide evaluation of each proposed solution. A flowchart that briefly explains how the ES algorithm functions is shown in Fig. 4. The reader is directed to [31] for further information on how ES functions.

Fig. 4. Flowchart of the evolution strategy for the alternative TAN learning algorithm.

The fitness function is the accuracy, i.e., the percentage of correctly classified instances in the training set, and the highest scoring \(\mu\) parents are then chosen. Concretely: the parent population consists of \(\mu =10\) individuals; \(\lambda =20\) offspring are produced per iteration; individuals die out after one iteration step (we use 1000 iterations) and only the offspring (the youngest individuals) survive to the following generation; then \(\mu\) parents are chosen from the \(\lambda\) offspring via environmental selection. These hyperparameters were selected by trial-and-error simulations.
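For concreteness, the following is a minimal sketch in Python of the \((\mu ,\lambda )\) loop described above. The paper does not specify the variation operator, so the Gaussian mutation with a fixed step size `sigma` is an assumption for illustration, and `fitness` stands for a user-supplied callable that builds the TAN from a weight vector and returns its training accuracy.

```python
import numpy as np

def mu_lambda_es(fitness, m, mu=10, lam=20, iters=1000, sigma=0.1, seed=None):
    """(mu,lambda)-ES with comma selection: parents die each generation and
    the mu best of the lam offspring survive (truncation selection)."""
    rng = np.random.default_rng(seed)
    # initial population: weights drawn uniformly from the unit hypercube
    parents = rng.uniform(0.0, 1.0, size=(mu, m))
    for _ in range(iters):
        idx = rng.integers(0, mu, size=lam)            # one parent per offspring
        offspring = parents[idx] + sigma * rng.standard_normal((lam, m))
        offspring = np.clip(offspring, 0.0, 1.0)       # stay inside [0, 1]^m
        scores = np.array([fitness(w) for w in offspring])
        parents = offspring[np.argsort(scores)[-mu:]]  # keep the mu best offspring
    return max(parents, key=fitness)                   # best surviving weight vector

# For n = 31 attributes, each individual holds m = 31 * 30 // 2 = 465 weights.
```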
Model performance assessment

We evaluated four metrics: precision (Prc), recall (Rec), accuracy (Acc), and \(F_{1}\)-score. These measures are calculated as follows:
$$\begin{aligned} Prc&=\frac{TP}{TP+FP}\times 100, \\ Rec&=\frac{TP}{TP+FN}\times 100, \\ Acc&=\frac{\hbox {number of correctly classified instances}}{\hbox {total number of instances}}\times 100, \\ F_{1}\hbox {-score}&=2\times \frac{Prc\times Rec}{Prc+Rec}, \end{aligned}$$
where TP, TN, FP, and FN stand for true positive, true negative, false positive, and false negative, respectively. Since we are dealing with a multiclass problem, we compute Prc, Rec, and \(F_{1}\)-score for each individual class and then report the average. Additionally, we calculate the Kappa statistic, which contrasts the trained model's Acc (in the test set) with the accuracy of a random model. We utilize the classification suggested in [32] to interpret the Kappa value: values \(\le 0\) indicate poor agreement, 0-0.2 slight, 0.21-0.4 fair, 0.41-0.6 moderate, 0.61-0.8 substantial, and 0.81-1 practically perfect agreement.

Experimental setup

The continuous features were discretized using Fayyad and Irani's Minimum Description Length method [33], which has been shown to have a positive effect on classifier performance [34]. We compared the performance of the Naive Bayesian network classifier (NB), TAN, support vector machine (SVM) [35], decision tree (DT) [36], Random Forest (RF) [37], random vector functional link neural network (RVFL) [38], Averaged TAN (ATAN) [39], and the proposed method \((\mu ,\lambda )\)-TAN. Four greedy hill-climbing algorithms were also used as a basis for learning Bayesian network classifiers: hill-climbing tree augmented Naive Bayes (HC-TAN) [40]; hill-climbing super-parent tree augmented Naive Bayes (HC-SP-TAN) [40]; backward sequential elimination and joining (BSEJ) [18]; and forward sequential selection and joining (FSSJ) [18]. The HC-TAN and HC-SP-TAN algorithms begin with a Naive Bayes structure and continue to add edges until the network score no longer increases. Beginning with a Naive Bayes structure, BSEJ adds augmenting edges and then removes features from the model until there is no further increase in the network score. FSSJ, on the other hand, begins with a structure that contains only the class node and builds upon it by adding features and augmenting edges.

Each experiment was run 20 times, and averages and standard deviations were recorded. For each run, we randomly divided the dataset into 70% for training and 30% for testing. For all algorithms that needed hyperparameter tuning, we further partitioned the original training set into 70% for training and the remaining 30% for evaluating different hyperparameter configurations through grid search. All simulations were carried out in the open-source R software environment for statistical computing. We used the accuracy metric and the Kappa statistic on the test set to assess classification performance. Additionally, a paired-sample t-test on accuracy was conducted to assess the statistical significance of differences between simulations. The corresponding author can be contacted via email for direct access to the dataset used in this study.

Abbreviations

BN: Bayesian network; ES: Evolution strategy; NB: Naive Bayesian network classifier; TAN: Tree augmented Naive Bayes; SVM: Support vector machine; DT: Decision tree; RVFL: Random vector functional link neural network; ATAN: Averaged tree augmented Naive Bayes; \((\mu ,\lambda )\)-TAN: Evolution strategy deterministic survivor selection tree augmented Naive Bayes; HC-TAN: Hill-climbing tree augmented Naive Bayes; HC-SP-TAN: Hill-climbing super-parent tree augmented Naive Bayes; BSEJ: Backward sequential elimination and joining; FSSJ: Forward sequential selection and joining.

References

1. Pearl J. Probabilistic reasoning in intelligent systems: networks of plausible inference. San Francisco: Morgan Kaufmann; 1988.
2. Mesaros A-S, Sava S, Mitrea D, Gasparik C, Alb C, Mesaros M, Badea M, Dudea D. In vitro assessment of tooth color changes due to orthodontic treatment using knowledge discovery methods. J Adhes Sci Technol. 2015;29(20):2256–79.
3. Nomura Y, Otsuka R, Wint WY, Okada A, Hasegawa R, Hanada N. Tooth-level analysis of dental caries in primary dentition in Myanmar children. Int J Environ Res Public Health. 2020;17(20):7613.
4. Sironi E, Taroni F, Baldinotti C, Nardi C, Norelli G-A, Gallidabino M, Pinchi V. Age estimation by assessment of pulp chamber volume: a Bayesian network for the evaluation of dental evidence. Int J Legal Med. 2018;132(4):1125–38.
5. Sironi E, Pinchi V, Pradella F, Focardi M, Bozza S, Taroni F. Bayesian networks of age estimation and classification based on dental evidence: a study on the third molar mineralization. J Forensic Legal Med. 2018;55:23–32.
6. Bhornsawan T. Bayesian-based decision support system for assessing the needs for orthodontic treatment. Healthc Inform Res. 2018;24(1):22–8.
7. Bessani M, de Lima DR, Cleiton Cabral Correia Lins E, Maciel CD. Evaluation of a dental caries clinical decision support system. In: Proceedings of the 10th international joint conference on biomedical engineering systems and technologies—BIOSIGNALS (BIOSTEC 2017); 2017. p. 198–204.
8. Merli M, Moscatelli M, Mariotti G, Pagliaro U, Bernardelli F, Nieri M. A minimally invasive technique for lateral maxillary sinus floor elevation: a Bayesian network study. Clin Oral Implants Res. 2016;27(3):273–81.
9. Iwasaki H. Bayesian belief network analysis applied to determine the progression of temporomandibular disorders using MRI. Dentomaxillofac Radiol. 2015;44(4):20140279.
10. Scutari M, Auconi P, Caldarelli G, Franchi L. Bayesian networks analysis of malocclusion data. Sci Rep. 2017;7(1):15236.
11. Liang M, Lian Q, Kotsakis GA, Michalowicz BS, John MT, Chu H. Bayesian network meta-analysis of multiple outcomes in dental research. J Evid Based Dent Pract. 2020;20(1):101403.
12. Hu S, An K, Peng Y. Comparative efficacy of the bone-anchored maxillary protraction protocols for orthopaedic treatment in skeletal class III malocclusion: a Bayesian network meta-analysis. Orthod Craniofac Res. 2021;25(2):243–50.
13. Aldhohrah T, Mashrah MA, Wang Y. Effect of 2-implant mandibular overdenture with different attachments and loading protocols on peri-implant health and prosthetic complications: a systematic review and network meta-analysis. J Prosthet Dent. 2021;126(6):832–44.
14. Zhao P, Song X, Nie L, Wang Q, Zhang P, Ding Y, Wang Q. Efficacy of adjunctive photodynamic therapy and lasers in the non-surgical periodontal treatment: a Bayesian network meta-analysis. Photodiagn Photodyn Ther. 2020;32:101969.
15. Wu Z, Zhang X, Li Z, Liu Y, Jin H, Chen Q, Guo J. A Bayesian network meta-analysis of orthopaedic treatment in class III malocclusion: maxillary protraction with skeletal anchorage or a rapid maxillary expander. Orthod Craniofac Res. 2020;23(1):1–15.
16. Machado V, Botelho J, Mascarenhas P, Mendes JJ, Delgado A. A systematic review and meta-analysis on Bolton's ratios: normal occlusion and malocclusion. J Orthod. 2020;47(1):7–29.
17. Friedman N, Geiger D, Goldszmidt M. Bayesian network classifiers. Mach Learn. 1997;29(2):131–63.
18. Pazzani MJ. Constructive induction of Cartesian product attributes. In: Liu H, Motoda H, editors. Feature extraction, construction and selection. The Springer International Series in Engineering and Computer Science, vol. 453. Boston: Springer; 1998. p. 341–54.
19. Provan GM, Singh M. Learning Bayesian networks using feature selection. In: Fisher D, Lenz H-J, editors. New York: Springer; 1996. p. 291–300.
20. Sahami M. Learning limited dependence Bayesian classifiers. In: Proceedings of the second international conference on knowledge discovery and data mining (KDD'96); 1996. p. 335–8.
21. Margaritis D, Thrun S. Bayesian network induction via local neighborhoods. In: Solla SA, Leen TK, Müller K, editors. Advances in neural information processing systems, vol. 12. Cambridge: MIT Press; 1999. p. 505–11.
22. Ruz GA, Pham DT. Building Bayesian network classifiers through a Bayesian complexity monitoring system. Proc IMechE Part C J Mech Eng Sci. 2009;223:743–55.
23. Bielza C, Larrañaga P. Discrete Bayesian network classifiers: a survey. ACM Comput Surv. 2014;47(1):5:1–5:43.
24. Duda RO, Hart PE. Pattern classification and scene analysis. New York: John Wiley & Sons; 1973.
25. Pham DT, Ruz GA. Unsupervised training of Bayesian networks for data clustering. Proc R Soc A Math Phys Eng Sci. 2009;465(2109):2927–48.
26. Ricketts RM, Roth RH, Chaconas SJ, Schulhof RJ, Engel GA. Orthodontic diagnosis and planning: their roles in preventive and rehabilitative dentistry. Pacific Palisades: Rocky Mountain Data Systems; 1982.
27. Ruz GA, Araya-Díaz P. Predicting facial biotypes using continuous Bayesian network classifiers. Complexity. 2018;2018:4075656.
28. Liaw A, Wiener M. Classification and regression by randomForest. R News. 2002;2(3):18–22.
29. Dua D, Graff C. UCI machine learning repository; 2017. http://archive.ics.uci.edu/ml.
30. Alrashdi Z, Sayyafzadeh M. \((\mu +\lambda )\) evolution strategy algorithm in well placement, trajectory, control and joint optimisation. J Petrol Sci Eng. 2019;177:1042–58.
31. Back T. Evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms. Oxford: Oxford University Press; 1996.
32. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74.
33. Fayyad UM, Irani KB. Multi-interval discretization of continuous-valued attributes for classification learning. In: IJCAI; 1993. p. 1022–9.
34. Dougherty J, Kohavi R, Sahami M. Supervised and unsupervised discretization of continuous features. In: Prieditis A, Russell S, editors. Machine learning proceedings 1995. San Francisco: Morgan Kaufmann; 1995. p. 194–202.
35. Cortes C, Vapnik V. Support-vector networks. Mach Learn. 1995;20(3):273–97.
36. Breiman L, Friedman J, Stone CJ, Olshen RA. Classification and regression trees. London: Chapman and Hall/CRC; 1984.
37. Breiman L. Random forests. Mach Learn. 2001;45:5–32.
38. Henríquez PA, Ruz GA. A non-iterative method for pruning hidden neurons in neural networks with random weights. Appl Soft Comput. 2018;70:1109–21.
39. Jiang L, Cai Z, Wang D, Zhang H. Improving tree augmented Naive Bayes for class probability estimation. Knowl Based Syst. 2012;26:239–45. https://doi.org/10.1016/j.knosys.2011.08.010.
40. Keogh EJ, Pazzani MJ. Learning the structure of augmented Bayesian classifiers. Int J Artif Intell Tools. 2002;11(4):587–601.

Acknowledgements

The authors acknowledge Dr. Hernán M. Palomino for facilitating access to the teleradiographies used in the study. This work was funded by ANID FONDECYT 1180706, ANID PIA/BASAL FB0002, and ANID/PIA/ANILLOS ACT210096.

Author information

Gonzalo A. Ruz: Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Santiago, Chile; Center of Applied Ecology and Sustainability (CAPES), Santiago, Chile; Data Observatory Foundation, Santiago, Chile. Pamela Araya-Díaz: Departamento del Niño y Adolescente, Área de Ortodoncia, Facultad de Odontología, Universidad Andres Bello, Santiago, Chile. Pablo A. Henríquez: Facultad de Administración y Economía, Universidad Diego Portales, Santiago, Chile.

GAR, PA-D and PAH designed the study. PA-D provided the data. GAR and PAH pre-processed the data. GAR and PAH developed the tools, performed the analyses and produced the results. GAR, PA-D and PAH analysed the results and wrote the manuscript.
GAR acquired the funding and provided the resources. All authors read and approved the final manuscript. Correspondence to Gonzalo A. Ruz.

The teleradiographies used in this work were obtained from the files of an orthodontic clinic in Chile; they had been taken previously as part of the routine protocol at the beginning of orthodontic treatment. Informed consent was obtained from all subjects.

Ruz GA, Araya-Díaz P, Henríquez PA. Facial biotype classification for orthodontic treatment planning using an alternative learning algorithm for tree augmented Naive Bayes. BMC Med Inform Decis Mak. 2022;22:316. https://doi.org/10.1186/s12911-022-02062-7

Keywords: Facial biotypes; Orthodontic treatment planning.
MAC protocol for underwater acoustic sensor network based on belief state space

Lian-suo Wei1,2, Yuan Guo2 & Shao-bin Cai1,3

This study addresses the time-space uncertainty that arises when a node uses a channel in an underwater acoustic sensor network, owing to the narrow channel bandwidth and the long transmission delay. It proposes a MAC protocol for underwater acoustic sensor networks based on the belief state space (BSPMDP-MAC). The protocol divides the time axis of the receiving nodes evenly into n slots. The action state of each transmission node is graded by its link quality and residual energy. From the joint probability distribution of historical observations and actions regarding channel occupancy, the receiving nodes obtain a decision strategy sequence for the channel usage rights of the competing transmission nodes. The transmission nodes then transmit data packets to the receiving nodes in turn in the allocated slots, according to the decision strategy sequence, and the receiving nodes predict the channel occupancy and perceive the belief states and access actions of the next cycle from the present belief states and actions. Simulation results show that this protocol reduces the collision rate of data packets, improves the network throughput and the transmission success rate of data packets, and reduces the energy overhead of the network.

The medium access control (MAC) protocol for underwater acoustic sensor networks is the bottom sublayer of the data link layer. It allocates the underwater channel resources reasonably and effectively among multiple underwater nodes and is the key protocol for ensuring efficient communication. At present, research on MAC protocols for underwater acoustic sensor networks is divided into MAC protocols based on resource allocation and MAC protocols based on resource competition [1]. Representative resource-allocation MAC protocols include FDMA [2], TDMA [3], CDMA [4], and their improved variants. These protocols suffer from problems such as the limited channel bandwidth available for division, the difficulty of high-precision clock synchronization, and coding difficulties, and therefore cannot become the main research objects of MAC protocols for underwater acoustic sensor networks. Resource-competition MAC protocols are mainly divided into handshake protocols, channel reservation protocols, and their improved variants. Typical handshake MAC protocols include the MACA-MN [5], RIPT [6], and S-FAMA [7] protocols. These protocols obtain channels through control-packet handshake negotiation, which effectively alleviates the hidden terminal problem and reduces the time wasted by transmission delay. When data packets collide, a backoff algorithm is used for backoff or retransmission, in order to reduce data packet collisions and improve the channel reuse rate. The multiple control-packet handshakes of such protocols, however, introduce overhead that not only reduces network throughput but also consumes the limited energy of nodes and affects the survival cycle of the whole network. Typical channel reservation MAC protocols include the T-Lohi protocol [8], which proposes three channel reservation mechanisms: ST-Lohi, CUT-Lohi, and AUT-Lohi.
These three channel reservation mechanisms can better resolve the time-space uncertainty when nodes use a channel, but some problems remain. For example, time synchronization in the communication slots of ST-Lohi is difficult. Compared with ST-Lohi, CUT-Lohi avoids the difficulty of complicated clock synchronization but extends the channel competition time and reduces network throughput. AUT-Lohi combines the strengths of the above two mechanisms and reduces the channel competition time; low-probability tone collisions are handled with a collision backoff algorithm, which reduces data packet collisions on the channel, lowers the loss rate of AUT-Lohi data packets, and improves the success rate of data packets. However, AUT-Lohi cannot effectively solve the hidden terminal problem, which leads to data packet collisions and retransmissions and wastes limited bandwidth and node energy. Against the severity of the time-space uncertainty, Li et al. proposed an underwater MAC protocol (PC-MAC) that presets the receiving and transmission times. This protocol mainly considers the possible position changes of the transmission nodes: the receiving nodes predefine the time window for receiving packets, and the transmission nodes calculate the transmission time of packets based on the receiving window in order to prevent packet collisions [9]. Qian et al. [10] proposed a MAC protocol for underwater wireless sensor networks based on a reserved data queue. This protocol sets a channel reservation cycle and a reservation confirmation cycle in data transmission and completes the channel reservation with an RTS-CTS handshake; this method reduces the average reservation time of nodes, decreases the waiting time in data transmission, and effectively improves network throughput [11]. These protocols, however, do not take into account that the state of the whole network cannot be fully observed: when new nodes join, or individual nodes fail because their energy is too low, the normal operation of the MAC protocol is affected.

Based on the analysis of massive data, we describe the influence of factors such as time slot division and the addition of control packets on channel occupancy and data arrival rate, using state transition probabilities and a Markov model. We model the underwater channel as a belief state space, predict channel occupancy from the present belief state and action, and reasonably perceive the belief state and access actions of the next cycle. A series of decisions is made to reach the optimal target, effectively reduce the collision rate of data packets, improve the transmission success rate of data packets and the network throughput, and reduce the energy consumption of the network.

Partially observable Markov decision model

Affected by the water flow, underwater sensor nodes monitor dynamic and uncertain targets. The partially observable Markov decision process (POMDP) [12] is a mathematical model for solving such dynamic and uncertain problems, and it has been applied to MAC protocols for underwater acoustic sensor networks [13]. In a POMDP, the sensor nodes perceive only partial and incomplete information, and the observed channel states depend on the history of the sensor node's actions, so the process is non-Markovian with respect to the observations. The belief state space is introduced so that the decision depends only on the present belief state and not on the history of belief states.
The POMDP is thereby transformed into a Markov decision process (MDP) [14] on the belief state space:
$$ {b}^t=P\left({s}^t|{a}^t,{z}^t,{a}^{t-1},{z}^{t-1},\cdots, {a}^0,{z}^0,{s}^0\right). $$
The joint probability distribution of historical observations and actions is
$$ {b}^t\left({s}_i\right)=P\left({s}_i|{z}^t,{a}^t,{s}^{t-1}\right)=\eta O\left({s}_i,{a}^t,{z}^t\right)\sum \limits_{j=1}^{\mid S\mid }T\left({s}_j,{a}^t,{s}_i\right)P\left({s}^{t-1}={s}_j\right), $$
where \(\eta\) is a normalization factor. This joint probability distribution is called the belief state, and the set of all belief states is called the belief state space [13], denoted by B.

Definition 1. Assume that the POMDP model structure is described by a six-component tuple \(\langle S,A,T,R,Z,O\rangle\) [9]; the MDP model on the belief state is then defined by the following components:
(1) The state set S = {s1, s2, …, s n } is the finite state set of the sensor nodes.
(2) The action set A = {a1, a2, …, a n } is the finite action set of the sensor nodes.
(3) The observation set Z = {z1, z2, …, z n } is the finite set of observed channel states of the sensor nodes.
(4) The transition function of the belief state is
$$ T\left(b,a,{b}^{\prime}\right)=P\left({b}^{\prime}|a,b\right)=\sum \limits_{z\in Z}P\left({b}^{\prime}|a,b,z\right)P\left(z|a,b\right), $$
where \(a\in A\) and \(b, b^{\prime}\in B\). Equation (3) describes taking the action a in the state b and the transition to the next state \(b^{\prime}\).
(5) The observation function is
$$ P\left({b}_j|{b}_i,a\right)=\sum \limits_{k=1}^{\mid {z}_{b_j}\mid }P\left({z}_k|{b}_i,a\right)=\sum \limits_{k=1}^{\mid {z}_{b_j}\mid}\sum \limits_{l=1}^{\mid S\mid }Z\left({s}_l,a,{z}_k\right)\sum \limits_{m=1}^{\mid S\mid }T\left({s}_m,a,{s}_l\right){b}_i\left({s}_m\right), $$
where \({z}_{b_j}\subseteq Z\), \({z}_k\in {z}_{b_j}\), and \({z}_{b_j}\) is the set of observed channel states leading to the state bj. Equation (4) describes the probability of a transition to the state bj when the action a is taken in the state bi.
(6) The belief state reward function is
$$ \rho \left(b,a\right)=\sum \limits_{s\in S}b(s)R\left(s,a\right), $$
i.e., the expected reward returned by the system when the action a is taken under the belief b over the channel states.

Partially observable Markov decision process

For ease of description, a superscript indicates the time and a subscript indexes the state set: s i denotes the ith state of the set, and st denotes the state at time t. P(st = s i ) is the probability of the state s i at time t. A variable without superscript or subscript refers to the present time, and the superscript ′ refers to the subsequent time. The interaction between the belief-state MDP model and the underwater acoustic channel is shown in Fig. 1. As presented in Fig. 1 and Eqs. (3)-(4), the belief state space transforms the POMDP problem into a Markov chain problem on the belief state space: the states of the POMDP are replaced by belief states, the transition function is replaced by T(b, a, b′), and the influence of the states on the observations becomes the influence of the observations on the belief state.

Fig. 1. The interaction between the belief-state MDP model and the underwater acoustic channel.
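As a concrete illustration of the belief update in Eq. (2), the following is a minimal Bayes-filter sketch in Python; the array layout (T[a] as an |S|×|S| matrix with T[a][j, i] = P(s_i | s_j, a), and O[a] as an |S|×|Z| matrix with O[a][i, k] = P(z_k | s_i, a)) is an assumption made for the example.

```python
import numpy as np

def belief_update(b, a, z, T, O):
    """One step of Eq. (2): b'(s_i) = eta * O(s_i, a, z) * sum_j T(s_j, a, s_i) b(s_j)."""
    predicted = b @ T[a]          # sum_j T(s_j, a, s_i) b(s_j), for every state s_i
    unnormalized = O[a][:, z] * predicted
    norm = unnormalized.sum()     # equals P(z | b, a); dividing by it applies eta
    if norm == 0:
        raise ValueError("observation z has probability zero under belief b")
    return unnormalized / norm
```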
Definition 2. The decision strategy Π = (π1, π2, …, π n ) maps the belief states of the sensor node to the action set A, and the selected actions obtain the maximal accumulated reward from the channel:
$$ {\omega}_{\mathrm{max}}=E\left[\sum \limits_{t=0}^{\infty }{\gamma}^t{\rho}_t\right],\qquad \gamma \in \left(0,1\right], $$
where \(\gamma\) is the discount factor, which makes the expected target value converge. The uncertainty of the underwater sensor network environment and the extension of the network lifetime must be considered in the MDP decision on the belief state space. A function connecting the strategy Π with the accumulated reward obtained from the channel is constructed to make the decision. When the node is in the belief state b under the decision strategy Π, the function constructed from the Bellman principle and the value of the decision strategy is
$$ {\prod}_t^{\ast }(b)=\underset{a}{\arg \max}\left[\sum \limits_{s\in S}b(s)R\left(s,a\right)+\gamma \sum \limits_{z\in Z}P\left(z|b,a\right){\Gamma}_t^{\ast}\left({b}^{\prime}\right)\right], $$
$$ {\Gamma}_t^{\ast}\left({b}^{\prime}\right)=\underset{a}{\max}\left[\sum \limits_{s\in S}b(s)R\left(s,a\right)+\gamma \sum \limits_{z\in Z}P\left(z|b,a\right){\Gamma}_{t-1}^{\ast}\left({b}^{\prime}\right)\right]. $$
Equation (7) searches for an optimal strategy that maps the present belief state to actions in the channel competition of the nodes. The channel occupancy is predicted from the present belief state and actions, so that the belief state and access actions of the next cycle are perceived reasonably.

For ease of description, the underwater acoustic sensor network consists of n randomly distributed underwater acoustic sensor nodes and is represented as G(V, E), where V is the set of sensor nodes and E the edge set. Each underwater acoustic sensor node corresponds to a 3D coordinate (x, y, z), and the communication radius of the nodes is R.

Definition 3. The Euclidean distance between two nodes s u and s v is
$$ d\left(u,v\right)=\sqrt{{\left({u}_x-{v}_x\right)}^2+{\left({u}_y-{v}_y\right)}^2+{\left({u}_z-{v}_z\right)}^2}. $$
If d(u, v) < R, the node s u is adjacent to the node s v , and there is a communication link between s u and s v . The underwater acoustic sensor nodes operate in half-duplex mode; that is, at time t a node is either a transmission node or a receiving node. Within the communication radius R, if multiple transmission nodes send data to one receiving node, the receiving node must schedule the channel slots reasonably to reduce channel access collisions.

Description of node state

The time axis of the receiving nodes is divided evenly into n slots. A transmission node that has obtained the channel use privilege transmits data packets in the allocated time slot. The action state \(a_{n,t}\) of the transmission node s n is divided into a silent state and a busy state:
$$ {a}_n=\left\{\begin{array}{ll}1, & \text{the node } {s}_n \text{ is busy at time } t+1,\\ 0, & \text{the node } {s}_n \text{ is silent at time } t+1.\end{array}\right. $$
Assume that the sensor node s n is in the state \(x_t^n=\left(l_t^n,e_t^n\right)\) \(\left(n=1,2,\cdots, M\right)\) at time t, where \(l_t^n\) is the link quality of the node s n and \(e_t^n\) is its residual energy. The link quality states Q = {l 1 , l 2 , …, l m } and residual energy states E = (e 1 , e 2 , …, e L ) of each node are divided into M and L discrete grades, respectively, with corresponding state transition probability matrices for \(l_t^n\) and \(e_t^n\). When the node s n is silent at time t, its state \(x_t^n=\left(l_t^n,e_t^n\right)\) remains unchanged, namely \(l_{t+1}^n=l_t^n\) and \(e_{t+1}^n=e_t^n\). The action state transition matrix T n of the transmission node s n is calculated from Un and Xn and has the form (a ij ) m×n with a ij ∈ {0,1}; each row vector is an M × 1 binary vector. For a free or busy receiving channel, the transmission node s n receives a confirmation message (ACK) or a no-confirmation message (NAK) as observation value. The observation set is assumed to be Z 0 = {ACK, NAK} = {σ1, σ2} with σ1 = 1, σ2 = 0, and each observation value satisfies \(z_t^n\in Z_0\). The node states are only partially observable; therefore, the channel access conditions in a slot depend on the present state of each node and on the historical observation values. Assume that each data packet has a size of r bits. By Eqs. (3)-(4), the probability of the observation value NAK is
$$ P\left({z}_t^n=\mathrm{NAK}|{x}_t^n,{a}_{n,t}\right)=\sum \limits_{k=1}^{\mid {z}_{b_j}\mid}\sum \limits_{l=1}^{\mid S\mid }Z\left({s}_l,{a}_{n,t},{z}_k\right)\sum \limits_{m=1}^{\mid S\mid }T\left({s}_m,{a}_{n,t},{s}_l\right){b}_j\left({s}_m\right)\times \sum \limits_{l=t+1}^r{C}_r^l{P}_e{\left({x}_t^n\right)}^l{\left(1-{P}_e\left({x}_t^n\right)\right)}^{r-l}, $$
where \(P_e\left(x_t^n\right)\) is the per-bit transmission error rate of the transmission node s n .
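The last factor of this expression is a binomial tail: the probability that more than t of the r bits of a packet are corrupted, given independent bit errors at rate \(P_e\). A direct sketch in Python follows as a sanity check; the concrete packet size, threshold, and error rate in the usage comment are illustrative values, not taken from the paper.

```python
from math import comb

def p_packet_nak(r, t, p_e):
    """P(more than t of r bits in error) = sum_{l=t+1}^{r} C(r,l) p_e^l (1-p_e)^(r-l)."""
    return sum(comb(r, l) * p_e**l * (1.0 - p_e)**(r - l) for l in range(t + 1, r + 1))

# e.g. a 512-bit packet with no error tolerance (t = 0) and bit error rate 1e-3:
# p_packet_nak(512, 0, 1e-3) ~= 0.40, i.e. roughly 40% of packets draw a NAK
```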
Channel scheduling strategy

In the belief-state MDP system, the state transition probability depends on the latest observation values and the present node state. Assuming the sensor node s n is in the state \(x_t^n\) at time t and in the state \(x_{t+1}^n\) at time t + 1, the state transition probability can be calculated from Eqs. (10)-(11) as
$$ P\left({x}_t^n,{x}_{t+1}^n,{a}_{n,t}\right)=\sum \limits_{z_t\in {Z}_0}\sum \limits_{d_t}P\left({d}_t\right){P}_U\left({l}_t,{l}_{t+1}\right)P\left({z}_t^n|{x}_t^n,{a}_{n,t}\right)\times \left(\Phi \left({e}_{t+1}^n\right)-\min \left(\Phi \left({e}_t^n\right)+{d}_t,r\right)\right), $$
where d t is the buffer size of the node s n , \(P_U\left(l_t,l_{t+1}\right)\) is the transition probability of the link quality, and \(\Phi \left(e_t^n\right)\) and \(\Phi \left(e_{t+1}^n\right)\) are the quantities of data packets received at times t and t + 1 under the corresponding energy states. Let \(R_1\left(l_t^n,a_{n,t}\right)\) be the link quality reward function of the node s n at time t and \(R_e\left(e_t^n,a_{n,t}\right)\) its residual energy reward function. To improve the channel utilization of the network and extend the network lifetime, the transmission nodes with high link quality should be selected first to transmit data packets, and the nodes with higher residual energy should be selected to forward data, in order to avoid excessive consumption at some nodes. The belief state reward function ρ(b, a) of the node, calculated from Eq. (5), is
$$ \rho \left(b,a\right)=\sum \limits_{s_i\in S}b(s)R\left({x}_t^n,{a}_{n,t}\right)=\sum \limits_{s_i\in S}b(s)\left(\lambda {R}_1\left({l}_t^n,{a}_{n,t}\right)+\left(1-\lambda \right){R}_e\left({e}_t^n,{a}_{n,t}\right)\right), $$
where 0 < λ < 1 weights the two rewards. One target of the channel scheduling optimization is to obtain the scheduling strategy with the maximal reward. The channel scheduling target function, calculated from Eqs. (5)-(6) and Eq. (12), is
$$ {J}^{\pi }=E\left[\sum \limits_{k=0}^{\infty }{\omega}^k\sum \limits_{n=1}^M\rho \left(b,a\right),{a}_{n,t}|{x}_0^n\right], $$
where \(\omega^k\) \((0<\omega<1)\) is an adjustment factor. The maximal expected reward is
$$ {\psi}_n\left({x}^n\right)=\underset{k>0}{\max}\frac{E\left[\sum \limits_{k=0}^{r-1}{\omega}^k\rho \left(b,a\right)|{x}_0^n\right]}{E\left[\sum \limits_{k=0}^{r-1}{\omega}^k\right]}. $$

Steps of channel scheduling algorithm

The steps of the channel scheduling algorithm are as follows (a code sketch is given after the list):
(1) Let Sset() be the set of transmission nodes s i competing for the receiving node s j ; Sset() comprises the adjacent nodes of s j .
(2) Read a node s i from Sset() and denote it s n . When the receiving node s j is free, s n sends the control packet RTS to s j .
(3) Calculate Ψ n (xn) from Eq. (14), place the node s n with the maximal Ψ n (xn) into the queue q i , and let s j transmit the control packet CTS to that s n .
(4) Remove s n from Sset().
(5) Repeat until all transmission nodes in Sset() have been traversed.
(6) Output the queue q i , which is the scheduling sequence in which the transmission nodes use the channel.
(7) Following the scheduling sequence q i , the transmission nodes transmit data packets in their allocated slots. After transmission, the occupancy of the channel used by the nodes is predicted from the link quality, the residual energy, and the belief state space.
Steps (1)-(7) are repeated; this constitutes the MAC protocol for underwater acoustic sensor networks based on the belief state space (BSPMDP-MAC).
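A minimal Python sketch of this loop follows; `psi` stands for an implementation of Eq. (14), the nodes are assumed to be plain hashable objects (e.g., ids), and the RTS/CTS exchanges of steps (2)-(3) are abstracted into comments, since the control-packet machinery is outside the scope of the example.

```python
def schedule_channel(sset, psi):
    """Build the slot queue q_i: repeatedly grant the channel to the contending
    node with the largest expected reward Psi_n (Eq. (14)), drop it from the
    contention set, and continue until every transmission node is scheduled."""
    pool = list(sset)               # contending transmission nodes (after their RTS)
    q = []
    while pool:
        best = max(pool, key=psi)   # node with the maximal expected reward
        q.append(best)              # the receiver would answer this node with CTS
        pool.remove(best)
    return q                        # transmission order over the allocated slots
```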
Simulation analysis

The protocol was simulated and analyzed in NS3, with Aqua-sim as the simulator, and the performance of the MAC protocol was evaluated with the sensor nodes working in half-duplex mode. The sensor nodes can move at random in any direction, with a movement speed v varying within the interval of 1-5 m/s. For the parameter settings, refer to [14]. The network throughput, the energy consumption, and the success rate of data transmission were used as the reference metrics in the simulation analysis. The performance of the BSPMDP-MAC protocol was compared with that of the typical channel-reservation protocols AUT-Lohi [15] and ST-Lohi [16].

We first compared the correlation of the network throughput with the data packet transmission rate and with the node quantity for the three MAC protocols. The network throughput of the three MAC protocols grows with the transmission rate of the data packets (see Fig. 2a). When the transmission rate of the data packets reaches 0.07/s, the network throughput approaches saturation. The average network throughput of BSPMDP-MAC is 28.7% higher than that of the AUT-Lohi protocol and 34.8% higher than that of the ST-Lohi protocol. In the AUT-Lohi protocol, a competing node only listens for one Tmax, so the tone frame may collide with the data packets of the next cycle; in the ST-Lohi protocol, two-way deafness leads to collisions between tone frames and data packets, and between data packets. The network throughput of the BSPMDP-MAC protocol is therefore higher than that of the other two protocols under similar conditions.

Fig. 2. Network throughput versus (a) the data packet transmission rate and (b) the node quantity for the three MAC protocols.

The network throughput of the three MAC protocols also grows with the quantity of sensor nodes (see Fig. 2b). When the node quantity reaches 80, the network throughput gradually approaches saturation. The average network throughput of the BSPMDP-MAC protocol is 18.4% higher than that of the AUT-Lohi protocol and 20.8% higher than that of the ST-Lohi protocol. The link quality is considered in the channel scheduling of the BSPMDP-MAC protocol, so nodes with better link quality can transmit data to each other reliably; the network throughput of the BSPMDP-MAC protocol is thus higher than that of the other two protocols under the same conditions.

We then compared the data packet transmission success rate of the sensor nodes against the data packet transmission rate and against the node transmission radius for the three MAC protocols. The data packet transmission success rate decreases as the transmission rate increases for all three protocols (see Fig. 3a); at the same data packet transmission rate, the success rate of the BSPMDP-MAC protocol is higher than that of the other two protocols. In the ST-Lohi protocol, the competing nodes obtain the channel access privilege through tone frame competition, and when a collision occurs, a random collision backoff is used, which affects the transmission success rate. The AUT-Lohi protocol uses a low-probability tone-frame collision mechanism: when a collision occurs, a collision backoff algorithm reduces the competition time and decreases the packet loss rate. This is effective when the data packet transmission rate is low, but as the transmission rate increases, the transmission success rate of the data packets decreases quickly.

Fig. 3. Data packet transmission success rate versus (a) the data packet transmission rate and (b) the node transmission radius for the three MAC protocols.

The data packet transmission success rate also decreases as the transmission radius increases for the three MAC protocols (see Fig. 3b). When the transmission radius of a sensor node increases, the number of adjacent nodes of each sensor node also increases; the nodes then compete fiercely for the channel use privilege, the channel collision probability increases, and the data packet transmission success rate is significantly reduced. At the same node transmission radius, the average data packet transmission success rate of the BSPMDP-MAC protocol is 26.8% higher than that of the AUT-Lohi protocol and 89.5% higher than that of the ST-Lohi protocol. When the number of nodes inside the communication radius increases, the AUT-Lohi and ST-Lohi protocols invoke the backoff mechanism, whereas the BSPMDP-MAC protocol transmits the data packets by time slots according to the scheduling sequence, without frequently invoking the backoff mechanism.
Thus, the data packet transmission success rate of the BSPMDP-MAC protocol is higher than that of the other two protocols.

Finally, we examined the correlation between the energy consumption of the sensor nodes and the node quantity for the three MAC protocols. The energy consumption decreases as the node quantity increases for all three protocols (see Fig. 4). When the number of sensor nodes increases, the communication distance between sensor nodes becomes shorter. The BSPMDP-MAC protocol first selects the transmission nodes with better link quality as the nodes that transmit data packets, and nodes with higher residual energy are used as data forwarding nodes, avoiding excessive energy consumption at some nodes. Although the AUT-Lohi and ST-Lohi protocols use a corresponding wakeup mechanism to keep the sensor nodes at low energy consumption in the monitoring state, the problem of excessive energy consumption at individual nodes is not considered. At the same number of sensor nodes, the energy consumption of the BSPMDP-MAC protocol is 45.1% less than that of the AUT-Lohi protocol and 19.3% less than that of the ST-Lohi protocol, which extends the survival cycle of the whole network.

Fig. 4. Energy consumption versus the node quantity for the three MAC protocols.

The BSPMDP-MAC protocol obtains the channel decision strategy of the nodes from the joint probability distribution of the historical observations and actions of the receiving node's channel. The transmission nodes transmit data packets to the receiving nodes in the allocated time slots according to the scheduling sequence of the decision strategy, which reduces the collision rate of the data packets. The receiving nodes predict the channel occupancy from the present belief state and actions and thus perceive the belief state and access actions of the next cycle reasonably; in this way, the time-space uncertainty and the fairness problems in the channel use of the nodes are resolved, and the utilization rate of the whole channel is improved. The protocol first selects the transmission nodes with better link quality for data packet transmission and then uses the nodes with higher residual energy as data forwarding nodes. It thereby effectively reduces the collision rate of data packets, improves the data packet transmission success rate and the network throughput, reduces the network energy consumption, and extends the lifecycle of the whole network.

Abbreviations

BSPMDP: Belief state partial Markov decision process; CDMA: Code division multiple access; CTS: Clear to send; FDMA: Frequency division multiple access; MACA-MN: Multiple access with collision avoidance-multiple neighbors; MDP: Markov decision process; M-FAMA: Multi-session floor acquisition multiple access; PC-MAC: Prescheduling and collision-avoided MAC; POMDP: Partially observable Markov decision process; RIPT: Receiver-initiated reservation protocol; RTS: Request to send; S-FAMA: Slotted floor acquisition multiple access; TDMA: Time division multiple access.

References

1. L Lei, L Yu, Z Chunhua, et al. Research and optimization of collision avoidance MAC protocol for underwater acoustic networks. J. Nav. Univ. Eng. 26(4), 32–36 (2014).
2. A Bazzi, A Zanella, BM Masini. An OFDMA-based MAC protocol for next-generation VANETs. IEEE Trans. Veh. Technol. 64(9), 4088–4100 (2015).
3. X Zhang, H Su. Opportunistic spectrum sharing schemes for CDMA-based uplink MAC in cognitive radio networks. IEEE Journal on Selected Areas in Communications 29(4), 716–730 (2011).
4. H Chen, G Fan, L Xie, et al. A hybrid path-oriented code assignment CDMA-based MAC protocol for underwater acoustic sensor networks. Sensors 13(11), 15006–15025 (2013).
5. IF Akyildiz, D Pompili, T Melodia. Underwater acoustic sensor networks: research challenges. Ad Hoc Networks 3(3), 257–279 (2005).
6. N Chirdchoo, W Soh, K Chua. RIPT: a receiver-initiated reservation-based protocol for underwater acoustic networks. IEEE Journal on Selected Areas in Communications 26(9), 1744–1753 (2008).
7. C Li, Y Xu, C Xu, et al. DTMAC: a delay tolerant MAC protocol for underwater wireless sensor networks. IEEE Sensors Journal 16(11), 4137–4146 (2016).
8. SY Shin, JI Namgung, SH Park. SBMAC: smart blocking MAC mechanism for variable UW-ASN (underwater acoustic sensor network) environment. Sensors 10(1), 501–525 (2010).
9. Y Li, Z Jin, Y Su. A time-spatial uncertainty avoided MAC protocol in underwater sensor network. Acta Scientiarum Naturalium Universitatis Nankaiensis 50(5), 14–20 (2017).
10. LF Qian, SL Zhang, MQ Liu, et al. Reservation-based MAC protocol for underwater wireless sensor networks with data train. J Zhejiang Univ (Engineering Science Edition) 51(4), 691–696 (2017).
11. XR Cao, DX Wang, L Qiu. Partial-information state-based optimization of partially observable Markov decision processes and the separation principle. IEEE Trans. Autom. Control 59(4), 921–936 (2014).
12. HH Ng, WS Soh, M Motani. A bidirectional-concurrent MAC protocol with packet bursting for underwater acoustic networks. IEEE J. Ocean. Eng. 38(3), 547–565 (2013).
13. J Cao, J Dou, S Dong. Balance transmission mechanism in underwater acoustic sensor networks. Int J Distributed Sensor Netw 2015, 1–12 (2015).
14. B Wu, M Wu. Real-time POMDP algorithm based on belief state space compression. Control & Decision 22(12), 1417–1420 (2007).
15. D Ji, Y Lu. Stepanov-like pseudo almost automorphic solution to a parabolic evolution equation. Advances in Difference Equations 2015(1), 1–17 (2015).
16. FJL Ribeiro, ADCP Pedroza. Underwater monitoring system for oil exploration using acoustic sensor networks. Telecommunication Systems 58(1), 91–106 (2015).

Acknowledgements

I would like to express my gratitude to all those who have helped me during the writing of this thesis. I gratefully acknowledge the help of my supervisor, Professor Cai Shao-bin; I appreciate his patience, encouragement, and professional instruction during my thesis writing. I would also like to thank Dr. Yuan Guo, who kindly gave me a hand whenever I needed help. Last but not least, my gratitude extends to my family, who have been assisting, supporting, and caring for me all my life.

Funding

This research is sponsored by the National Natural Science Foundation of China under Grant Nos. 61571150, 61272185, and 61502037; the Fundamental Research Funds for the Central Universities (No. HEUCF160602); the Natural Science Foundation of Heilongjiang Province of China under Grant No. F2017029; and the Heilongjiang Provincial Education Office Projects (135109237) and (135209235).

Availability of data and materials

We declare that the materials described in the manuscript, including all relevant raw data, will be freely available to any scientist wishing to use them for non-commercial purposes, without breaching participant confidentiality.
Author information

Lian-suo Wei: Department of Computer Science and Technology, Harbin Engineering University, Harbin 150001, Heilongjiang, China; College of Computer and Control Engineering, Qiqihar University, Qiqihar 161006, Heilongjiang, China. Yuan Guo: College of Computer and Control Engineering, Qiqihar University, Qiqihar 161006, Heilongjiang, China. Shao-bin Cai: Department of Computer Science and Technology, Harbin Engineering University, Harbin 150001, Heilongjiang, China; College of Computer Science and Technology, Huaqiao University, Xiamen 361021, Fujian, China.

LW and SC conceived and designed the study. LW and YG performed the experiments. YG carried out the data acquisition and data analysis. LW wrote the paper. SC and YG reviewed and edited the manuscript. All authors read and approved the manuscript. Correspondence to Lian-suo Wei.

Wei Lian-suo is an associate professor in the College of Computer and Control Engineering, Qiqihar University, and a Ph.D. candidate in the Department of Computer Science and Technology at Harbin Engineering University. His research fields include the semantic web, ontology, social networks, and computer network security. Guo Yuan is a professor in the College of Computer and Control Engineering, Qiqihar University. She graduated from the School of Electrical Engineering of Yanshan University, China. Her current research interests include artificial intelligence and pattern recognition, sensor technology, and information processing and simulation. Cai Shao-bin is a professor at the Department of Computer Science and Technology, Huaqiao University. He received his B.S. and Ph.D. degrees from the School of Computer Science and Technology, Harbin Institute of Technology, China. He works on underwater acoustic networks; his research interests include MAC layer design, energy management, network architecture, signal processing, experimental test beds, and operational systems for scientific applications.

Wei L, Guo Y, Cai S. MAC protocol for underwater acoustic sensor network based on belief state space. J Wireless Com Network 2018, 119 (2018). https://doi.org/10.1186/s13638-018-1130-5

Keywords: Underwater acoustic sensor network; Belief state space; MAC protocol.
A nonlinear p-Laplace equation with critical Sobolev-Hardy exponents and Robin boundary conditions

Lingyu Jin1 & Lang Li1

Boundary Value Problems 2015, Article number 185 (2015)

In this paper, we are concerned with a nonlinear p-Laplace equation with critical Sobolev-Hardy exponents and Robin boundary conditions. Through a compactness analysis of the functional corresponding to the problem, we obtain the existence of positive solutions for this problem under different assumptions.

We are concerned with the following class of boundary value problems:
$$ \textstyle\begin{cases} -\Delta_{p} u-\mu\frac{|u|^{p-2}u}{|x|^{p}}+\lambda |u|^{p-2}u=\frac{|u|^{{p^{*} (s )}-2}u}{|x|^{s}}+\eta |u|^{q-2}u, &\mbox{in } \Omega,\\ |\nabla u|^{p-1}\frac{\partial u}{\partial \nu}+\alpha (x )|u|^{p-2}u=0, &\mbox{on }\partial\Omega, \end{cases} $$
where \(0\in\overline{\Omega}\subset\mathbb{R}^{n}\), \(2\leq p< n\), \({p^{*} (s )}=p (n-s )/ (n-p )\), \(p< q<{p^{*} (s )}\), \(0\leq s< p\), \(\mu<\bar{\mu}:= \frac{ (n-p )^{p}}{p^{p}}\), \(\eta\geq0\) and \(\lambda\in\mathbb{R}^{1}\) are parameters, and \(\alpha (x )\in C (\partial\Omega )\), \(\alpha (x )\geq0\). Here Ω is a bounded domain with a smooth \(C^{2}\) boundary, and ν denotes the unit outward normal to ∂Ω.

The main interest of this kind of problem lies in the presence of the singular potential \(\frac{1}{|x|^{s}}\), \(0\leq s\leq p\), which is related to the Hardy inequality. In the special case when \(\mu=0\), problem (1.1) is related to the well-known Sobolev-Hardy inequality
$$\biggl(\int_{\Omega }\frac{u^{q}}{|x|^{s}}\,dx \biggr)^{\frac{p}{q}} \leq\frac {1}{C_{q,s,p}}\int_{\Omega }|\nabla u|^{p}\,dx,\quad \forall u\in W^{1,p}_{0} (\Omega ), $$
which is essentially due to Caffarelli, Kohn and Nirenberg (see [1]), where \(1< p< n\), \(q\leq p^{*} (s )\), and \(C_{q,s,p}\) is a positive constant depending on p, q, s. When \(q=s=p\), the above Sobolev-Hardy inequality becomes the well-known Hardy inequality (see [1, 2])
$$\int_{\Omega }\frac{|u|^{p}}{|x|^{p}}\,dx\leq\frac{1}{\bar{\mu}}\int _{\Omega }|\nabla u|^{p}\,dx, \quad\forall u\in W^{1,p}_{0} (\Omega ). $$
Moreover, the constant μ̄ is optimal and is not achieved, since the Sobolev embedding is not compact even locally in any neighborhood of zero. In addition to the inverse potential, there is the presence of the critical Sobolev and critical Sobolev-Hardy exponents, which causes a loss of compactness of the embeddings. This loss of compactness leads to many interesting existence and nonexistence phenomena for elliptic equations with critical Hardy terms (see, for example, [3–8] and the references therein).
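Why \(p^{*} (s )\) is the critical exponent here can be seen from a short scaling check on \(\mathbb{R}^{n}\), using only the quantities defined above: under the rescaling \(u_{\lambda} (x )=\lambda^{ (n-p )/p}u (\lambda x )\), \(\lambda>0\),
$$\int_{\mathbb{R}^{n}}|\nabla u_{\lambda}|^{p}\,dx=\lambda^{ (n-p )+p-n}\int_{\mathbb{R}^{n}}|\nabla u|^{p}\,dx=\int_{\mathbb{R}^{n}}|\nabla u|^{p}\,dx, \qquad \int_{\mathbb{R}^{n}}\frac{|u_{\lambda}|^{q}}{|x|^{s}}\,dx=\lambda^{\,q\frac{n-p}{p}-n+s}\int_{\mathbb{R}^{n}}\frac{|u|^{q}}{|x|^{s}}\,dx, $$
and the exponent \(q\frac{n-p}{p}-n+s\) vanishes exactly when \(q=\frac{p (n-s )}{n-p}=p^{*} (s )\). So both sides of the Sobolev-Hardy inequality scale identically only at this exponent, which is precisely where compactness is lost.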
In [13], for equation (1.1) with Dirichlet boundary conditions (when \(s\neq0\)), global compactness results were obtained on the whole space and on a bounded smooth domain, respectively. In this paper, we discuss a general Robin boundary problem involving critical Hardy terms and critical Sobolev-Hardy terms with \(p\geq2\), \(0\leq s< p\). Different assumptions on the parameter s induce completely different results in the noncompactness analysis. In addition, the boundary conditions have a great influence on our noncompactness analysis: not only do they change the form of our limiting equations, but they also add more limiting equations, which allow new blow-up bubbles, such as the one carrying the energy \(D_{\mu}\) (see Corollary 1.1), to occur.

The first goal of this paper is a careful analysis of the features of a Palais-Smale sequence for the corresponding variational functional \(F_{\mu}(u)\) of (1.1). To this aim, following the same idea adopted by Struwe [10] and the main techniques of [11], we shall employ the blow-up technique to characterize all the energy levels where the Palais-Smale condition fails. More precisely, we shall represent any diverging Palais-Smale sequence as a sum of critical points of a family of limiting functionals, which are invariant under scaling. In our problem, due to the Hardy potential and the critical Sobolev-Hardy terms, critical points of a new family of limiting functionals appear. As a by-product, we shall find the smallest level at which the Palais-Smale condition may fail. Thus we shall be able to determine safe sublevels where standard critical point theorems can be applied. The second purpose of this paper is to obtain the existence of critical points for the variational functional of (1.1) under different conditions by applying the previous compactness analysis.

To state our main results, it is convenient to introduce some notation. Firstly, we denote by \(F_{\mu}\) the functional associated to (1.1):
$$\begin{aligned} F_{\mu} (u )={}&\frac{1}{p} \int_{\Omega} \biggl(|\nabla u|^{p}-\mu \frac{|u|^{p}}{|x|^{p}} \biggr)\,dx+\frac{1}{p}\int_{\partial \Omega } \alpha (x )|u|^{p}\,d\sigma-\frac{1}{p^{*} (s )} \int_{\Omega } \frac{|u|^{p^{*} (s )}}{|x|^{s}}\,dx \\ &{} +\frac {\lambda }{p}\int_{\Omega }|u|^{p}\,dx- \frac{\eta}{q}\int_{\Omega }|u|^{q}\,dx, \quad u\in W^{1,p} (\Omega ). \end{aligned}$$
We denote by \(\lambda_{1}\) the first eigenvalue, i.e., the smallest value of λ for which the following problem has a positive solution:
$$ \left \{ \textstyle\begin{array}{@{}l@{\quad}l} -\Delta_{p} u-\mu\frac{ |u|^{p-2}u}{|x|^{p}}=\lambda|u|^{p-2}u, & x\in \Omega, \\ |\nabla u|^{p-1}\frac{\partial u}{\partial\nu} +\alpha (x )|u|^{p-2}u=0,& x\in{\partial} {\Omega }, \\ u\in W^{1,p}(\Omega), \end{array}\displaystyle \right . $$
that is,
$$ \lambda_{1}=\inf \biggl\{ \int_{\Omega } \biggl(|\nabla u|^{p}-\mu\frac{|u|^{p}}{|x|^{p}} \biggr)\,dx +\int _{\partial{ \Omega }}\alpha (x )|u|^{p}\,d\sigma;\int _{\Omega }|u|^{p}\,dx=1, u\in W^{1,p} (\Omega ) \biggr\} . $$
From Lemma A.1 in the Appendix, \(\lambda_{1}\) can be attained. If \(\mu\leq 0\), obviously \(\lambda _{1}>0\). If \(\mu\in (0,\bar{\mu})\), by Lemma A.2 in the Appendix of this paper, we have
$$\begin{aligned} \mu\int_{\Omega }\frac{|u|^{p}}{|x|^{p}}\,dx\leq\int _{\Omega }|\nabla u|^{p}\,dx+c (\varepsilon ,\mu )\int _{\Omega }|u|^{p} \,dx \end{aligned}$$
for \(u\in W^{1,p} (\Omega )\).
Hence, for suitably large \(\lambda >0\), we have \(\lambda +\lambda _{1}>0\) for \(\mu\in (-\infty,\bar{\mu})\). Now, for \(\lambda >-\lambda _{1}\), we define the following norm:
$$\|u\|= \biggl[\int_{\Omega } \biggl(|\nabla u|^{p}-\mu \frac{ |u|^{p}}{|x|^{p}}+\lambda |u|^{p} \biggr)\,dx+\int_{\partial \Omega } \alpha (x )|u|^{p}\,d\sigma \biggr]^{\frac{1}{p}}. $$
Then, by Lemma A.3 in the Appendix of this paper, \(\|\cdot\|\) is equivalent to the usual norm \(\|\cdot\|_{W^{1,p} (\Omega )}\).

Secondly, we denote \(\mathbb{R}^{n}_{+}:=\{y= (y_{1}, y_{2},\ldots, y_{n-1}, y_{n} ):= (y',y_{n} )\in\mathbb{R}^{n}\mid y_{n}>0\}\) with boundary \(\mathbb{R}^{n-1}=\{y\mid (y',0 )\in\mathbb{R}^{n}\}\). Denote \(C^{\infty}_{0}(\Omega)=\{u\in C^{\infty}({\mathbb{R}}^{n})\mid\operatorname {supp} u\subset\subset\Omega\}\). The space \({D^{1,p}} (\Omega )\) is the completion of \(C^{\infty}_{0}(\Omega)\) with respect to the norm
$$\|u\|_{D^{1,p}(\Omega)}= \biggl(\int_{\Omega}|\nabla u|^{p}\,dx \biggr)^{1/p}, $$
and the space \({D^{1,p}} (\mathbb{R}^{n}_{+} )\) is the space of the restrictions to \(\mathbb{R}^{n}_{+}\) of elements of \({D^{1,p}} (\mathbb {R}^{n} )\). Recall \({p^{*} (s )}=p (n-s )/ (n-p )\) and denote \(p^{*}=p^{*} (0 )=\frac{np}{n-p}\). In the following, C and c denote various generic positive constants; \(O (\varepsilon )\) denotes a quantity satisfying \(|O(\varepsilon)| /\varepsilon\leq C\); \(o(\varepsilon)\) means \(|o(\varepsilon)|/\varepsilon\rightarrow0\) as \(\varepsilon\rightarrow0\); and \(o (1 )\) is a generic infinitesimal quantity. Finally, we give the definition of a Palais-Smale sequence: let X be a Banach space, \(\phi\in C^{1}(X,{\mathbb{R}})\) and \(c\in {\mathbb{R}}\). The sequence \(u_{m}\in X\) is called a Palais-Smale sequence of ϕ at a level c if
$$\phi(u_{m})\rightarrow c, \qquad\phi'(u_{m}) \rightarrow0 \quad\mbox{as }m\rightarrow \infty. $$
Define
$$S_{\mu,s}=\inf_{u\in {D^{1,p}} (\mathbb{R}^{n} ) \backslash\{0\}}\frac{\int_{\mathbb{R}^{n}} (|\nabla u|^{p}-\mu\frac{|u|^{p}}{|x|^{p}} )\,dx}{ (\int_{\mathbb {R}^{n}}\frac{|u|^{{p^{*} (s )}}}{|x|^{s}}\,dx )^{p/{p^{*} (s )}}}, $$
which plays an important role in our argument. In particular, we denote \(S =S_{0,0}\) and \(S_{\mu}=S_{\mu,0}\).

In order to establish the global compactness result for problem (1.1), it is also convenient to introduce the problems at infinity corresponding to (1.1) as follows.
$$\begin{aligned}& -\Delta_{p} v=|v|^{{p^{*}}-2}v,\quad v\in {D^{1,p}} \bigl(\mathbb{R}^{n} \bigr); \end{aligned}$$
$$\begin{aligned}& -\Delta_{p} v-\mu\frac{|v|^{p-2}v}{|x|^{p}}=\frac{|v|^{{p^{*} (s )}-2}v}{|x|^{s}},\quad v \in {D^{1,p}} \bigl(\mathbb{R}^{n} \bigr); \end{aligned}$$
$$\begin{aligned}& \textstyle\begin{cases} -\Delta_{p} v=|v|^{{p^{*}}-2}v, &v\in{D^{1,p}}(\mathbb{R}^{n}_{+}),\\ |\nabla v|^{p-2}\frac{\partial v}{\partial\nu} =0,&\mbox{on } \mathbb{R}^{n-1}; \end{cases}\displaystyle \end{aligned}$$
$$\begin{aligned}& \textstyle\begin{cases} -\Delta_{p} v-\mu\frac{|v|^{p-2}v}{|x|^{p}} =\frac{|v|^{{p^{*} (s )}-2}v}{|x|^{s}}, &v\in {D^{1,p}}(\mathbb{R}^{n}_{+}),\\ |\nabla v|^{p-2}\frac{\partial v}{\partial\nu} =0, &\mbox{on } \mathbb{R}^{n-1}. \end{cases}\displaystyle \end{aligned}$$
In fact, through scaling and translation techniques and passing to the limit, a Palais-Smale sequence of (1.1) can be represented by solutions of problems (1.5)-(1.8) (see Theorem 1.1).
All positive solutions of (1.5) are given by the well-known \((n+1 )\)-parameter family $$U^{\varepsilon,y} (x ):=\varepsilon^{ (p-n )/p} U_{0} \biggl( \frac{x-y}{\varepsilon} \biggr), $$ where $$U_{0} (x ):= c (n ) \bigl(1+|x|^{\frac {p}{p-1}} \bigr)^{\frac{p-n}{p}} $$ for some appropriate constant \(c (n )>0\). These solutions are also known to minimize the Sobolev quotient S, as was shown by Aubin [16]. Since \(U_{0} (x )\) is radially symmetric, we have $$\frac{\partial{U_{0}}}{{\partial\nu}}\Big|_{x_{n}=0}=-\frac{\partial {U_{0}}}{{\partial x_{n}}}\Big|_{x_{n}=0}=-U_{0}'\bigl(|x|\bigr) \frac{x_{n}}{|x|}\Big|_{x_{n}=0}=0, $$ which means that \(U_{0} (x )\) is also a solution of (1.7). For \(0<\mu<\bar{\mu}\) and \(p>s\geq0\), Kang [17] showed the existence of positive solutions of (1.6), which take the form \(V^{\varepsilon}_{\mu}(|x| ):=\varepsilon^{\frac {p-n}{p}} V_{\mu}(|x|/\varepsilon )\), where \(V_{\mu}(x )\) is the unique positive radial function in \(D^{1,p} ({\mathbb{R}}^{n} )\) which achieves \(S_{\mu,s}\). Moreover, $$\begin{aligned}& V^{\varepsilon}_{\mu}(1 )= \biggl(\frac{ (n-s ) (\bar{\mu}-\mu )}{n-p} \biggr)^{\frac{1}{p^{*} (s )-p}}, \end{aligned}$$ $$\begin{aligned}& \lim_{r\rightarrow0}r^{a (\mu )}V_{\mu}(r )=c_{1}>0, \end{aligned}$$ $$\begin{aligned}& \lim_{r\rightarrow+\infty} r^{b (\mu )}V_{\mu}(r )=c_{2}>0, \end{aligned}$$ where \(r=|x|\), and \(c_{1}\) and \(c_{2}\) are constants depending on p and n. Here \(a (\mu )\) and \(b (\mu )\) are the solutions of $$0= (p-1 )\tau^{p}- (n-p )\tau^{p-1}+\mu, $$ where \(\tau\geq0\), \(0\leq\mu\leq\bar{\mu}\), \(0\leq a (\mu )<\frac{n-p}{p}<b (\mu )<\frac {n-p}{p-1}\). Of course, the \(V^{\varepsilon}_{\mu}(|x| )\) are also solutions of (1.8). For convenience, we also define the following quantities, which represent the amount of the functional \(F_{\mu}(u)\) carried over by blowing-up bubbles: $$\begin{aligned}& D_{0}:=\int_{\mathbb{R}^{n}} \biggl(\frac{1}{p}|\nabla U_{0}|^{p}-\frac{1}{{p^{*}}}U_{0}^{{p^{*}}} \biggr)\,dx=\frac{1}{n}S^{n/p},\\& D_{\mu}:=\int_{\mathbb{R}^{n}} \biggl(\frac{1}{p} \biggl(| \nabla V_{\mu}|^{p}-\mu\frac{V_{\mu}^{p}}{|x|^{p}} \biggr)-\frac{1}{ {p^{*} (s )}} \frac{V_{\mu}^{{p^{*} (s )}}}{|x|^{s}} \biggr)\,dx=\frac{p-s}{ (n-s )p}S_{\mu,s}^{\frac{n-s}{p-s}}. \end{aligned}$$ In order to unify the notation, we shall refer to the solutions of problems (1.5)-(1.8) as critical points of the following family of functionals: $$\begin{aligned}& F^{\infty}(u )=\frac{1}{p}\int_{{\mathbb{R}}^{n} }| \nabla u|^{p}\,dx -\frac{1}{{p^{*}}}\int_{ {\mathbb{R}}^{n}}|u|^{{p^{*}}}\,dx, \end{aligned}$$ $$\begin{aligned}& F_{\mu}^{\infty}(u )=\frac{1}{p}\int _{{\mathbb{R}}^{n} } \biggl(|\nabla u|^{p} -\mu\frac{|u|^{p}}{|x|^{p}} \biggr)\,dx -\frac{1}{{p^{*} (s )}}\int_{ {\mathbb{R}}^{n}} \frac{|u|^{{p^{*} (s )}}}{|x|^{s}}\,dx, \end{aligned}$$ $$\begin{aligned}& F^{\infty}_{+} (u )=\frac{1}{p}\int _{{\mathbb{R}}^{n}_{+} }|\nabla u|^{p}\,dx - \frac{1}{{p^{*}}}\int_{ {\mathbb{R}}^{n}_{+}}|u|^{{p^{*}}}\,dx, \end{aligned}$$ $$\begin{aligned}& F_{\mu,+}^{\infty}(u )=\frac{1}{p}\int _{{\mathbb{R}}^{n}_{+} } \biggl(|\nabla u|^{p} -\mu\frac{|u|^{p}}{|x|^{p}} \biggr)\,dx -\frac{1}{{p^{*} (s )}}\int_{ {\mathbb{R}}^{n}_{+}} \frac{|u|^{{p^{*} (s )}}}{|x|^{s}}\,dx. \end{aligned}$$ We shall prove that any diverging Palais-Smale sequence corresponding to (1.1) can be represented as sums of scaled critical points of the functionals \(F_{\mu}^{\infty}(u)\), \(F_{\mu,+}^{\infty}(u)\) or \(F^{\infty}(u)\), \(F_{+}^{\infty}(u)\) by exploiting suitable blow-up arguments.
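As a consistency check on these values (a standard computation, included for the reader's convenience): testing (1.5) with \(U_{0}\) itself gives \(\int_{\mathbb{R}^{n}}|\nabla U_{0}|^{p}\,dx=\int_{\mathbb{R}^{n}}U_{0}^{p^{*}}\,dx=:T\), and since \(U_{0}\) attains S, $$S=\frac{T}{T^{p/p^{*}}}=T^{1-\frac{p}{p^{*}}}=T^{\frac{p}{n}}, \qquad\mbox{hence}\quad T=S^{n/p}\quad\mbox{and}\quad D_{0}= \biggl(\frac{1}{p}-\frac{1}{p^{*}} \biggr)T=\frac{1}{n}S^{n/p}. $$ The value of \(D_{\mu}\) follows in the same way, with \(T=S_{\mu,s}^{ (n-s )/ (p-s )}\) and \(\frac{1}{p}-\frac{1}{p^{*} (s )}=\frac{p-s}{ (n-s )p}\).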
The first result of this paper is the following global compactness theorem. Theorem 1.1 Let \(\{u_{m}\}\subset W^{1,p} (\Omega )\) be a Palais-Smale sequence of \(F_{\mu}(u )\) at level \(d>0\), let \(u_{0}\) be a critical point of \(F_{\mu}(u )\), and set $$\zeta (s )= \textstyle\begin{cases} 1 &\textit{if }s=0,\\ 0 &\textit{if }s\neq0. \end{cases} $$ Then there exist \(k_{1}, k_{2}, k_{3}\in \mathbb{N}\cup\{0\}\) such that (i) \(u_{m}\) can be decomposed as $$u_{m}=u_{0}+\sum_{j=1}^{k_{1}}r_{m,j}^{\frac{n-p}{p}}U_{j} (r_{m,j}x ) +\zeta (s )\sum_{j=k_{1}+1}^{k_{1}+k_{2}+k_{3}}r_{m,j}^{\frac {n-p}{p}}U_{j} \bigl(r_{m,j} (x-x_{m,j} ) \bigr)+\omega_{m}, $$ where \(\omega_{m}\to0\) in \(W^{1,p} (\Omega )\) as \(m\rightarrow +\infty \), and for \(j=1,\ldots,k_{1}\), \(r_{m,j}\to+\infty\) as \(m\rightarrow+\infty\), $$\textstyle\begin{cases} U_{j} \textit{ satisfy } (1.6) &\textit{if } 0\in \Omega , \\ U_{j} \textit{ satisfy } (1.8) & \textit{if } 0\in \partial \Omega; \end{cases} $$ for \(j=k_{1}+1,\ldots,k_{1}+k_{2}\), \(r_{m,j}\operatorname{dist} (x_{m,j}, \partial \Omega )\to +\infty\), \(r_{m,j}|x_{m,j}|\to+\infty\) as \(m\rightarrow+\infty\), \(U_{j}\) satisfy (1.5); for \(j=k_{1}+k_{2}+1,\ldots,k_{1}+k_{2}+k_{3}\), \(r_{m,j}\operatorname{dist} (x_{m,j}, \partial \Omega )\to c<+\infty\), \(r_{m,j}|x_{m,j}|\to+\infty\) as \(m\rightarrow+\infty\), \(U_{j} \) satisfy (1.7). (ii) \(F_{\mu}(u_{m})\) can be decomposed as follows: for the case that \(0\in \partial \Omega \), as \(m\rightarrow+\infty\), $$F_{\mu}(u_{m} )=F_{\mu}(u_{0} )+\sum _{j=1}^{k_{1}}F_{\mu,+} ^{\infty}(U_{j} )+\zeta (s )\sum_{j=k_{1}+1}^{k_{1}+k_{2}}F^{\infty}(U_{j} ) + \zeta (s )\sum_{j=k_{1}+k_{2}+1} ^{k_{1}+k_{2}+k_{3}}F^{\infty}_{+} (U_{j} )+o (1 ), $$ $$\begin{aligned}& \textit{for }j=1,\ldots,k_{1}, U_{j} \textit{ is a solution of } (1.8);\\& \textit{for }j=k_{1}+1,\ldots,k_{1}+k_{2}, U_{j} \textit{ is a solution of } (1.5);\\& \textit{for } j=k_{1}+k_{2}+1,\ldots,k_{1}+k_{2}+k_{3}, U_{j} \textit{ is a solution of } (1.7); \end{aligned}$$ for the case that \(0\in \Omega \), as \(m\rightarrow+\infty\), $$F_{\mu}(u_{m} )=F_{\mu}(u_{0} )+\sum _{j=1}^{k_{1}}F_{\mu} ^{\infty}(U_{j} )+\zeta (s )\sum_{j=k_{1}+1}^{k_{1}+k_{2}}F^{\infty}(U_{j} ) +\zeta (s )\sum_{j=k_{1}+k_{2}+1}^{k_{1}+k_{2}+k_{3}}F^{\infty}_{+} (U_{j} )+o (1 ) , $$ $$\begin{aligned}& \textit{for }j=1,\ldots,k_{1} , U_{j} \textit{ is a solution of } (1.6); \\& \textit{for } j=k_{1}+1,\ldots,k_{1}+k_{2}, U_{j} \textit{ is a solution of } (1.5);\\& \textit{for } j=k_{1}+k_{2}+1,\ldots,k_{1}+k_{2}+k_{3}, U_{j} \textit{ is a solution of } (1.7). \end{aligned}$$ Corollary 1.1 Any positive Palais-Smale sequence for \(F_{\mu}(u)\) at a level d which is not of the form \(k_{1} D_{\mu}+ k_{2}D_{0}+\frac{1}{2}k_{3}D_{0} \) if \(0\in \Omega \), or of the form \(\frac{k_{1}}{2} D_{\mu}+k_{2} D_{0}+\frac{1}{2}k_{3}D_{0} \) if \(0\in\partial \Omega \), for some \(k_{1}, k_{2}, k_{3}\in {\mathbb{N}}\cup \{0\}\), gives rise to a nontrivial weak solution of equation (1.1). By applying Theorem 1.1 and the mountain pass theorem [18], we can obtain the following existence theorems by proving that \(F_{\mu}(u)\) satisfies the geometrical assumptions of the mountain pass theorem and that the mountain pass level is actually below the compactness threshold quoted in Theorem 1.1. Theorem 1.2 Suppose \(0\in \Omega \), \(p>s> 0\), \(\lambda >-\lambda _{1}\), \(0<\mu <\bar{\mu}\). Then problem (1.1) has a positive solution if $$\max\biggl\{ p, \frac{n}{b (\mu )}, \frac{p (2n-b (\mu )p-p )}{n-p}\biggr\} < q< {p^{*} (s )}. $$
Theorem 1.3 Suppose \(0\in \Omega \), \(s=0\), \(\lambda >-\lambda _{1}\). Then there exists a constant \(\mu^{*} \in(0, \bar{\mu})\) such that problem (1.1) has a positive solution if \(0<\mu\leq\mu^{*}\); and problem (1.1) has a positive solution if $$\mu^{*}< \mu< \bar{\mu} \quad\textit{and} \quad \max\biggl\{ p,\frac{n}{b (\mu )}, \frac{p (2n-b (\mu )p-p )}{n-p}\biggr\} < q< p^{*}. $$ Furthermore, \(\mu^{*}\) can be calculated by solving \(S^{\frac{n}{p}} =2 S_{\mu} ^{\frac{n}{p}}\). Remark 1.1 For the case that \(0\in \partial \Omega \), we cannot obtain the existence of solutions of problem (1.1), since we do not know the explicit form of the functions attaining \(S_{\mu, s}\). This paper is organized as follows. In Section 2, we prove Theorem 1.1 by carefully analyzing the features of a Palais-Smale sequence for \(F_{\mu}(u)\). In Section 3, we apply Theorem 1.1 and the mountain pass theorem [18] to obtain the existence of critical points for \(F_{\mu}(u)\) under different assumptions on the parameters μ, λ and on the position of the origin. Finally, we collect some preliminary results in an appendix. Proof of Theorem 1.1 In this section, the features of a Palais-Smale sequence for \(F_{\mu}(u)\) will be analyzed by the blow-up technique adopted by Struwe [10] for the Dirichlet problem. To this end, we need the following lemma. Lemma 2.1 Let \(\{v_{m}\}_{m}\) be a Palais-Smale sequence of \(F_{\mu}(u)\) at level \(d>0\), and assume that \(\{v_{m}\}_{m}\) converges weakly but not strongly to zero in \(W^{1,p} (\Omega )\). (1) For the case \(s\neq0\): if \(0\in\Omega\), there exists a positive sequence \(k_{m}\) such that, up to a subsequence, $$ w_{m}=v_{m} (x )-k_{m}^{\frac{n-p}{p}}v_{0}(k_{m} x),\quad x\in\overline {\Omega}, $$ is a Palais-Smale sequence for \(F_{\mu}(u )\) in \(W^{1,p} (\Omega )\) at level \(d-\frac{p-s}{ (n-s )p}S_{\mu,s}^{\frac{n-s}{p-s}}\), and \(v_{0}\) solves (1.6); moreover, \(w_{m} \rightarrow0\) weakly in \(W^{1,p} (\Omega )\) as \(m\to +\infty\). If \(0\in\partial\Omega\), there exists a positive sequence \(k_{m}\) such that, up to a subsequence, \(w_{m}\) of the same form is a Palais-Smale sequence for \(F_{\mu}(u )\) in \(W^{1,p} (\Omega )\) at level \(d-\frac{p-s}{2 (n-s )p}S_{\mu,s}^{\frac{n-s}{p-s}}\), and \(v_{0}\) solves (1.8); moreover, \(w_{m} \rightarrow0\) weakly in \(W^{1,p} (\Omega )\) as \(m\to +\infty\). (2) For the case that \(s=0\): either $$ w_{m}=v_{m} (x )-k_{m}^{\frac{n-p}{p}}v_{0} (k_{m} x ),\quad x\in \overline{\Omega}, $$ is a Palais-Smale sequence for \(F_{\mu}(u )\) in \(W^{1,p} (\Omega )\) at level \(d-\frac{1}{n}S_{\mu}^{\frac{n}{p}}\) with \(v_{0}\) solving (1.6) (when \(0\in\Omega\)), or \(w_{m}\) of the same form is a Palais-Smale sequence for \(F_{\mu}(u )\) in \(W^{1,p} (\Omega )\) at level \(d-\frac{1}{2n}S_{\mu}^{\frac{n}{p}}\) with \(v_{0}\) solving (1.8) (when \(0\in\partial\Omega\)); in both cases \(w_{m} \rightarrow0\) weakly in \(W^{1,p} (\Omega )\) as \(m\to +\infty\). Or there exist sequences \(y_{m}\in\overline{\Omega}\), \(K_{m}\in {\mathbb{R}}^{+}\) such that, up to a subsequence, $$ w_{m} (x )=v_{m} (x )-K_{m}^{\frac{n-p}{p}}v_{0} \bigl(K_{m} (x-y_{m} ) \bigr),\quad x\in\overline{\Omega}, $$ is a Palais-Smale sequence for \(F_{\mu}(u )\) at level \(d-\frac{1}{2n}{S}^{\frac{n}{p}}\) if \(\lim_{m\rightarrow+\infty}K_{m} \operatorname{dist} (y_{m},\partial\Omega )<+\infty\).
Moreover, \(w_{m} \rightarrow0\) weakly in \(W^{1,p} (\Omega )\) as \(m\to+\infty \) and \(v_{0}\) is a solution of (1.7); or \(w_{m}\) is a Palais-Smale sequence for \(F_{\mu}(u )\) at level \(d-\frac{1}{n}{S}^{\frac{n}{p}}\) if \(\lim_{m\rightarrow +\infty}K_{m} \operatorname{dist} (y_{m},\partial\Omega )=+\infty\); in this case, \(w_{m} \rightarrow0\) weakly in \(W^{1,p} (\Omega )\) and \(v_{0}\) is a solution of (1.5). We only prove the case when \(0\leq\mu<\bar{\mu}\), since the proof of the case when \(\mu<0\) is similar. By Lemma A.4 in the Appendix, we deduce that there are positive constants \(c_{i} \) (\(i=1,2 \)) such that $$ c_{1}\leq\int_{\Omega}|\nabla v_{m}|^{p}\,dx\leq c_{2}, \quad\forall m \in \mathbb{N}. $$ (2.7) By (2.7), we may fix a small \(\delta>0\) (to be determined later) such that $$ \limsup_{m\rightarrow+\infty} \int_{\Omega}| \nabla v_{m}|^{p}\,dx >\delta. $$ (2.8) Fix m. By the absolute continuity of the integral, for every \(\varepsilon>0\) there exists a constant \(a>0\) such that $$\int_{E}|\nabla v_{m}|^{p}\,dx< \varepsilon $$ for every set \(E\subset\Omega\) with measure \(m(E)< a\). Define \(F(R)=\int_{B(0,R)\cap\Omega}|\nabla v_{m}|^{p} \,dx\); then \(F(R)\) is a continuous function of R satisfying $$\lim_{R\rightarrow+\infty}F(R)=\int_{\Omega}|\nabla v_{m}|^{p}\,dx, \qquad\lim_{R\rightarrow0}F(R)=0. $$ Up to a subsequence, we can choose a minimal \(\frac{1}{k_{m}}>0\) such that $$ \int_{B (0, \frac{1}{k_{m}} )\cap \Omega }|\nabla v_{m}|^{p} \,dx=\delta. $$ (2.9) We denote by \(E: W^{1,p} (\Omega )\rightarrow W^{1,p} (\mathbb {R}^{n} )\) the extension operator such that $$E (v )_{|\Omega} =v,\qquad \bigl\| E (v )\bigr\| _{W^{1,p} (\mathbb {R}^{n} )}\leq C (\Omega )\|v \|_{W^{1,p} (\Omega )} $$ (remember that \(\partial\Omega\in C^{1}\)). For simplicity of notation, we shall denote by the same symbol both the function \(v\in W^{1,p} (\Omega )\) and its extension \(E (v )\in W^{1,p} (\mathbb {R}^{n} )\). Define $$\bar{v}_{m} :=k_{m}^{\frac{p-n}{p}} v_{m} \biggl( \frac{x}{k_{m}} \biggr)\quad\mbox{and}\quad \Omega _{1,m}:=\biggl\{ x\in {\mathbb{R}}^{n} \Big| \frac{x}{k_{m}}\in \Omega \biggr\} , $$ so that \(\int_{B (0, 1 )\cap \Omega _{1,m}}|\nabla\bar{v}_{m}|^{p} \,dx=\delta\). Let us point out that, thanks to (2.7)-(2.9), the sequence \(\{k_{m}\}\) is bounded away from zero. Obviously \(\bar{v}_{m} \in W^{1,p} (\Omega _{1,m} )\subset {D^{1,p}} ({\mathbb{R}}^{n} )\). Moreover, $$\|\bar{v}_{m}\|_{{D^{1,p}} (\mathbb{R}^{n} )}=\|v_{m}\|_{{D^{1,p}} (\mathbb{R}^{n} )}\leq C (\Omega )\|v_{m}\|_{W^{1,p} (\Omega )}\leq c. $$ Up to a subsequence, there exists \(v_{0}\in{D^{1,p}} ({\mathbb{R}}^{n} )\) such that \(\bar{v}_{m}\rightarrow v_{0}\) weakly in \({D^{1,p}} ({\mathbb{R}}^{n} )\) and \(\bar{v}_{m}\rightarrow v_{0}\) a.e. in \({\mathbb{R}}^{n}\) as \(m\to+\infty\). We have either \(v_{0} \not\equiv0\) or \(v_{0}\equiv0\). Case (I): Assume \(v_{0}\not\equiv0\). Since \(v_{m}\rightarrow0 \) (\(m\to+\infty \)) weakly in \(W^{1,p} (\Omega )\) and \(\bar{v}_{m}\rightarrow v_{0}\not\equiv0\) weakly, we have \(k_{m}\rightarrow+\infty\) (\(m\to+\infty\)). In this case we claim that \(v_{0}\) satisfies (1.6) and that the sequence $$w_{m} (x ):=v_{m} (x )-k_{m}^{\frac{n-p}{p} }v_{0} (k_{m} x ),\quad x\in\Omega $$ is a Palais-Smale sequence for \(F_{\mu}(u)\) at level \(d-\frac{p-s}{ (n-s )p}S_{\mu,s}^{\frac{n-s}{p-s}}\).
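Before proving this claim, we record two elementary facts that will be used repeatedly below (a short bookkeeping remark, both immediate from the substitution \(y=x/k_{m}\)): the \(D^{1,p}\) seminorm is preserved under the rescaling, $$\int_{{\mathbb{R}}^{n}}|\nabla\bar{v}_{m}|^{p}\,dx=\int_{{\mathbb{R}}^{n}}k_{m}^{-n}\bigl|\nabla v_{m} (x/k_{m} )\bigr|^{p}\,dx=\int_{{\mathbb{R}}^{n}}|\nabla v_{m}|^{p}\,dy, $$ while the subcritical terms acquire negative powers of \(k_{m}\): since \(q< p^{*}=\frac{np}{n-p}\), $$n-\frac{n-p}{p}q>n-\frac{n-p}{p}p^{*}=0, $$ so the factors \(k_{m}^{-p}\) and \(k_{m}^{- (n-\frac{n-p}{p}q )}\) appearing below tend to zero as \(k_{m}\rightarrow+\infty\).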
Since \(\bar{v}_{m}\) is bounded in \({D^{1,p}} ({\mathbb{R}}^{n} )\), we have $$ \begin{aligned} &\bar{v}_{m}\rightarrow v_{0}\mbox{ weakly in } {D^{1,p}} \bigl({\mathbb{R}}^{n} \bigr), W^{1,p}_{\mathrm{loc}}\bigl({\mathbb{R}}^{n}\bigr)\mbox{ as } m \to+\infty; \\ & \bar{v}_{m}\rightarrow v_{0} \mbox{ a.e. in } {\mathbb{R}}^{n} \mbox{ as } m\to+\infty; \\ &\bar{v}_{m}\rightarrow v_{0} \mbox{ in }L_{\mathrm{loc}}^{p^{*}(s)-1} \bigl({\mathbb{R}}^{n}, |x|^{-s}\bigr) \mbox{ as }m\rightarrow+\infty; \\ & \bar{v}_{m}\rightarrow v_{0} \mbox{ in }L_{\mathrm{loc}}^{p-1}\bigl({\mathbb{R}}^{n}, |x|^{-p}\bigr) \mbox{ as }m\rightarrow+\infty; \\ &\bar{v}_{m}\rightarrow v_{0} \mbox{ in }L_{\mathrm{loc}}^{q} \bigl({\mathbb{R}}^{n}\bigr), 1< q< p^{*}, \mbox{ as }m\rightarrow+\infty. \end{aligned} $$ (2.10) If \(0\in\Omega\), fix a ball \(B (x,r )\) and a test function \(\phi\in C^{\infty}_{0} (B (x,r ) )\). Notice that for sufficiently large m, \(B (x,r )\subset \Omega _{1,m}\). Then we have $$ \begin{aligned} &\int_{\Omega _{1,m}}|\nabla \bar{v}_{m}|^{p-2}\nabla\bar{v}_{m}\nabla \phi \,dx= \int _{B (x,r )}|\nabla\bar{v}_{m}|^{p-2}\nabla\bar{v}_{m}\nabla \phi \,dx\\ &\hphantom{\int_{\Omega _{1,m}}|\nabla \bar{v}_{m}|^{p-2}\nabla\bar{v}_{m}\nabla \phi \,dx}\rightarrow\int_{B (x,r )}|\nabla v_{0}|^{p-2}\nabla v_{0}\nabla\phi \,dx ; \\ & \int_{\Omega _{1,m}} \frac{|\bar{v}_{m}|^{{p^{*} (s )}-2}\bar{v}_{m}\phi}{|x|^{s}}\,dx=\int_{B (x,r )} \frac{|\bar{v}_{m}|^{{p^{*} (s )}-2}\bar{v}_{m}\phi}{|x|^{s}}\,dx\rightarrow\int_{B (x,r )}\frac {|v_{0}|^{{p^{*} (s )}-2}v_{0}\phi}{|x|^{s}}\,dx; \\ &\int_{\Omega _{1,m}}\mu\frac{|\bar{v}_{m}|^{p-2}\bar{v}_{m}\phi}{|x|^{p}}\,dx=\int_{B (x,r )} \mu\frac{|\bar{v}_{m}|^{p-2}\bar{v}_{m}\phi}{|x|^{p}}\,dx \rightarrow\int_{B (x,r )}\mu \frac{|v_{0}|^{p-2}v_{0}\phi}{|x|^{p}}\,dx \end{aligned} $$ as \(m\rightarrow+\infty\). And since \(k_{m}\rightarrow+\infty\) as \(m\rightarrow+\infty\), we get $$\begin{aligned} &\frac{\lambda }{k^{p}_{m}} \int _{\Omega _{1,m}}\phi\bar{v}_{m} |\bar{v}_{m}|^{p-2}\,dx= \frac{\lambda }{k^{p}_{m}} \int_{B (x,r )}\phi\bar{v}_{m} |\bar{v}_{m}|^{p-2}\,dx\rightarrow 0; \\ &\frac{\eta}{k_{m}^{n-\frac{n-p}{p}q}}\int_{\Omega _{1,m}}\phi|\bar{v}_{m}|^{q-2} \bar{v}_{m}\,dx=\frac{\eta}{k_{m}^{n-\frac{n-p}{p}q}}\int_{B (x,r )}\phi|\bar{v}_{m}|^{q-2}\bar{v}_{m}\,dx\rightarrow0; \\ &\frac{1}{k_{m}^{p-1}}\int_{\partial \Omega _{1,m}} \alpha \biggl( \frac{x}{k_{m}} \biggr)\phi\bar{v}_{m} |\bar{v}_{m}|^{p-2}\,d\sigma=0 \quad\mbox{for large } m.
\end{aligned} $$ Therefore we have $$\begin{aligned} &\bigl\langle \phi, DF_{\mu}^{\infty}\bigl(v_{0}, \mathbb{R}^{n} \bigr)\bigr\rangle \\ &\quad=\int_{B (x,r )}|\nabla v_{0}|^{p-2}\nabla v_{0}\nabla\phi \,dx-\int_{B (x,r )}\frac{|v_{0}|^{{p^{*} (s )}-2}v_{0}\phi}{|x|^{s}}\,dx- \int_{B (x,r )}\mu\frac{|v_{0}|^{p-2}v_{0}\phi}{|x|^{p}}\,dx \\ &\quad=\int_{\Omega _{1,m}}|\nabla\bar{v}_{m}|^{p-2}\nabla \bar{v}_{m}\nabla \phi \,dx-\int_{\Omega _{1,m}} \frac{|\bar{v}_{m}|^{{p^{*} (s )}-2}\bar{v}_{m}\phi}{|x|^{s}}\,dx-\int_{\Omega _{1,m}}\mu\frac{|\bar{v}_{m}|^{p-2}\bar{v}_{m}\phi}{|x|^{p}}\,dx \\ &\qquad{}+\frac{1}{k_{m}^{p-1}}\int_{\partial \Omega _{1,m}} \alpha \biggl( \frac{x}{k_{m}} \biggr)\phi\bar{v}_{m} |\bar{v}_{m}|^{p-2}\,d\sigma+\frac{\lambda }{k^{p}_{m}} \int_{\Omega _{1,m}}\phi\bar{v}_{m} | \bar{v}_{m}|^{p-2}\,dx \\ &\qquad{}-\frac{\eta }{k_{m}^{n-\frac{n-p}{p}q}}\int_{\Omega _{1,m}}\phi|\bar{v}_{m}|^{q-2} \bar{v}_{m}\,dx+o (1 ) \\ &\quad=\int_{\Omega}|\nabla v_{m}|^{p-2}\nabla v_{m}\nabla\bar{\phi}_{m} \,dy-\int_{\Omega}\frac{|v_{m}|^{{p^{*} (s )}-2}v_{m}\bar{\phi}_{m}}{|y|^{s}}\,dy -\mu\int_{\Omega}\frac{| v_{m}|^{p-2}v_{m}\bar{\phi}_{m}}{|y|^{p}}\,dy \\ &\qquad{} +\int_{\partial\Omega} \alpha (y )\bar{\phi}_{m} v_{m}| v_{m}|^{p-2}\,d\sigma+\lambda \int _{\Omega }| v_{m}|^{p-2}v_{m}\bar{\phi}_{m}\,dy \\ &\qquad{}-\eta\int_{\Omega }\bar{\phi}_{m}|v_{m}|^{q-2}v_{m} \,dy+o (1 ) \quad\biggl(\mbox{let } y=\frac{x}{k_{m}}\biggr) \\ &\quad=o (1 )\quad\mbox{as }m\to+\infty, \end{aligned}$$ where \(\bar{\phi}_{m} (x )=k_{m}^{\frac{n-p}{p}}\phi (k_{m} x )\). Since \(\|\phi\|_{D^{1,p} (B (x,r ) )}=\|\bar{\phi}_{m}\| _{W^{1,p} (\Omega )}+o (1 )\), \(v_{0}\) solves (1.6). If \(0\in\partial \Omega \), fix a ball \(B (x,r )\) and a test function \(\phi\in C^{\infty}_{0} (B (x,r ) )\). 
Notice that, for sufficiently large m, \(B (x,r )\cap {\mathbb{R}}^{n}_{+}\subset \Omega _{1,m}\). Then we have $$\begin{aligned} &\bigl\langle \phi, DF_{\mu}^{\infty}\bigl(v_{0}, \mathbb{R}^{n}_{+} \bigr)\bigr\rangle \\ &\quad=\int_{B (x,r )\cap {\mathbb{R}}^{n}_{+}}|\nabla v_{0}|^{p-2}\nabla v_{0}\nabla\phi \,dx-\int_{B (x,r )\cap {\mathbb{R}}^{n}_{+}}\frac{|v_{0}|^{{p^{*} (s )}-2}v_{0}\phi}{|x|^{s}}\,dx\\ &\qquad{}- \int_{B (x,r )\cap {\mathbb{R}}^{n}_{+}}\mu\frac{|v_{0}|^{p-2}v_{0}\phi}{|x|^{p}}\,dx \\ &\quad=\int_{\Omega _{1,m}}|\nabla\bar{v}_{m}|^{p-2}\nabla \bar{v}_{m}\nabla \phi \,dx-\int_{\Omega _{1,m}} \frac{|\bar{v}_{m}|^{{p^{*} (s )}-2}\bar{v}_{m}\phi}{|x|^{s}}\,dx-\int_{\Omega _{1,m}}\mu\frac{|\bar{v}_{m}|^{p-2}\bar{v}_{m}\phi}{|x|^{p}}\,dx \\ &\qquad{} +\frac{1}{k_{m}^{p-1}}\int_{\partial \Omega _{1,m}} \alpha \biggl( \frac{x}{k_{m}} \biggr)\phi\bar{v}_{m}|\bar{v}_{m}|^{p-2}\,d\sigma+\frac{\lambda }{k^{p}_{m}} \int_{\Omega _{1,m}}\phi|\bar{v}_{m}|^{p-2}\bar{v}_{m} \,dx \\ &\qquad{} -\frac{\eta}{k_{m}^{n-\frac{n-p}{p}q}}\int_{\Omega _{1,m}}\phi |\bar{v}_{m}|^{q-2}\bar{v}_{m}\,dx+o (1 ) \\ &\quad=\int_{\Omega}|\nabla v_{m}|^{p-2}\nabla v_{m}\nabla\bar{\phi}_{m} \,dx-\int_{\Omega}\frac{ |v_{m}|^{{p^{*} (s )}-2}v_{m}\bar{\phi}_{m} }{|x|^{s}}\,dx-\int_{\Omega}\mu\frac{|v_{m}|^{p-2}v_{m}\bar{\phi}_{m}}{|x|^{p}}\,dx \\ &\qquad{} +\int_{\partial\Omega} \alpha (x )\bar{\phi}_{m} v_{m}|v_{m}|^{p-2}\,d\sigma+\lambda \int _{\Omega }|v_{m}|^{p-2}v_{m}\bar{\phi}_{m}\,dx-\eta\int_{\Omega }\bar{\phi}_{m}|v_{m}|^{q-2}v_{m} \,dx +o (1 )\\ &\quad=o (1 )\quad\mbox{as }m\to+\infty, \end{aligned}$$ so that \(v_{0}\) solves (1.8). By Lemma A.6 in the Appendix and the dilation invariance, we have for large m $$\begin{aligned}& F_{\mu}(w_{m} )=F_{\mu}(v_{m} )-F^{\infty}_{\mu}(v_{0} )+o (1 )=d - \frac{p-s}{ (n-s )p}S_{\mu,s}^{\frac{n-s}{p-s}}, \quad\mbox{for } 0\in \Omega , \\& F_{\mu}(w_{m} )=F_{\mu}(v_{m} )-F^{\infty}_{+,\mu } (v_{0} )+o (1 )=d - \frac{p-s}{2 (n-s )p}S_{\mu,s}^{\frac{n-s}{p-s}}, \quad\mbox{for } 0\in\partial \Omega , \\& DF_{\mu} (w_{m} )\rightarrow0 \quad\mbox{in } W^{-1,p'} ( \Omega ) . \end{aligned}$$ Also, from \(k_{m}^{\frac{n-p}{p}}v_{0}(k_{m}x) \rightarrow0\) weakly in \(W^{1,p} (\Omega )\) and \(v_{m}\rightarrow0\) weakly in \(W^{1,p} (\Omega )\), it is obvious that \(w_{m}\rightarrow0\) weakly in \(W^{1,p} (\Omega )\). Case (II): Assume \(v_{0}\equiv0\). If \(0\in \Omega \), let \(h\in C^{\infty}_{0} (B (0,1 ) )\); then we have $$\begin{aligned} &\int_{{\mathbb{R}}^{n}}\bigl|\nabla ( \bar{v}_{m} h )\bigr|^{p}\,dx \\ &\quad=\int_{{\mathbb{R}}^{n}}|\nabla\bar{v}_{m}|^{p} h^{p}\,dx+o (1 ) \\ &\quad=\bigl\langle DF_{\mu}(\bar{v}_{m} ), h^{p}\bar{v}_{m}\bigr\rangle +\int_{{\mathbb{R}}^{n}}\frac{\mu h^{p}\bar{v}_{m} ^{p}}{|x|^{p}}\,dx+ \int_{{\mathbb{R}}^{n}}\frac{|\bar{v}_{m}|^{{p^{*} (s )}}h^{p}}{|x|^{s}}\,dx+o (1 ) \\ &\quad\leq\frac{p^{p}\mu}{ (n-p )^{p}}\int_{{\mathbb{R}}^{n}}\bigl|\nabla (\bar{v}_{m} h )\bigr|^{p}\,dx+S^{-1}_{0,s} \biggl(\int _{B (0,1 )}\frac {|\bar{v}_{m}|^{{p^{*} (s )}}}{|x|^{s}}\,dx \biggr)^{\frac{p-s}{n-s}}\int _{{\mathbb{R}}^{n}}\bigl|\nabla (\bar{v}_{m} h )\bigr|^{p}\,dx \\ &\qquad{} +o (1 )\quad\mbox{as }m\to +\infty. \end{aligned}$$ (2.13) Choosing δ suitably small, from (2.13) and the fact that \(0\leq\mu<\frac{ (n-p )^{p}}{p^{p}}\), we can find \(a\in (0,1 )\) such that $$ \int_{B (0,a )}|\nabla \bar{v}_{m}|^{p}\,dx \rightarrow0 \quad\mbox{as }m\to+\infty. $$ (2.14)
Thus we have $$ \int_{B (0,a )}\frac{|\bar{v}_{m}|^{p}}{|x|^{p}}\,dx\rightarrow0,\qquad \int_{B (0,a )}\frac{|\bar{v}_{m}|^{p^{*}(s)}}{|x|^{s}}\,dx\rightarrow 0\quad\mbox{as }m\to+ \infty. $$ (2.15) If \(0\in \partial \Omega \), define $$\bar{v}'_{m}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \bar{v}_{m} (x',x_{n} ), &x_{n}\geq0, \\ \bar{v}_{m} (x',-x_{n} ),& x_{n}< 0, \end{array}\displaystyle \right . $$ where \(x'= (x_{1},\ldots,x_{n-1} )\). Proceeding as in the proof of (2.14), we deduce $$\int_{ B (0,a )}\bigl|\nabla\bar{v}'_{m}\bigr|^{p}\,dx \rightarrow0 \quad\mbox{as }m\to+\infty \mbox{ for some } a\in (0, 1 ), $$ which implies that $$\int_{ B (0,a )\cap {\mathbb{R}}^{n}_{+}}|\nabla\bar{v}_{m}|^{p}\,dx \rightarrow 0\quad\mbox{as }m\to+\infty \mbox{ for some } a\in (0, 1 ). $$ For \(p>s>0\), we can deduce that Case (II) cannot happen. In fact, if \(0\in \Omega \), from (2.10) and \(0< p< p^{*}\), \(0< p^{*}(s)< p^{*}\), we have $$\begin{aligned}& \int_{B(0,2)\backslash B (0,a )} \frac{|\bar{v}_{m}|^{p}}{|x|^{p}}\,dx \leq\int _{B(0,2)\backslash B (0,a )}\frac{|\bar{v}_{m}|^{p}}{a^{p}}\,dx=o (1 )\quad\mbox{as }m\to+\infty, \end{aligned}$$ (2.16) $$\begin{aligned}& \int_{B(0,2) \backslash B (0,a )}\frac{|\bar{v}_{m}|^{p^{*} (s )}}{|x|^{s}}\,dx \leq\int _{B(0,2)\backslash B (0,a )}\frac{|\bar{v}_{m}|^{p^{*} (s )}}{a^{s}}\,dx=o (1 )\quad\mbox{as }m\to+\infty. \end{aligned}$$ (2.17) From (2.14)-(2.17), we have $$ \int_{B(0,2)}\frac{|\bar{v}_{m}|^{p}}{|x|^{p}}\,dx=\int _{B(0,2)}\frac{|\bar{v}_{m}|^{p^{*} (s )}}{|x|^{s}}\,dx=o (1 )\quad\mbox{as }m\to+\infty. $$ (2.18) Since \(\delta>0\), from (2.9) there exists a positive constant ā such that \(k_{m}\geq\bar{a}>0\), and thus \(B(0,\frac{2}{k_{m}})\subset B(0,\frac{2}{\bar{a}})\). Choose $$0< g_{m}\in C_{0}^{\infty}(\Omega),\quad \operatorname{supp} g_{m}\subset B\biggl(0,\frac {2}{k_{m}}\biggr), \quad\mbox{and}\quad g_{m}\equiv1 \quad\mbox{in } B\biggl(0, \frac{1}{k_{m}}\biggr), $$ with \(g_{m}\) bounded in \(C_{0}^{\infty}(\Omega)\). Since \(v_{m}\) is a Palais-Smale sequence of \(F_{\mu}(u)\), we have $$ \bigl\langle F'_{\mu}(v_{m}), v_{m} g_{m}\bigr\rangle =o(1) \quad\mbox{as } m\rightarrow+\infty, $$ (2.19) that is, $$\begin{aligned} &\int_{\Omega}|\nabla v_{m}|^{p-2}\nabla v_{m} \nabla(v_{m}g_{m})\,dx \\ &\quad= \mu\int_{\Omega}\frac{|v_{m}|^{p}g_{m}}{|x|^{p}}\,dx+\int_{\Omega} \frac {|v_{m}|^{p^{*}(s)}g_{m}}{|x|^{s}}\,dx -\int_{\partial\Omega}\alpha(x)|v_{m}|^{p} g_{m}\,dx \\ &\qquad{}+\eta\int_{\Omega }|v_{m}|^{q}g_{m}\,dx- \lambda\int_{\Omega}|v_{m}|^{p} g_{m}\,dx +o(1). \end{aligned}$$ (2.20) Note that $$ \begin{aligned} &v_{m} \to0 \mbox{ weakly in } W^{1,p}(\Omega) \mbox{ as }m\rightarrow +\infty; \\ &v_{m} \to0 \mbox{ in } L^{q}(\Omega), L^{p}( \partial\Omega), 1< q< p^{*},\mbox{ as }m\rightarrow+\infty; \\ &v_{m}\to0 \mbox{ a.e. in }\Omega \mbox{ as }m\rightarrow+\infty. \end{aligned} $$ (2.21)
Then from (2.18)-(2.21) we have $$\begin{aligned} &\int_{B(0,\frac{1}{k_{m}})}|\nabla v_{m}|^{p}\,dx \\ &\quad\leq\int_{\Omega}|\nabla v_{m}|^{p}g_{m}\,dx \\ &\quad\leq\int_{\Omega}|\nabla v_{m}|^{p-1}|v_{m}| |\nabla g_{m}|\,dx+|\lambda| \int_{\Omega}|v_{m}|^{p} |g_{m}|\,dx \\ &\qquad{}+c\int_{B(0,\frac{2}{k_{m}})}\frac{|v_{m}|^{p}}{|x|^{p}}\,dx+c\int _{B(0,\frac {2}{k_{m}})}\frac{|v_{m}|^{p^{*}(s)}}{|x|^{s}}\,dx +c\int_{\partial\Omega}\alpha(x)|v_{m}|^{p} \,dx+c \eta\int_{\Omega }|v_{m}|^{q}\,dx \\ &\quad \leq c \biggl(\int_{\Omega}|\nabla v_{m}|^{p}\,dx \biggr)^{\frac {p-1}{p}} \biggl(\int_{\Omega}|v_{m}|^{p}\,dx \biggr)^{\frac{1}{p}}+c\int_{{B(0,2)}}\frac {|\bar{v}_{m}|^{p}}{|x|^{p}}\,dx \\ &\qquad{}+c\int _{B(0,2)}\frac{|\bar{v}_{m}|^{p^{*}(s)}}{|x|^{s}}\,dx+o(1) \\ &\quad=o(1)\quad\mbox{as } m\to+\infty, \end{aligned}$$ where c is a positive constant. Then we have $$ \|v_{m}\|_{D^{1,p} (B (0,\frac{1}{k_{m}} ) )}=\|\bar{v}_{m}\|_{D^{1,p} (B (0,1 ) )}=o (1 )\quad\mbox{as }m\to+\infty, $$ which contradicts (2.9). If \(0\in\partial \Omega \), similarly to (2.16), (2.17), we have $$\begin{aligned}& \int_{B(0,2) \backslash B (0,a )} \frac{|{\bar{v}_{m}}'|^{p}}{|x|^{p}}\,dx =o (1 )\quad\mbox{as }m\to+\infty, \end{aligned}$$ $$\begin{aligned}& \int_{B(0,2) \backslash B (0,a )}\frac{|{\bar{v}_{m}}'|^{p^{*} (s )}}{|x|^{s}}\,dx=o (1 )\quad\mbox{as }m\to+\infty, \end{aligned}$$ so we again obtain (2.18) and, arguing as above, $$\|v_{m}\|_{D^{1,p} (B (0,\frac{1}{k_{m}} )\cap \Omega )}=o (1 )\quad\mbox{as }m\to+\infty, $$ which contradicts (2.9). For the case that \(s=0\), we denote \(\bar{v}_{m}\) by \(z_{m}\). Denote by $$Q_{m}(1)=\sup_{x\in \Omega _{1,m}}\int_{B (x,1 )}| \nabla z_{m}|^{p}\,dx $$ the concentration function of \(z_{m}\). From (2.7), (2.8) we can choose \(x_{m}\in\overline{\Omega}_{1,m}\), \(r_{m}\in {\mathbb{R}}^{+}\) and define $$\bar{z}_{m} (x ):=r_{m}^{\frac{p-n}{p}}z_{m} \biggl( \frac {x}{r_{m}}+x_{m} \biggr) $$ such that $$ \bar{Q}_{m} (1 )=\sup_{\frac{x}{r_{m}}+ x_{m}\in \Omega _{1,m}}\int _{B (x,1 )}|\nabla\bar{z}_{m}|^{p}\,dx=\int _{B (0,1 )}|\nabla\bar{z}_{m}|^{p}\,dx= \delta_{1}\leq\frac{1}{2L}S_{\mu}^{\frac{n}{p}}, $$ (2.26) where \(0<\delta_{1}<\delta\) and L denotes the least number of balls of radius 1 in \({\mathbb{R}}^{n}\) that are needed to cover a ball of radius 2. Note that there exists a constant \(b>0\) such that \(r_{m}\geq b\). Set $$\tilde{\Omega }_{m}:=\biggl\{ x\in {\mathbb{R}}^{n} \Big| \frac{x}{r_{m}}+x_{m} \in \Omega _{1,m}\biggr\} . $$ We may assume \(\bar{z}_{m}\in{D^{1,p}} ({\mathbb{R}}^{n} )\). Moreover, \(\{\bar{z}_{m}\} \) is bounded uniformly in \({D^{1,p}} ({\mathbb{R}}^{n} )\). Thus, up to a subsequence, $$\bar{z}_{m} \rightarrow\bar{v}_{0} \mbox{ weakly in } {D^{1,p}} \bigl({\mathbb{R}}^{n} \bigr)\mbox{ as }m\to+\infty. $$ We are going to prove that the convergence actually holds in the strong \(W^{1,p}_{\mathrm{loc}} ({\mathbb{R}}^{n} )\) sense. Since \(C_{0}^{\infty}({\mathbb{R}}^{n})\cap W_{\mathrm{loc}}^{1,p}({\mathbb{R}}^{n})\) is dense in \(W_{\mathrm{loc}}^{1,p}({\mathbb{R}}^{n})\), without loss of generality we can assume that \(\bar{z}_{m}-\bar{v}_{0}\in C_{0}^{\infty}({\mathbb{R}}^{n})\cap W_{\mathrm{loc}}^{1,p}({\mathbb{R}}^{n})\).
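Before proceeding, let us record how the normalization in (2.26) should be read (a routine change of variables): for any ball \(B (\bar{x},1 )\), $$\int_{B (\bar{x},1 )}|\nabla\bar{z}_{m}|^{p}\,dx=\int_{B (\frac{\bar{x}}{r_{m}}+x_{m}, \frac{1}{r_{m}} )}|\nabla z_{m}|^{p}\,dy, $$ so requiring \(\bar{Q}_{m} (1 )=\delta_{1}\) amounts to measuring the concentration of \(z_{m}\) near \(x_{m}\) at the scale \(1/r_{m}\), with \(\delta_{1}\) kept below the threshold \(\frac{1}{2L}S_{\mu}^{n/p}\) so that no single ball can carry a full bubble of energy.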
Let \(x_{0}\) be a fixed point of \({\mathbb{R}}^{n}\). From Proposition 6.6 in [19], we can find \(\rho\in[1,2]\) such that the solution \(\bar{w}_{m}\) of the Dirichlet problem $$ \textstyle\begin{cases} \Delta_{p} w=0 \quad\mbox{in } B(x_{0},3)\backslash B(x_{0},\rho),\\ w_{|\partial B(x_{0},\rho)}=\bar{z}_{m}-\bar{v}_{0},\qquad w_{|\partial B(x_{0},3)}=0 \end{cases} $$ (2.27) satisfies $$ \bar{w}_{m}\rightarrow 0 \mbox{ in } W^{1,p} \bigl(B(x_{0},3)\backslash B(x_{0},\rho) \bigr)\mbox{ as }m \to+\infty. $$ (2.28) Define $$ \varphi_{m}= \textstyle\begin{cases} \bar{z}_{m}-\bar{v}_{0} &\mbox{in } B(x_{0},\rho),\\ \bar{w}_{m} &\mbox{in } B(x_{0},3)\backslash B(x_{0},\rho),\\ 0 &\mbox{in } {\mathbb{R}}^{n}\backslash B(x_{0},3). \end{cases} $$ (2.29) It follows from (2.28), (2.29) and the compactness of the embedding \(W^{1,p}(B(x_{0},\rho))\hookrightarrow L^{p}(B(x_{0},\rho))\) that \(\|\varphi_{m}\|_{L^{p} (\mathbb{R}^{n} )}\rightarrow 0\) as \(m\to+\infty\). Now, scaling back the function \(\varphi_{m}\), $$\bar{\varphi}_{m}=r^{\frac{n-p}{p}}_{m} \varphi_{m} \bigl(r_{m} (x-x_{m} ) \bigr), $$ there exists a constant \(\beta>0\) such that \(\operatorname{supp} \bar{\varphi}_{m} \subset B(x_{0},\beta)\subset \Omega _{1,m}\) for m large. Taking into account (2.27), (2.28) and (2.29), letting \(m\rightarrow+\infty\), we have $$ \|\nabla \bar{\varphi}_{m}\|^{p}_{L^{p} (B(x_{0},\beta) )}=\|\varphi_{m}\| ^{p}_{{D^{1,p}} ({\mathbb{R}}^{n} )}=\|\bar{z}_{m}- \bar{v}_{0}\|^{p}_{{D^{1,p}} (B(x_{0},\rho) )}+o (1 ). $$ (2.30) By scale invariance and the fact that \(\{z_{m}\}\) is a Palais-Smale sequence for \(F_{\mu}(u)\), it follows that $$\bigl\langle DF_{\mu,m} (\bar{z}_{m} ),\varphi_{m} \bigr\rangle =\bigl\langle DF_{\mu}( z_{m} ),\bar{\varphi}_{m}\bigr\rangle +o (1 )=o (1 ), $$ where $$\begin{aligned} F_{\mu,m} (\bar{v} )={}& \frac{1}{p} \int_{\tilde{\Omega}_{m}} \biggl(|\nabla\bar{v}|^{p}-\mu\frac{ | \bar{v}|^{p}}{|x+r_{m}x_{m}|^{p}} \biggr)\,dx+ \frac{1}{p r_{m}^{p-1}}\int_{\partial\tilde{\Omega}_{m}}\alpha \biggl(x_{m}+ \frac {x}{r_{m}} \biggr)|\bar{v}|^{p}\,d\sigma\\ &{}-\frac{1}{{p^{*}}}\int_{\tilde{\Omega}_{m}}|\bar{v}|^{{p^{*}}}\,dx- \frac{\eta}{qr_{m}^{n-\frac{n-p}{p}q}}\int_{\tilde {\Omega }_{m}}|\bar{v}|^{q} \,dx+ \frac{\lambda }{pr_{m}^{p}}\int_{\tilde{\Omega}_{m}}|\bar{v}|^{p}\,dx. \end{aligned}$$ Therefore, from the definitions of \(F_{\mu,m}\), \(\varphi_{m}\) and (2.28), we have $$\begin{aligned} o (1 )={}&\int_{\tilde{\Omega}_{m}\cap B(x_{0},\rho)} \biggl[|\nabla\bar{z}_{m}|^{p-2} \nabla\bar{z}_{m}\nabla (\bar{z}_{m}-\bar{v}_{0} )-\mu \frac{|\bar{z}_{m}|^{p-2}\bar{z}_{m} (\bar{z}_{m}-\bar{v}_{0} )}{|x+r_{m}x_{m}|^{p}} \biggr]\,dx \\ &{}-\int_{ \tilde{\Omega}_{m}\cap B(x_{0},\rho)}|\bar{z}_{m}|^{{p^{*}}-2}\bar{z}_{m} (\bar{z}_{m}-\bar{v}_{0} )\,dx+o (1 ) \\ ={}&\int_{\tilde{\Omega}_{m}\cap B(x_{0},\rho )} \biggl(\bigl|\nabla (\bar{z}_{m}-\bar{v}_{0} )\bigr|^{p} -\mu\frac{|\bar{z}_{m}-\bar{v}_{0}|^{p}}{|x+r_{m}x_{m}|^{p}} \biggr)\,dx \\ &{}-\int_{\tilde{\Omega}_{m}\cap B(x_{0},\rho)}|\bar{z}_{m}-\bar{v}_{0}|^{{p^{*}}}\,dx+o (1 ) \\ ={}&\int_{\tilde{\Omega}_{m}} \biggl(|\nabla \varphi_{m}|^{p} -\mu\frac{|\varphi_{m}|^{p}}{|x+r_{m}x_{m}|^{p}} \biggr)\,dx-\int_{\tilde {\Omega}_{m}}| \varphi_{m}|^{{p^{*}}}\,dx+o (1 ). \end{aligned}$$
Moreover, by scale invariance we have $$\begin{aligned}& \int_{B(x_{0},\beta)} |\bar{\varphi}_{m}|^{p}\,dx=\int _{\Omega _{1,m}} |\bar{\varphi}_{m}|^{p}\,dx= \frac{1}{r_{m}^{p}}\int_{{\mathbb{R}}^{n}}|\varphi _{m}|^{p}\,dx=o (1 )\quad \mbox{as }m\rightarrow+\infty, \\& \begin{aligned}[b] o (1 )&=\int_{\Omega _{1,m}} \biggl(|\nabla\bar{\varphi}_{m}|^{p}-\mu\frac{|\bar{\varphi}_{m}|^{p}}{|x|^{p}} \biggr)\,dx-\int_{\Omega _{1,m}}|\bar{\varphi}_{m}|^{{p^{*}}}\,dx\\ &\geq\int_{\Omega _{1,m}} \biggl(|\nabla\bar{\varphi}_{m}|^{p}- \mu\frac{|\bar{\varphi}_{m}|^{p}}{|x|^{p}} \biggr)\,dx \biggl(1-\frac{\|\bar{\varphi}_{m}\|^{{p^{*}}}_{L^{p^{*}} (\Omega _{1,m} )}}{\int_{{\Omega}_{1,m}} (|\nabla\bar{\varphi}_{m}|^{p}-\mu\frac{|\bar{\varphi}_{m}|^{p}}{|x|^{p}} )\,dx } \biggr)\\ &\geq\int_{\Omega _{1,m}} \biggl(|\nabla\bar{\varphi}_{m}|^{p}- \mu\frac{|\bar{\varphi}_{m} |^{p}}{|x|^{p}} \biggr)\,dx \biggl(1-\frac{\| \nabla\bar{\varphi}_{m}\|^{{p^{*}}-p}_{L^{p} (\Omega _{1,m} )}}{S_{\mu }^{\frac{{p^{*}}}{p}}} \biggr)\\ &\geq\int_{\Omega _{1,m}} \biggl(|\nabla \bar{\varphi}_{m}|^{p}- \mu\frac{|\bar{\varphi}_{m}|^{p}}{|x|^{p}} \biggr)\,dx \biggl(1-\frac{\| \nabla (\bar{z}_{m}-\bar{v}_{0} )\|^{{p^{*}}-p}_{L^{p} (B(x_{0},\rho ) )}}{S_{\mu}^{\frac{{p^{*}}}{p}}} \biggr). \end{aligned} \end{aligned}$$ (2.31) Let us cover \(B(x_{0},\rho)\) with L balls of radius one; from (2.26) we then get $$\begin{aligned} \bigl\| \nabla (\bar{z}_{m}-\bar{v}_{0} )\bigr\| ^{p}_{L^{p} (B (x_{0},\rho ) )}&\leq \|\nabla\bar{z}_{m}\|^{p}_{L^{p} (B (x_{0},\rho ) )}+o (1 ) \\ &\leq L\|\nabla\bar{z}_{m}\|^{p}_{L^{p} (B (0,1 ) )}+o (1 )\leq \frac {1}{2}S_{\mu}^{n/p}+o (1 ), \end{aligned}$$ (2.32) so that (2.31) and (2.32) yield $$\|\bar{\varphi}_{m}\|_{W^{1,p} (B(x_{0},\beta) )}=\|\bar{\varphi}_{m}\| _{W^{1,p} (\Omega _{1,m} )}\rightarrow0\quad\mbox{as }m\to +\infty. $$ Finally, using again the properties of the extension operator, we obtain from (2.30) $$\begin{aligned} \|\bar{\varphi}_{m}\|^{p}_{W^{1,p} (B(x_{0},\beta) )}&\geq \frac {1}{C^{p}}\|\bar{\varphi}_{m}\|^{p}_{W^{1,p} (\mathbb{R}^{n} )} \\ &=\frac{1}{C^{p}}\|\varphi_{m}\|^{p}_{W^{1,p} (\mathbb{R}^{n} )}+o (1 ) \\ &=\frac{1}{C^{p}}\|\bar{z}_{m}-\bar{v}_{0}\|^{p}_{W^{1,p} (B(x_{0},\rho) )}+o (1 ) \quad\mbox{as } m\to+\infty, \end{aligned}$$ where C is a positive constant depending on the domain \(B(x_{0},\beta)\). Therefore $$\forall x_{0}\in {\mathbb{R}}^{n},\quad \|\bar{z}_{m}-\bar{v}_{0}\|_{W^{1,p} (B(x_{0},\rho ) )}\rightarrow 0. $$ Since \(\int_{B (0,1 )}|\nabla\bar{z}_{m}|^{p}\,dx=\delta_{1}>0\), we have \(\bar{v}_{0}\not\equiv0\). Hence, by local properties of the extension operator, we have \(\bar{v}_{0}|_{{\tilde{\Omega} _{m}}}\nrightarrow0\). Since \(z_{m}\rightarrow0 \) weakly in \(D^{1,p} ({\mathbb{R}}^{n} )\), we also have \(r_{m}\rightarrow +\infty\) as \(m\rightarrow+\infty\). Now, composing the two rescalings, we can write $$\bar{z}_{m} (x )=r_{m}^{\frac{p-n}{p}}\bar{v}_{m} \biggl(\frac{x}{r_{m}}+x_{m} \biggr)= (r_{m}k_{m} )^{\frac {p-n}{p}}v_{m} \biggl(\frac{x}{r_{m}k_{m}}+\frac{x_{m}}{k_{m}} \biggr). $$ Define \(K_{m}=r_{m}k_{m}\), \(y_{m}=x_{m}/k_{m}\); then \(y_{m}\rightarrow y_{0}\in\overline{\Omega }\) and \(K_{m}|y_{m}|=r_{m}|x_{m}|\). By (2.14) we have \(|x_{m}|>a>0\), so \(K_{m}|y_{m}|\rightarrow+\infty\). Also, by the fact that \(\{k_{m}\}\) is bounded away from zero, \(K_{m}\rightarrow+\infty\) (as \(m\rightarrow+\infty\)).
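It is worth making explicit why the Hardy term drops out in this regime (an elementary estimate): since \(K_{m}|y_{m}|\rightarrow+\infty\), for x in a fixed ball \(B (\bar{x},r )\) we have $$|x+K_{m}y_{m}|\geq K_{m}|y_{m}|-|\bar{x}|-r\rightarrow+\infty, $$ so that $$\biggl|\int_{B (\bar{x},r )}\mu\frac{|\bar{z}_{m}|^{p-2}\bar{z}_{m}\phi}{|x+K_{m}y_{m}|^{p}}\,dx \biggr|\leq\frac{\mu}{ (K_{m}|y_{m}|-|\bar{x}|-r )^{p}}\int_{B (\bar{x},r )}|\bar{z}_{m}|^{p-1}|\phi|\,dx=o (1 )\quad\mbox{as }m\to+\infty, $$ since \(\{\bar{z}_{m}\}\) is bounded in \(L^{p}_{\mathrm{loc}} ({\mathbb{R}}^{n} )\). This is precisely the claim used at the beginning of the next step.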
Then \(\tilde{\Omega}_{m}=\{x\in {\mathbb{R}}^{n} | \frac{x}{K_{m}}+y_{m}\in \Omega \}\) and \(\bar{z}_{m}= K^{\frac{p-n}{p}}_{m} v_{m} (\frac{x}{K_{m}}+y_{m} )\). Since \(\int_{\tilde{\Omega}_{m}}\frac{|\bar{z}_{m}|^{p-2}\bar{z}_{m}\phi}{|x+K_{m}y_{m}|^{p}}\,dx=o (1 )\) for large m and any given \(\phi\in C^{\infty}_{0} (B(x,r) )\), we can proceed as follows. (1) For the case when \(\lim_{m\rightarrow+\infty}K_{m} \operatorname{dist} (y_{m}, \partial \Omega )=+\infty\), we claim that \(\bar{v}_{0}\) solves (1.5). Indeed, for a fixed ball \(B (x,r )\), a test function \(\phi\in C^{\infty}_{0} (B (x,r ) )\) and sufficiently large m, \(B (x,r )\subset \tilde{\Omega}_{m}\). Therefore, we have $$\begin{aligned} &\bigl\langle \phi, DF^{\infty}\bigl(\bar{v}_{0}, \mathbb{R}^{n} \bigr)\bigr\rangle \\ &\quad=\int_{B (x,r )}|\nabla \bar{v}_{0}|^{p-2}\nabla \bar{v}_{0}\nabla\phi \,dx-\int_{B (x,r )} |\bar{v}_{0}|^{{p^{*}}-2}\bar{v}_{0}\phi \,dx \\ &\quad=\int_{\tilde{\Omega}_{m}}|\nabla\bar{z}_{m}|^{p-2}\nabla \bar{z}_{m}\nabla \phi \,dx-\int_{\tilde{\Omega}_{m}}|\bar{z}_{m}|^{{p^{*}}-2}\bar{z}_{m}\phi \,dx-\int _{\tilde{\Omega}_{m}}\mu\frac{|\bar{z}_{m}|^{p-2}\bar{z}_{m}\phi }{|x+K_{m}y_{m}|^{p}}\,dx \\ &\qquad{} -\frac{\eta}{K_{m}^{n-\frac{n-p}{p}q}}\int_{\tilde{\Omega }_{m}}\phi|\bar{z}_{m}|^{q-2} \bar{z}_{m}\,dx+\frac{1}{K_{m}^{p-1}}\int _{\partial\tilde{\Omega }_{m}}\alpha \biggl(\frac{x}{K_{m}}+y_{m} \biggr)\phi \bar{z}_{m}|\bar{z}_{m}|^{p-2}\,d\sigma \\ &\qquad{} +\frac{\lambda }{K_{m}^{p}}\int_{\tilde{\Omega}_{m}}|\bar{z}_{m}|^{p-2} \bar{z}_{m}\phi \,dx+o (1 ) \\ &\quad=\int_{\Omega}|\nabla v_{m}|^{p-2}\nabla v_{m}\nabla\bar{\phi}_{m} \,dx-\int_{\Omega}|v_{m}|^{{p^{*}}-2}v_{m} \bar{\phi}_{m} \,dx-\int_{\Omega}\mu\frac{|v_{m}|^{p-2}v_{m}\bar{\phi}_{m}}{|x|^{p}}\,dx \\ &\qquad{} +\int_{\partial\Omega} \alpha (x )\bar{\phi}_{m} v_{m}|v_{m}|^{p-2}\,d\sigma-\eta\int _{\Omega }\bar{\phi}_{m} |v_{m}|^{q-2}v_{m} \,dx+\lambda \int_{\Omega }|v_{m}|^{p-2} v_{m}\bar{\phi}_{m}\,dx+o (1 ) \\ &\quad=o (1 )\quad\mbox{as }m\to+\infty, \end{aligned}$$ where \(\bar{\phi}_{m} (x )=K_{m}^{\frac{n-p}{p}}\phi (K_{m} (x-y_{m} ) )\). (2) For the case when \(\lim_{m\rightarrow+\infty}K_{m} \operatorname{dist} (y_{m},\partial \Omega )=c<+\infty\), we claim that \(\bar{v}_{0}\) solves (1.7).
Indeed, fix a ball \(B (x,r )\) and a test function \(\phi\in C^{\infty}_{0} (B (x,r ) )\), and note that, for sufficiently large m, \(B (x,r )\cap {\mathbb{R}}^{n}_{+}\subset\tilde{\Omega}_{m}\). Then we have $$\begin{aligned} &\bigl\langle \phi, DF_{0}^{\infty}\bigl(\bar{v}_{0}, \mathbb{R}^{n}_{+} \bigr)\bigr\rangle \\ &\quad=\int_{B (x,r )\cap {\mathbb{R}}^{n}_{+}} |\nabla\bar{v}_{0}|^{p-2}\nabla \bar{v}_{0}\nabla\phi \,dx-\int_{B (x,r )\cap {\mathbb{R}}^{n}_{+}}|\bar{v}_{0}|^{{p^{*}}-2}\bar{v}_{0}\phi \,dx \\ &\quad=\int_{\tilde{\Omega}_{m}}|\nabla\bar{z}_{m}|^{p-2}\nabla \bar{z}_{m}\nabla \phi \,dx-\int_{\tilde{\Omega}_{m}}|\bar{z}_{m}|^{{p^{*}}-2}\bar{z}_{m}\phi \,dx-\int _{\tilde{\Omega}_{m}}\mu\frac{|\bar{z}_{m}|^{p-2} \bar{z}_{m}\phi}{|x+K_{m}y_{m}|^{p}}\,dx \\ &\qquad{} -\frac{\eta}{K_{m}^{n-\frac{n-p}{p}q}}\int_{\tilde{\Omega }_{m}}\phi| \bar{z}_{m}|^{q-2}\bar{z}_{m}\,dx+\frac{1}{K_{m}^{p-1}}\int _{\partial\tilde {\Omega }_{m}}\alpha \biggl(\frac{x}{K_{m}}+y_{m} \biggr)\phi \bar{z}_{m}|\bar{z}_{m}|^{p-2}\,d\sigma \\ &\qquad{} +\frac{\lambda }{K_{m}^{p}}\int_{\tilde{\Omega}_{m}}|\bar{z}_{m}|^{p-2} \bar{z}_{m}\phi \,dx+o (1 ) \\ &\quad=\int_{\Omega}|\nabla v_{m}|^{p-2}\nabla v_{m}\nabla\bar{\phi}_{m} \,dx-\int_{\Omega}|v_{m}|^{{p^{*}}-2}v_{m} \bar{\phi}_{m} \,dx-\int_{\Omega}\mu\frac{|v_{m}|^{p-2}v_{m}\bar{\phi}_{m}}{|x|^{p}}\,dx \\ &\qquad{} +\int_{\partial\Omega} \alpha (x )\bar{\phi}_{m} v_{m}|v_{m}|^{p-2}\,d\sigma-\eta\int _{\Omega }\bar{\phi}_{m} |v_{m}|^{q-2}v_{m}\,dx+ \lambda \int_{\Omega }|v_{m}|^{p-2}v_{m}\bar{\phi}_{m}\,dx+o (1 ) \\ &\quad=o (1 )\quad\mbox{as }m\to+\infty. \end{aligned}$$ Now set $$w_{m} (x )=v_{m} (x )-K_{m}^{\frac{n-p}{p}}\bar{v}_{0} \bigl(K_{m} (x-y_{m} ) \bigr). $$ For the case that \(\lim_{m\rightarrow+\infty}K_{m} \operatorname{dist} (y_{m}, \partial \Omega )=c<+\infty\), we have that \(\bar{v}_{0}\) is a weak solution of equation (1.7) and \(w_{m}\) is a Palais-Smale sequence of \(F_{\mu}(u)\) at level \(d-\frac{1}{2n}S^{\frac{n}{p}}\). For the case that \(\lim_{m\rightarrow+\infty}K_{m} \operatorname{dist} (y_{m}, \partial \Omega )=+\infty\), we have that \(\bar{v}_{0}\) is a weak solution of equation (1.5) and \(w_{m}\) is a Palais-Smale sequence of \(F_{\mu}(u)\) at level \(d-\frac{1}{n}S^{\frac{n}{p}}\). This concludes the proof of Lemma 2.1. □ Now we complete the proof of Theorem 1.1. By applying Lemma 2.1 and Lemmas A.4-A.6 recursively, and noting that each blow-up step subtracts a fixed positive amount of energy from the level d, the iteration must stop after a finite number of steps; moreover, the last Palais-Smale sequence must converge strongly to zero. This proves parts (i) and (ii) and finishes the proof of Theorem 1.1. □ The proofs of existence results In this section, we shall apply Theorem 1.1 and the mountain pass theorem [18] to obtain the existence of critical points for \(F_{\mu}(u)\) under different assumptions on the parameters μ, λ and on whether \(0\in\Omega\) or \(0\in \partial\Omega\). For convenience, we only consider the case \(\alpha (x )\equiv0\). Lemma 3.1 For \(\lambda >-\lambda _{1}\), \(F_{\mu}(u )\) satisfies the geometric structure of the mountain pass theorem. By Lemma A.3 in the Appendix, the proof of Lemma 3.1 can be completed easily. Define the mountain pass level $$c_{\mu}:=\inf_{\gamma\in\Gamma} \sup_{t \in [0,1 ]}F_{\mu}\bigl(\gamma (t ) \bigr), $$ where \(\Gamma=\{\gamma\in C ([0,1],W^{1,p} (\Omega ) ):\gamma (0 )=0,\gamma (1 )=\psi_{0}\in W^{1,p} (\Omega )\}\). Here \(\psi_{0}\) is chosen such that \(F_{\mu}(t\psi_{0} )\leq0\) for all \(t\geq1\). According to Theorem 1.1, we easily have the following.
Proposition 3.1 For the case that \(s\neq0\), the following two statements are true: (1) Suppose \(0\in\Omega\), \(\mu\in (0,\bar{\mu})\) and \(\lambda >-\lambda _{1}\). If $$ 0< c_{\mu}< \frac{p-s}{ (n-s )p}S_{\mu,s}^{\frac{n-s}{p-s}}, $$ then (1.1) has a positive solution satisfying \(F_{\mu}(u )\leq c_{\mu}\). (2) Suppose \(0\in\partial\Omega\), \(\mu\in (0,\bar{\mu})\) and \(\lambda >-\lambda _{1}\). If $$ 0< c_{\mu}< \frac{p-s}{2 (n-s )p}S_{\mu,s}^{\frac{n-s}{p-s}}, $$ then (1.1) has a positive solution satisfying \(F_{\mu}(u )\leq c_{\mu}\). Proposition 3.2 For the case that \(s=0\), the following two statements are true: (1) Suppose \(0\in\Omega\), \(\mu\in (0,\bar{\mu})\) and \(\lambda >-\lambda _{1}\). If $$ 0< c_{\mu}< \min\biggl\{ \frac{1}{2n}S^{\frac{n}{p}}, \frac{1}{n}S_{\mu}^{\frac{n}{p}}\biggr\} , $$ then (1.1) has a positive solution satisfying \(F_{\mu}(u )\leq c_{\mu}\). (2) Suppose \(0\in\partial\Omega\), \(\mu\in (0,\bar{\mu})\) and \(\lambda >-\lambda _{1}\). If $$ 0< c_{\mu}< \frac{1}{2n}S_{\mu}^{\frac{n}{p}}, $$ then (1.1) has a positive solution satisfying \(F_{\mu}(u )\leq c_{\mu}\). We now prove Theorem 1.2: by Proposition 3.1, we only need to prove that \(c_{\mu}<\frac{p-s}{ (n-s )p}S_{\mu,s}^{ (n-s )/ (p-s )}\). Let \(\varphi (x )\in C^{\infty}_{0} (\Omega )\) with \(\varphi (x )=1\) for \(|x|\leq R\) and \(\varphi (x )=0\) for \(|x|\geq2R\), where \(B (0,2R )\subset \Omega \). Set \(v_{\varepsilon }(x )=\varphi (x )V^{\varepsilon }_{\mu}(x )\); we only need to verify $$ \max_{t>0}F_{\mu}(tv_{\varepsilon })< \frac{p-s}{ (n-s )p}S_{\mu ,s}^{ (n-s )/ (p-s )}. $$ (3.5) It is easy to get the following estimates (Lemma 2.3 in [17]): $$\begin{aligned}& \int_{\Omega } \biggl(|\nabla v_{\varepsilon }|^{p}- \mu\frac{|v_{\varepsilon }|^{p}}{|x|^{p}} \biggr)\,dx=S_{\mu,s}^{ (n-s )/ (p-s )}+O \bigl( \varepsilon ^{b (\mu )p+p-n} \bigr); \end{aligned}$$ (3.6) $$\begin{aligned}& \int_{\Omega }\frac{|v_{\varepsilon }|^{p^{*} (s )}}{|x|^{s}}\,dx=S_{\mu ,s}^{ (n-s )/ (p-s )}+O \bigl(\varepsilon ^{b (\mu )p^{*} (s )-n+s} \bigr); \end{aligned}$$ (3.7) $$\begin{aligned}& \int_{\Omega }|v_{\varepsilon }|^{p}\,dx= \textstyle\begin{cases}O (\varepsilon ^{b (\mu )p+p-n} ), & p< \frac{n}{b (\mu )},\\ O (\varepsilon ^{ p}|\log \varepsilon | ),& p=\frac{n}{b (\mu )},\\ O (\varepsilon ^{p} ),&p>\frac{n}{b{ (\mu )}}; \end{cases}\displaystyle \end{aligned}$$ (3.8) $$\begin{aligned}& \int_{\Omega }|v_{\varepsilon }|^{q}\,dx= \textstyle\begin{cases}O (\varepsilon ^{ (b (\mu )+1-\frac {n}{p} )q} ), & q< \frac{n}{b (\mu )},\\ O (\varepsilon ^{n+ (1-\frac{n}{p} )q}|\log \varepsilon | ),& q=\frac{n}{b (\mu )},\\ O (\varepsilon ^{n+ (1-\frac{n}{p} )q} ),&q>\frac{n}{b{ (\mu )}}. \end{cases}\displaystyle \end{aligned}$$ (3.9) Since \(\max\{p,\frac{n}{b (\mu )},\frac{p (2n-b (\mu )p-p )}{n-p}\}< q<{p^{*} (s )}\), from (3.9) we have $$ \int_{\Omega }|v_{\varepsilon }|^{q} \,dx=O \bigl(\varepsilon ^{n+ (1-\frac{n}{p} )q} \bigr)\qquad\mbox{and}\qquad O \bigl(\varepsilon ^{p} \bigr)+O \bigl( \varepsilon ^{p}|\log \varepsilon | \bigr)+O \bigl(\varepsilon ^{b (\mu )p+p-n} \bigr)=o \bigl( \varepsilon ^{n+ (1-\frac{n}{p} )q} \bigr). $$ (3.10) Similarly to the proof of Lemma 8.1 in [20], let \(t_{\varepsilon }\) be the point at which \(\max_{t>0}F_{\mu}(tv_{\varepsilon })\) is attained; we claim that \(t_{\varepsilon }\) is uniformly bounded for \(\varepsilon >0\) small. In fact, we consider the function $$\begin{aligned} g(t)={}&F_{\mu}(tv_{\varepsilon })=\frac{t^{p}}{p}\int _{\Omega } \biggl(|\nabla v_{\varepsilon }|^{p}-\mu \frac{|v_{\varepsilon }|^{p}}{|x|^{p}} \biggr)\,dx-\frac{t^{{p^{*} (s )}}}{{p^{*} (s )}}\int_{\Omega }\frac{|v_{\varepsilon }|^{{p^{*} (s )}}}{|x|^{s}}\,dx\\ &{}+\frac{t^{p}}{p}\int_{\Omega } \lambda |v_{\varepsilon }|^{p} \,dx-\eta\frac{t^{q}}{q}\int _{\Omega }|v_{\varepsilon }|^{q}\,dx. \end{aligned}$$ Since \(\lim_{t\rightarrow+\infty} g(t)=-\infty\) and \(g(t)>0\) when t is close to 0, \(\max_{t>0}g(t)\) is attained at some \(t_{\varepsilon }>0\).
Then $$\begin{aligned} g'(t_{\varepsilon })={}&t_{\varepsilon }^{p-1} \int_{\Omega } \biggl(|\nabla v_{\varepsilon }|^{p}-\mu \frac{|v_{\varepsilon }|^{p}}{|x|^{p}}+\lambda|v_{\varepsilon }|^{p} \biggr)\,dx \\ &{}-{t_{\varepsilon }^{{p^{*} (s )-1}}} \int_{\Omega }\frac{|v_{\varepsilon }|^{{p^{*} (s )}}}{|x|^{s}}\,dx-\eta t_{\varepsilon }^{q-1} \int_{\Omega }|v_{\varepsilon }|^{q}\,dx=0. \end{aligned}$$ (3.11) Since \(\eta>0\), from (3.6)-(3.9) and (3.11), for ε sufficiently small, we have $$ t_{\varepsilon }^{{p^{*} (s )-p}}< \frac {\int_{\Omega } (|\nabla v_{\varepsilon }|^{p}-\mu\frac{|v_{\varepsilon }|^{p}}{|x|^{p}}+\lambda|v_{\varepsilon }|^{p} )\,dx}{\int_{\Omega }\frac{|v_{\varepsilon }|^{{p^{*} (s )}}}{|x|^{s}}\,dx}< 2. $$ (3.12) Thus from (3.8), (3.11), (3.12), \(p< q< p^{*}(s)\) and for ε sufficiently small, $$\begin{aligned} & \int_{\Omega} \biggl(| \nabla v_{\varepsilon }|^{p}-\mu\frac{|v_{\varepsilon }|^{p}}{|x|^{p}}+ \lambda|v_{\varepsilon }|^{p} \biggr)\,dx \\ &\quad\leq t_{\varepsilon }^{{p^{*} (s )-p}} \int_{\Omega }\frac{|v_{\varepsilon }|^{{p^{*} (s )}}}{|x|^{s}}\,dx+ 2^{\frac {q-p}{p^{*}(s)-p}}\eta\int _{\Omega}|v_{\varepsilon }|^{q}\,dx \\ &\quad\leq t_{\varepsilon }^{{p^{*} (s )-p}}\int_{\Omega }\frac{|v_{\varepsilon }|^{{p^{*} (s )}}}{|x|^{s}}\,dx+\frac{1}{2}\int_{\Omega } \biggl(| \nabla v_{\varepsilon }|^{p}-\mu\frac{|v_{\varepsilon }|^{p}}{|x|^{p}}+ \lambda|v_{\varepsilon }|^{p} \biggr)\,dx. \end{aligned}$$ (3.13) By (3.6)-(3.9), (3.13) and choosing ε small enough, we have $$ t_{\varepsilon }^{p^{*}(s)-p}\geq\frac{\frac {1}{2}\int_{\Omega } (|\nabla v_{\varepsilon }|^{p}-\mu\frac{|v_{\varepsilon }|^{p}}{|x|^{p}}+\lambda|v_{\varepsilon }|^{p} )\,dx}{\int_{\Omega }\frac{|v_{\varepsilon }|^{{p^{*} (s )}}}{|x|^{s}}\,dx}> \frac{1}{4}. $$ (3.14) Thus \(t_{\varepsilon }\) is uniformly bounded for \(\varepsilon >0\) small enough. Then from (3.6)-(3.10), (3.12) and (3.14), for ε sufficiently small, we have $$\begin{aligned} \max_{t>0}F_{\mu}(tv_{\varepsilon })={}&F_{\mu}(t_{\varepsilon }v_{\varepsilon }) \\ \leq{}& \max_{t>0} \biggl\{ \frac{t^{p}}{p}\int _{\Omega } \biggl(|\nabla v_{\varepsilon }|^{p}-\mu \frac{|v_{\varepsilon }|^{p}}{|x|^{p}} \biggr)\,dx-\frac{t^{{p^{*} (s )}}}{{p^{*} (s )}}\int_{\Omega }\frac{|v_{\varepsilon }|^{{p^{*} (s )}}}{|x|^{s}}\,dx \biggr\} \\ &{} +\frac{t_{\varepsilon }^{p}}{p}\int_{\Omega }\lambda |v_{\varepsilon }|^{p} \,dx-\eta\frac {t_{\varepsilon }^{q}}{q}\int_{\Omega }|v_{\varepsilon }|^{q}\,dx \\ ={}&\frac{p-s}{ (n-s )p}S_{\mu,s}^{\frac{n-s}{p-s}}+O \bigl(\varepsilon ^{b (\mu )p+p-n} \bigr)-O \bigl(\varepsilon ^{b (\mu )p^{*} (s )-n+s} \bigr) \\ &{} -\eta \textstyle\begin{cases}O (\varepsilon ^{ (b (\mu )+1-\frac {n}{p} )q} ), & q< \frac{n}{b (\mu )},\\ O (\varepsilon ^{n+ (1-\frac{n}{p} )q}|\log \varepsilon | ),& q=\frac{n}{b (\mu )},\\ O (\varepsilon ^{n+ (1-\frac{n}{p} )q} ),&q>\frac{n}{b{ (\mu )}} \end{cases}\displaystyle +\lambda \textstyle\begin{cases}O (\varepsilon ^{b (\mu )p+p-n} ), & p< \frac{n}{b (\mu )},\\ O (\varepsilon ^{ p}|\log \varepsilon | ),& p=\frac{n}{b (\mu )},\\ O (\varepsilon ^{p} ),&p>\frac{n}{b{ (\mu )}} \end{cases}\displaystyle \\ < {}&\frac{p-s}{ (n-s )p}S_{\mu,s}^{\frac{n-s}{p-s}} \quad\bigl(\mbox{by (3.10)}\bigr), \end{aligned}$$ which completes the proof of Theorem 1.2.
□ We now prove Theorem 1.3. Since \(S_{0}=S\), \(\lim_{\mu\to\bar{\mu}}S_{\mu}=0\) and \(S_{\mu}\) is continuous with respect to μ, there exists \(\mu^{*}\in (0,\bar{\mu} )\) such that \(\frac {1}{2}S^{\frac {n}{p}}\leq S_{\mu}^{\frac{n}{p}}\) for \(0<\mu\leq\mu^{*}\) and \(\frac{1}{2}S^{\frac{n}{p}}>S_{\mu}^{\frac{n}{p}}\) for \(\mu^{*}<\mu<\bar{\mu}\); this is the constant \(\mu^{*}\) of Theorem 1.3. (1) By Proposition 3.2 and the definition of \(\mu^{*}\), it suffices to prove $$ c_{\mu}< \frac{1}{2n}S^{\frac{n}{p}}. $$ (3.15) Let \(B (x,r )\) be a ball containing Ω with \(\partial B (x,r )\cap \partial \Omega \neq\emptyset\), and let \(x_{0}\in\partial B (x,r )\cap \partial \Omega \). Without loss of generality we may suppose that \(\Omega \subset\{x\in {\mathbb{R}}^{n}: x_{n}>x_{n}^{0}\}\), where \(x_{0}= (x_{1}^{0},x_{2}^{0},\ldots,x_{n}^{0} )\). Since \(\mu>0\), \(\eta>0\), we have $$\max_{t>0}F_{\mu}\bigl(tU_{x_{0}}^{\varepsilon } \bigr)\leq y_{\varepsilon }:= \max_{t>0} \biggl\{ \frac{t^{p}}{p}\int_{\Omega } \bigl(\bigl|\nabla U_{x_{0}}^{\varepsilon }\bigr|^{p}+\lambda \bigl|U_{x_{0}}^{\varepsilon }\bigr|^{p} \bigr)\,dx-\frac{t^{p^{*}}}{p^{*}}\int_{\Omega }\bigl|U_{x_{0}}^{\varepsilon }\bigr|^{p^{*}}\,dx \biggr\} , $$ and by Lemma 3.4 in [21], we have $$ y_{\varepsilon }< \frac{1}{2n}S^{n/p}. $$ (3.16) It follows from the definition of \(c_{\mu}\) and (3.16) that (3.15) holds. (2) For the case that \(\mu^{*}<\mu<\bar{\mu}\), let \(v_{\varepsilon }\) and \(t_{\varepsilon }\) be defined as in the proof of Theorem 1.2. Since \(\mu>0\), \(\eta>0\), we have $$\begin{aligned} \max_{t>0}F_{\mu}(tv_{\varepsilon })={}&F_{\mu}(t_{\varepsilon }v_{\varepsilon }) \\ \leq{}& \max_{t>0} \biggl\{ \frac{t^{p}}{p}\int _{\Omega } \biggl(|\nabla v_{\varepsilon }|^{p}-\mu \frac{|v_{\varepsilon }|^{p}}{|x|^{p}} \biggr)\,dx-\frac{t^{{p^{*}}}}{{p^{*}}}\int_{\Omega }|v_{\varepsilon }|^{{p^{*}}}\,dx \biggr\} \\ &{}+\frac{t_{\varepsilon }^{p}}{p}\int _{\Omega }\lambda |v_{\varepsilon }|^{p} \,dx-\eta \frac{t_{\varepsilon }^{q}}{q}\int_{\Omega }|v_{\varepsilon }|^{q}\,dx. \end{aligned}$$ By a similar argument to the proof of (3.5) in the special case \(s=0\), we have $$c_{\mu}< \frac{1}{n} S_{\mu}^{\frac{n}{p}}. $$ The proof of Theorem 1.3 is complete. □

References

[1] Caffarelli, L, Kohn, R, Nirenberg, L: First order interpolation inequalities with weights. Compos. Math. 53, 259-275 (1984)
[2] Garcia Azorero, JP, Peral Alonso, I: Hardy inequalities and some critical elliptic and parabolic problems. J. Differ. Equ. 144, 441-476 (1998). doi:10.1006/jdeq.1997.3375
[3] Adimurthi, YSL: Critical Sobolev exponent problem in \(\mathbb{R}^{N}\) (\(N\geq 4\)) with Neumann boundary condition. Proc. Indian Acad. Sci. Math. Sci. 100, 275-284 (1990). doi:10.1007/BF02837850
[4] Cao, DM, Peng, SJ: A global compactness result for singular elliptic problems involving critical Sobolev exponent. Proc. Am. Math. Soc. 131, 1857-1866 (2003). doi:10.1090/S0002-9939-02-06729-1
[5] Cao, DM, Peng, SJ: A note of the sign-changing solutions to elliptic problem with critical Sobolev and Hardy terms. J. Differ. Equ. 193, 424-434 (2003). doi:10.1016/S0022-0396(03)00118-9
[6] Chabrowski, J: On the nonlinear Neumann problem involving the critical Sobolev exponent and Hardy potential. Rev. Mat. Complut. 17, 195-227 (2004). doi:10.5209/rev-REMA.2004.v17.n1.16800
[7] Jannelli, E: The role played by space dimension in elliptic critical problems. J. Differ. Equ. 156, 407-426 (1999). doi:10.1006/jdeq.1998.3589
[8] Smets, D: Nonlinear Schrödinger equations with Hardy potential and critical nonlinearities. Trans. Am. Math. Soc. 357, 2909-2938 (2005). doi:10.1090/S0002-9947-04-03769-9
[9] Brezis, H, Lieb, E: A relation between pointwise convergence of functions and convergence of functionals. Proc. Am. Math. Soc. 88, 486-490 (1983). doi:10.2307/2044999
[10] Struwe, M: A global compactness result for elliptic boundary value problems involving limiting nonlinearities. Math. Z. 187, 511-517 (1984). doi:10.1007/BF01174186
[11] Pierrotti, D, Terracini, S: On a Neumann problem with critical exponent and critical nonlinearity on the boundary. Commun. Partial Differ. Equ. 20, 1155-1187 (1995). doi:10.1080/03605309508821128
[12] Deng, YB, Jin, LY, Peng, SJ: A Robin boundary problem with Hardy potential and critical nonlinearities. J. Anal. Math. 104, 125-154 (2008). doi:10.1007/s11854-008-0019-3
[13] Li, YY, Guo, QQ, Niu, PC: Global compactness results for quasilinear elliptic problems with combined critical Sobolev-Hardy terms. Nonlinear Anal. 74, 1445-1464 (2011). doi:10.1016/j.na.2010.10.018
[14] Deng, YB, Jin, LY, Peng, SJ: Solutions of Schrödinger equations with inverse square potential and critical nonlinearity. J. Differ. Equ. 253, 1376-1398 (2012). doi:10.1016/j.jde.2012.05.009
[15] Jin, LY, Deng, YB: A global compact result for a semilinear elliptic problem with Hardy potential and critical nonlinearities on \({\mathbb{R}}^{n}\). Sci. China Ser. A 53(2), 385-400 (2010). doi:10.1007/s11425-009-0075-x
[16] Aubin, T: Problèmes isopérimétriques et espaces de Sobolev. C. R. Acad. Sci. Paris Sér. A-B 280, 279-281 (1975)
[17] Kang, DS: On the quasilinear elliptic problem with a critical Hardy-Sobolev exponent and a Hardy term. Nonlinear Anal. 69, 2432-2444 (2008). doi:10.1016/j.na.2007.08.022
[18] Brezis, H, Nirenberg, L: Positive solutions of nonlinear elliptic equations involving critical exponents. Commun. Pure Appl. Math. 36, 437-477 (1983). doi:10.1002/cpa.3160360405
[19] Lindqvist, P: Notes on the p-Laplace equation. Report 102, University of Jyväskylä, Department of Mathematics and Statistics, Jyväskylä (2006)
[20] Ghoussoub, N, Yuan, C: Multiple solutions for quasi-linear PDEs involving the critical Sobolev and Hardy exponents. Trans. Am. Math. Soc. 352, 5703-5743 (2000). doi:10.1090/S0002-9947-00-02560-5
[21] Abreu, EAM, do Ó, JM, Medeiros, ES: Multiplicity of positive solutions for a class of quasilinear nonhomogeneous Neumann problems. Nonlinear Anal. 60, 1443-1471 (2005). doi:10.1016/j.na.2004.09.058
[22] Wang, XJ: Neumann problem for semilinear elliptic equations involving critical Sobolev exponents. J. Differ. Equ. 93, 283-310 (1991). doi:10.1016/0022-0396(91)90014-Z

Acknowledgements

The authors are grateful to the referees for many valuable suggestions to make the paper more readable. Research was supported by the Natural Science Foundation of China, No. 11101160 and No. 11271141.

Author information

Department of Applied Mathematics, South China Agricultural University, Guangzhou, 510642, P.R. China. Lingyu Jin and Lang Li. Correspondence to Lingyu Jin. All authors typed, read and approved the final manuscript.
Appendix

In this appendix, we give some lemmas and detailed proofs for the convenience of the reader. In the following, assume that \(\Omega \subset {\mathbb{R}}^{n}\) is a bounded domain and \(\partial\Omega\in C^{1}\). Lemma A.1 Let $$ \lambda_{1}=\inf \biggl\{ \int_{\Omega } \biggl(|\nabla u|^{p}-\mu\frac{|u|^{p}}{|x|^{p}} \biggr)\,dx +\int _{\partial{ \Omega }}\alpha (x )|u|^{p}\,d\sigma;\int _{\Omega }|u|^{p}\,dx=1, u\in W^{1,p} (\Omega ) \biggr\} . $$ (A.1) Then \(\lambda_{1}\) is attained. Let \(\{u_{m}\}\) be a minimizing sequence for \(\lambda_{1}\), that is, $$\lim_{m\rightarrow+\infty}\int_{\Omega} \biggl({|}\nabla {{u}_{m}} {{|}^{p}}-\mu\frac{|{{u}_{m}}{{|}^{p}}}{|x{{|}^{p}}} \biggr)\,dx+\int _{\partial\Omega}{\alpha}(x)|{{u}_{m}} {{|}^{p}}\,dx= {{\lambda }_{1}}, \quad\int_{\Omega}{|} {{u}_{m}} {{|}^{p}}\,dx=1. $$ By the Sobolev-Hardy inequality, \(\mu< \bar{\mu}\) and \(\alpha (x)\ge0\), we have $$\int_{\Omega} \biggl({|}\nabla{{u}_{m}} {{|}^{p}}-\mu\frac {|{{u}_{m}}{{|}^{p}}}{|x{{|}^{p}}} \biggr)\,dx+\int_{\partial\Omega }{ \alpha}(x)|{{u}_{m}} {{|}^{p}}\,dx\ge\biggl(1- \frac{\mu}{{\bar{\mu }}}\biggr)\int_{\Omega}{|}\nabla{{u}_{m}} {{|}^{p}}\,dx\ge0. $$ Then \(u_{m}\) is bounded in \({{W}^{1,p}}(\Omega)\), and there exists \(u\in W^{1,p}(\Omega)\) such that, up to a subsequence still denoted by \(u_{m}\), $${{u}_{m}}\to u\mbox{ weakly in } {{W}^{1,p}}(\Omega)\mbox{ as } m\to +\infty. $$ By the Sobolev imbedding theorem we have $$\begin{aligned}& {{u}_{m}}\to u \mbox{ in } {{L}^{p}}(\Omega)\mbox{ and } {{L}^{p}}(\partial\Omega)\mbox{ as } m\to+\infty,\\& {{u}_{m}}\to u\mbox{ a.e. in }\Omega\mbox{ as } m\to+\infty. \end{aligned}$$ Thus, by weak lower semicontinuity and the Fatou lemma, we have $$\begin{aligned} &{{\int}_{\Omega}} \biggl( |\nabla u{{|}^{p}}-\mu \frac {|u{{|}^{p}}}{|x{{|}^{p}}} \biggr)\,dx+\int_{\partial\Omega}{\alpha(x) }|u{{|}^{p}}\,dx \\ &\quad\le\liminf_{m\to+\infty} \biggl[{{ \int}_{\Omega}} \biggl( |\nabla{{u}_{m}} {{|}^{p}}- \mu\frac {|{{u}_{m}}{{|}^{p}}}{|x{{|}^{p}}} \biggr)\,dx+\int_{\partial\Omega }{ \alpha(x)}|{{u}_{m}} {{|}^{p}}\,dx \biggr]. \end{aligned}$$ (A.2) Since \(\lim_{m\rightarrow\infty}\int_{\Omega }{|}{{u}_{m}}{{|}^{p}}\,dx=\int_{\Omega}{|}u{{|}^{p}}\,dx\), from (A.1) and (A.2) the proof of the lemma is complete. □ Lemma A.2 For any \(\delta>0\), there exists a constant \(C= C (\delta )>0\) such that $$\int_{\Omega }\frac{|u|^{p}}{|x|^{p}}\,dx\leq \biggl( \frac{1}{\bar{\mu}}+\delta \biggr)\int_{\Omega }|\nabla u|^{p}\,dx+ C (\delta )\int_{\Omega }|u|^{p} \,dx $$ for \(u\in W^{1,p} (\Omega )\). The proof is similar to that in [12]; here, for convenience, we give the details. For \(y\in {\mathbb{R}}^{n}\), denote the unit ball centered at y by \(B_{1} (y )\) and the domain $$D=B_{1} (y )\cap\bigl\{ x_{n}>h \bigl(x' \bigr) \bigr\} , $$ where \(h (x' )\) is a \(C^{1}\) function defined in \(\{x'\in {\mathbb{R}}^{n-1}: |x'-y'|<1\}\) with \(y_{n}=h (y_{1},\ldots,y_{n-1} )\), ∇h vanishing at \(y'= (y_{1},\ldots,y_{n-1} )\), and \(h\geq0\). Employing arguments similar to those in Lemma 2.1 of [22], it can be proved that if \(u\in W^{1,p} (D )\) with \(\operatorname{supp} u\subset B_{1} (y )\), then \(\forall \varepsilon >0\), there exists a constant \(r>0\) depending on ε such that $$ \int_{D}\frac{|u|^{p}}{|x|^{p}}\,dx\leq \biggl( \frac{1}{\bar{\mu}}+\varepsilon \biggr)\int_{D} |\nabla u|^{p}\,dx $$ (A.3) provided \(|\nabla h|\leq r\).
In fact, if \(h\equiv0\),
$$ \int_{D}|\nabla u|^{p}\,dx=\frac{1}{2}\int_{B_{1}(y)}|\nabla u|^{p}\,dx\geq\frac{\bar{\mu}}{2}\int_{B_{1}(y)} \frac {|u|^{p}}{|x|^{p}}\,dx=\bar{\mu}\int_{D}\frac{|u|^{p}}{|x|^{p}}\,dx. $$
If \(h\geq0\), \(h\not\equiv0\), make the coordinate transformation
$$ z'=x', \qquad{{z}_{n}}={{x}_{n}}-h \bigl(x'\bigr), $$
which straightens the bottom of D, and write \(z=F(x)\); then
$$\begin{aligned}& {{\partial}_{{{z}_{i}}}}u(x)={{\partial}_{{{x}_{i}}}}u(x)+{{\partial }_{{{x}_{n}}}}u(x){{\partial}_{{{x}_{i}}}}h\bigl(x'\bigr), \quad i=1,2,\ldots,n-1,\\& \bigl|{{\partial}_{{{z}_{i}}}}u(x)\bigr|^{2}=\bigl|{{\partial }_{{{x}_{i}}}}u(x)\bigr|^{2}+\bigl|{{\partial}_{{{x}_{n}}}}u(x) \partial _{x_{i}}h\bigl(x'\bigr)\bigr|^{2}+2\bigl|{{ \partial}_{{{x}_{n}}}}u(x)\partial _{x_{i}}u(x){{\partial}_{{{x}_{i}}}}h \bigl(x'\bigr)\bigr|,\\& \bigl|{\nabla_{z}}u(x)\bigr|^{2}\le\bigl|\nabla_{x}u(x)\bigr|^{2}+2| \nabla h|^{2}\bigl|\nabla_{x}u(x)\bigr|^{2},\\& |z|\leq|x|. \end{aligned}$$
Denote \(D_{1}=F(D)\); then we have
$$\begin{aligned} \int_{D}{|}\nabla u{{|}^{2}}\,dx&\ge\bigl(1-2{{|\nabla h| }^{2}}\bigr)\int _{D_{1}}{|} {\nabla_{z}}u{{|}^{2}}\,dz \\ & \ge\bigl(1-2{{|\nabla h| }^{2}}\bigr)\bar{\mu}\int _{D_{1}}{\frac {|u{{|}^{p}}}{|z{{|}^{p}}}}\,dz\geq\bigl(1-2{{|\nabla h| }^{2}}\bigr)\bar{\mu }\int_{D}{ \frac{|u{{|}^{p}}}{|x{{|}^{p}}}}\,dx. \end{aligned}$$
Then (A.3) is obtained provided \(|\nabla h|\leq r\).
Now let ε be a small positive constant to be determined later, and let \((\varphi_{k} )_{k=1}^{m}\) be a partition of unity on Ω̅ with \(\operatorname{diam} (\operatorname{supp} \varphi_{k} )\leq r\) for each k, where \(\operatorname{diam} (\operatorname{supp} \varphi_{k} )\) is the diameter of the domain \(\operatorname{supp} \varphi_{k}\). From (A.3), we see that
$$\int_{\Omega }\frac{|\varphi_{k} u|^{p}}{|x|^{p}}\,dx\leq \biggl( \frac{1}{\bar{\mu}}+\varepsilon \biggr) \int_{\Omega }\bigl|\nabla ( \varphi_{k}u )\bigr|^{p}\,dx,\quad \forall 1\leq k\leq m,u\in W^{1,p} (\Omega ) $$
for sufficiently small r. Hence
$$\begin{aligned} \int_{\Omega }\frac{|u|^{p}}{|x|^{p}}\,dx \leq&\int _{\Omega } \sum_{k=1}^{m} \varphi_{k}\frac{|u|^{p}}{|x|^{p}}\,dx \leq \biggl(\frac{1}{\bar{\mu}}+\varepsilon \biggr)\sum_{k=1}^{m} \int _{\Omega } \bigl|\nabla \bigl(\varphi_{k}^{\frac{1}{p}}u \bigr)\bigr|^{p}\,dx \\ \leq& \biggl(\frac{1}{\bar{\mu}}+\varepsilon \biggr)\sum_{k=1}^{m} \int_{\Omega } \varphi_{k} \Biggl(|\nabla u|^{p}+C \sum_{j=1}^{p}|\nabla u|^{p-j}|u|^{j}+C|u|^{p} \Biggr)\,dx \\ \leq& \biggl(\frac{1}{\bar{\mu}}+\varepsilon \biggr) \biggl[ (1+\varepsilon )\int _{\Omega } |\nabla u|^{p}\,dx+C (\varepsilon )\int _{\Omega }|u|^{p}\,dx \biggr]. \end{aligned}$$
As a consequence, by choosing ε appropriately, we obtain the desired result. □

Lemma A.3
For \(\lambda >-\lambda _{1}\), the norm
$$\|u\|= \biggl[\int_{\Omega } \biggl(|\nabla u|^{p}-\mu \frac{ |u|^{p}}{|x|^{p}}+\lambda |u|^{p} \biggr)\,dx+\int_{\partial \Omega } \alpha (x )|u|^{p}\,d\sigma \biggr]^{\frac{1}{p}} $$
is equivalent to \(\|\cdot\|_{W^{1,p} (\Omega )}\).

Proof
For simplicity, we suppose \(\alpha (x )\equiv0\). We only consider the case \(0<\mu<\bar{\mu}\) since the case \(\mu\leq0\) is similar. First we have
$$\int_{\Omega }\biggl(|\nabla u|^{p}-\mu \frac{|u|^{p}}{|x|^{p}}+\lambda |u|^{p} \biggr)\,dx\geq (\lambda +\lambda _{1} )\int _{\Omega }|u|^{p}\,dx,\quad \forall u\in W^{1,p} (\Omega ).
$$
By Lemma A.2, we deduce that for all \(u\in W^{1,p} (\Omega )\),
$$\begin{aligned} \frac{ C (\delta )\mu}{ \lambda +\lambda _{1}}\int_{\Omega }\biggl(|\nabla u|^{p}-\mu\frac{|u|^{p}}{|x|^{p}}+\lambda |u|^{p} \biggr)\,dx&\geq C ( \delta )\mu\int_{\Omega }|u|^{p}\,dx \\ &\geq \mu\int_{\Omega }\frac{|u|^{p}}{|x|^{p}}\,dx-\mu \biggl( \frac{1}{\bar{\mu }}+\delta \biggr)\int_{\Omega }|\nabla u|^{p}\,dx. \end{aligned}$$
Hence, for \(\delta>0\) small enough,
$$\begin{aligned} &\biggl(1+\frac{ C (\delta )\mu}{ \lambda +\lambda _{1}} \biggr)\int_{\Omega }\biggl(| \nabla u|^{p}-\mu\frac{|u|^{p}}{|x|^{p}}+\lambda |u|^{p} \biggr)\,dx\\ &\quad\geq \biggl[1-\mu \biggl(\frac{1}{\bar{\mu}}+\delta \biggr) \biggr]\int _{\Omega }|\nabla u|^{p}\,dx+\lambda \int_{\Omega }|u|^{p}\,dx \\ &\quad\geq c\int_{\Omega }|\nabla u|^{p}\,dx+c\int _{\Omega }|u|^{p}\,dx, \end{aligned}$$
and thus
$$\|u\|\geq c\|u\|_{W^{1,p} (\Omega )} $$
for some \(c>0\). On the other hand, it is easy to check that
$$\|u\|\leq C\|u\|_{W^{1,p} (\Omega )} $$
for some \(C>0\). As a result, we complete the proof. □

Lemma A.4
Let \(\{u_{m}\}_{m}\) be a Palais-Smale sequence for \(F_{\mu}(u )\) at level \(d\in\mathbb{R}\). Then \(\{u_{m}\}_{m}\) is bounded in \(W^{1,p} (\Omega )\). Moreover, every Palais-Smale sequence for \(F_{\mu}(u)\) at level zero converges strongly to zero.

Proof
Since \(\{u_{m}\}_{m}\) is a Palais-Smale sequence for \(F_{\mu}(u )\) at level \(d\in\mathbb{R}\), we have
$$\begin{aligned} d+o (1 )&=F_{\mu}(u_{m} )-\frac{1}{p}\bigl\langle F'_{\mu}(u_{m} ), u_{m}\bigr\rangle \\ &= \biggl(\frac{1}{p}-\frac{1}{{p^{*} (s )}} \biggr)\int_{\Omega }\frac {|u_{m}|^{{p^{*} (s )}}}{|x|^{s}}\,dx + \biggl(\frac{1}{p}-\frac{1}{q} \biggr)\int _{\Omega }|u_{m}|^{q}\,dx. \end{aligned}$$ (A.7)
Hence
$$\int_{\Omega }\frac{|u_{m}|^{{p^{*} (s )}}}{|x|^{s}}\,dx\leq C,\qquad \int _{\Omega }|u_{m}|^{q}\,dx\leq C, $$
since \(q, {p^{*} (s )}>p\). As a result, by Lemma A.3,
$$ \|u_{m}\|^{p}_{W^{1,p} (\Omega )}\leq c \|u_{m}\|^{p}=pcd+\frac{pc}{{p^{*} (s )}}\int_{\Omega }\frac {|u_{m}|^{{p^{*} (s )}}}{|x|^{s}}\,dx +\frac{pc}{q}\int_{\Omega }|u_{m}|^{q}\,dx+o (1 )\leq C. $$ (A.8)
Take \(d=0\); from (A.7) it then follows that
$$\int_{\Omega}{\frac{|{{u}_{m}}{{|}^{{{p}^{*}} ( s )}}}{|x{{|}^{s}}}}\,dx\to0,\qquad\int _{\Omega}{|} {{u}_{m}} {{|}^{q}}\,dx\to0, \quad\mbox{as }m\rightarrow+\infty, $$
and from (A.8) we have \(\|{{u}_{m}}\|_{{{W}^{1,p}} ( \Omega )}^{p}\to0\). The proof of the lemma is complete. □

Let \(\{u_{m} \}_{m}\) be a Palais-Smale sequence of \(F_{\mu}(u)\); we shall assume that, up to a subsequence,
$$ u_{m}\rightarrow u_{0} \mbox{ weakly in } W^{1,p} (\Omega )\mbox{ as }m\to+\infty. $$ (A.9)
Then we have the following lemma.

Lemma A.5
\(DF_{\mu}(u_{0} )=0\).

Proof
We have to prove that \(\langle v, DF_{\mu}(u_{0} )\rangle=0\) for every \(v\in W^{1,p} (\Omega )\). Since \(\partial\Omega\in C^{1}\), it is enough to prove that the above relation holds for every restriction to Ω of a \(C^{\infty}_{0} (\mathbb{R}^{n} )\) function ϕ.
From (A.9), the Sobolev imbedding theorem and Lemma 3.2(2) in [20], we have, as \(m\to+\infty\),
$$\begin{aligned}& \nabla u_{m}\rightarrow\nabla u_{0} \mbox{ weakly in }L^{p} (\Omega ), \\& u_{m}\rightarrow u_{0} \mbox{ in } L^{{p^{*} (s )}-1} \bigl( \Omega ,|x|^{-s} \bigr), \\& u_{m}\rightarrow u_{0}\mbox{ in } L^{p-1} \bigl( \Omega,|x|^{-p} \bigr), \\& u_{m}\rightarrow u_{0} \mbox{ in } L^{p-1} ( \partial\Omega ), \\& u_{m}\rightarrow u_{0} \mbox{ in } L^{q} (\Omega ) \mbox{ for } 1< q< p^{*}. \end{aligned}$$
Hence
$$\begin{aligned} \bigl\langle \phi, DF_{\mu}(u_{m} )\bigr\rangle ={}&\int _{\Omega}\biggl(|\nabla u_{m}|^{p-2}\nabla u_{m}\nabla \phi-\mu\frac{u_{m}|u_{m}|^{p-2}\phi}{|x|^{p}} \biggr)\,dx-\int _{\Omega }\frac {u_{m}|u_{m}|^{{p^{*} (s )}-2}\phi}{|x|^{s}}\,dx \\ &{}-\eta\int_{\Omega }|u_{m}|^{q-2}u_{m} \phi \,dx+\lambda \int_{\Omega} |u_{m}|^{p-2}u_{m} \phi \,dx+\int_{\partial\Omega}\alpha (x )|u_{m}|^{p-2}u_{m} \phi \,d\sigma \\ \rightarrow{}&\bigl\langle \phi, DF_{\mu}(u_{0} )\bigr\rangle \quad\mbox{as } m\to+\infty, \end{aligned}$$
so that
$$0= \lim_{m\rightarrow+\infty}\bigl\langle \phi, DF_{\mu}(u_{m} )\bigr\rangle =\bigl\langle \phi, DF_{\mu}(u_{0} )\bigr\rangle . $$
□

Put \(y_{m}=u_{m}-u_{0}\); then \(y_{m}\rightarrow0\) weakly in \(W^{1,p} (\Omega )\), and we have the following lemma.

Lemma A.6
\(\{y_{m}\}_{m}\) is a Palais-Smale sequence for \(F_{\mu}(u)\) at level \(d_{0}=d-F_{\mu}(u_{0} )\).

Proof
Since \(u_{m}\) is bounded in \(W^{1,p} (\Omega ) \), by the Sobolev-Hardy inequality the integrals \(\int_{\Omega}\frac{| u_{m}|^{p^{*}(s)}}{|x|^{s}}\,dx\) and \(\int_{\Omega}\frac{| u_{m}|^{p}}{|x|^{p}}\,dx \) are bounded. That is, \(u_{m}\) is bounded in \(L^{p^{*} (s )} (\Omega , |x|^{-s} )\) and \(L^{p} (\Omega, |x|^{-p} )\). Moreover, as \(m\rightarrow+\infty\),
$$\begin{aligned}& u_{m}\rightarrow u_{0} \mbox{ weakly in } W^{1,p} ( \Omega ), \\& u_{m}\rightarrow u_{0} \mbox{ in }L^{p} (\Omega ), \\& u_{m}\rightarrow u_{0} \mbox{ a.e. in }\Omega. \end{aligned}$$
By the Brezis and Lieb lemma [9] we obtain, as \(m\rightarrow+\infty\),
$$\begin{aligned}& \int_{\Omega}\frac{|y_{m}|^{{p^{*} (s )}}}{|x|^{s}}\,dx=\int _{\Omega}\frac{|u_{m}|^{p^{*}(s)}}{|x|^{s}}\,dx-\int_{\Omega}\frac{|u_{0}|^{{p^{*} (s )}}}{|x|^{s}}\,dx+o (1 ), \end{aligned}$$ (A.10)
$$\begin{aligned}& \int_{\Omega}\frac{| y_{m}|^{p}}{|x|^{p}}\,dx=\int _{\Omega}\frac{| u_{m}|^{p}}{|x|^{p}}\,dx-\int_{\Omega}\frac{| u_{0}|^{p}}{|x|^{p}}\,dx+o (1 ). \end{aligned}$$ (A.11)
Similarly,
$$\begin{aligned}& \int_{\Omega} |\nabla y_{m}|^{p}\,dx= \int_{\Omega}|\nabla u_{m}|^{p}\,dx-\int _{\Omega}|\nabla u_{0}|^{p}\,dx+o (1 ), \end{aligned}$$ (A.12)
$$\begin{aligned}& \int_{\Omega}|y_{m}|^{q}\,dx=\int _{\Omega}|u_{m}|^{q}\,dx-\int _{\Omega}|u_{0}|^{q}\,dx+o (1 ),\quad \forall p \leq q\leq p^{*} (s ), \end{aligned}$$ (A.13)
$$\begin{aligned}& \int_{\partial\Omega} |y_{m}|^{p}\,d\sigma =\int_{\partial\Omega} |u_{m}|^{p}\,d\sigma-\int _{\partial\Omega}|u_{0}|^{p}\,d\sigma+o (1 ). \end{aligned}$$ (A.14)
From (A.10)-(A.14), we obtain \(F_{\mu}(y_{m} )=F_{\mu}(u_{m} )-F_{\mu} (u_{0} )+o (1 )=d-F_{\mu} (u_{0} )+o (1 )\).
On the other hand, for any test function \(v\in W^{1,p} (\Omega )\),
$$\begin{aligned}& \int_{\Omega}\frac{|y_{m}|^{{p^{*} (s )}-2}y_{m} v}{|x|^{s}}\,dx=\int_{\Omega}\frac{|u_{m}|^{{p^{*} (s )}-2}u_{m} v}{|x|^{s}}\,dx-\int_{\Omega}\frac{ |u_{0}|^{{p^{*} (s )}-2}u_{0} v}{|x|^{s}}\,dx+o (1 ), \\& \int_{\Omega}|y_{m}|^{q-2}y_{m} v\,dx=\int_{\Omega}|u_{m}|^{q-2}u_{m} v\,dx-\int_{\Omega}|u_{0}|^{q-2}u_{0} v\,dx+o (1 ),\quad\forall p\leq q< p^{*} (s ), \\& \int_{\partial\Omega} |y_{m}|^{p-2}y_{m} v\,d \sigma=\int_{\partial \Omega} |u_{m}|^{p-2}u_{m} v\,d\sigma-\int_{\partial\Omega} |u_{0}|^{p-2}u_{0} v\,d\sigma+o (1 ), \\& \int_{\Omega}\frac{|y_{m}|^{p-2}y_{m} v}{|x|^{p}}\,dx=\int_{\Omega}\frac {|u_{m}|^{p-2}u_{m} v}{|x|^{p}}\,dx-\int_{\Omega}\frac{|u_{0}|^{p-2}u_{0} v}{|x|^{p}}\,dx+o (1 ), \\& \int_{\Omega}|\nabla y_{m}|^{p-2}\nabla y_{m}\nabla v\,dx=\int_{\Omega}|\nabla u_{m}|^{p-2} \nabla u_{m}\nabla v\,dx-\int _{\Omega}|\nabla u_{0}|^{p-2}\nabla u_{0}\nabla v\,dx+o (1 ), \end{aligned}$$
that is, \(\langle v, DF_{\mu}(y_{m} )\rangle=\langle v, DF_{\mu}(u_{m} )\rangle-\langle v, DF_{\mu}(u_{0} )\rangle =o (1 )\). Thus we complete the proof of the lemma. □

Jin, L., Li, L.: A nonlinear p-Laplace equation with critical Sobolev-Hardy exponents and Robin boundary conditions. Bound. Value Probl. 2015, 185 (2015). https://doi.org/10.1186/s13661-015-0446-x
Keywords: positive solution; Sobolev-Hardy exponent
Asian-Australasian Journal of Animal Sciences (아세아태평양축산학회지)
Asian Australasian Association of Animal Production Societies (아세아태평양축산학회)

Asian-Australasian Journal of Animal Sciences (AJAS) aims to publish original and cutting-edge research results and reviews on animal-related aspects of life sciences. Emphasis will be given to studies involving farm animals such as cattle, buffaloes, sheep, goats, pigs, horses and poultry, but studies with other animal species can be considered for publication if the topics are related to fundamental aspects of farm animals. Also, studies to improve human health using animal models can be publishable. AJAS will encompass all areas of animal production and fundamental aspects of animal sciences: breeding and genetics, reproduction and physiology, nutrition, meat and milk science, biotechnology, behavior, welfare, health, and livestock farming system. AJAS is sub-divided into 10 sections.
- Animal Breeding and Genetics: Quantitative and molecular genetics, genomics, genetic evaluation, evolution of domestic animals, and bioinformatics
- Animal Reproduction and Physiology: Physiology of reproduction, development, growth, lactation and exercise, and gamete biology
- Ruminant Nutrition and Forage Utilization: Rumen microbiology and function, ruminant nutrition, physiology and metabolism, and forage utilization
- Swine Nutrition and Feed Technology: Swine nutrition and physiology, evaluation of feeds and feed additives and feed processing technology
- Poultry and Laboratory Animal Nutrition: Nutrition and physiology of poultry and other non-ruminant animals
- Animal Products: Milk and meat science, muscle biology, product composition, food safety, food security and functional foods
http://submit.ajas.info

CONSIDERATIONS IN THE DEVELOPMENT OF FUTURE PIG BREEDING PROGRAM - REVIEW -
Haley, C.S. 305 https://doi.org/10.5713/ajas.1991.305
Pig breeding programs have been very successful in the improvement of animals by the simple expedient of focusing on a few traits of economic importance, particularly growth efficiency and leanness. Further reductions in leanness may become more difficult to achieve, due to reduced genetic variation, and less desirable, due to adverse correlated effects on meat and eating quality. Best linear unbiased prediction (BLUP) of breeding values makes possible the incorporation of data from many sources and increases the value of including traits such as sow performance in the breeding objective. Advances in technology, such as electronic animal identification, electronic feeders, improved ultrasonic scanners and automated data capture at slaughter houses, increase the number of sources of information that can be included in breeding value predictions. Breeding program structures will evolve to reflect these changes and a common structure is likely to be several or many breeding farms genetically linked by A.I., with data collected on a number of traits from many sources and integrated into a single breeding value prediction using BLUP. Future developments will include the production of a porcine gene map which may make it possible to identify genes controlling economically valuable traits, such as those for litter size in the Meishan, and introgress them into nucleus populations.
Genes identified from the gene map or from other sources will provide insight into the genetic basis of performance and may provide the raw material from which transgenic programs will channel additional genetic variance into nucleus populations undergoing selection.

INCIDENCE OF LACTIC ACID BACTERIA ISOLATED FROM INDIGENOUS DAHI
Masud, T.; Sultana, K.; Shah, M.A. 329
Fifty samples of indigenous dahi were collected randomly from the local market of Rawalpindi/Islamabad to determine the incidence of lactic acid bacteria. The micro-organisms isolated were Lactobacillus bulgaricus (86%), Streptococcus thermophilus (80%), Streptococcus lactis (74%), Lactobacillus helveticus (34%), Streptococcus cremoris (30%), Lactobacillus casei (20%) and Lactobacillus acidophilus (14%), respectively. The results of the present study revealed that indigenous dahi contains mixtures of lactic acid bacteria and thus the quality of dahi may vary with the type of starter culture used for inoculation.

EFFECT OF WINTER SUPPLEMENTATION ON THE PERFORMANCE OF BALOCHI EWES GRAZING NATIVE RANGELANDS IN HIGHLAND BALOCHISTAN
Rafique, S.; Munir, M.; Sultani, M.I.; Rehman, A. 333
Eighty-two ewes of the Balochi breed, two to four years of age, were used in a completely randomized design to study the effect of winter supplementation on their performance in the Kalat area of Balochistan and were randomly divided into two groups of 40 and 42 animals. Two treatments (T1 and T2) were studied: 250 gm/animal/day of a 50:50 mixture of cottonseed cake and barley grain fed from Oct. 20 to Dec. 18, 1988 plus grazing, and 500 gm/animal/day of the same feed mixture fed from Oct. 9 to Dec. 18, 1988 in addition to grazing. Lucerne hay and wheat straw in a 50:50 ratio were provided to all the ewes for a period of one month from Jan. 6, 1989 @ 320 gm/animal/day to sustain them in severe winter. The same feeding levels were again fed to the same ewe groups from Mar. 1 to May 27, 1989. Three breeding rams stayed with the flock from Nov. 1 to Dec. 13, 1988. Lambing took place from Apr. 2 to May 12, 1989. Conception, lambing and mortality percentages were found to differ (P<.05) between T1 and T2 (12.5 vs 14.8 kg). The ewes on T2 maintained higher body weights throughout winter than the ewes on T1. The results are suggestive of improvement in conception rate with winter supplementation (flushing) and a decrease in ewe mortality. Late-gestation and early-lactation supplemental feeding of ewes results in increases in weaning weights of their lambs.

EFFECT OF PROBIOTIC SUPPLEMENTATION ON GROWTH RATE, RUMEN METABOLISM, AND NUTRIENT DIGESTIBILITY IN HOLSTEIN HEIFER CALVES
Windschitl, P.M. 341
Sixteen Holstein heifer calves were used in a 112-day trial to study the effects of probiotic supplementation on growth performance and rumen metabolism. Calves were divided into four groups of four calves each, with two groups receiving the probiotic supplement and two groups serving as controls. Calves were limited to 1.6 kg dry matter of a corn-barley based grain mix per day. Long-stem bromegrass hay was fed as forage the first 56 days and bromegrass silage the last 56 days of the trial. Probiotic (28 g/d/calf) was fed along with the grain mix twice daily. Data were analyzed for the entire trial and also for the separate hay and silage feeding periods. Total weight gain and average daily gain were not affected (p>.05) by probiotic supplementation.
Dry matter intake was lower (p<.05) and feed efficiency (kg feed/kg weight gain) was improved slightly during the hay feeding period for the probiotic-supplemented calves. Wither height gain was greater (p<.05) during the hay period and lower (p<.05) during the silage period for probiotic-supplemented calves. Heart girth gain was improved (p<.07) by probiotic supplementation, particularly during the hay feeding period (p<.05). Total rumen volatile fatty acid (VFA) concentration was higher (p<.05) with the probiotic-supplemented calves. Molar proportions of individual VFA were not affected (p>.05). Rumen ammonia-N and plasma urea-N concentrations were lower (p<.05) for probiotic-supplemented calves during the hay feeding period. Total tract nutrient digestibility was not affected (p>.05). Some improvements in animal performance and changes in rumen and blood metabolites were observed when calves were supplemented with probiotic. Effects due to probiotic supplementation were most pronounced during the hay feeding period.

EFFECTS OF LYSINE AND SODIUM ON THE GROWTH PERFORMANCE, BONE PARAMETER, SERUM COMPOSITION AND LYSINE-ARGININE ANTAGONISM IN BROILER CHICKS
Yun, C.H.; Han, I.K.; Choi, Y.J.; Park, B.C.; Lee, H.S. 353
An experiment with a completely randomized design was performed to investigate the effects of lysine and supplemented sodium on growth performance, nutrient utilization, acid-base balance and lysine-arginine antagonism in broiler chicks. The experiment was carried out with 3 levels of dietary lysine (0.6, 1.2 and 1.8%) and 3 levels of sodium (0.4, 0.8 and 1.2%) for an experimental period of 7 weeks. Body weight gain of the 1.2% lysine group was significantly (p<0.01) higher than that of the low or high lysine group. The highest feed consumption was obtained at the 1.2% lysine and 0.4% sodium supplemented level (ML-1.2) and the lowest at LL-1.2. The best feed efficiency was obtained at the ML-0.8 level and the worst at the LL-1.2 level. Mortalities of the high (1.8%) and low (0.6%) lysine groups were significantly (p<0.05) higher than that of the medium lysine (1.2%) group. Among the sodium levels, the mortality at the 1.2% sodium supplemented level differed significantly (p<0.01) by the levels of dietary lysine. Lysine-arginine antagonism was observed in the high lysine diet. Among the lysine levels, the lowest bone weight and length were shown in the low lysine group. Interactions between lysine and sodium were significantly (p<0.05) shown in femur weight. The levels of sodium and lysine significantly (p<0.01) affected the utilization of nitrogen, ether extract, total carbohydrate and energy.

EFFECT OF DIFFERENT DIETARY PROTEIN AND ENERGY LEVELS ON THE PERFORMANCES OF STARCROSS PULLETS
Uddin, M. Salah; Tareque, A.M.M.; Howlider, M.A.R.; Khan, M. Jasimuddin 361
In two experiments, 640 Starcross replacement pullets between 25 and 154 days of age were fed ad libitum on one of 16 diets formed by the combination of 4 CP × 4 ME levels to study the interaction of CP and ME on growth performances. In both experiments, feed intake decreased, but protein intake, energy intake, live weight gain and feed conversion efficiency increased and sexual maturity hastened with the increase of dietary protein and/or energy level. The protein conversion efficiency decreased with the increase of dietary protein level. The energy conversion efficiency, however, did not show any relationship with dietary energy level.
There was a greater improvement of growth performance due to a simultaneous increase of dietary protein and energy level than due to increasing protein or energy alone.

SOME FACTORS INFLUENCING TRI-L-ALANINE DISAPPEARANCE AND RUMEN BACTERIAL GROWTH YIELD IN VITRO
Ha, J.K.; Kennelly, J.J.; Lee, S.C. 369
A series of in vitro incubation studies with washed rumen bacteria were conducted to determine the influence of incubation time and concentrations of peptides, alanine, ammonia nitrogen and carbohydrate on the rate of peptide disappearance and on bacterial growth. The disappearance rate of tri-alanine (ala3) under various conditions was between 30.6 and 58.2 mg hr⁻¹ per gram bacterial dry matter. Ala3 was removed from the incubation medium in an almost linear fashion as incubation time and ala3 concentration were increased. Washed rumen bacteria utilized ala3 faster than di-L-alanine (ala2) at all concentrations. Adding 9 mM carbohydrate significantly increased ala3 disappearance, but the level of ammonia nitrogen had no influence on ala3 disappearance. The presence of alanine in the medium significantly lowered ala3 utilization by rumen bacteria. Bacterial dry matter and nitrogen growth yield were not influenced by alanine and peptides when the incubation medium already contained a sufficient level of ammonia nitrogen. Increased ammonia nitrogen in the presence of ala3 did not stimulate bacterial growth. Carbohydrate significantly increased bacterial dry matter and nitrogen growth as expected. Results indicate that the rate of peptide utilization by rumen bacteria may be altered by the type and concentration of peptides, and by energy supply, and this may be mediated through changes in numbers and types of bacteria.

EFFECT OF ENVIRONMENTAL TEMPERATURE AND ADDITION OF MOLASSES ON THE QUALITY OF NAPIER GRASS (PENNISETUM PURPUREUM SCHUM.) SILAGE
Yokota, H.; Okajima, T.; Ohshima, M. 377
The effects of molasses addition and hot temperature on the ensiling characteristics of napier grass (Pennisetum purpureum Schum.) were studied. Napier grass was harvested five times at intervals from 22 to 39 days and each harvest was divided into two equal portions. One half portion was ensiled directly and the other half was ensiled after mixing with molasses into polyethylene bag silos of 15 kg capacity. Molasses was added at the rate of 4% of the fresh weight of the grass. One half of each treatment was conserved in a room at $40^{\circ}C$ for a month and then moved to an ambient temperature room. The other half was kept at ambient temperature for the whole experimental duration. The silages were opened 3 to 7 months after ensiling. Addition of molasses enhanced lactic acid fermentation by increasing the lactic acid content and reducing the pH value, ammonia nitrogen and acetic, propionic and butyric acid contents of the silages in both temperature treatments. Enhanced temperature increased the pH value and decreased acetic, propionic and butyric acids.

EFFECTS OF CIMATEROL (CL 263,780) ON GROWTH PERFORMANCE AND CARCASS QUALITY OF BROILERS FED ON DIFFERENT LEVELS OF DIETARY PROTEIN AND ENERGY
Kim, Y.Y.; Han, I.K.; Ha, J.K.; Choi, Y.J.; Lee, M. 383
The present study was carried out to investigate the effect of cimaterol on growth performance, carcass quality and cellular functional activity of broilers as affected by various protein and energy levels. In the starter period (0-21 days) all chicks were fed the basal diet, which contained approximately 23% crude protein and 3200 kcal of metabolizable energy per kg of diet.
Cimaterol was added during days 22-49 and withdrawn during the 8th week. In the finisher period (22-49 days), a $2{\times}2{\times}3$ factorial arrangement consisting of 2 levels of cimaterol (0 mg/kg, 0.25 mg/kg), 2 levels of protein (19%, 17%) and 3 levels of energy (3200, 2900, 2600 kcal/kg) was used. In the finisher period, body weight gain and feed efficiency were improved by the supplementation of cimaterol. The high protein and high energy level with supplementation of cimaterol showed the highest body weight gain and feed efficiency, although without significant difference. The administration of cimaterol had no effects on the percentage of abdominal fat content, giblet and neck. Even though the difference was not significant (p>0.05), carcass yield was improved slightly by the administration of cimaterol. The effect of cimaterol on carcass composition was clearly demonstrated in that the protein content of broilers was not increased (p>0.05) but the fat content decreased significantly (p<0.05). The utilization of nutrients in experimental diets was not significantly affected by feeding cimaterol compared to the control group. The results of in vitro studies with liver and adipose tissue showed that cimaterol increased the lipolytic activities at the 19% protein level, whereas at the 17% protein level this effect was variable. Lipogenic activities in liver and adipose tissue were not affected by the administration of cimaterol, but the activities increased as energy decreased, particularly in liver tissue. In cell studies with acinar culture of liver tissues, cimaterol had no effect on protein synthetic activity, but this parameter increased at higher levels of dietary protein and energy. Protein secretion in the liver was increased by the supplementation of cimaterol. In addition, at the high protein level the protein secretion was increased and showed the highest values at the medium energy level.

PHYSIOLOGICAL RESPONSES OF GROWING RAMS TO ASBESTOS SHADING DURING SUMMER
Tharwat, E.E.; Amin, S.O.; Younis, A.A.; Kotby, E.A. 395
Physiological reactions of 24 six-month-old (22.7 kg body weight) Barki ram lambs to hot summer conditions, as influenced by asbestos shading in the Maryout area, were studied. Animals (12 shaded and 12 unshaded) developed hyperthermia during summer as their rectal temperature (RT) and respiration rate (RR) were always higher (p<0.01) than normal. Asbestos shading caused higher (p<0.05) RT and RR ($39.5^{\circ}C$ and 69.9 r.p.m. vs. 39.2 and 45.7 r.p.m.) of rams in the morning (6-8 a.m.), which became lower ($39.9^{\circ}C$ and 104.3 r.p.m. vs. $40.1^{\circ}C$ and 120.5 r.p.m.) in the afternoon (2-4 p.m.) as compared to sun exposure. Shading also resulted in a higher hematocrit value (PCV) (35.3% vs. 33.0%) and lower (p<0.05) daily weight gain (128.57 vs. 131.43 g). Diurnal (p<0.01) and monthly (p<0.05) RT and RR changes were closely associated with air temperature (AT) fluctuations. Monthly variation (p<0.05) in PCV was evident. Puberty was reached one month later in the shaded as compared to the unshaded group (265 vs. 232.3 days, respectively). It is concluded that asbestos shading prevents efficient heat dissipation to the sky by hyperthermic rams during summer nights. Construction materials for animal shelters are of extreme practical importance.
THE INFLUENCE OF DIETARY PROTEIN AND ENERGY LEVELS ON EGG QUALITY IN STARCROSS LAYERS
The interaction of 4 dietary crude protein (13, 16, 19 or 22%) and 4 metabolizable energy (2600, 2800, 3000 or 3100 kcal ME/kg) levels on egg quality performances of Starcross layers was assessed between 245 and 275 days of age. The egg weight increased significantly with increasing dietary protein and energy levels. However, egg shape index, albumen index, yolk index, yolk dry matter, yolk protein, yolk fat, albumen protein and shell thickness were similar at all dietary protein and/or energy levels. The egg specific gravity and albumen weight increased, but the yolk weight, Haugh unit and albumen dry matter decreased with the increase of dietary protein levels and showed an irregular trend with energy levels. The albumen dry matter and egg shell weight, however, were not affected by energy and protein levels. Simultaneous increase of protein and energy increased specific gravity, albumen index and shell thickness at a greater rate than the increase of protein or energy alone.

FUNCTIONAL PROPERTIES CHANGE OF PIGSKIN COLLAGEN BY CHEMICAL MODIFICATION
Lee, M.; Kwon, S.H. 407
The relationship between the possible structural change due to chemical modifications and functionality changes was studied in pigskin collagen. Amino groups in collagen were modified by succinylation and reductive alkylation. Carboxyl groups were modified using carbodiimide. The thermal denaturation temperature of collagen increased remarkably with carboxyl group modification, whereas it decreased with succinylation and reductive alkylation. Emulsifying capacity was improved by reductive alkylation and carboxyl group modification, while emulsion stability was improved by succinylation. Chemical modifications increased the solubility but decreased the foaming capacity of collagen. The viscosity of collagen at various pH values varied with the method of modification.
Thermocouple Basics—Using the Seebeck Effect for Temperature Measurement
August 14, 2022 by Dr. Steve Arar
Learn about the Seebeck effect and how the Seebeck voltage and Seebeck coefficient come into play within the scope of thermocouples and temperature measurement.
Thermocouples are a popular type of temperature sensor due to their ruggedness, relatively low price, wide temperature range, and long-term stability. The Seebeck effect discussed in the previous article is the underlying principle that governs thermocouple operation. The Seebeck effect describes how a temperature difference (ΔT) between the two ends of a metal wire can produce a voltage difference (ΔV) across the length of the wire. This effect is characterized by the following equation:
$$S = \frac{\Delta V}{\Delta T} = \frac{V_{cold}-V_{hot}}{T_{hot}-T_{cold}}$$
Equation 1.
where S denotes the Seebeck coefficient of the material. This equation can also be expressed as:
$$S(T)=\frac{dV}{dT}$$
Here, S(T) emphasizes that the Seebeck coefficient is a function of temperature. Note that the Seebeck effect is also observed in metal alloys and semiconductors. Let's see how this effect can be used to measure temperature.
Seebeck Effect of an Individual Material: Copper Wire Example
Equation 1 suggests that, given the Seebeck coefficient of a material, the voltage difference across a conductor can be used to determine the temperature difference between its two ends. Although this is theoretically correct, the direct measurement of an individual material's Seebeck voltage is impossible. As an example, consider the setup shown in Figure 1.
Figure 1. An example setup to measure the voltage across a copper wire.
The ends of the copper wire are at T1 = 25 °C and T2 = 100 °C. Assume that, over this temperature range, the absolute Seebeck coefficient of copper is constant and equal to +1.5 μV/°C. Using Equation 1, we can find the voltage difference across the wire as:
$$V_{1}-V_{2}=1.5\times(100-25)=112.5\,\mu V$$
The voltage measured by the multimeter will be different because the path consisting of the multimeter leads and the input circuitry of the multimeter also experiences a temperature difference of 75 °C. The unwanted Seebeck voltage across the test leads and the input circuitry of the multimeter introduces errors.
Avoiding Seebeck Voltage—Keeping the Multimeter at Uniform Temperature
To avoid creating a Seebeck voltage in the test leads and the multimeter, we should keep these parts at a constant temperature. For example, we can keep the measurement system at 25 °C as shown in Figure 2.
Figure 2. An example setup showing a constant temperature of 25 °C.
In this example, another conductor is required to make the electrical connection between the black test lead and the hot end of the copper wire. This connection is shown by "Metal 2" in the figure. It is important to note that copper wire cannot be used for this connection. This is because it would experience the same temperature gradient as the original copper wire, leading to a voltage difference (across Metal 2) of:
$$V_{3}-V_{2}=1.5 \times (100-25) = 112.5\,\mu V$$
Therefore, the multimeter will measure zero volts regardless of the temperature difference across the original copper wire. The above discussion shows why the absolute Seebeck coefficient of a material cannot be directly measured by a multimeter. A common method to determine the absolute Seebeck coefficient is by applying the Kelvin relation.
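Before moving on, it is easy to verify the copper-wire arithmetic. The short Python sketch below is not from the original article; the function name is illustrative, and the constant-coefficient assumption only holds approximately over a real temperature range:

```python
def seebeck_voltage(S_uV_per_degC, T_hot, T_cold):
    """Open-circuit Seebeck voltage (in uV) along a homogeneous conductor,
    per Equation 1, assuming a constant Seebeck coefficient."""
    return S_uV_per_degC * (T_hot - T_cold)

S_COPPER = 1.5  # uV/degC, assumed constant over 25..100 degC as in the text

# Voltage developed along the original copper wire (25 degC -> 100 degC)
v_wire = seebeck_voltage(S_COPPER, T_hot=100.0, T_cold=25.0)
print(v_wire)  # 112.5 uV

# A copper return path sees the same gradient, so the loop voltage cancels
v_return = seebeck_voltage(S_COPPER, T_hot=100.0, T_cold=25.0)
print(v_wire - v_return)  # 0.0 -> the multimeter reads zero volts
```

This cancellation is exactly why a second, dissimilar conductor is needed, as discussed next.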
Thermoelectric Thermometry Requires Dissimilar Materials
From the above discussion, it can be surmised that materials with unequal Seebeck coefficients are required to produce a voltage difference proportional to the temperature gradient. For example, with copper, which has a Seebeck coefficient of +1.5 μV/°C at 0 °C, we can use a constantan wire with an absolute Seebeck coefficient of -40 μV/°C at 0 °C. Replacing "Metal 2" with a constantan wire, the multimeter should measure a voltage difference of 3112.5 μV, as calculated below:
$$V_{1}-V_{3}= (V_{1}-V_{2})-(V_{3}-V_{2})= 1.5 \times(100-25)-(-40)\times(100 - 25) = 3112.5\,\mu V$$
Note that the above calculation assumes that the Seebeck coefficients of copper and constantan are constant and equal to the specified values over the temperature range of interest.
Thermocouple Temperature Sensor—Thermocouple Types and the Seebeck Coefficient
Therefore, two dissimilar conductors soldered or welded together at one end can be used to create a temperature sensor. The structure of this temperature sensor, known as a thermocouple, is shown in Figure 3.
Figure 3. An example structure of a thermocouple temperature sensor.
The junction at which the two dissimilar metals are coupled is called the measurement (or hot) junction. The opposite end, where the sensor connects to the measurement system, is called the reference (or cold) junction. An isothermal block, made of a heat-conducting material, is commonly required to keep the leads of the thermocouple at the same temperature. Note that a thermocouple measures the temperature difference between the measurement and reference junctions, not the absolute temperature at these junctions. The open-circuit voltage at the reference junction is proportional to the temperature difference between the two ends. A thermocouple created by joining a copper wire to a constantan wire is known as a type T thermocouple. Other common thermocouple types are:
J (iron–constantan)
K (Chromel–Alumel)
S (platinum–10% rhodium/platinum)
Manufacturers commonly specify a thermocouple type's overall Seebeck coefficient (or sensitivity) as a table, graph, or equation. For example, the Seebeck coefficient of a T-type thermocouple at 0 °C is usually specified to be about 39 µV/°C, which is close to the value we obtained above from the individual metals/alloys employed (41.5 µV/°C). We know that this sensitivity value changes with temperature. Figure 4 shows the Seebeck coefficient of T-, J-, and K-type thermocouples versus temperature.
Figure 4. The Seebeck coefficients for T, J, and K type thermocouples vs. temperature. Image used courtesy of Analog Devices
The above curves are obtained with a reference junction at 0 °C. In the next article, we'll discuss the setup for these measurements in greater detail and see how this information can be used in practice.
Thermocouple Misconceptions Concerning Seebeck Voltage
Although most engineers are familiar with thermocouples, there are a few common misconceptions. Since thermocouples use two dissimilar metals joined at one end to measure temperature, a frequently held misconception is that the Seebeck voltage is produced as a result of joining dissimilar metals. We now know that a single conducting material can produce a Seebeck voltage when there is a temperature gradient. It's also important to remember that the Seebeck voltage is not produced at the junction of two dissimilar metals.
The Seebeck voltage is developed along the length of the wire that experiences a temperature difference (Figure 5).
Figure 5. Setup showing the Seebeck voltage across two dissimilar metals. Image used courtesy of TI
The junction provides an electrical connection between the dissimilar metals and is placed where the temperature needs to be measured. However, virtually no voltage develops right at the junction.
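As a closing illustration (not part of the original article), the sketch below inverts Equation 1 to estimate the measurement-junction temperature of a type T thermocouple. Treating the nominal 39 µV/°C sensitivity quoted above as constant is a simplifying assumption; practical instruments use polynomial fits because, as Figure 4 shows, the Seebeck coefficient varies with temperature.

```python
S_TYPE_T = 39.0  # uV/degC near 0 degC; nominal value, assumed constant here

def estimate_hot_junction_temp(v_measured_uV, t_ref_degC):
    """Invert V = S * (T_hot - T_ref) for T_hot (rough estimate only)."""
    return t_ref_degC + v_measured_uV / S_TYPE_T

# Example: 2925 uV measured with the reference junction held at 0 degC
print(estimate_hot_junction_temp(2925.0, t_ref_degC=0.0))  # ~75 degC
```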
Nonlocal wrinkling instabilities in bilayered systems using peridynamics
Marie Laurien, Ali Javili & Paul Steinmann
Computational Mechanics 68, 1023–1037 (2021)
Wrinkling instabilities occur when a stiff thin film bonded to an elastic substrate undergoes compression. Regardless of the nature of compression, this phenomenon has been extensively studied through local models based on classical continuum mechanics. However, the experimental behavior is not yet fully understood and the influence of nonlocal effects remains largely unexplored. The objective of this paper is to fill this gap from a computational perspective by investigating nonlocal wrinkling instabilities in a bilayered system. Peridynamics (PD), a nonlocal continuum formulation, serves as a tool to model nonlocal material behavior. This manuscript presents a methodology to precisely predict the critical conditions by employing an eigenvalue analysis. Our results approach the local solution when the nonlocality parameter, the horizon size, approaches zero. An experimentally observed influence of the boundaries on the wave pattern is reproduced with PD simulations which suggests nonlocal material behavior as a physical origin. The results suggest that the level of nonlocality of a material model has quantitative influence on the main wrinkling characteristics, while most trends qualitatively coincide with predictions from the local analytical solution. However, a relation between the film thickness and the critical compression is revealed that is not existent in the local theory. Moreover, an approach to determine the peridynamic material parameters across a material interface is established by introducing an interface weighting factor. This paper, for the first time, shows that adding a nonlocal perspective to the analysis of bilayer wrinkling by using PD can significantly advance our understanding of the phenomenon.
Surface wrinkles resulting from mechanical instabilities are a widespread phenomenon in nature. They are observed in a variety of biological tissues, e.g. in the intestine increasing the area for nutrient absorption [1, 2], in cell membranes allowing for a flexible expansion of the surface area [3, 4] and in human skin during aging [5, 6]. The folding pattern in mammalian brains is evidently linked to intelligence but also to neurological dysfunction [7, 8]. The occurrence of wrinkling instabilities is harnessed by engineers, for instance to determine mechanical properties of films [9] or to pattern surfaces at the micro- and nanoscale [10, 11]. More recently, the suddenly induced large strains due to buckling have been exploited to enhance the effectiveness of energy harvesters [12, 13]. Geometries in which instabilities appear are an elastic half-space [14,15,16], cylindrical or spherical bodies [17, 18] and bilayer or multilayer systems [19,20,21], among others. The underlying mechanism of wrinkling instabilities in bilayers is based on a stiff thin film bonded to a compliant elastic foundation. Compression in the film can initiate buckling into a sinusoidal wave pattern. The origins of the compression in the film are manifold, including external compression of the bilayer, relaxation of substrate prestretch, constrained film growth or swelling and substrate shrinkage [22,23,24,25]. Regardless of whether the formation of wrinkles is desired or not, there is a need for understanding and controlling the behavior of the system.
During the past decades the phenomenon has been explored in extensive analytical, experimental and numerical studies. The analytical works of Biot [26], considering a beam on an elastic foundation, and Allen [27], concerned with wrinkles in sandwich structures, have laid the ground for closed-form expressions of the critical wrinkling conditions in a bilayer. In [28], Javili and Bakiler have derived a displacement-based solution for growth-induced instabilities which coincides with the Allen solution under certain conditions but also holds for plane strain as well as plane stress conditions. The influence of substrate prestretch has been taken into account in [19, 29, 30], treating the substrate as a Neo-Hookean material. Holland et al. [19] have additionally distinguished between further origins of compression. Lejeune et al. [21, 31] have developed multi-layer models that are capable of modeling a weak attachment of the film to the substrate. The analytically predicted trends for the influence of the main controlling parameters, e.g. the stiffness ratio of the film and the substrate as well as the film thickness, have been confirmed in experimental studies [6, 20, 32,33,34,35,36] as well as numerical simulations [20, 25, 28, 29, 37,38,39,40,41,42,43,44]. To date, the numerical method of choice is the finite element method (FEM). To trigger the instabilities, it is customary to introduce small perturbations to the system through the mesh, boundary conditions or material properties [39]. In order to eliminate the need for subjective perturbations that might affect the resulting wrinkling pattern, in [16, 37] the FEM simulation is enhanced by an eigenvalue analysis. This approach facilitates a reliable identification of the critical conditions. Despite the increasing body of literature, the experimentally observed wrinkling behavior is still not fully explained. Figure 1 depicts the formation of wrinkles in an elastomeric bilayer obtained in experiments by Budday et al. [20]. It can be seen that the wrinkles do not form uniformly along the whole length of the bilayer. Instead, the boundaries seem to have an impact on the buckling. This observation has been left unconsidered in the analysis with classical continuum mechanics (CCM) that is limited to a local conception of the phenomenon. Nonlocal theories, considering long-range forces instead of contact forces only, are typically able to capture such boundary effects. This serves as a motivation to investigate the nonlocal wrinkling behavior in order to shed light on nonlocal effects, e.g. the boundary effect. Experimentally observed formation of wrinkles in a bilayer. The buckling is induced by applying an unstretched film on top of a prestretched substrate followed by gradually releasing the prestretch in the substrate. It is clearly noticeable that the wrinkling pattern changes towards the boundaries. The pictures are taken from experiments presented in [20] with kind permission of the authors Peridynamics (PD) is capable of realizing a nonlocal analysis. Introduced by Silling [45], the method was originally designed to cope with modeling damage where the use of partial differential equations would lead to singularities. To this end, spatial derivatives in the governing equations are replaced by integrals over the neighborhood of a continuum point. The integral terms comprise the interaction forces acting across a finite distance and are accountable for the nonlocal character of the method. 
This aspect carries huge potential to advance the modeling of materials and the application of PD has therefore expanded to fields outside of damage modeling. Exemplary areas of application are multiphysics [46,47,48], multiscale modeling [49,50,51], topology optimization [52, 53] and biological systems [54, 55]. An extensive overview of recent publications on PD can be found in [56]. The basic version of PD, called bond-based PD, lacks the consideration of volume dilation and a material model is thus restricted to a fixed Poisson ratio [45]. Silling et al. [57] have established a state-based version of PD to overcome this shortcoming. Alternatively, Javili et al. [58] have recently presented the theory of continuum-kinematics-inspired peridynamics. For a detailed description of the computational implementation see [59]. A peridynamic material model is characterized by parameters defining the property of a bond between two points. If dissimilar materials are involved in an interaction, a combination of the bond properties is necessary to compute the interaction force. This issue has been addressed mainly in the context of modeling functionally graded materials using PD. The most common procedure is to average the parameters of the constituent materials [60,61,62]. More elaborate concepts contain weighting parameters following different approaches of weighting. Cheng et al. [63], for instance, used weight functions that are determined by the proportion of the material coefficients, while weighting variables that allow for a more flexible choice were introduced in [64, 65]. In the context of modeling diffusion, Diyaroglu et al. [66] approximated the property of an interface bond by taking into account the fraction of the bond length that is associated with the respective material. The application of nonlocal theories to the buckling analysis of a film-substrate system remains an open field of research. Recently, Ebrahimi [67] has modeled wrinkling of thin films with and without delamination with PD. To the authors' best knowledge, no research has studied the nonlocal wrinkling behavior in bilayers and its influencing parameters. Therefore, the objective of this paper is to employ PD to investigate nonlocal wrinkling instabilities in an elastic bilayer. We aim for new insights into the characteristics of the first instability pattern through considering nonlocal effects and thus to enhance the understanding of the phenomenon. The remainder of this paper is structured as follows. Section 2 provides a brief overview on PD, including an approach on how to define the material parameters at the interface between different constituents. In Sect. 3, first, we introduce the setup leading to the occurrence of wrinkling instabilities. Second, we describe the employed numerical procedure using PD that is enhanced by an eigenvalue analysis. Section 4 comprises the results obtained by the nonlocal analysis of the bifurcation problem in the form of various parameter studies. Similarities as well as discrepancies to the local theory are analyzed. Section 5 concludes the paper by summarizing the main findings. Peridynamic framework This section is concerned with the peridynamic formulation that is employed in this paper. In Sect. 2.1, we introduce the basic equations of bond-based PD. Section 2.2 extends the model to heterogeneous materials. In Sect. 2.3, we describe how the equations are discretized for implementation. 
Conceptually speaking, peridynamics incorporates some elements of molecular dynamics into a continuum mechanics framework. Based on continuum mechanics, consider a continuous body \({{\mathcal {B}}_0 \subset {\mathbb {R}}^3 }\) at time \(t=0\) in the material configuration and its spatial counterpart \({{\mathcal {B}}_t \subset {\mathbb {R}}^3}\), see Fig. 2. A point \({\mathbf {X}}\) in the material configuration is mapped to the spatial configuration by the nonlinear deformation map \({\mathbf {y}}\) as \({\mathbf {x}}={\mathbf {y}}({\mathbf {X}},t) : {\mathcal {B}}_0 \times {\mathbb {R}}_t \rightarrow {\mathcal {B}}_t\). Motivated by molecular dynamics, every point is influenced by the neighboring points within a finite distance. This region of interaction is called the peridynamic horizon \({\mathcal {H}}_0 \subset {\mathcal {B}}_0\) (Lagrangian perspective), with the horizon size \(\delta \) denoting the radius of the spherical neighborhood of \({\mathbf {X}}\) in the material configuration. In bond-based PD, the interaction between a point \({\mathbf {X}}\) and its neighbor \(\mathbf {X'}\) is considered via the undeformed bond vector \(\varvec{\Xi } = {\mathbf {X}}' - {\mathbf {X}}\) in the material configuration and the deformed bond vector \(\varvec{\upxi } = {\mathbf {x}}' - {\mathbf {x}} = {\mathbf {y}}({\mathbf {X}}') - {\mathbf {y}}({\mathbf {X}})\) in the spatial configuration.
Fig. 2: Illustration of a continuum body \({\mathcal {B}}_0\) in the material configuration (left) and its spatial counterpart \({\mathcal {B}}_t\) (right). In PD, a point \({\mathbf {X}}\) interacts with its neighbors in a finite neighborhood \({\mathcal {H}}_0\) defined by the horizon size \(\delta \).
PD governing equations are continuum field equations but contain a nonlocal integral term. The key governing equation is the balance of linear momentum, in which the internal force acting on a point is determined by an integral of the interaction forces over the horizon. In this contribution, we consider the quasistatic case given by
$$ \int _{{\mathcal {H}}_0} {\mathbf {p}} \, \text {d}V + {\mathbf {b}}_0^{ext} = {\mathbf {0}} $$ (1)
with the force density per volume squared \({\mathbf {p}}\) and the external force density per volume in the material configuration \({\mathbf {b}}_0^{ext}\). The conservation of angular momentum is ensured by the nonlocal form of the quasistatic balance of angular momentum, which reads
$$ \int _{{\mathcal {H}}_0} \varvec{\upxi } \times {\mathbf {p}} \, \text {d}V = {\mathbf {0}} \,. $$ (2)
A harmonic potential is chosen to model the interaction energy density \({\psi }\) as
$$ {\psi } = \frac{1}{2} C L [\lambda -1]^2 \qquad \text {with} \qquad \lambda =\frac{l}{L} \,, $$ (3)
with the scalar-valued line measures \(L=|\varvec{\Xi }|\) and \({l=|\varvec{\upxi }|}\) and the stretch \(\lambda \); the term \([\lambda -1]\) describes the bond strain. Thus, the elastic material parameter C specifies the resistance against a change of length of a bond. The constitutive law of bond-based PD follows from the differentiation of the interaction energy density with respect to \(\varvec{\upxi }\). That is,
$$ {\mathbf {p}} = \frac{\partial {\psi }}{\partial \varvec{\upxi }} = C [\lambda -1] \frac{\varvec{\upxi }}{|\varvec{\upxi }|} \,. $$ (4)
Material parameters for bilayers
In bond-based PD, elastic materials are characterized by the material parameter C.
From the constitutive law (4) it follows that the constant C effectively characterizes the stiffness of a bond between two points. This is in contrast to local material parameters in classical continuum mechanics. As a consequence, peridynamic bilayers require special consideration. Across the material interface, two different materials participate in one interaction, as illustrated in Fig. 3. In this contribution, we follow an intuitive approach to determine the bond stiffness at the interface. Consider a bilayer consisting of material A with \(C^A\) and material B with \(C^B\). The stiffness \(C^I\) of a bond connecting points from material A and material B is defined as
$$ C^I = \alpha C^A + [1-\alpha ] C^B \qquad \text {with} \qquad 0\le \alpha \le 1 \,. $$ (5)
The factor \(\alpha \) is referred to as the interface weighting factor. It can be chosen as \(\alpha = 0.5\) in order to average \(C^A\) and \(C^B\), but it also allows for unequal weighting of the material constants.
Fig. 3: The neighborhood of a point in a peridynamic bilayer. A bond across the interface connects dissimilar materials.
Numerical implementation
To set the stage for the computational implementation of the PD framework, the governing equations are discretized. The starting point is the nonlocal balance of linear momentum (1), neglecting body forces for the sake of presentation. The left-hand side is referred to as the residual vector \({\mathbf {R}}\). Thus, the equation to solve reads
$$ {\mathbf {R}} = {\mathbf {0}} \,. $$ (6)
The considered body is discretized into a grid of peridynamic points \({\mathcal {P}}^a\) described by \({\mathbb {X}}^a\) and \({\mathbb {x}}^a\) in the material and spatial configurations, respectively. The balance Eq. (6) is evaluated at each point in the sense of collocation. The point-wise contributions are assembled into the global discretized residual vector \({\mathbb {R}}\) in the form \( {\mathbb {R}} = \begin{bmatrix} {\mathbb {R}}^1&{\mathbb {R}}^2&...&{\mathbb {R}}^a&...&{\mathbb {R}}^{\# {\mathcal {P}}} \end{bmatrix} ^T\). Thus, \({\mathbb {R}} = \mathbb {0}\) is solved for the global discretized deformation vector \({\mathbb {x}}\), that is \( {\mathbb {x}} = \begin{bmatrix} {\mathbb {x}}^1&{\mathbb {x}}^2&...&{\mathbb {x}}^a&...&{\mathbb {x}}^{\# {\mathcal {P}}} \end{bmatrix}^T\). The point-wise residual contains an integral term that is solved by means of numerical integration. To this end, the peridynamic grid points are used simultaneously as quadrature points. The force integral over the horizon translates into a sum of the interaction forces of a point with the neighboring points that are identified by \({\mathbb {X}}^i\) and \({\mathbb {x}}^i\) in the material and spatial configurations, respectively. The contributions are weighted according to the volume fraction \(V^i\) of the horizon that is attributed to \({\mathbb {X}}^i\). Applying the numerical integration and inserting the constitutive law (4), the point-wise residual can be expressed as
$$ {\mathbb {R}}^a = \sum _{\begin{array}{c} i=1 \\ i \ne a \end{array} }^{\#{\mathcal {N}}}C [\lambda ^i-1] \frac{\varvec{\upxi }^i}{|\varvec{\upxi }^i|} V^{i} $$ (7)
with \(\#{\mathcal {N}}\) specifying the number of neighbors of \({\mathcal {P}}^a\) within its horizon. The discretized bond vector in the deformed configuration is denoted by \(\varvec{\upxi }^i = {{\mathbb {x}}}^i - {\mathbb {x}}^a\).
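To make the discretized equations concrete, the following Python sketch assembles the point-wise residual of Eq. (7), including the interface weighting rule of Eq. (5). It is an illustrative reimplementation under assumed data structures (point coordinates, neighbor lists, volumes and material tags), not the authors' MATLAB code.

```python
import numpy as np

def bond_stiffness(mat_a, mat_i, C_A, C_B, alpha=0.5):
    """Bond constant: the plain material constant for same-material bonds,
    the weighted average C^I = alpha*C_A + (1-alpha)*C_B across the interface."""
    if mat_a == mat_i:
        return C_A if mat_a == "A" else C_B
    return alpha * C_A + (1.0 - alpha) * C_B

def pointwise_residual(a, X, x, neighbors, volumes, mats, C_A, C_B, alpha=0.5):
    """Sum of bond force densities over the neighborhood of point a.
    X, x: (n_points, dim) arrays of material/spatial positions."""
    R = np.zeros(x.shape[1])
    for i in neighbors[a]:
        Xi = X[i] - X[a]                                  # undeformed bond
        xi = x[i] - x[a]                                  # deformed bond
        lam = np.linalg.norm(xi) / np.linalg.norm(Xi)     # bond stretch
        C = bond_stiffness(mats[a], mats[i], C_A, C_B, alpha)
        # force density C*(lambda - 1)*xi/|xi|, weighted by the volume V^i
        R += C * (lam - 1.0) * xi / np.linalg.norm(xi) * volumes[i]
    return R
```

Collocating this residual at every point and stacking the contributions yields the global vector that is driven to zero in the following.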
Compression-induced wrinkling instabilities in bilayered systems Analysis of surface wrinkling is a promising application of PD. In Sect. 3.1, we introduce the film-substrate systems in which these wrinkling patterns occur when subject to specific load profiles. In Sect. 3.2, a closed-form solution from literature based on an analytical local analysis is highlighted, which we use to compare to our results. Section 3.3 describes the computational procedure we follow to explore the wrinkling instabilities using PD. It is important to note that we speak of the effective strain \({\bar{\varepsilon }}\) with the meaning of a geometric quantity measuring the overall difference between the original length and the compressed length of the film in a macroscopic sense. We do not refer to a gradient-like strain that is based on a local view and therefore not present in the PD framework. The overbar is used to emphasize this difference. \(\square \) We consider the following setup. A thin stiff film is attached to a deep elastic substrate. The film is subject to a gradually increasing compression in horizontal direction. Once the effective strain in the film reaches a critical value \({\bar{\varepsilon }}_{crit}\), the flat surface loses stability and buckles to relax the compressive stresses. Since the film is not freestanding but bonded to the substrate, it is hindered from buckling into a single wave. Instead, short-wavelength buckling is the energetically favored equilibrium state of the system, which appears in form of sinusoidal surface wrinkles. In this paper, two different mechanisms inducing the compression state in the film are considered, which are schematically depicted in Fig. 4. The first mechanism is based on a substrate prestretch prior to film attachment. When the prestretch is released, the substrate relaxes to its initial length. Hence, the substrate is under tension while the film experiences compression triggering the instability. For the second mechanism, the critical conditions originate from a whole-domain compression. As the film is applied to an unstretched substrate, both components are in a state of compression at the onset of instability. Two different mechanisms to induce compression in the film. Top: compression with substrate prestretch. Bottom: whole-domain compression As established above, the computation of the interaction forces is based on the bond elongation with respect to the undeformed bond. The configuration comprising the undeformed bond is referred to as reference configuration. Note that the reference configuration is not necessarily the same for all bonds in the bilayer system. In case of substrate prestretch, the body is enlarged by an additional subdomain, i.e. the film, at \(t > 0\). The reference configurations of bonds in the attached subdomain and bonds connecting the initial domain and the attached subdomain differ from the reference configurations of bonds within the initial domain. However, it is possible to adhere to the commonly used concept of a Lagrangian horizon. That is, the neighbor search is performed within the configuration at \(t=0\). This can be achieved by approximating the fictitious positions at \(t=0\) of points in the attached subdomain. \(\square \) Analytical solution within local theory To analytically estimate the wrinkling initiation, we consult a linear buckling analysis of a thin film adhered to an infinite half-space derived in [28] within classical continuum mechanics. 
We use the relations for the critical effective strain \({\bar{\varepsilon }}_{crit}\) and the critical wavelength \(\lambda _{crit}\) that are expressed in terms of the elastic modulus and Poisson's ratio, which coincide with the classical Allen solution [27] and are given by
$$\begin{aligned} \lambda _{crit}&= 2 \pi t \left[ \frac{E_f [ 3-\nu _s ] [1+\nu _s]}{12E_s}\right] ^{\frac{1}{3}}=\frac{4}{3} \pi t \left[ \frac{E_f}{E_s}\right] ^{\frac{1}{3}} \, \text { and } \,\end{aligned}$$
$$\begin{aligned} {\bar{\varepsilon }}_{crit}&= \left[ \frac{3E_s}{2E_f \left[ 3-\nu _s \right] \left[ 1+\nu _s\right] }\right] ^{\frac{2}{3}}=\frac{9}{16} \left[ \frac{E_s}{E_f}\right] ^{\frac{2}{3}} \,, \end{aligned}$$
where \(E_f\) and \(E_s\) are the Young's moduli of the film and the substrate, respectively, and \(\nu _s\) is the Poisson ratio of the substrate. Since the derivation relies on the classical beam solution to model the film, it does not account for the Poisson ratio of the film, see [68]. It is pointed out that we consider 2D plane strain, which renders an incompressibility limit corresponding to \({\nu =1}\). The reduced versions result from inserting \(\nu _s=\frac{1}{3}\), which is the fixed Poisson ratio for 2D bond-based PD. The closed-form solution is derived for growth-induced buckling and thus assumes a stress-free substrate at the onset of wrinkling. Analytical solutions exist in the literature that take into account different origins of compression [19, 29, 30]. However, the employed nonlinear material models are not well suited for a comparison to the bond-based PD model. Hence, for now, we restrict ourselves to the aforementioned linear analysis.
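The two closed-form estimates are straightforward to evaluate numerically. The following is a minimal Python sketch (function and variable names are our own illustrative choices):

```python
import numpy as np

def allen_critical_values(E_f, E_s, t, nu_s=1.0/3.0):
    """Critical wavelength and effective strain of the local (Allen) solution.

    E_f, E_s : Young's moduli of film and substrate
    t        : film thickness
    nu_s     : Poisson ratio of the substrate (1/3 for 2D bond-based PD)
    """
    lam_crit = 2.0 * np.pi * t * (E_f * (3.0 - nu_s) * (1.0 + nu_s)
                                  / (12.0 * E_s)) ** (1.0 / 3.0)
    eps_crit = (3.0 * E_s / (2.0 * E_f * (3.0 - nu_s) * (1.0 + nu_s))) ** (2.0 / 3.0)
    return lam_crit, eps_crit

# With nu_s = 1/3 this reduces to lam_crit = (4/3) pi t (E_f/E_s)^(1/3)
# and eps_crit = (9/16) (E_s/E_f)^(2/3), e.g. for a stiffness ratio of 100:
lam, eps = allen_critical_values(E_f=100.0, E_s=1.0, t=0.02)
```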
Numerical analysis using peridynamics

To numerically calculate the formation of wrinkling instabilities, a two-dimensional computational model is developed. The analysis is conducted within the framework of peridynamics using a custom-designed program implemented in MATLAB. The geometry of the model is built according to the schematics in Fig. 4, where \(W\) and \(H\) are the width and height of the substrate and \(t\) is the film thickness. The domain is discretized into a uniform grid of peridynamic points with grid spacing \(L\). In the literature, the horizon-to-grid-size ratio is widely chosen as \(\delta /L=3.01\) [69, 70]. We adopt this ratio aiming at a good balance between computational effort and accuracy. The prescribed effective strain in the horizontal direction is imposed incrementally through Dirichlet-type boundary conditions. Moreover, the domain is supported by a roller constraint at the bottom boundary preventing displacement in the vertical direction. As is customary in PD, boundary conditions are applied to additional material layers of depth \(\delta \) along the boundaries of the actual geometry. The two main steps of the numerical procedure, namely the Newton–Raphson scheme and the eigenvalue analysis, are described in the following.

Newton–Raphson scheme

We simulate the behavior of the bilayer by evaluating the deformation problem derived in Sect. 2.3 and given by
$$\begin{aligned} {\mathbb {R}} = \mathbb {0} \,. \end{aligned}$$
To solve this system of nonlinear equations, we adopt an iterative Newton–Raphson scheme. The method is based on a linearization of the nonlinear equations that allows for an iterative determination of the vanishing residual vector. The linearization of Eq. (10) at iteration \(k\) yields
$$\begin{aligned} {\mathbb {R}}_{k+1} = {\mathbb {R}}_{k} + \left[ \frac{\partial {\mathbb {R}}}{\partial {\mathbb {x}}}\right] _k \Delta {\mathbb {x}}_k = {\mathbb {R}}_{k} + {\mathbb {K}}_k \Delta {\mathbb {x}}_k \overset{!}{=} \mathbb {0} \end{aligned}$$
with the tangent stiffness matrix \({\mathbb {K}}_k\) at iteration \(k\). For more details on the derivation of \({\mathbb {K}}\), see [59]. Thus, the unknown deformation increment at iteration \(k\) can be computed via
$$\begin{aligned} \Delta {\mathbb {x}}_k = - \left[ {{\mathbb {K}}_k}\right] ^{-1} {\mathbb {R}}_{k} \,. \end{aligned}$$
The deformation vector \({\mathbb {x}}\) is updated at the end of each iteration by
$$\begin{aligned} {\mathbb {x}}_{k+1} = {\mathbb {x}}_k + \Delta {\mathbb {x}}_k \,. \end{aligned}$$
The procedure is repeated until the norm of the residual is within a small given tolerance.

Eigenvalue analysis

The formation of wrinkles mathematically translates into a bifurcation problem. When the effective strain exceeds a critical value, the equilibrium path of the system exhibits bifurcation, meaning there exists more than one equilibrium configuration that satisfies Eq. (10). The point at which the solution path divides into different branches is called the critical point and is indicated by at least one zero eigenvalue of the system matrix. Therefore, an eigenvalue analysis serves as a useful tool to identify the critical point marking the onset of wrinkling, as proposed by Javili et al. [37]. We proceed as follows. At every increment of effective strain we compute the five smallest eigenvalues of the matrix \({\mathbb {K}}\) of the last converged solution to check whether the critical point is reached. If so, the current effective strain is registered as the critical effective strain \({\bar{\varepsilon }}_{crit}\). In addition, the eigenvector corresponding to the smallest eigenvalue is determined, giving information about the mode of the system after bifurcation. We update the deformation vector using the obtained eigenvector, and the resulting deformation pattern depicts the critical wavelength \(\lambda _{crit}\).

In close proximity to the bifurcation point, the equilibrium state describing the flat film surface becomes unstable. The system strives to minimize its potential energy. Thus, small perturbations cause the structure to switch to the wrinkled state representing a stable equilibrium with lower potential energy. Due to the nonlocal nature of PD, such perturbations occur "naturally". Peridynamic points that are located at a free surface of a body are not equipped with a full horizon. However, they are described by the same material parameters as a peridynamic point in the bulk possessing the maximum number of neighbors. Therefore, the response at the surface slightly differs compared to that in the bulk. The resulting small deviations in deformation act as perturbations to the system, similar to the mesh imperfections that are often manually imposed in FEM simulations. Note that the eigenvalue analysis is carried out accompanying the Newton–Raphson algorithm. If the perturbations have initiated branch switching, this is reflected in the eigenvalue analysis. The smallest eigenvalue does not necessarily pass through zero, since the equilibrium configuration follows the stable post-bifurcation path. However, the eigenvalues sharply drop towards zero and reach a plateau, which from a numerical perspective identifies the critical point.
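The two-step procedure can be summarized in a short skeleton. The following Python sketch mirrors the incremental Newton–Raphson scheme with the accompanying eigenvalue analysis; the residual, tangent, and boundary-condition routines are placeholders for the problem-specific assembly, and a symmetric tangent matrix is assumed:

```python
import numpy as np

def newton_raphson(x, residual, tangent, tol=1e-10, max_iter=25):
    """Solve R(x) = 0 by Newton-Raphson: x_{k+1} = x_k - K_k^{-1} R_k."""
    for _ in range(max_iter):
        R = residual(x)
        if np.linalg.norm(R) < tol:
            return x
        x = x + np.linalg.solve(tangent(x), -R)   # dx_k = -K_k^{-1} R_k
    raise RuntimeError("Newton-Raphson did not converge")

def load_stepping(x, strains, residual, tangent, apply_bc, n_eig=5):
    """Incremental loading with an accompanying eigenvalue analysis."""
    history = []
    for eps in strains:
        x = apply_bc(x, eps)                  # impose the Dirichlet increment
        x = newton_raphson(x, residual, tangent)
        # five smallest eigenvalues of the (assumed symmetric) tangent K;
        # a sharp drop towards zero flags the critical effective strain
        eigvals = np.linalg.eigvalsh(tangent(x))[:n_eig]
        history.append((eps, eigvals))
    return x, history
```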
In the PD literature, the deviating deformations at a free surface are referred to as the "surface effect". This effect is often eliminated by the introduction of correction terms aiming at a good agreement with classical methods concerning the material behavior [71, 72]. Note that, according to our understanding of PD, a fully nonlocal model implies nonlocal boundaries. Their consequences on the predicted behavior are inherent to the mathematical formulation, and we do not treat them as an error that requires correction.

Fig. 5: Left: critical effective strain \({\bar{\varepsilon }}_{crit}\) versus stiffness ratio \(C_r\). Right: course of the smallest eigenvalue \(EV_{min}\) versus the effective strain \({\bar{\varepsilon }}\) for an exemplary selection of stiffness ratios \(C_r\). Step sizes are automatically refined in the proximity of the critical point.

Numerical study on nonlocal wrinkling characteristics

In this section, we conduct a series of numerical studies to investigate the nonlocal wrinkling behavior of the presented bilayer with regard to several influencing parameters. The aim is to demonstrate the ability of the peridynamic computational framework to model the occurring instabilities and to capture nonlocal effects that are entirely elusive within classical models. To this end, we conduct five numerical parameter studies as follows. We consider two different origins of compression. First, in Sect. 4.1 we study the case of compression with substrate prestretch and systematically vary the stiffness ratio, the horizon size, the film thickness and the interface weighting factor. Second, the case of whole-domain compression is considered in Sect. 4.2. We carry out a nonlocality study by decreasing the horizon size, which allows for a comparison to a computation using FEM.

Compression due to substrate prestretch

In the first part, we model instabilities driven by the relaxation of the prestretched substrate. The numerical model consists of a substrate of an initial width of \(W = 2\) and an initial height of \(H = 0.5\) as well as a film of thickness \(t\) that is varied throughout the study but never exceeds 8% of the substrate height. The substrate is prestretched by 30%. Unless otherwise stated, an interface weighting factor \(\alpha = 0.5\) is used, specifying material A as the film and material B as the substrate. As is inherent to two-dimensional bond-based PD, the Poisson ratio of all modeled materials is \(\nu =1/3\).

Influence of the stiffness ratio

We study the influence of the stiffness ratio \(C_r = C_f/C_s\), with the elastic coefficients \(C_f\) and \(C_s\) of the film and the substrate, respectively, by assuming different values between 50 and 1000 at a constant film thickness of \(t=0.01\) and a constant horizon size of \(\delta = 0.0301\). The resulting wrinkling instabilities are analyzed in terms of their main features, i.e. the critical effective strain in the film \({\bar{\varepsilon }}_{crit}\) and the critical wavelength \(\lambda _{crit}\). Figure 5 (left) compares the numerically determined critical effective strain \({\bar{\varepsilon }}_{crit}\) as a function of \(C_r\) with the predictions of the local analytical solution. The data suggest two findings. First, the critical effective strain decreases with increasing stiffness ratio. This is expected: the stiffer the film, the less resistance it experiences from the compliant substrate, and therefore less effective strain is required to trigger buckling.
Second, our results show that the peridynamic simulations predict the same qualitative trend as the local theory. It is important to bear in mind that quantitative differences are expected, since we compare a local (linear) approach to a nonlocal (nonlinear) continuum model. However, the deviations should decrease for \(C_r \rightarrow \infty \) and \(\delta \rightarrow 0\), which is further elaborated on in Sect. 4.1.2. Our study shows that the nonlocal model predicts a higher critical strain than the local model. It can be concluded that nonlocal models prove to be more stable than local models. This finding is in agreement with reports from the literature for other cases of instabilities, e.g. shear band evolution and localization instability, as described in early contributions to this field, see [73] and [74], among others. Figure 5 (right) depicts the plots of the smallest eigenvalues of the system matrix over the effective film strain for a selection of stiffness ratios. The eigenvalues are computed at each increment throughout the relaxation of the substrate. As the compressive effective strain increases, the eigenvalues eventually drop towards zero since the instability point is approached. With decreasing stiffness ratio, this can be observed at a later stage of relaxation.

Fig. 6: Top left: critical wavelength \(\lambda _{crit}\) versus stiffness ratio \(C_r\). Top right: wrinkling patterns for an exemplary selection of stiffness ratios \(C_r\). Bottom: corresponding deformed configurations obtained by imposing the normalized eigenvector associated with the smallest eigenvalue as deformation to the shape of the bilayer at the onset of instability. The colors refer to the vertical displacement. (Color figure online)

Figure 6 (top left) illustrates the influence of the variation of \(C_r\) on the critical wavelength \(\lambda _{crit}\). The analytical results are included for comparison. The wavelength is calculated via the number of folds \(N\) with \(\lambda _{crit} = W/N\) to be able to compare it against the analytical wavelength derived for growth-induced wrinkling. In line with the theoretical trend, the wavelength grows with increasing stiffness ratio. The discrete jumps in wavelength occur due to the prescribed displacement at the left and right boundaries that allows for either half or full folds. Thus, with a fixed computational domain of finite width, the continuous course of the wavelength over the stiffness ratio cannot be reproduced numerically. Figure 6 (top right) gathers a set of representative examples of the wrinkling pattern for different stiffness ratios. The respective deformed configurations of the domain are depicted in Fig. 6 (bottom). All shown plots result from the eigenvectors corresponding to the smallest eigenvalue at the critical point that are imposed as deformation to obtain the resulting wave pattern. It can be observed in all patterns that the waves close to the boundaries are less pronounced than those in the center. A similar wrinkling behavior has been captured by experiments, as shown in Fig. 1. Our results indicate that nonlocal material behavior might be responsible for this boundary effect. Note that in nonlocal formulations the application of boundary conditions is more complex than in local formulations. We have verified that the boundary effects also occur for different boundary condition application strategies, including a variation of the material type and size of the additional boundary region.
Influence of the horizon size

To study the influence of the horizon size \(\delta \) on the wrinkling characteristics, we carry out a nonlocality study. This is achieved by varying \(\delta \) while fixing the number of neighbors within the horizon by adjusting \(L\) according to \(\delta /L = 3.01\). In this manner, the observed effects can be attributed to the changing horizon sizes. With this study we pursue two objectives. The first goal is to analyze how the level of nonlocality of the material model affects the critical effective strain. Second, we aim at verifying our model through a comparison with the analytical solution for the classical model. It is known from the literature that the PD theory converges to the local solution of CCM for \({\delta \rightarrow 0}\). Therefore, we expect the numerical results to approach the analytical solution with decreasing \(\delta \). The numerical study is conducted with four different decreasing values for \(\delta \) (0.0602, 0.0301, 0.01505, 0.007525) at a constant film thickness \(t=0.02\). We test three different stiffness ratios \(C_r\) (50, 100, 200). We note that we are restricted in the choice of \(L\) and therefore also of \(\delta \). Since we use a uniform grid and keep the film thickness \(t\) constant, it must be possible to resolve \(t\) by \(L\).

Fig. 7: Critical effective strain \({\bar{\varepsilon }}_{crit}\) versus horizon size \(\delta \) for different stiffness ratios \(C_r\).

Fig. 8: Wrinkling patterns for decreasing horizon sizes \(\delta \) in comparison to CCM for \(C_r=50\).

Figure 7 illustrates the critical effective strain \({\bar{\varepsilon }}_{crit}\) as a function of the horizon size \(\delta \). For every modeled stiffness ratio, the critical effective strain increases with increasing horizon size. It can be concluded that larger horizons, i.e. more nonlocal models, require larger effective strains to trigger the instabilities. Moreover, diminishing the horizon size leads to smaller differences between the nonlocal numerical results and the local analytical predictions. As expected, in the limit of \(\delta \rightarrow 0 \) the PD results approach the local solution. This confirms that our simulation model is capable of quantitatively predicting the critical conditions for surface wrinkles. The corresponding wrinkling patterns are depicted in Fig. 8 for the example associated with \(C_r = 50\). In contrast to the perfectly sinusoidal local analytical solution, the PD solutions are noticeably influenced by boundary effects. These effects do not completely vanish for small horizon sizes. This might be explained by the high sensitivity of the instability problem to small numerical disturbances, especially for small horizon sizes. However, the wavelength slightly decreases with shrinking horizon size, showing the trend of approaching the classical solution. In contrast to a standard mesh convergence study for FEM, there are two types of convergence studies in PD due to the two independent parameters of grid spacing \(L\) and horizon size \(\delta \). One option is to fix \(\delta \) while decreasing \(L\), resulting in more neighbors within the horizon of each point. The discretized nonlocal solution is expected to converge to the continuum nonlocal solution. The second option is to keep a sufficient number of neighbors constant by fixing \(\delta /L\) while decreasing \(\delta \). In the limit of \({\delta \rightarrow 0}\) the nonlocal solution is expected to converge to the local solution. A small sketch of the two study types is given below.
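The following small Python helper generates the parameter pairs for both study types (the quoted labels are common PD terminology rather than the paper's own):

```python
def convergence_studies(delta0=0.0602, ratio=3.01, n=4):
    """Generate (delta, L) pairs for the two PD convergence-study types.

    Type 1 ("m-convergence"): fix delta, refine the grid spacing L.
    Type 2 ("delta-convergence"): fix delta/L, shrink the horizon delta.
    """
    type1 = [(delta0, delta0 / (ratio * 2 ** k)) for k in range(n)]
    type2 = [(delta0 / 2 ** k, delta0 / (2 ** k * ratio)) for k in range(n)]
    return type1, type2

# Type 2 with delta0 = 0.0602 reproduces the horizon sizes used here:
# 0.0602, 0.0301, 0.01505, 0.007525, each with delta/L = 3.01.
```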
In this paper, we have conducted the second option in order to investigate the effect of nonlocality. In Fig. 7, the convergence of our PD solution to the local solution is shown. A comprehensive convergence study also following the first option has been carried out in our previous study [59], demonstrating that both types of convergence are achieved.

The degree of nonlocality depends on both the considered length scale and the material at hand. At a certain length scale, e.g. at the atomistic scale, all materials behave nonlocally. Hence nonlocality is material-dependent in the sense that it becomes more significant for some materials at a given length scale than for others. Furthermore, the extent of nonlocality, too, can vary significantly between different materials. The horizon size is a material parameter that captures this extent. At the atomistic scale, it can be compared to the cutoff radius commonly used in molecular dynamics, which can also vary significantly depending on the material and the type of interatomic potential.

Influence of the film thickness

The next parameter of interest is the film thickness \(t\). To investigate its influence on the critical effective strain \({\bar{\varepsilon }}_{crit}\), we consider four different values of \(t\) (0.01, 0.02, 0.03, 0.04). The study is conducted for six different stiffness ratios (50, 75, 100, 125, 150, 200) with a fixed horizon size \(\delta = 0.0301\). As is shown in Fig. 9, the numerical analysis predicts a decrease of the critical effective strain with increasing film thickness. It is interesting to note that this relation is not covered by the analytical solution within the local theory, which renders a constant critical effective strain independent of the film thickness. This observation suggests that the influence of \(t\) on \({\bar{\varepsilon }}_{crit}\) is introduced by nonlocality. In addition, it can be found from Fig. 9 that the nonlocal model renders a higher critical strain than the local solution, which is in accordance with our conclusions from Sects. 4.1.1 and 4.1.2. With increasing film thickness, the difference to the analytical solution shrinks. Since the PD model implies two length scale parameters, i.e. \(\delta \) and \(t\), the relative length scale \(\delta /t\) is an additional measure of nonlocality. Therefore, the approach to the local solution with increasing \(t\) is explained by a decrease of \(\delta /t\), resulting in a more local model.

Fig. 9: Critical effective strain \({\bar{\varepsilon }}_{crit}\) versus film thickness \(t\) for different stiffness ratios \(C_r\).

Fig. 10: Critical effective strain \({\bar{\varepsilon }}_{crit}\) versus horizon size \(\delta \) for different film thicknesses \(t\).

To further clarify the relation between the critical strain and the film thickness, we carry out a nonlocality study for three different film thicknesses (0.01, 0.02, 0.04). That is, we test four different horizon sizes (0.0602, 0.0301, 0.01505, 0.007525) at a constant stiffness ratio \(C_r = 100\). We remark that since we adjust \(L\) according to \(\delta /L=3.01\) and use uniform grids, we are limited to a maximum horizon size \(\delta =0.0301\) for \(t=0.01\). Figure 10 compares the critical effective strain obtained numerically to the local analytical solution. It can be observed that the differences between the numerical results for different film thicknesses decrease with decreasing \(\delta \) and approach the local analytical result.
Consequently, the data demonstrate that the influence of the film thickness on the critical effective strain can be considered a nonlocal effect. We again point out that the film thickness is the only length scale parameter inherent to the local theory. Since PD additionally includes the horizon size, an interplay of the two length scale parameters might cause the observed effect.

Influence of the interface weighting factor

In Sect. 2.2, we introduced the interface weighting factor \(\alpha \) to determine the elastic coefficient \(C^I\) across a material interface. So far, results have been obtained with \(\alpha =0.5\), yielding an intuitive averaging of the material parameters. In this section, to gain a better understanding of \(\alpha \) and its physical interpretation, we perform a set of numerical studies by considering values in the range of \(0\le \alpha \le 1\).

Fig. 11: Critical wavelength \(\lambda _{crit}\) (left) and critical effective strain \({\bar{\varepsilon }}_{crit}\) (right) versus interface weighting factor \(\alpha \) for different stiffness ratios \(C_r\). The circles represent the PD results and the dashed lines represent the local analytical solutions. To investigate the changing trend of the critical effective strain, the resolution of \(\alpha \) is increased towards \(\alpha =0\).

Figure 11 summarizes the results for the film thickness \(t=0.02\), horizon size \(\delta =0.0301\) and three different stiffness ratios \(C_r\) (100, 200, 500). The left graph shows an increase in critical wavelength with increasing \(\alpha \). Since a higher stiffness of the interfacial bonds leads to a higher overall stiffness of the interface layers, this trend is in line with the conclusion in Sect. 4.1.1. The results of the local analytical solution are included in the graph for the sake of completeness. It can be observed that the deviations of the numerical from the analytical results are larger for smaller values of \(\alpha \). However, we point out that the deviations cannot be explained solely by the variation of the interfacial stiffness, as they might also result from nonlocality as well as from the dependence of the wavelength on the computational domain as described in Sect. 4.1.1. Analyzing the right graph in Fig. 11, we make three observations. First, in the proximity of \(\alpha =0\), increasing \(\alpha \) causes a sharp increase of the critical effective strain. Second, once a maximum is reached, the critical effective strain continuously decreases. The latter effect can again be attributed to the higher overall stiffness of the interface region. However, the stiffness of the interface bonds also affects the strength of the attachment between the film and the substrate, resulting in the first observation. For \(\alpha \) approaching zero, the interface bonds are too weak to fully couple the film and the substrate. Consequently, less effective strain is required for the film to overcome the resistance of the substrate and buckle. Figure 12 further illustrates this effect by means of deformation plots for different values of \(\alpha \). Third, the local analytical solution consistently predicts smaller values than the numerical results. This agrees with the findings from Sect. 4.1.2, where we established that a nonlocal material model yields a higher critical effective strain than a local model.
Fig. 12: Left: deformation plots showing the wrinkling patterns for different interface weighting parameters \(\alpha \) at \(C_r = 200\). Right: zoom boxes illustrating the observed effect: the nonlocal interface comprises multiple layers with a deviating stiffness. For larger values of \(\alpha \), the interface layers are more strongly connected to the film and can be identified by a slight separation from the substrate. With decreasing interface stiffness, the bonding between the film and the substrate weakens until a detachment can be observed for \(\alpha =0\). The plots result from imposing the normalized eigenvector corresponding to the smallest eigenvalue at the critical point as displacement to the current shape of the bilayer. The colors refer to the displacement in the vertical direction. (Color figure online)

Whole-domain compression

In this section, wrinkling instabilities induced by a whole-domain compression are considered. We conduct a nonlocality study analogous to Sect. 4.1.2. The aim is to investigate the effect of nonlocality for the origin of compression at hand. Furthermore, we intend to compare the obtained numerical results not only against analytical but also against numerical results based on the local theory. We expect to see differences between the nonlocal PD solution and the local results that diminish with decreasing level of nonlocality. The dimensions of the numerical model are \(W=3\), \(H=0.5\) and \(t=0.02\). The numerical analysis in the framework of CCM is carried out with FEM enhanced by an eigenvalue analysis using MATLAB. The domain is discretized into 30000 bi-quadratic quadrilateral elements. The peridynamic study comprises four different horizon sizes \(\delta \) (0.0602, 0.0301, 0.01505, 0.007525) while \(\delta /L=3.01\) is kept constant, resulting in different degrees of nonlocality. The material parameters are chosen as \(C_r = 100\), \(\alpha =0.5\) and \(\nu =1/3\) pursuant to bond-based PD. The FEM model employs a Neo-Hookean material model.

Figure 13 shows the results of the nonlocality study in terms of the critical effective strain. In accordance with the findings from Sect. 4.1.2, the graph suggests that nonlocal effects cause an increase in critical effective strain. Following our expectations, the PD results approach both the numerical and the analytical local solution. We attribute the small deviations of the FEM solution from the analytical solution to the fact that the analytical solution is based on linear elasticity. The results of this study demonstrate that our methodology using PD is capable of modeling the effect of nonlocality by varying \(\delta \) while approximating the local theory in the limit of \(\delta \rightarrow 0\).

Fig. 13: Critical effective strain \({\bar{\varepsilon }}_{crit}\) versus horizon size \(\delta \) for \(C_r = 100\).

Fig. 14: The results of the eigenvalue analysis using FEM (top) and PD (bottom). The course of the five smallest eigenvalues over the effective strain is depicted in the left diagrams. The effective strain increments are refined in the proximity of the critical point, where the eigenvalues experience a sharp drop. The zoom boxes highlight the differences: with FEM, the smallest eigenvalue passes through zero; with PD, the smallest eigenvalue reaches a plateau slightly above zero. The plots on the right depict the wrinkling patterns resulting from the eigenvectors corresponding to the smallest eigenvalues.

Figure 14 illustrates the results of the eigenvalue analysis for the FEM simulation as well as for the PD simulation with \(\delta = 0.007525\).
The purpose is to elaborate on two main aspects that distinguish an eigenvalue analysis with PD from an eigenvalue analysis with FEM. First, although in both cases the smallest eigenvalues approach zero when reaching the critical effective strain, in the PD analysis none of the eigenvalues eventually hits zero. Recall that the eigenvalues correspond to the system matrix of the equilibrium state at the current increment. Since there are virtually no perturbations present in the FEM framework, the system remains in an unstable equilibrium, which manifests itself in an eigenvalue passing through zero. Due to the small perturbations that are naturally inherent to the PD framework, the system switches to the undulated state. Since this equilibrium state is stable, no negative eigenvalues can be observed. Second, the wrinkling pattern resulting from the FEM analysis is a perfectly sinusoidal wave. The PD analysis, however, renders a sinusoidal wave that does not have a constant amplitude. It can be observed that the amplitude decreases towards the boundaries of the domain. Therefore, in contrast to FEM, PD provides an understanding of the boundary effect witnessed in experiments. The nonlocal approach uncovers an influence of the boundaries on the wrinkling that is not accessible to local models.

Conclusion

In the past, the buckling behavior of a bilayer has been the subject of a multitude of studies mainly based on classical continuum mechanics. Here, as a first attempt to add a nonlocal perspective to the topic, we have presented a numerical study of wrinkling instabilities in a film-substrate system using peridynamics. We have developed a computational model that is capable of predicting the main characteristics of the wrinkling pattern. The numerical procedure includes an accompanying eigenvalue analysis that allows us to precisely capture the critical conditions of the instability. We have presented the results of several parameter studies considering two different origins of compression, namely substrate prestretch and whole-domain compression. Throughout the studies, we have repeatedly validated our computational model by showing that our results converge to the local results for a decreasing horizon size. The comparison of our nonlocal results regarding the influence of different controlling parameters with CCM suggests both similarities and differences. First, the influence of the stiffness ratio follows the same trend as in the local theory. The nonlocality manifests itself in quantitative differences. Second, an effect of the film thickness on the critical effective strain is found that is not covered by local predictions. Third, the eigenvalues of the stiffness matrix do not pass through zero at the critical point, in contrast to what is observed with FEM. This can be attributed to nonlocal surface effects that act as perturbations and cause the system to switch to the stable wrinkled state. Another key finding of our study is that the nonlocal model is able to explain the experimental observation of deviating wrinkling behavior at the boundaries of the bilayer. The PD simulations show weaker waves towards the boundaries, while this effect is not reproduced in FEM simulations. This suggests nonlocality as a physical origin of the experimentally found behavior. Moreover, we have established an approach to determine the elastic coefficient across a material interface. Our study of the consequences on the material response reveals a link between the stiffness of interfacial bonds and the strength of the film attachment.
In summary, this paper shows that a nonlocal analysis of instability patterns in bilayers can provide new insights into the nature of the phenomenon. PD has been found to be a suitable tool to realize the nonlocal approach. In future research, this work will be extended to an investigation of nonlocal effects on secondary wrinkling instabilities in the post-buckling regime. Furthermore, modeling materials with different levels of compressibility will be achieved by employing continuum-kinematics-inspired peridynamics [58].

References

[1] Amar MB, Jia F (2013) Anisotropic growth shapes intestinal tissues during embryogenesis. Proc Nat Acad Sci 110(26):10525–10530. https://doi.org/10.1073/pnas.1217391110
[2] Shyer AE, Tallinen T, Nerurkar NL, Wei Z, Gil ES, Kaplan DL, Tabin CJ, Mahadevan L (2013) Villification: how the gut gets its villi. Science 342(6155):212–218. https://doi.org/10.1126/science.1238842
[3] Wang L, Castro CE, Boyce MC (2011) Growth strain-induced wrinkled membrane morphology of white blood cells. Soft Matter 7(24):11319–11324. https://doi.org/10.1039/C1SM06637D
[4] Hallett MB, von Ruhland CJ, Dewitt S (2008) Chemotaxis and the cell surface-area problem. Nat Rev Mol Cell Biol 9(8):662. https://doi.org/10.1038/nrm2419-c1
[5] Cerda E, Mahadevan L (2003) Geometry and physics of wrinkling. Phys Rev Lett 90(7):074302
[6] Genzer J, Groenewold J (2006) Soft matter with hard skin: from skin wrinkles to templating and material characterization. Soft Matter 2(4):310–323. https://doi.org/10.1039/B516741H
[7] Budday S, Raybaud C, Kuhl E (2014a) A mechanical model predicts morphological abnormalities in the developing human brain. Sci Rep 4(1):1–7. https://doi.org/10.1038/srep05644
[8] Budday S, Steinmann P, Kuhl E (2014b) The role of mechanics during brain development. J Mech Phys Solids 72:75–92. https://doi.org/10.1016/j.jmps.2014.07.010
[9] Stafford CM, Harrison C, Beers KL, Karim A, Amis EJ, VanLandingham MR, Kim HC, Volksen W, Miller RD, Simonyi EE (2004) A buckling-based metrology for measuring the elastic moduli of polymeric thin films. Nat Mater 3(8):545–550. https://doi.org/10.1038/nmat1175
[10] Yang S, Khare K, Lin PC (2010) Harnessing surface wrinkle patterns in soft matter. Adv Funct Mater 20(16):2550–2564. https://doi.org/10.1002/adfm.201000034
[11] Schweikart A, Fery A (2009) Controlled wrinkling as a novel method for the fabrication of patterned surfaces. Microchim Acta 165(3–4):249–263. https://doi.org/10.1007/s00604-009-0153-3
[12] Haji Hosseinloo A, Turitsyn K (2017) Energy harvesting via wrinkling instabilities. Appl Phys Lett 110(1):013901. https://doi.org/10.1063/1.4973524
[13] Xue Z, Wang C, Tan H (2021) Controlled surface wrinkling as a novel strategy for the compressibility-tunable PZT film-based energy harvesting system. Extreme Mech Lett 42:101102. https://doi.org/10.1016/j.eml.2020.101102
[14] Biot MA (1963) Surface instability of rubber in compression. Appl Sci Res Sect A 12(2):168–182
[15] Goriely A, Destrade M, Ben Amar M (2006) Instabilities in elastomers and in soft tissues. Q J Mech Appl Math 59(4):615–630. https://doi.org/10.1093/qjmam/hbl017
[16] Bakiler AD, Javili A (2020) Bifurcation behavior of compressible elastic half-space under plane deformations. Int J Non-Linear Mech 126:103553. https://doi.org/10.1016/j.ijnonlinmec.2020.103553
[17] Moulton D, Goriely A (2011) Circumferential buckling instability of a growing cylindrical tube. J Mech Phys Solids 59(3):525–537. https://doi.org/10.1016/j.jmps.2011.01.005
[18] Li B, Jia F, Cao YP, Feng XQ, Gao H (2011) Surface wrinkling patterns on a core-shell soft sphere. Phys Rev Lett 106(23):234301. https://doi.org/10.1103/PhysRevLett.106.234301
[19] Holland M, Li B, Feng X, Kuhl E (2017) Instabilities of soft films on compliant substrates. J Mech Phys Solids 98:350–365. https://doi.org/10.1016/j.jmps.2016.09.012
[20] Budday S, Andres S, Walter B, Steinmann P, Kuhl E (2017) Wrinkling instabilities in soft bilayered systems. Philos Trans R Soc A Math Phys Eng Sci 375(2093):20160163. https://doi.org/10.1098/rsta.2016.0163
[21] Lejeune E, Javili A, Linder C (2016) Understanding geometric instabilities in thin films via a multi-layer model. Soft Matter 12(3):806–816. https://doi.org/10.1039/C5SM02082D
[22] Li B, Cao YP, Feng XQ, Gao H (2012) Mechanics of morphological instabilities and surface wrinkling in soft materials: a review. Soft Matter 8(21):5728–5745. https://doi.org/10.1039/C2SM00011C
[23] Tan Y, Hu B, Song J, Chu Z, Wu W (2020) Bioinspired multiscale wrinkling patterns on curved substrates: an overview. Nano-Micro Lett 12(1):1–42. https://doi.org/10.1007/s40820-020-00436-y
[24] Amar MB, Goriely A (2005) Growth and instability in elastic tissues. J Mech Phys Solids 53(10):2284–2319. https://doi.org/10.1016/j.jmps.2005.04.008
[25] Andres S, Steinmann P, Budday S (2018) The origin of compression influences geometric instabilities in bilayers. Proc R Soc A Math Phys Eng Sci 474(2217):20180267. https://doi.org/10.1098/rspa.2018.0267
[26] Biot MA (1937) Bending of an infinite beam on an elastic foundation. J Appl Mech 203:A1–A7
[27] Allen HG (1969) Analysis and design of structural sandwich panels. Pergamon Press, Oxford
[28] Javili A, Bakiler AD (2019) A displacement-based approach to geometric instabilities of a film on a substrate. Math Mech Solids 24(9):2999–3023. https://doi.org/10.1177/1081286519826370
[29] Cao Y, Hutchinson JW (2012) Wrinkling phenomena in neo-Hookean film/substrate bilayers. J Appl Mech. https://doi.org/10.1115/1.4005960
[30] Hutchinson JW (2013) The role of nonlinear substrate elasticity in the wrinkling of thin films. Philos Trans R Soc A Math Phys Eng Sci 371(1993):20120422. https://doi.org/10.1098/rsta.2012.0422
[31] Lejeune E, Javili A, Linder C (2016) An algorithmic approach to multi-layer wrinkling. Extreme Mech Lett 7:10–17. https://doi.org/10.1016/j.eml.2016.02.008
[32] Hu H, Huang C, Liu XH, Hsia KJ (2016) Thin film wrinkling by strain mismatch on 3D surfaces. Extreme Mech Lett 8:107–113. https://doi.org/10.1016/j.eml.2016.04.005
[33] Bowden N, Brittain S, Evans AG, Hutchinson JW, Whitesides GM (1998) Spontaneous formation of ordered structures in thin films of metals supported on an elastomeric polymer. Nature 393(6681):146–149. https://doi.org/10.1038/30193
[34] Jin L, Auguste A, Hayward RC, Suo Z (2015) Bifurcation diagrams for the formation of wrinkles or creases in soft bilayers. J Appl Mech. https://doi.org/10.1115/1.4030384
[35] Volynskii A, Bazhenov S, Lebedeva O, Bakeev N (2000) Mechanical buckling instability of thin coatings deposited on soft polymer substrates. J Mater Sci 35(3):547–554. https://doi.org/10.1023/A:1004707906821
[36] Auguste A, Jin L, Suo Z, Hayward RC (2017) Post-wrinkle bifurcations in elastic bilayers with modest contrast in modulus. Extreme Mech Lett 11:30–36. https://doi.org/10.1016/j.eml.2016.11.013
[37] Javili A, Dortdivanlioglu B, Kuhl E, Linder C (2015) Computational aspects of growth-induced instabilities through eigenvalue analysis. Comput Mech 56(3):405–420. https://doi.org/10.1007/s00466-015-1178-6
[38] Dortdivanlioglu B, Javili A, Linder C (2017) Computational aspects of morphological instabilities using isogeometric analysis. Comput Methods Appl Mech Eng 316:261–279. https://doi.org/10.1016/j.cma.2016.06.028
[39] Nikravesh S, Ryu D, Shen YL (2020a) Instabilities of thin films on a compliant substrate: direct numerical simulations from surface wrinkling to global buckling. Sci Rep 10(1):1–19. https://doi.org/10.1038/s41598-020-62600-z
[40] Nikravesh S, Ryu D, Shen YL (2020b) Instability driven surface patterns: insights from direct three-dimensional finite element simulations. Extreme Mech Lett 39:100779. https://doi.org/10.1016/j.eml.2020.100779
[41] Chavoshnejad P, More S, Razavi MJ (2020) From surface microrelief to big wrinkles in skin: a mechanical in-silico model. Extreme Mech Lett 36:100647. https://doi.org/10.1016/j.eml.2020.100647
[42] Zheng Y, Li GY, Cao Y, Feng XQ (2017) Wrinkling of a stiff film resting on a fiber-filled soft substrate and its potential application as tunable metamaterials. Extreme Mech Lett 11:121–127. https://doi.org/10.1016/j.eml.2016.12.002
[43] Mei H, Landis CM, Huang R (2011) Concomitant wrinkling and buckle-delamination of elastic thin films on compliant substrates. Mech Mater 43(11):627–642. https://doi.org/10.1016/j.mechmat.2011.08.003
[44] Wang Q, Zhao X (2015) A three-dimensional phase diagram of growth-induced surface instabilities. Sci Rep 5(1):1–10. https://doi.org/10.1038/srep08887
[45] Silling SA (2000) Reformulation of elasticity theory for discontinuities and long-range forces. J Mech Phys Solids 48(1):175–209. https://doi.org/10.1016/S0022-5096(99)00029-0
[46] Oterkus S, Madenci E, Agwai A (2014a) Fully coupled peridynamic thermomechanics. J Mech Phys Solids 64:1–23. https://doi.org/10.1016/j.jmps.2013.10.011
[47] Oterkus S, Madenci E, Agwai A (2014b) Peridynamic thermal diffusion. J Comput Phys 265:71–96. https://doi.org/10.1016/j.jcp.2014.01.027
[48] Oterkus S (2015) Peridynamics for the solution of multiphysics problems. PhD thesis, The University of Arizona
[49] Askari E, Bobaru F, Lehoucq R, Parks M, Silling S, Weckner O (2008) Peridynamics for multiscale materials modeling. J Phys Conf Ser 125:012078. https://doi.org/10.1088/1742-6596/125/1/012078
[50] Bobaru F, Ha YD (2011) Adaptive refinement and multiscale modeling in 2D peridynamics. Int J Multiscale Comput Eng. https://doi.org/10.1615/IntJMultCompEng.2011002793
[51] Ebrahimi S, Steigmann D, Komvopoulos K (2015) Peridynamics analysis of the nanoscale friction and wear properties of amorphous carbon thin films. J Mech Mater Struct 10(5):559–572. https://doi.org/10.2140/jomms.2015.10.559
[52] Kefal A, Sohouli A, Oterkus E, Yildiz M, Suleman A (2019) Topology optimization of cracked structures using peridynamics. Contin Mech Thermodyn 31(6):1645–1672. https://doi.org/10.1007/s00161-019-00830-x
[53] Sohouli A, Kefal A, Abdelhamid A, Yildiz M, Suleman A (2020) Continuous density-based topology optimization of cracked structures using peridynamics. Struct Multidiscip Optim 62:2375–2389. https://doi.org/10.1007/s00158-020-02608-1
[54] Lejeune E, Linder C (2017) Modeling tumor growth with peridynamics. Biomech Model Mechanobiol 16(4):1141–1157. https://doi.org/10.1007/s10237-017-0876-8
[55] Lejeune E, Linder C (2020) Interpreting stochastic agent-based models of cell death. Comput Methods Appl Mech Eng 360:112700. https://doi.org/10.1016/j.cma.2019.112700
[56] Javili A, Morasata R, Oterkus E, Oterkus S (2019) Peridynamics review. Math Mech Solids 24(11):3714–3739. https://doi.org/10.1177/1081286518803411
[57] Silling SA, Epton M, Weckner O, Xu J, Askari E (2007) Peridynamic states and constitutive modeling. J Elast 88(2):151–184. https://doi.org/10.1007/s10659-007-9125-1
[58] Javili A, McBride A, Steinmann P (2019) Continuum-kinematics-inspired peridynamics: mechanical problems. J Mech Phys Solids 131:125–146. https://doi.org/10.1016/j.jmps.2019.06.016
[59] Javili A, Firooz S, McBride AT, Steinmann P (2020) The computational framework for continuum-kinematics-inspired peridynamics. Comput Mech 66(4):795–824. https://doi.org/10.1007/s00466-020-01885-3
[60] Cheng Z, Zhang G, Wang Y, Bobaru F (2015) A peridynamic model for dynamic fracture in functionally graded materials. Compos Struct 133:529–546. https://doi.org/10.1016/j.compstruct.2015.07.047
[61] Tan Y, Liu Q, Zhang L, Liu L, Lai X (2020) Peridynamics model with surface correction near insulated cracks for transient heat conduction in functionally graded materials. Materials 13(6):1340. https://doi.org/10.3390/ma13061340
[62] Ozdemir M, Kefal A, Imachi M, Tanaka S, Oterkus E (2020) Dynamic fracture analysis of functionally graded materials using ordinary state-based peridynamics. Compos Struct 244:112296. https://doi.org/10.1016/j.compstruct.2020.112296
[63] Cheng Z, Sui Z, Yin H, Feng H (2019) Numerical simulation of dynamic fracture in functionally graded materials using peridynamic modeling with composite weighted bonds. Eng Anal Boundary Elem 105:31–46. https://doi.org/10.1016/j.enganabound.2019.04.005
[64] Rahimi MN, Kefal A, Yildiz M (2021) An improved ordinary-state based peridynamic formulation for modeling FGMs with sharp interface transitions. Int J Mech Sci 197:106322. https://doi.org/10.1016/j.ijmecsci.2021.106322
[65] Liao Y, Liu L, Liu Q, Lai X, Assefa M, Liu J (2017) Peridynamic simulation of transient heat conduction problems in functionally gradient materials with cracks. J Therm Stresses 40(12):1484–1501. https://doi.org/10.1080/01495739.2017.1358070
[66] Diyaroglu C, Oterkus S, Oterkus E, Madenci E (2017) Peridynamic modeling of diffusion by using finite-element analysis. IEEE Trans Compon Packag Manuf Technol 7(11):1823–1831. https://doi.org/10.1109/TCPMT.2017.2737522
[67] Ebrahimi S (2020) Mechanical behavior of materials at multiscale peridynamic theory and learning-based approaches. PhD thesis, UC Berkeley. https://escholarship.org/uc/item/9j73s3p8
[68] Bakiler AD, Dortdivanlioglu B, Javili A (2021) From beams to bilayers: a unifying approach towards instabilities of compressible domains under plane deformations. Int J Non-Linear Mech. https://doi.org/10.1016/j.ijnonlinmec.2021.103752
[69] Silling SA, Askari E (2005) A meshfree method based on the peridynamic model of solid mechanics. Comput Struct 83(17–18):1526–1535. https://doi.org/10.1016/j.compstruc.2004.11.026
[70] Rädel M, Bednarek AJ, Schmidt J, Willberg C (2017) Peridynamics: convergence & influence of probabilistic material distribution on crack initiation. In: 6th ECCOMAS thematic conference on the mechanical response of composites
[71] Madenci E, Oterkus E (2014) Peridynamic theory and its applications. Springer, New York
[72] Le Q, Bobaru F (2018) Surface corrections for peridynamic models in elasticity and fracture. Comput Mech 61(4):499–518. https://doi.org/10.1007/s00466-017-1469-1
[73] Bazant ZP, Pijaudier-Cabot G (1988) Nonlocal continuum damage, localization instability and convergence. J Appl Mech. https://doi.org/10.1115/1.3173674
[74] Pijaudier-Cabot G, Bažant ZP (1987) Nonlocal damage theory. J Eng Mech 113(10):1512–1533. https://doi.org/10.1061/(ASCE)0733-9399(1987)113:10(1512)
Acknowledgments. AJ gratefully acknowledges the support provided by the Scientific and Technological Research Council of Turkey (TÜBITAK) Career Development Program, Grant Number 218M700. We are gratefully indebted to Dr. Silvia Budday and Sebastian Andres for providing us with additional visualization of their experiments [20], presented in Fig. 1. Open Access funding enabled and organized by Projekt DEAL.

Author information. Marie Laurien and Paul Steinmann: Institute of Applied Mechanics, University of Erlangen-Nuremberg, Egerlandstr. 5, 91058 Erlangen, Germany. Ali Javili: Department of Mechanical Engineering, Bilkent University, 06800 Ankara, Turkey. Correspondence to Marie Laurien.

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Cite this article: Laurien M, Javili A, Steinmann P (2021) Nonlocal wrinkling instabilities in bilayered systems using peridynamics. Comput Mech 68:1023–1037. https://doi.org/10.1007/s00466-021-02057-7

Keywords: wrinkling instabilities, nonlocality, peridynamics, bilayered systems
Meet me up in space!

By: Ursula Whitcher

Complex space missions rely on the ability to bring two spacecraft together, a procedure called orbital rendezvous. A spacecraft docking at the International Space Station is a typical example. Historically, rendezvous was a vital component of the Apollo lunar missions. While three astronauts traveled to the moon in a single craft, the command module, two of them descended to the lunar surface in a second craft, the lunar excursion module (LEM). A successful mission required the ascending LEM to rendezvous with the command module before returning to Earth.

(Image: Gemini 4 capsule at the Smithsonian National Air and Space Museum.)

The first attempt at orbital rendezvous was one component of the Gemini IV mission in 1965, when the pilot James McDivitt tried to bring his Gemini capsule close to the spent booster that had lifted them into orbit. Upon first seeing the booster at an estimated 120 meters, McDivitt aimed the capsule at the booster and thrusted toward it. Rather than closing the distance, however, the target seemed to move down and away in defiance of everyday intuition. He repeated this procedure several times with similar results. NASA engineer André Meyer later recalled, "There is a good explanation [for] what went wrong with rendezvous." Mission planners "just didn't understand or reason out the orbital mechanics involved" due to competing mission priorities. That's what we're going to do here. This column will explain the relative motion of two spacecraft flying closely in Earth orbit and why that motion behaves in such a counter-intuitive way.

Science fiction films often depict spacecraft that continually alter their trajectories by firing their engines. In the real world, however, fuel is a scarce resource due to the energy required to lift the fuel into space. As a result, spacecraft spend most of their time coasting and only apply short bursts of thrust at opportune moments to adjust their trajectory. This means that a spacecraft operating near, say, Earth or the Moon will spend most of its time traveling in an elliptical orbit, as dictated by Kepler's first law. We therefore imagine a target spacecraft $T$ traveling in a circular orbit, and an interceptor spacecraft $I$ attempting to rendezvous with the target and traveling in another elliptical orbit. It's not essential that the target's orbit be circular, but it's a convenient assumption for our purposes. The object of orbital rendezvous is to guide the interceptor so that it comes together with the target.

Equations of relative motion

The interceptor perceives the target as a fixed point that it wants to move toward. For this reason, we are interested in describing the motion of the interceptor in a coordinate system that rotates with the target, which means we want to understand the evolution of the vector joining the spacecraft. As a note to the reader, we'll walk through some calculations that explain the evolution of this vector and eventually find a rather elegant description. I'd like to give some intuition for what controls the motion, but I won't give every detail and trust that the reader can fill in as much or as little as they wish.
This exposition follows a 1962 technical report by NASA engineer Donald Mueller and a more recent survey by Bradley Carroll. Apollo 11 astronaut Edwin "Buzz" Aldrin's M.I.T. Ph.D. thesis covers similar ground.

We'll denote the interceptor's location by $I$ and the target's by $T$. The vector ${\mathbf r}$ measures the displacement of the interceptor from the target. The vectors ${\mathbf R}^*$ and ${\mathbf r}^*$ measure the displacement of the target and interceptor from Earth's center. Since our goal is to understand the motion of the interceptor $I$ relative to the target $T$, we will focus our attention on ${\mathbf r}$. Moreover, we will consider a coordinate system that rotates with the target $T$ as shown below. For convenience, we will assume that the target is moving in a circular orbit of radius $R_0$ in the $xy$-plane. We see that ${\mathbf i}$ is a unit vector pointing in the direction of the target's motion, ${\mathbf j}$ points radially from the center of Earth, and ${\mathbf k}$ is a unit vector pointing out of the page. The vectors ${\mathbf i}^*$, ${\mathbf j}^*$, and ${\mathbf k}^* = {\mathbf k}$ are fixed. In general, quantities denoted with a $*$ are with respect to this fixed, Earth-centered coordinate system, and quantities without a $*$ are with respect to the rotating, target-centered coordinate system.

We can describe
$$ {\mathbf R}^* = R_0\left[\cos(\omega_0t){\mathbf i}^* -\sin(\omega_0t){\mathbf j}^*\right], $$
where $\omega_0$ is the angular velocity of the target. Differentiating twice gives the acceleration
$$ \frac{d^2}{dt^2}{\mathbf R}^* = R_0\omega_0^2\left[-\cos(\omega_0t){\mathbf i}^* +\sin(\omega_0t){\mathbf j}^*\right]. $$
Since gravity is the only force acting on the target, the magnitude of the acceleration vector is $GM/R_0^2$, where $G$ is the gravitational constant and $M$ is Earth's mass. This tells us that $R_0\omega_0^2 = GM/R_0^2$, so we have
$$ \omega_0=\sqrt{\frac{GM}{R_0^3}}. $$
Notice that the angular velocity decreases as the target's altitude increases. This is really an expression of Kepler's Third Law for circular orbits. If $P$ is the period of the orbit, then $\omega_0P = 2\pi$, which says that
$$ P^2 = \frac{4\pi^2}{\omega_0^2} = \frac{4\pi^2}{GM}R_0^3 $$
so that the square of the period is proportional to the cube of the radius.
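As a quick sanity check on the formula $\omega_0 = \sqrt{GM/R_0^3}$, here is a short computation; the gravitational parameter is a standard value, and the orbit radius is an illustrative choice roughly matching the International Space Station:

```python
import math

GM = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
R0 = 6.371e6 + 420e3    # Earth's radius plus an ISS-like altitude of 420 km, m

omega0 = math.sqrt(GM / R0**3)   # angular velocity, rad/s
P = 2 * math.pi / omega0         # orbital period, s

print(f"omega0 = {omega0:.6f} rad/s")   # about 0.00113 rad/s
print(f"period = {P/60:.1f} minutes")   # about 93 minutes
```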
We would like to relate rates of change measured in the fixed, Earth-centered coordinate system to those measured in the rotating, target-centered system. In particular, we are interested in the derivatives
$$\frac{d^*}{dt}{\mathbf r},\hspace{24pt} \frac{d}{dt}{\mathbf r}, $$
the first of which measures the rate of change of ${\mathbf r}$ in the fixed coordinate system while the second measures the rate of change in the rotating coordinate system. Writing
$$ {\mathbf r} = x{\mathbf i} + y{\mathbf j} + z{\mathbf k} $$
shows us that
$$ \frac{d^*}{dt}{\mathbf r} = \left(\frac{d}{dt}x\right){\mathbf i} + \left(\frac{d}{dt}y\right){\mathbf j} + \left(\frac{d}{dt}z\right){\mathbf k} + x\frac{d}{dt}{\mathbf i} + y\frac{d}{dt}{\mathbf j}. $$
Notice that there are some additional terms that come from differentiating the coordinate vectors. Introducing the angular velocity vector ${\mathbf \omega} = -\omega_0{\mathbf k}$ allows us to see, with a little calculation, that
$$ \frac{d^*}{dt}{\mathbf r} = \frac{d}{dt}{\mathbf r} + \omega\times{\mathbf r}. $$
After differentiating again, one can show that
$$ \frac{d^2}{dt^2}{\mathbf r} = \frac{{d^*}^2}{dt^2}{\mathbf r} - \omega\times(\omega\times{\mathbf r}) - 2\omega\times\frac{d}{dt}{\mathbf r}. $$
Remember that ${\mathbf r} = {\mathbf r}^* - {\mathbf R}^*$ so that
$$ \frac{{d^*}^2}{dt^2}{\mathbf r} = \frac{{d^*}^2}{dt^2}{\mathbf r}^* - \frac{{d^*}^2}{dt^2}{\mathbf R}^* = -\frac{GM}{|{\mathbf r}^*|^3}{\mathbf r}^* +\frac{GM}{|{\mathbf R}^*|^3}{\mathbf R}^*. $$
Since the target is moving in a circular orbit of radius $R_0$, we see that ${\mathbf R}^* = R_0{\mathbf j}$. Also, because ${\mathbf r}^* = {\mathbf R}^* + {\mathbf r}$, we have
$$ {\mathbf r}^* = x{\mathbf i} + (R_0+y){\mathbf j} + z{\mathbf k}. $$
Since we are interested in guiding the interceptor to rendezvous with the target, we will assume that the distance between the target and interceptor is much less than the radius of the target's orbit. For instance, ${\mathbf r} = x{\mathbf i} + y{\mathbf j} + z{\mathbf k}$ may represent a length of a few kilometers, while ${\mathbf R}^* = R_0{\mathbf j}$ represents a length of several thousands of kilometers. This leads us to approximate $|{\mathbf r}^*| \approx R_0 + y$ and to apply the linear approximation
$$ \frac{1}{|{\mathbf r}^*|^3} \approx \frac{1}{R_0^3} -\frac{3y}{R_0^4}. $$
With a little algebra, we can write the components of the vector $\frac{d^2}{dt^2}{\mathbf r}$ as
$$ \begin{aligned} \frac{d^2x}{dt^2} & = -2\omega_0 \frac{dy}{dt} \\ \frac{d^2y}{dt^2} & = 3\omega_0^2y + 2\omega_0\frac{dx}{dt} \\ \frac{d^2z}{dt^2} & = -\omega_0^2z \end{aligned} $$
These are known as the Hill equations that describe the interceptor's path in the rotating coordinate frame, and we will see how to find explicit solutions to them.

Ellipses everywhere

The equation for the $z$-coordinate is a simple harmonic oscillator whose solution is
$$z(t) = A\cos(\omega_0t) + B\sin(\omega_0t). $$
Remember that the $z$-axis is fixed in both coordinate systems. This form for $z(t)$ simply expresses the fact that in the fixed, Earth-centric coordinate system, the interceptor is traveling in an elliptical orbit about Earth's center. More specifically, this expression describes the motion of the interceptor out of the plane of the target's orbit. To achieve orbital rendezvous, we need to align the plane of the interceptor with that of the target. In practice, this incurs a considerable cost in fuel, so care is taken to launch the interceptor in an orbital plane that's close to that of the target. We will assume that the interceptor has performed such a maneuver so that its orbital plane agrees with that of the target. In other words, we'll assume that $z(t) = 0$ constantly and focus on the other two equations:
$$ \begin{aligned} \frac{d^2x}{dt^2} & = -2\omega_0 \frac{dy}{dt} \\ \frac{d^2y}{dt^2} & = 3\omega_0^2y + 2\omega_0\frac{dx}{dt} \end{aligned} $$
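It is worth pausing to watch these equations in action. The following minimal sketch integrates them with a classical fourth-order Runge-Kutta step and replays a Gemini IV-style burn: the interceptor starts 120 meters behind the target in the same circular orbit and thrusts directly toward it (the angular velocity and burn size are illustrative choices):

```python
import numpy as np

def hill_rhs(state, omega0):
    """Right-hand side of the planar Hill equations for state (x, y, vx, vy)."""
    x, y, vx, vy = state
    return np.array([vx, vy,
                     -2.0 * omega0 * vy,                        # x'' = -2 w0 y'
                     3.0 * omega0**2 * y + 2.0 * omega0 * vx])  # y'' = 3 w0^2 y + 2 w0 x'

def rk4_step(state, dt, omega0):
    """One classical fourth-order Runge-Kutta step."""
    k1 = hill_rhs(state, omega0)
    k2 = hill_rhs(state + 0.5 * dt * k1, omega0)
    k3 = hill_rhs(state + 0.5 * dt * k2, omega0)
    k4 = hill_rhs(state + dt * k3, omega0)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Interceptor 120 m behind the target, burning 0.1 m/s straight toward it.
omega0 = 1.13e-3                            # rad/s, a low-Earth-orbit value
state = np.array([-120.0, 0.0, 0.1, 0.0])   # x, y, vx, vy
dt = 2 * np.pi / omega0 / 2000              # one orbital period in 2000 steps
trajectory = []
for _ in range(2000):
    state = rk4_step(state, dt, omega0)
    trajectory.append(state[:2].copy())
# The interceptor climbs above the target's orbit and falls behind:
# after one period the gap has grown from 120 m to roughly 1.8 km.
```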
Ellipses everywhere

The equation for the $z$-coordinate is a simple harmonic oscillator whose solution is $$z(t) = A\cos(\omega_0 t) + B\sin(\omega_0 t). $$ Remember that the $z$-axis is fixed in both coordinate systems. This form for $z(t)$ simply expresses the fact that, in the fixed, Earth-centered coordinate system, the interceptor is traveling in an elliptical orbit about Earth's center. More specifically, this expression describes the motion of the interceptor out of the plane of the target's orbit. To achieve orbital rendezvous, we need to align the plane of the interceptor with that of the target. In practice, this incurs a considerable cost in fuel, so care is taken to launch the interceptor in an orbital plane that's close to that of the target. We will assume that the interceptor has performed such a maneuver so that its orbital plane agrees with that of the target. In other words, we'll assume that $z(t) = 0$ identically and focus on the other two equations: $$ \begin{aligned} \frac{d^2x}{dt^2} & = -2\omega_0 \frac{dy}{dt} \\ \frac{d^2y}{dt^2} & = 3\omega_0^2y + 2\omega_0\frac{dx}{dt} \end{aligned} $$

We can develop some intuition by thinking about these equations more carefully. Remember that gravity is the only force acting on the interceptor. These equations tell us, however, that in the rotating coordinate system, the interceptor behaves as if it were subjected to two forces per unit mass: $$ \begin{aligned} {\mathbf F}_{\mbox{Tid}} & = 3\omega_0^2y\,{\mathbf j} \\ {\mathbf F}_{\mbox{Cor}} & = -2\omega_0\frac{dy}{dt}{\mathbf i}+2\omega_0\frac{dx}{dt}{\mathbf j} = -2\omega\times\frac{d}{dt}{\mathbf r}. \end{aligned} $$ The first, ${\mathbf F}_{\mbox{Tid}}$, known as the tidal force, tends to push the interceptor away from the target's circular orbit and makes $y=0$ an unstable equilibrium.

[Figures: the tidal force and the Coriolis force]

The second force, the Coriolis force, causes the interceptor to rotate counterclockwise as it moves in the rotating coordinate system. Imagine what happens if we start just a little above the $x$-axis. The tidal force pushes us upward, further away from the axis, and causes us to pick up speed. The Coriolis force then begins to pull us to the left.

One simple solution to the equations of motion in the rotating frame, which can be verified directly from the equations, is given by $$ \begin{aligned} x(t) & = a\cos(\omega_0 t) \\ y(t) & = \frac a2\sin(\omega_0 t). \end{aligned} $$ Of course, we know by Kepler's First Law that the interceptor's trajectory is elliptical in the fixed, Earth-centered frame. What's remarkable about this solution is that it says that the interceptor's path in the rotating frame is also elliptical. The following figures illustrate this solution in both frames along with the positions at equal time intervals. It's worth taking a moment to study the relationship between them.

[Figures: the solution in the rotating frame and in the fixed frame]

For this special solution, one can check that $$ \frac{d^2}{dt^2}{\mathbf r} = -\omega_0^2{\mathbf r}. $$ This is the equation for a simple harmonic oscillator, the kind of motion observed by a mass on a spring. In other words, the interceptor moves as if it were connected to the target by a spring, which gives some insight into why orbital rendezvous is so counter-intuitive.

While this solution is certainly special, it shares many features with the general solution. Denoting the initial position by $(x_0, y_0)$ and the initial velocity by $(\dot{x}_0, \dot{y}_0)$, one can check that the general solution is $$ \begin{aligned} x(t) & = \left(6y_0 + \frac{4\dot{x}_0}{\omega_0}\right) \sin(\omega_0 t) + \frac{2\dot{y}_0}{\omega_0}\cos(\omega_0 t) + \left[x_0 - \frac{2\dot{y}_0}{\omega_0} - (3\dot{x}_0 + 6\omega_0y_0)t\right] \\ y(t) & = -\left(3y_0 + \frac{2\dot{x}_0}{\omega_0}\right)\cos(\omega_0 t) + \frac{\dot{y}_0}{\omega_0}\sin(\omega_0 t) + \left[4y_0 + \frac{2\dot{x}_0}{\omega_0}\right] \end{aligned} $$ These expressions, known as the Clohessy-Wiltshire equations, initially look complicated, but a little study reveals a surprising simplicity. Defining $$ \begin{aligned} C & = 3y_0 + \frac{2\dot{x}_0}{\omega_0} \\ D & = \frac{\dot{y}_0}{\omega_0} \\ x_c & = x_0 - \frac{2\dot{y}_0}{\omega_0} - (3\dot{x}_0 + 6\omega_0y_0)t \\ y_c & = 4y_0 + \frac{2\dot{x}_0}{\omega_0}, \end{aligned} $$ enables us to write $$ \begin{aligned} x(t) & = 2C\sin(\omega_0 t) + 2D\cos(\omega_0 t) + x_c \\ y(t) & = -C\cos(\omega_0 t) + D\sin(\omega_0 t) + y_c. \end{aligned} $$ This shows that the interceptor, once again, is traveling in an elliptical path centered about the point $(x_c, y_c)$. In fact, the semi-major axis $a$ and semi-minor axis $b$ of the ellipse are $$ \begin{aligned} a & = 2\sqrt{C^2+D^2} \\ b & = a/2, \end{aligned} $$ which shows that the eccentricity of the path is always $\sqrt{3}/2$. Notice, however, that the point $(x_c, y_c)$ is moving, since $x_c$ has velocity $$ v_{\mbox{drift}} = -(3\dot{x}_0 + 6\omega_0y_0) = -\frac32\omega_0 y_c. $$ This is important because it says that the center of the ellipse drifts left if $y_c \gt 0$ and drifts right if $y_c \lt 0$.
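The Clohessy-Wiltshire solution is simple enough to evaluate directly. Here is a hedged Python sketch (the value of omega0 and the initial conditions are illustrative assumptions, and cw_state is a helper named for this example) that computes $C$, $D$, the moving center $(x_c, y_c)$, and the drift velocity, and confirms the constant eccentricity of $\sqrt{3}/2$.

```python
import math

omega0 = 0.00113   # rad/s, assumed circular-orbit rate

def cw_state(t, x0, y0, vx0, vy0):
    """Clohessy-Wiltshire position at time t in the rotating frame."""
    C = 3*y0 + 2*vx0/omega0
    D = vy0/omega0
    xc = x0 - 2*vy0/omega0 - (3*vx0 + 6*omega0*y0)*t   # drifting center
    yc = 4*y0 + 2*vx0/omega0
    x = 2*C*math.sin(omega0*t) + 2*D*math.cos(omega0*t) + xc
    y = -C*math.cos(omega0*t) + D*math.sin(omega0*t) + yc
    return x, y

# Illustrative initial conditions: 500 m ahead of the target, small radial kick.
x0, y0, vx0, vy0 = 500.0, 0.0, 0.0, 0.1

C = 3*y0 + 2*vx0/omega0
D = vy0/omega0
a = 2*math.hypot(C, D)          # semi-major axis of the relative ellipse
b = a/2                         # semi-minor axis
ecc = math.sqrt(1 - (b/a)**2)   # always sqrt(3)/2, about 0.866
v_drift = -1.5*omega0*(4*y0 + 2*vx0/omega0)

print(f"eccentricity = {ecc:.4f}, drift = {v_drift:.4f} m/s")
print(cw_state(0.0, x0, y0, vx0, vy0))   # returns the initial position
```

Because $b = a/2$ always, the 2:1 shape of the relative ellipse is independent of the initial conditions; only its size, center, and drift change.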
Parking orbits

Here are some special orbits that Mueller calls parking orbits. Remember that the orbit is determined by its initial conditions $(x_0,y_0)$ and $(\dot{x}_0,\dot{y}_0)$. With the right choice of initial conditions, we can arrange that $C=D=0$ so that the ellipse collapses to a point.

Mueller's type I parking orbit has the interceptor in a circular orbit, as seen in the fixed frame, at the same altitude as the target. In the rotating frame, the interceptor is not moving relative to the target.

A type III parking orbit has the interceptor again in a circular orbit but at a different altitude than the target. Since the angular velocity decreases with altitude, the interceptor appears to move horizontally in the rotating frame. The Clohessy-Wiltshire equations explain this by noting that the center of the ellipse moves left when $y_c\gt0$ and right when $y_c\lt0$.

Things get a little more interesting if we allow $C$ and $D$ to be nonzero so that the interceptor moves on an elliptical trajectory in the rotating frame. A type II parking orbit has $y_c = 0$ so that the ellipse is centered on the $x$-axis. In this case, the center does not move, so the elliptical orbit is stationary. The special solution we looked at earlier is an example. The orbit depicted here shows how an astronaut could retrieve an accidentally dropped tool by just waiting one orbital period for its return.

A type IV orbit has the interceptor traveling an elliptical orbit whose center is drifting. The orbit shown has $y_c \gt 0$, so the center of the ellipse is moving to the left. With any solution, one can check that the interceptor moves as if connected by a spring to the moving center of the ellipse, which again speaks to the counter-intuitive nature of the motion.

These orbits explain the behavior of Gemini IV, which we described in the introduction. Imagine that the interceptor is initially in a type I parking orbit, traveling in the same circular orbit as the target and 500 meters away. McDivitt aimed at the target and fired Gemini's thruster, which would make $\dot{x}_0 \gt 0$ and move the craft into the type IV orbit shown here. This explains why the target paradoxically appeared to move down and away from the capsule.

One scenario Carroll considers has an astronaut stranded outside the ISS at an initial position of $(100,100)$, which is 100 meters in front of the space station and 100 meters above. Applying a momentary thrust to create a velocity of 1 meter per second aimed directly at the space station leads to a type IV orbit that badly misses the station.

So how can we put the interceptor on a path to dock with the target? We'll assume the interceptor is at an initial position $(x_0, y_0)$ and choose a time $T$ at which we'd like to dock. Setting $x(T) = 0$ and $y(T) = 0$ gives the equations $$ \begin{aligned} x(T) = 0 & = \left(6y_0 + \frac{4\dot{x}_0}{\omega_0}\right) \sin(\omega_0 T) + \frac{2\dot{y}_0}{\omega_0}\cos(\omega_0 T) + \left[x_0 - \frac{2\dot{y}_0}{\omega_0} - (3\dot{x}_0 + 6\omega_0y_0)T\right] \\ y(T) = 0 & = -\left(3y_0 + \frac{2\dot{x}_0}{\omega_0}\right)\cos(\omega_0 T) + \frac{\dot{y}_0}{\omega_0}\sin(\omega_0 T) + \left[4y_0 + \frac{2\dot{x}_0}{\omega_0}\right] \end{aligned} $$ Once again, these equations appear complicated, but notice that we know everything except the components of the initial velocity. This means that these are two linear equations in the two components of the unknown initial velocity $(\dot{x}_0, \dot{y}_0)$.
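Since the system is linear in $(\dot{x}_0, \dot{y}_0)$, it can be solved with two lines of linear algebra. Here is a hedged Python sketch (NumPy is assumed, and all numerical values, including the choice of docking time $T$, are illustrative) that rearranges the two equations into the matrix form $A\,[\dot{x}_0, \dot{y}_0]^T = b$ and solves for the velocity the thruster must produce.

```python
import numpy as np

omega0 = 0.00113          # rad/s, assumed orbital rate
x0, y0 = 100.0, 100.0     # Carroll's stranded-astronaut scenario, meters
T = 1200.0                # chosen docking time, seconds (assumed)

w, s, c = omega0, np.sin(omega0 * T), np.cos(omega0 * T)

# Collect the coefficients of vx0 and vy0 from x(T) = 0 and y(T) = 0.
A = np.array([[4*s/w - 3*T, 2*(c - 1)/w],
              [2*(1 - c)/w, s/w        ]])
# Move the known terms to the right-hand side.
b = np.array([-(6*y0*s + x0 - 6*w*y0*T),
              -(4*y0 - 3*y0*c)])

vx0, vy0 = np.linalg.solve(A, b)
print(f"required initial velocity: ({vx0:.4f}, {vy0:.4f}) m/s")
```

Plugging the result back into the Clohessy-Wiltshire expressions should reproduce $x(T) = y(T) = 0$; in practice a second burn is then needed at time $T$ to match the target's velocity.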
Once we have found those components, the interceptor applies a momentary thrust to change its velocity to $(\dot{x}_0, \dot{y}_0)$, bringing it to the target's position at time $T$. Carroll's paper shows how to apply this idea to reproduce the rendezvous technique used in the Apollo 11 mission as Neil Armstrong and Buzz Aldrin returned from the lunar surface to dock with Michael Collins in the command module.

NASA learned from Gemini IV's failure to rendezvous with the booster target, and several months later, the Gemini VI mission successfully navigated to within 130 feet of Gemini VII. The pilot of Gemini VI, Walter Schirra, later explained that the difficulty was getting to that distance, but that completing the docking from there is relatively straightforward. Carroll's paper examines that claim and confirms that line-of-sight flying is effective within about 100 feet of the target.

[Figure: The International Space Station in 2021.]

In the past, a spacecraft docking with the International Space Station would fly in formation with the station, which would then grab the craft with the Canadarm2 and manually complete the docking procedure. Recently, however, SpaceX has automated the docking process so that software, rather than the astronauts, guides the craft. Would you like to try to dock with the ISS? This SpaceX simulator gives you the chance.

References

Barton C. Hacker and James M. Grimwood, On the Shoulders of Titans, NASA Special Publication-4203, NASA History Series, 1977.

Donald D. Mueller, Relative Motion in the Docking Phase of Orbital Rendezvous, Technical Documentary Report No. AMRL-TDR-62-124, Air Force Systems Command, December 1962.

Bradley W. Carroll, The Delicate Dance of Orbital Rendezvous, American Journal of Physics 87, 627 (2019). Carroll works out a number of examples that show how the theory in Mueller's report plays out in practice.

G. W. Hill, Researches in the Lunar Theory, American Journal of Mathematics 1, No. 1, 5-26, 1878.

W. H. Clohessy and R. S. Wiltshire, Terminal Guidance System for Satellite Rendezvous, Journal of the Aerospace Sciences 27, No. 9, 653-658, 1960.

Edwin E. Aldrin, Line-of-Sight Guidance Techniques for Manned Orbital Rendezvous, M.I.T. Ph.D. thesis, 1963. Buzz Aldrin's Ph.D. thesis, in which he writes, "[T]his is dedicated to the crew members of this country's present and future manned space programs. If only I could join them in their exciting endeavors!"
The Use of Destructive and Non-Destructive Testing in Concrete Strength Assessment for a School Building

Vincenzo Minutolo, Stefania Di Ronza, Caterina Eramo, Renato Zona

Subject: Engineering, Civil Engineering Keywords: non-destructive testing; concrete structure; Sonreb; ultrasonic testing; rebound index

The present paper aims to increase knowledge of methods for estimating the in-situ resistance of concrete by means of non-destructive tests, used to integrate the quantitative results from cylindrical specimens (cores). The results of experimental investigations carried out on concrete conglomerate samples from a school building are shown. The experimental campaign is presented as a case study conducted on a series of concrete beams and pillars of an existing building. The destructive tests on cores were conducted at the Civil Structures Laboratory of the Engineering Department of the University of Campania "Luigi Vanvitelli". The expression obtained by calibrating the values of the non-destructive tests against those provided by the cores allowed the average compressive strength of the concrete to be estimated. It is highlighted that this result was achieved with a very limited number of cores, provided that they are extracted at selected points and that there is a proportionality link with the resistances obtained from the non-destructive tests.

Lift-off Invariant Inductance of Steels in Multi-Frequency Eddy-Current Testing

Mingyang Lu, Xiaobai Meng, Ruochen Huang, Liming Chen, Anthony Peyton, Wuliang Yin

Subject: Engineering, Automotive Engineering Keywords: Eddy current testing; lift-off invariance; property measurement; multi-frequency; non-destructive testing.

Eddy current testing can be used to interrogate steels, but it is hampered by the lift-off distance of the sensor. Previously, the lift-off point of intersection (LOI) feature was found for pulsed eddy current (PEC) testing. In this paper, a lift-off invariant inductance (LII) feature is proposed for multi-frequency eddy current (MEC) testing, targeting ferromagnetic steels specifically. That is, at a certain working frequency, the measured inductance signal is found to be nearly immune to the lift-off distance of the sensor. This working frequency and inductance are termed the lift-off invariant frequency (LIF) and the LII. Through simulations and experimental measurements of different steels in the multi-frequency manner, the LII has been verified to depend only on the sensor parameters and to be independent of the steel. By referring to the LIF of the test piece and using an iterative inverse solver, one of the steel properties (either the electrical conductivity or the magnetic permeability) can be reconstructed with high accuracy.

Inversion of Lift-Off Distance and Thickness for Non-Magnetic Metal Using Eddy Current Testing

Xiaobai Meng, Mingyang Lu, Wuliang Yin, Abdeldjalil Bennecer, Katherine Kirk

Subject: Engineering, Automotive Engineering Keywords: Eddy current sensor; lift-off measurement; thickness measurement; non-destructive testing; sample-independence.

For electromagnetic eddy current testing, various methods have been proposed for reducing the lift-off error in the measurement of samples. In this paper, instead of eliminating the measurement error caused by the lift-off effect, an algorithm is proposed to directly measure the lift-off distance between the sensor and non-magnetic conductive plates.
The algorithm is based on a sample-independent inductance (SII) feature. That is, at high working frequencies, the inductance is found to be sensitive to the lift-off distance and independent of the test piece at an optimal single high working frequency (43.87 kHz). Furthermore, the predicted lift-off distance is used for the thickness prediction of the non-magnetic conductive samples using an iterative method. Considering the eddy current skin depth, the thickness prediction is operated at a single lower frequency (0.20 kHz). As the inductance has different sensitivities to the lift-off and the thickness, the prediction error of the sample thickness differs from that of the lift-off distance. From the experiments on three different non-magnetic samples (aluminium, copper, and brass), the maximum prediction errors of the lift-off distance and sample thickness are 1.1 mm and 5.42 % respectively at a lift-off of 12.0 mm.

Methods of Controlling Lift-off in Conductivity Invariance Phenomenon for Eddy Current Testing

Zhongwen Jin, Yuwei Meng, Rongdong Yu, Ruochen Huang, Mingyang Lu, Hanyang Xu, Xiaobai Meng, Qian Zhao, Zhijie Zhang, Anthony Peyton, Wuliang Yin

Keywords: Conductivity Invariance Phenomenon; Conductivity invariance lift-off; Sensor design; Eddy current testing; Electrical conductivity; Non-destructive testing

Previously, a conductivity invariance phenomenon (CIP) was discovered: at a certain lift-off, the inductance change of the sensor due to a test sample is immune to conductivity variations, i.e. the inductance-lift-off curve passes through a common point at a certain lift-off, termed the conductivity invariance lift-off. However, this conductivity invariance lift-off is fixed for a particular sensor setup, which is not convenient for various sample conditions. In this paper, we propose using two parameters in the coil design, the horizontal and vertical distances between the transmitter and the receiver, to control the conductivity invariance lift-off. The relationship between these two parameters and the conductivity invariance lift-off is investigated by simulation and experiments, and it has been found to be approximately linear. This is useful for applications where the measurements have restrictions on lift-off, e.g. an uneven coating thickness that limits the range of the probe's lift-off during the measurements. Based on this relationship, it is therefore easier to adjust the configuration of the probe for a better inspection of the test samples.

Measuring Co-Axial Hole Size of Finite-Size Metallic Disk Based on a Dual-Constraint Integration Feature Using Multi-Frequency Eddy Current Testing

Ruochen Huang, Mingyang Lu, Xiaohong He, Anthony Peyton, Wuliang Yin

Subject: Engineering, Automotive Engineering Keywords: Hole size measurement; finite-size metallic disk; eddy current testing; non-destructive testing

This paper presents a new eddy current approach for determining the size of the co-axial hole in a metallic circular disk. In recent decades, for the air-cored sensor probe, the impedance change due to the presence of an infinite metal plate has been calculated by the Dodd-Deeds model. However, in practical measurements, the sample cannot match the required 'infinite' condition, so the Dodd-Deeds model cannot be applied to a disk of finite size, and certainly not to one with a co-axial hole in the center.
In this paper, a dual-constraint analytical method is proposed. That is, the upper and lower limits of the integration are substituted with specific values instead of the original $0$ and $\infty$. Besides, it is found that, once the outer radius of the disk is fixed (i.e. the lower limit of integration is fixed), the upper limit decreases linearly as the size of the coaxial hole increases. Both FEM simulations and experiments have been carried out to validate this method. The radius of the hole can be estimated based on the dual-constraint integration feature.

Non-Destructive Test for Diagnosing Wear of Marine Gas Turbine Blades

Yazmin Villagrán, M Patiño Ortiz, Luis Héctor Hernández Gómez, Juan Carlos Anzelmetti, Jorge Arturo Del Ángel Ramos, Jesús García Mejía

Subject: Engineering, Automotive Engineering Keywords: Gas turbine blades; Non-destructive testing; Borescope; Optical 3D scanner

In this paper, a non-destructive test to diagnose wear of the compressor blades of a gas turbine is reported. The gas turbine was operating in Campeche City, Mexico, in a very aggressive environment where the entry of solid particles is unavoidable. The objective was to reduce the cost of maintaining this equipment. An analysis of a gas turbine blade, which was in operation on an offshore platform, was performed. The compressor blade was exposed to severe damage by the impact of particles and environmental pollutants such as salts, sands and sulphurs. In the first stage of this analysis, a visual inspection with a borescope was performed; the borescope can illuminate dark internal areas with a bright light for visual examination and/or make a photographic reproduction in semi-annual maintenance cycles. Image analysis was used to determine the typical failure modes. In a second stage, a tribological characterization was carried out. The chemical composition of the blade material was obtained. Scanning electron microscopy (SEM) was used to measure roughness and evaluate the degradation of the blade surfaces after 30,000 service hours. The points where peak stresses were calculated correspond to the places where corrosion and irregular scratches, similar to a plowing action, were observed; these are the points at which failures take place. The results showed that the wear modes originated from a severe stinging action. Also, large craters, similar to those observed in solid particle erosion, were developed by impact at normal incidence.

NDT Techniques Applied for the Inspection of Flare Stacks

Carlos Martín-Díaz, María Dolores Rubio-Cintas, Kissi Benaissa

Subject: Engineering, Mechanical Engineering Keywords: flare stack; non-destructive testing; inspection; guy wires; drones

Flare stacks are a key element in the safety of the petrochemical and refinery industries; their correct operating conditions must be ensured through a schedule for the periodic review of all their parts. Advances in Non-Destructive Testing (NDT) in recent years and their application to this field allow reliable, objective information to be obtained. This article describes and groups together the main applicable NDT techniques, analysing their advantages and disadvantages, updated with the new possibilities offered by unmanned aircraft, or drones.
A Combined Electromagnetic Induction and Radar-based Test for Quality Control of Steel Fibre Reinforced Concrete

Janusz Kobaka, Jacek Katzer, Tomasz Ponikiewski

Subject: Engineering, Civil Engineering Keywords: SFRC; non-destructive testing; quality control; electromagnetic induction; radar; fibre

The authors of the paper have made an attempt to detect the fibre content and fibre spacing in a steel fibre reinforced concrete (SFRC) industrial floor. Two non-destructive testing (NDT) methods were applied: an electromagnetic induction technique and a radar-based technique. The first method allowed the detection of the spacing in subsequent layers through the thickness of the slab. The result of the second method was a 3D visualization of the detected fibre in the volume of the concrete slab. The conducted tests showed the aptitude and the limitations of the applied methods in estimating fibre volume and spacing. The two techniques also allowed the location of areas with relatively low fibre concentration, which are very likely to be characterized by low mechanical properties.

A High-Frequency Phase Feature for the Measurement of Magnetic Permeability Using Eddy Current Sensor

Subject: Engineering, Automotive Engineering Keywords: electromagnetic sensing; lift-off; eddy current; magnetic permeability; non-destructive testing

Electromagnetic sensing has been used for diverse applications of non-destructive testing, including surface inspection, measurement of properties, and object characterization. However, the measurement accuracy can be significantly influenced by the lift-off between the sensor and the sample. To address the issues caused by lift-offs, various strategies have been proposed for the permeability measurement of ferromagnetic steels, mainly involving different sensor designs and signal features (e.g., the zero-crossing feature). In this paper, a single high-frequency scenario for permeability retrieval is introduced. By combining the signals of two sensing pairs, the retrieval of magnetic permeability is less affected by the lift-off of the sensors. Unlike the previous strategy for reducing the lift-off effect (directly taking the phase term out of the integration) using the Dodd-Deeds analytical method, the proposed method is based on a high-frequency linear feature of the phase term. Therefore, this method has the merit of high accuracy and fast processing for the permeability retrieval (a simplified version of the Dodd-Deeds analytical formulas after the integration). Experimental measurements have been carried out on the impedance of the designed sensors interrogating ferromagnetic dual-phase steels. For sensor lift-offs of up to 10 mm, the error of the permeability retrieval is kept within 4 % at the optimal frequency.

Diagnosis of Composite Materials in Aircraft Applications - Brief Survey of Recent Literature

Muflih Alhammad, Luca Zanotti Fragonara, Nicolas P Avdelidis

Subject: Engineering, Automotive Engineering Keywords: Failure Diagnosis; Aircraft Applications; Composite Materials; Non-Destructive Testing (NDT) Techniques

Diagnosis and prognosis of failures for aircraft integrity are some of the most important regular functionalities in complex and safety-critical aircraft structures. Further, the development of failure diagnostic tools such as Non-Destructive Testing (NDT) techniques, in particular for aircraft composite materials, has been a subject of intensive research over the last decades.
The need for diagnostic and prognostic tools for composite materials in aircraft applications grows and draws increasing attention. Yet, there is still an ongoing need to develop new failure diagnostic tools in response to rapid industrial development and complex machine design. Such tools will ease the early detection and isolation of developing defects and the prediction of damage propagation, thus allowing for the early implementation of preventive maintenance and serving as a countermeasure to the potential for catastrophic failure. Following a short introductory summary and definitions, this paper provides a brief literature review of recent research on failure diagnosis of composite materials, with an emphasis on the use of NDT techniques in the aerospace industry. In addition, within some significant NDT application areas, the prognosis of composites is also briefly discussed.

Inversion of Distance and Magnetic Permeability Based on Material-Independent and Lift-off Insensitive Algorithms Using Eddy Current Sensor

Subject: Engineering, Automotive Engineering Keywords: Eddy current; lift-off; material-independent; permeability measurement; non-destructive testing

Eddy current sensors can be used to test the characteristics and measure the parameters of conductive samples. As the main obstacle for the multi-frequency eddy current sensor, the lift-off distance affects the effectiveness and accuracy of the measurement. In this paper, a material-independent algorithm is proposed for the restoration of the lift-off distance when using a multi-frequency eddy current sensor, based on an approximation under the thin-skin effect. Experimental testing of the performance of the proposed method is presented. Results show that, from the dual-frequency inductance, the lift-off distance can be restored with a maximum error of 0.24 mm for distances up to 12 mm. Besides, the derived lift-off distance is used for the inversion of the magnetic permeability. Based on a lift-off insensitive inductance (LII) feature, the magnetic permeability of steels can be inverted in an iterative manner, with an error of less than 0.6 % for lift-off distances up to 12 mm.

Embedded Sensors for Structural Health Monitoring: Methodologies and Applications Review

Pedro M. Ferreira, Miguel A. Machado, Marta S. Carvalho, Catarina Vidal

Subject: Engineering, Mechanical Engineering Keywords: Embedded Sensors; Sensing Technology; Smart Materials; Structural Health Monitoring; Non-Destructive Evaluation

Sensing Technology (ST) plays a key role in Structural Health Monitoring (SHM) systems. ST focuses on developing sensors, sensory systems or smart materials that monitor a wide variety of material properties, aiming to create smart structures and smart materials using Embedded Sensors (ESs) and allowing continuous and permanent measurement of structural integrity. The integration of ESs is limited by the processing technology used to embed the sensor, due to its high-temperature sensitivity and the possibility of damage during its insertion into the structure. In addition, the selection of the technological process depends on the composition of the base material, whether the parts are metallic or composite. The selection of smart sensors, and of the technology underlying them, is fundamental to the monitoring mode.
This paper presents a critical review of the fundamentals and applications of sensing technologies for SHM employing ESs, focusing on recent developments and innovations in these technologies, as well as analysing the challenges they present, in order to build a path that allows a connected world through distributed measurement systems.

Classification and Identification of Organic Matter in Black Soil Based on Simulated Annealing Optimization of an LSVM-Stacking Model

Junlong Fang, Kezhu Tan, Zifang Zhang, Zhihua Liu, Hongzhao Xu, Qinghe Zhao

Subject: Engineering, General Engineering Keywords: Hyperspectral Technology; Non-destructive Testing; Black Soil; Ensemble learning; Support Vector Machine

Soil in different regions differs in the nutrient fertility it contains, and the detection and zoned management of soil nutrients before tillage every year can improve grain yield. In this paper, an ensemble-learning model based on black soil hyperspectral data is designed for the rapid classification of the organic matter content of black soil. A soil hyperspectral image dataset from the Xiangyang Experimental Base was collected. By changing the internal structure of the stacking model, an LSVM-stacking model with five classifiers (MLP, SVC, DTree, XGBl, kNN) as the L1 layer was built, and the simulated annealing algorithm was used for hyperparameter optimization. Compared to other stacking models, the metrics of LSVM-stacking are significantly improved: the accuracy after hyperparameter optimization is improved by 38.6515%, the accuracy on the independent test data set is 0.9488, and, compared with the individual learners, the recognition accuracy for label "1" improves to 1.0.

Computerized Data Interpretation for Concrete Assessment with Air-Coupled Impact-Echo: An Online Learning Approach

Jiaxing Ye, Takumi Kobayashi, Masaya Iwata, Hiroshi Tsuda, Masahiro Murakawa

Subject: Engineering, Civil Engineering Keywords: non-destructive evaluation; hammering inspection; audio signal processing; machine learning; online learning

Developing efficient Artificial Intelligence (AI)-enabled systems to substitute for the human role in non-destructive testing is an emerging topic of considerable interest. In this study, we propose a novel impact-echo analysis system using online machine learning, which aims at achieving near-human performance in the assessment of concrete structures. Current computerized impact-echo systems commonly employ lab-scale data to validate the models. In practice, however, the echo patterns can be far more complicated due to the varying geometric shapes and materials of structures. To deal with a large variety of unseen data, we propose a sequential treatment for echo characterization. More specifically, the proposed system can adaptively update itself to approach human performance in impact-echo data interpretation. To this end, a two-stage framework is introduced, comprising echo feature extraction and a model-updating scheme. Various state-of-the-art online learning algorithms have been reviewed and evaluated for the task. For experimental validation, we collected 10,940 echo instances from multiple inspection sites; each sample was annotated by human experts with a healthy/defective condition label. The results demonstrate that the proposed scheme achieves favorable echo pattern classification accuracy with high efficiency and low computational load.
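The abstract above does not name the exact classifier, but the online-update scheme it describes can be illustrated with a minimal Python sketch using scikit-learn's SGDClassifier, whose partial_fit method updates the model incrementally as newly annotated echoes arrive. The feature extractor (a plain FFT magnitude here) and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def echo_features(signal):
    """Toy feature extractor: magnitude spectrum of the recorded echo."""
    return np.abs(np.fft.rfft(signal))[:64]

# An online (incremental) linear classifier: healthy = 0, defective = 1.
clf = SGDClassifier(random_state=0)
classes = np.array([0, 1])

for batch in range(100):                 # batches of echoes arriving over time
    # Synthetic stand-in data: defective echoes get an extra low-frequency hum.
    labels = rng.integers(0, 2, size=8)
    signals = rng.normal(size=(8, 256))
    t = np.arange(256)
    signals += labels[:, None] * 2.0 * np.sin(2 * np.pi * 5 * t / 256)

    X = np.array([echo_features(s) for s in signals])
    clf.partial_fit(X, labels, classes=classes)   # incremental model update

print("accuracy on the last batch:", clf.score(X, labels))
```

The point of the sketch is the loop structure: the model never sees the whole dataset at once, which is what lets such a system keep adapting to echoes from new inspection sites.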
Reduction of Coil-Crack Angle Sensitivity Effect Using a Novel Flux Feature of ACFM Technique

Ruochen Huang, Mingyang Lu, Ziqi Chen, Wuliang Yin

Subject: Engineering, Electrical & Electronic Engineering Keywords: Non-destructive testing; magnetic induction; crack detection; finite element method acceleration; conductive plate

Alternating current field measurement (ACFM) testing is one of the promising techniques in the field of non-destructive testing, with the advantages of non-contact capability and reduced lift-off effects. In this paper, a novel crack detection approach is proposed to reduce the effect of the angled crack (crack orientation) by using a rotated ACFM technique. The sensor probe is composed of an excitation coil and two receiving coils. The two receiving coils are orthogonally placed in the centre of the excitation coil, where the magnetic field is measured. It is found that the change of the x component and the peak value of the z component of the magnetic field, as the sensor probe rotates around a crack, follow a sine-wave shape. A customised accelerated finite element method solver programmed in MATLAB is adopted to simulate the performance of the designed sensor probe; it significantly improves the computational efficiency because the crack perturbation is small. Experiments have also been carried out to validate the simulations. It is found that the ratio between the z and x components of the magnetic field remains stable under various rotation angles. This shows the potential to estimate the depth of the crack from the detected ratio by combining the magnetic fields from both receiving coils (i.e., the x and z components of the magnetic field) using the rotated ACFM technique.

Thickness Measurement of Circular Metallic Film Using Single-Frequency Eddy Current Sensor

Subject: Engineering, Automotive Engineering Keywords: Eddy current testing; thickness measurement; finite-size; lift-off effect; non-destructive testing

In many advanced industrial applications, thickness is a critical index, especially for metallic coatings. However, variance in the lift-off spacing between the sensor and the test piece affects the measured voltage or impedance, which leads to unreliable results from the sensor. Many research works have addressed the lift-off issue, but few of them apply to the thickness measurement of planar metallic films with finite-size circular (disk) geometry. Previously, a peak-frequency feature from the swept-frequency inductance was used to compensate for the measurement error caused by lift-offs, based on the slowly changing impedance phase term in the Dodd-Deeds formulas. However, the phase of the measured impedance is nearly invariant over only a limited range of sample thicknesses and working frequencies. Besides, frequency sweeping is time-consuming, and a recalibration is needed for different sensor setups applied to online real-time measurement. In this paper, a single-frequency algorithm is proposed, which is embedded in the measurement instrument for the online real-time retrieval of thickness. Owing to the single-frequency measurement strategy, the proposed method does not need recalibration for different sensor setups. The thickness retrieval is based on a triple-coil sensor (with one transmitter and two receivers). The thickness of metallic disk foils is retrieved from the measured electrical resistance of the two transmitter-receiver sensing pairs.
Experiments on materials of different electrical conductivities (from direct current), thicknesses and planar sizes (radii) have been carried out to verify the proposed method. The error of the thickness retrieval of conductive disk foils is kept within 5 % for lift-offs of up to 5 mm.

Determination of Surface Crack Orientation Based on Thin-Skin Regime Using Triple-Coil Drive-Pickup Eddy-Current Sensor

Mingyang Lu, Xiaobai Meng, Ruochen Huang, Liming Chen, Zezhi Tang, Junshi Li, Anthony Peyton, Wuliang Yin

Subject: Engineering, Automotive Engineering Keywords: Eddy current sensor; defect orientation; angled crack; thin-skin regime; non-destructive testing.

Electromagnetic sensors have been used for inspecting small surface defects of metals. Based on the eddy-current thin-skin regime, a revised algorithm is proposed for a triple-coil drive-pickup eddy-current sensor scanning over long (10 mm) surface crack slots at different rotary angles. The method is validated by the voltage measurements of the designed EC sensor scanning over a benchmark (ferromagnetic) steel with surface defects of different depths and rotary angles. With an additional sensing coil for the designed EC sensor, the defect angle (or orientation) can be measured without spatially and coaxially rotating the excitation coil. By referring to the voltage-change (due to the defect) diagram (voltage sum versus voltage difference) of the two sensing pairs, the rotary angle of the surface crack is retrieved with a maximum residual deviation of 3.5 %.

Thickness Measurement of Metallic Film Based on a High-Frequency Feature of Triple-Coil Electromagnetic Eddy Current Sensor

Mingyang Lu, Liming Chen, Xiaobai Meng, Ruochen Huang, Anthony Peyton, Wuliang Yin

Subject: Engineering, Automotive Engineering Keywords: Eddy current testing; thickness measurement; non-destructive testing; lift-off; real-time monitoring

Previously, various techniques have been proposed for reducing the lift-off effect on the thickness measurement of non-magnetic films, including the peak-frequency feature and the phase feature in the Dodd-Deeds analytical formulation. To realise a real-time feedback response in thickness monitoring, the phase term in the Dodd-Deeds formulation must be taken out of the integration. Previous methods were based on the slow change rate of the phase term when compared to the rest of the expression, the magnitude term. However, the change rate of the phase term is still considerable over a range of working frequencies. In this paper, a high-frequency feature has been found: the ratio between the imaginary and real parts of the phase term is proportional to the integral variable at high frequencies. Based on this proportionality, the phase term has been taken out of the integration, and a thickness algorithm has been proposed. By combining the measured impedances from the custom-built sensor (three coils), the thickness of the metallic film can be reconstructed. Experiments have been carried out to verify the proposed scenario. The results show that the thickness of the metal film can be reconstructed with a small error of less than 2 %, immune to a reasonable range of lift-offs.
A Novel Approach to the Holistic 3D Characterization of Weld Seams - Paving the Way for Deep Learning-Based Process Monitoring

Maximilian Schmoeller, Christian Stadter, Michael Karl Kick, Christian Geiger, Michael Friedrich Zaeh

Subject: Engineering, Industrial & Manufacturing Engineering Keywords: non-destructive testing; weld seam contour; microfocus computed tomography; laser beam welding; Deep Learning

In an industrial environment, the quality assurance of weld seams requires extensive effort. The most commonly used methods for this are expensive and time-consuming destructive tests, since quality assurance procedures are difficult to integrate into production processes. Beyond that, the available test methods allow the assessment of only a very limited set of characteristics: they are suitable either for determining selected geometric features or for locating and evaluating internal seam defects. The presented work describes an evaluation methodology based on microfocus X-ray computed tomography scans (µCT scans) which enables the 3D characterization of weld seams, including internal defects such as cracks and pores. A 3D representation of the weld contour, i.e., the complete geometry of the joint area in the component with all quality-relevant geometric criteria, is an unprecedented novelty. Both the dimensions of the weld seam and internal defects can be revealed, quantified with a resolution down to a few micrometers, and precisely assigned to the welded component. On the basis of the methodology developed within the framework of this study, the results of the scans performed on the alloy AA 2219 can be transferred to other aluminum alloys. In this way, the data evaluation framework can be used to obtain extensive reference data for the calibration and validation of inline process monitoring systems employing Deep Learning-based data processing.

X-ray Micro-CT Supporting the South African Additive Manufacturing Community

Anton du Plessis, S.G. le Roux

Subject: Engineering, Industrial & Manufacturing Engineering Keywords: non-destructive testing; process optimization; porosity; pore hotspots; image-based simulations; 3D image analysis

This paper presents the latest developments in microCT, both globally and locally, for supporting the additive manufacturing industry. There are a number of recently developed capabilities which are especially relevant to the non-destructive quality inspection of additively manufactured parts, and also to advanced process optimization. These new capabilities are all locally available but not yet utilized to their full potential, most likely due to a lack of knowledge of them. The aim of this paper is therefore to fill this gap and provide an overview of these latest capabilities, showcasing numerous local examples.

Plasma Based Water Purifier: Design and Testing of a Prototype with Different Samples of Water

Suraj M, Anuradha T

Subject: Engineering, Other Keywords: Plasma generation; non-thermal plasma; pulsating DC power; ozone; cost-improvement

The objective of the prototype is to eliminate the polluting contamination of water sources due to the leakage of untreated industrial waste, generated mainly by industry and the domestic sector. In this project, a prototype for water purification by plasma technology has been designed. The prototype converts contaminated water into a plasma stream and eliminates the pathogens from the water by exposing it to ultraviolet radiation and plasma sterilisation.
The polluted water is accelerated to high speed using a water pump in order to convert it into a liquid-gas mixture for easier plasma generation. This is achieved when an electric supply from an alternating current (AC) source is applied to the water by means of high-voltage electrodes. Afterwards, the mixture slows down, returns to liquid form, and the clean water is obtained. The whole process takes place without significantly raising the temperature, which is known as non-thermal plasma. The device also has an automatic flow and pressure control system. Finally, a short feasibility study has been conducted on the water samples collected, and the report obtained from the Chennai Metropolitan Water Supply and Sewerage Board is presented. It is concluded that this new plasma-based water treatment system will be more efficient and cheaper than current wastewater treatment techniques and can in the future replace the current secondary and tertiary treatments of industrial wastewater.

Electroanalytical Biosensors for Circulating Tumor DNA Detection - A Brief Review

Zhijia Peng, Xiaogang Lin, Weiqi Nian, Xiaodong Zheng, Jayne Wu

Subject: Engineering, Biomedical & Chemical Engineering Keywords: non-destructive; biosensors; real-time detection; circulating tumor DNA (ctDNA); high sensitivity; Internet of Things

Early diagnosis and treatment have always been highly desired in the fight against cancer, and the detection of circulating tumor DNA (ctDNA) has recently been touted as highly promising for early cancer screening. Consequently, the detection of ctDNA in liquid biopsy has gained much attention in the field of tumor diagnosis and treatment, and it has also attracted research interest from industry. However, it is difficult for traditional gene detection technology to achieve low-cost, real-time and portable measurement of ctDNA. Electroanalytical biosensors have many unique advantages, such as high sensitivity, high specificity, low cost and good portability. Therefore, this review aims to discuss the latest developments in biosensors for minimally invasive, rapid, and real-time ctDNA detection. Various ctDNA sensors are reviewed with respect to their choices of receptor probes, detection strategies and figures of merit. Aiming at the portable, real-time and non-destructive characteristics of biosensors, we analyze their development in the Internet of Things, point-of-care testing, big data and big health.

Analysis of Tilt Effect on Notch Depth Profiling Using Thin-Skin Regime of Driver-Pickup Eddy-Current Sensor

Mingyang Lu, Xiaobai Meng, Ruochen Huang, Anthony Peyton, Wuliang Yin

Subject: Engineering, Electrical & Electronic Engineering Keywords: Eddy current driver-pickup sensor; surface crack; depth measurement; thin-skin regime; non-destructive testing.

Electromagnetic eddy current sensors are commonly used to identify and quantify the surface notches of metals. However, an unintentional tilt of the eddy current sensor affects the results of size profiling, particularly depth profiling. In this paper, based on the eddy current thin-skin regime, a revised algorithm is proposed for the analytical voltage or impedance of a tilted driver-pickup eddy current sensor scanning across a long ideal notch. Considering the resolution of the measurement, the bespoke driver-pickup, also termed a transmitter-receiver (T-R) sensor, is designed with a small mean radius of 1 mm.
Besides, the T-R sensor is connected to the electromagnetic instrument and controlled by a scanning stage with high spatial travel resolution (limit 0.2 μm; 0.25 mm was selected). Experiments have been carried out on the voltage imaging of an aluminium sheet with 7 machined long notches of different depths, using the T-R sensor at different tilt angles. By fitting the measured voltage (both real and imaginary parts) with the proposed analytical algorithms, the depth profiling of notches is made less sensitive to the tilt angle of the sensor. From the results, the depth of the notches can be retrieved within a deviation of 10 % for tilt angles of up to 60 degrees.

Evaluation of Coating Thickness Using Lift-Off Insensitivity of Eddy Current Sensor

Subject: Engineering, Automotive Engineering Keywords: Multi-frequency eddy current; lift-off inversion; coating thickness; non-destructive testing; multi-layer conductor.

Defect detection in ferromagnetic substrates is often hampered by variation in non-magnetic coating thickness when using the conventional eddy current testing technique. The lift-off distance between the sample and the sensor is one of the main obstacles to the thickness measurement of non-magnetic coatings on ferromagnetic substrates when using the eddy current testing technique. Based on the eddy current thin-skin effect and the lift-off insensitive inductance (LII), a simplified iterative algorithm is proposed for reducing the effect of lift-off variation using a multi-frequency sensor. Compared to previous techniques for compensating the lift-off error while retrieving the thickness (e.g., the lift-off point of intersection), the simplified inductance algorithms avoid the computational burden of integration and are used as embedded algorithms for the online retrieval of lift-offs via each frequency channel. The LII is determined by the dimension and geometry of the sensor, thus eliminating the need for empirical calibration. The method is validated by experimental measurements of the inductance of coatings of different materials and thicknesses on ferrous substrates (dual-phase alloy). The error of the calculated coating thickness is kept within 3 % for an extended lift-off range of up to 10 mm.

Measurement of the Radius of Metallic Plates Based on a Novel Finite Region Eigenfunction Expansion (FREE) Method

Ruochen Huang, Mingyang Lu, Zhijie Zhang, Qian Zhao, Yuedong Xie, Yang Tao, Tian Meng, Anthony Peyton, Theodoros Theodoulidis, Wuliang Yin

Subject: Engineering, Electrical & Electronic Engineering Keywords: Non-destructive testing; finite region eigenfunction expansion (FREE) method; finite dimension; magnetic induction; size measurements

Eddy current based approaches have been investigated for a wide range of inspection applications. The Dodd-Deeds model and the truncated region eigenfunction expansion (TREE) method are widely applied, mostly for cases where the sample is relatively large compared to the radius of the sensor coil. The TREE method converts the integral expressions into a summation of many terms in the truncated region. In a recent work, the impedance of the co-axial air-cored sensor due to a plate of finite radius was calculated by the modified Dodd-Deeds analytical approach proposed by the authors.
In this paper, combining the modified analytical solution and the TREE method, a new finite region eigenfunction expansion (FREE) method is proposed. This method involves modifying the initial summation point from the first zero of the Bessel function to a value related to the radius of the plate, which makes it suitable for plates with finite dimensions. Experiments and simulations have been carried out and compared to verify the proposed method. Further, planar size measurements of a metallic circular plate can be achieved by utilising the measured peak-frequency feature.

Automatic Ink Mismatch Detection in Hyperspectral Images Using K-means Clustering

Noman Raza Shah, Muhammad Talha, Fizza Imtiaz, Aneeqah Azmat

Subject: Engineering, Electrical & Electronic Engineering Keywords: Hyperspectral Document Images; Non-destructive Analysis; Forensics Document; Ink Mismatch Detection; K-means Clustering

Hyperspectral imaging (HSI) is a technique used to obtain the spectrum for each pixel in an image. It helps in finding objects and identifying materials, an identification that is very difficult with other imaging techniques, and it allows researchers to investigate documents without any physical contact. Ink-mismatch detection based on HSI has recently shown vast improvement in distinguishing inks. Detecting a mismatch between unequal amounts of ink is an unbalanced clustering problem. This paper uses K-means clustering for ink mismatch detection; K-means clustering finds similar subgroups in the data based on Euclidean distance. The paper demonstrates the method's performance on unequal ink-mismatch detection in hyperspectral document images.
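As a rough illustration of the clustering step described above (not the authors' code; the data layout, band count, and ink spectra are assumptions made for this sketch), pixel spectra from the ink regions of a hyperspectral document cube can be grouped with scikit-learn's KMeans, with each cluster ideally corresponding to one ink:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Assumed layout: an (n_pixels, n_bands) array of spectra taken from the
# ink pixels of a hyperspectral document image. Here we synthesize two
# inks with slightly different spectral slopes and a 90/10 split
# (the unbalanced case mentioned in the abstract).
bands = np.linspace(0.0, 1.0, 32)
ink_a = 0.40 + 0.05 * bands + rng.normal(0.0, 0.01, size=(900, 32))
ink_b = 0.45 - 0.05 * bands + rng.normal(0.0, 0.01, size=(100, 32))
spectra = np.vstack([ink_a, ink_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(spectra)
labels = kmeans.labels_

# A mismatch is flagged when the pixels of a single written entry fall
# into different clusters; here we just report the cluster proportions.
print(np.bincount(labels) / len(labels))   # roughly [0.9, 0.1]
```

The imbalance is exactly what makes the real problem hard: with far fewer pixels of the second ink, plain K-means can absorb them into the majority cluster unless the spectra are well separated.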
Mapping of Agricultural Subsurface Drainage Systems Using a Frequency-Domain Ground Penetrating Radar and Evaluating Its Performance Using a Single-Frequency Multi-Receiver Electromagnetic Induction Instrument

Triven Koganti, Ellen Van De Vijver, Barry J. Allred, Mogens H. Greve, Jørgen Ringgaard, Bo V. Iversen

Subject: Earth Sciences, Geophysics Keywords: frequency-domain; ground penetrating radar; electromagnetic induction; penetration depth; inversion; non-destructive techniques; agricultural drainage systems

Subsurface drainage systems remove excess water from the soil profile, thereby improving crop yields in poorly drained farmland. Knowledge of the position of the buried drain lines is important: 1) to improve understanding of leaching and the offsite release of nutrients and pesticides, and 2) for the installation of a new set of drain lines between the old ones for enhanced soil-water removal efficiency. Traditional methods of drainage mapping involve the use of tile probes and trenching equipment. While these can be effective, they are also time-consuming, labor-intensive, and invasive, entailing an inherent risk of damaging the drainpipes. Non-invasive geophysical soil sensors provide a potential alternative solution. Previous research has focused on the use of time-domain ground penetrating radar (GPR), with variable success depending on local soil and hydrological conditions and the central frequency of the specific equipment employed. The objectives of this study were 1) to test the use of a stepped-frequency continuous-wave (SFCW) 3D-GPR (GeoScope Mk IV 3D-Radar with DXG1820 antenna array) for subsurface drainage mapping, and 2) to evaluate the performance of the 3D-GPR in combination with a single-frequency multi-receiver electromagnetic induction (EMI) sensor (DUALEM). The 3D-GPR system offers more flexibility for application to different (sub)surface conditions due to its coverage of a wide frequency bandwidth. The EMI sensor simultaneously provides information about the apparent electrical conductivity (ECa) for different soil volumes, corresponding to different depths. This sensor combination was evaluated on twelve different study sites with various soil types, with textures ranging from sand to clay till. While the 3D-GPR showed a high success rate in finding the drainpipes at five sites (sandy, sandy loam, loamy sand, and organic topsoils), the results at the other seven sites were less successful due to the limited penetration depth (PD) of the 3D-GPR signal. The results suggest that the electrical conductivity estimates produced by the inversion of the ECa data measured by the DUALEM sensor could be a useful proxy for explaining the success of the 3D-GPR in finding the drain lines. The high attenuation of electromagnetic waves in highly conductive media, which limits the PD of the 3D-GPR, can explain the findings obtained in this research.

Measuring Lift-off Distance and Electromagnetic Property of Metal Using Dual-Frequency Linearity Feature

Subject: Engineering, Automotive Engineering Keywords: Eddy current testing; lift-off measurement; property measurement; non-destructive testing; dual-frequency eddy current (DEC) testing

Lift-offs of the sensor can significantly affect the measured signal and the reconstruction of material properties when using an electromagnetic (inductive) eddy current sensor. Previously, various methods (including novel sensor designs and features such as the zero-crossing frequency and the lift-off point of intersection) have been used to eliminate the measurement error caused by the sensor's lift-off distance. However, these approaches can only be applied over a small range of lift-off variations. In this paper, a linear relationship is found between the sensor lift-off and the ratio of dual-frequency eddy current signals, particularly at high working dual frequencies. Based on this linear relationship, the lift-off variation can first be reconstructed, with a small error of 2.5 % for actual values up to 10 mm (10.1 % for 20 mm). The reconstructed lift-off is then used to obtain the property of the material at a low single frequency. Experiments on different ferrous metals have been carried out to test the reconstruction scheme. Since the inductance is more sensitive to the material property (and less sensitive to the lift-off) at low frequencies, the reconstruction error of the material property is much smaller than that of the lift-off: 1.4 % under 12 mm (and 4.5 % under 20 mm).

Combined Use of Ultrasonic Pulse Velocity and Rebound Hammer for Structural Health Monitoring of Reinforced Concrete Structures

Ahmed Lasisi, Obanishola Sadiq, Ibrahim Balogun

Subject: Engineering, Civil Engineering Keywords: Non-Destructive Tests; Structural Health Monitoring; Ultrasonic Pulse Velocity; Rebound Hammer; Surface Hardness; Compressive Strength; Linear regression

This work investigates the use of non-destructive tests as a tool for monitoring the structural performance of concrete structures. The investigation encompassed four phases, the first of which involved the use of destructive and non-destructive mechanisms to assess concrete strength on cube specimens. The second phase focused on the site assessment of a twin engineering theatre located at the Faculty of Engineering, University of Lagos, using a rebound hammer and an ultrasonic pulse velocity tester.
The third phase used a linear regression model in MATLAB to establish relationships between the calibrated strengths and ultrasonic pulse velocities and the corresponding compressive strength values, on cubes and on existing structures. Results show that the R² values for the rebound hammer ranged between 0.275 and 0.742, while the ultrasonic pulse velocity R² values were in the range of 0.649 to 0.952, for air-curing and water-curing systems respectively. It initially appeared that the ultrasonic pulse velocity was more suitable for predicting concrete strength than the rebound hammer, but further investigation showed that the latter is adequate for early-age concrete while the former is better suited to aging concrete. Hence, a combined use is recommended in this work.

TPE-RBF-SVM Model for Soybean Categories Recognition in Selected Hyperspectral Bands Based on Extreme Gradient Boosting Feature Importance Values

Qinghe Zhao, Zifang Zhang, Yuchen Huang, Junlong Fang

Subject: Engineering, General Engineering Keywords: Hyperspectral Technology; Non-destructive Testing; Soybean; Machine Learning; Support Vector Machine; Extreme Gradient Boosting; Tree-structured Parzen Estimator

Soybeans with insignificant differences in appearance can have large differences in their internal physical and chemical components; therefore, follow-up storage, transportation and processing require targeted differential treatment. In this paper, a fast and effective machine learning method based on hyperspectral data of soybeans is designed as a non-destructive test for pattern recognition of categories. A hyperspectral-image dataset with 2299 soybean seeds in 4 categories was collected; ten features were selected by the extreme gradient boosting algorithm from 203 hyperspectral bands in the range 400 to 1000 nm; and a Gaussian radial basis kernel function support vector machine, optimized by the Tree-structured Parzen Estimator algorithm, was built as the TPE-RBF-SVM model for pattern recognition of soybean categories. The metrics of TPE-RBF-SVM are significantly improved compared with other machine learning algorithms. The accuracy is 0.9165 on the independent test dataset, which is 9.786% higher than the vanilla RBF-SVM model and 10.02% higher than the extreme gradient boosting model.

A Conceptual Design Approach for Archeological Structures, a Challenging Issue between Innovation and Conservation: A Studied Case in Ancient Pompeii

Vincenzo Calvanese, Alessandra Zambrano

Subject: Engineering, Automotive Engineering Keywords: cultural heritage; masonry rehabilitation; seismic device; steel structure; basalt fiber; grout injections; archeological site; rubber-bearing; non-destructive testing

The preservation of the authenticity of a building artefact is a responsible practice. On the other side, the need to protect the building artefact from natural and anthropic degradation, to ensure its structural reliability under different actions, and to define an efficient maintenance program pose big challenges that involve the cooperation of several professionals and the responsible use of the innovative techniques and materials that are available nowadays. This paper focuses on a specific design approach for rehabilitation works on ancient constructions in archaeological sites. The proposed conceptual design approach implies different steps that allow the optimization of the design at an increasing level of knowledge of the existing structures and their materials.
The design procedure for historical constructions generally includes the following steps: data collection, structural identification, hazard and vulnerability analysis, damage and risk analysis, and cost-benefit analysis, so that only at the end of the process is the final design achieved. In archaeological areas, some important design aspects cannot be defined before the execution phase, since some elements may only be revealed and identified during the works; as a consequence, the final design is often optimized after all this information has been acquired. A case study in the archaeological site of Pompeii is presented herein to prove the efficiency of the proposed approach.

A Cost-Benefit Analysis of the COVID-19 Asymptomatic Mass Testing Strategy in the North Metropolitan Area of Barcelona
Francesc López Seguí, Oriol Estrada Cuxart, Oriol Mitjà i Villar, Guillem Hernández Guillamet, Núria Prat Gil, Josep Maria Bonet, Mar Isnard Blanchar, Nemesio Moreno Millan, Ignacio Blanco Guillermo, Marc Vilar Capella, Martí Català Sabaté, Anna Aran Solé, Josep Maria Argimon Pallàs, Bonaventura Clotet, Jordi Ara del Rey
Subject: Life Sciences, Virology
Keywords: test-tracking-quarantine; cost benefit analysis; economic analysis; COVID-19; asymptomatic screening; mass testing; non-pharmacological interventions
The epidemiological situation generated by COVID-19 has highlighted the importance of applying non-pharmacological measures. Among these, mass screening of the asymptomatic general population has been established as a priority strategy, carrying out diagnostic tests to limit the spread of the virus. In this article, we aim to evaluate the economic impact of mass COVID-19 screenings of an asymptomatic population through a cost-benefit analysis based on the estimated total costs of mass screening versus the health gains and associated health costs avoided. Excluding the value of monetized health, the benefit-cost ratio was estimated at approximately 0.45. However, if monetized health is included in the calculation, the ratio is close to 1.20. The monetization of health is the critical element that tips the scales in favour of the desirability of screening. Screenings with the highest return are those that maximize the percentage of positives detected.

Metal Body Armour: Biomimetic Engineering of Lattice Structures †
Anton du Plessis, C. Broeckhoven
Subject: Engineering, Industrial & Manufacturing Engineering
Keywords: biomimicry; biomimetic engineering; energy absorption; lattice structure; additive manufacturing; powder bed fusion; X-ray tomography; microCT; non-destructive testing; 3D image analysis
Biomimicry in additive manufacturing often refers to topology optimization and the use of lattice structures, due to the organic shape of topology-optimized designs and to lattices often looking similar to light-weight structures found in nature, such as trabecular bone, wood, sponges and coral, to name a few. Real biomimetic design, however, involves the use of design principles taken in some way from natural systems. In this work we use a methodology whereby a high-resolution 3D analysis of a natural material with desirable properties is "reverse-engineered" and the design tested for the purpose. This allows more accurate replication of the desired properties, and adaptation of the design parameters to the material used for production (which usually differs from the biological material). One such example is the impact-protective natural design of the glyptodont body armour.
In this paper we report on the production of body armour models in metal (Ti6Al4V) and analyze the resulting mechanical properties, assessing their potential for impact-protective applications. To date, this is the first such biomimetic study using metal additive manufacturing.

Roots of Quantum Computational Supremacy: Superposition? Entanglement? Or Complementarity?
Andrei Khrennikov
Subject: Physical Sciences, General & Theoretical Physics
Keywords: quantum computing superiority; Google's claim; complementarity principle; quantum versus classical superposition; quantum versus classical entanglement; quantum versus classical probability; interference of probabilities; constructive and destructive interference of probabilities; non-Bayesian update
Google's recent claim of a breakthrough in quantum computing is a signal for further analysis of the foundational roots of the (possible) superiority of some quantum algorithms over the corresponding classical algorithms. This note is a step in this direction. We start with a critical analysis of the rather common reference to entanglement and quantum nonlocality as the basic sources of quantum superiority. We elevate the role of Bohr's principle of complementarity (PCOM) by interpreting the Bell experiments as statistical tests of this principle. (Our analysis also includes a comparison of classical vs genuine quantum entanglement.) After a brief presentation of PCOM and endowing it with the information interpretation, we analyze its computational counterpart. The main implication of PCOM is that, by using the quantum representation of probability, one need not compute the joint probability distribution (jpd) for the observables involved in the process of computation. The jpd's calculation is exponentially time-consuming. Consequently, classical probabilistic algorithms involving calculation of the jpd for n random variables can be outperformed by quantum algorithms (for large values of n). Quantum algorithms are based on the quantum probability calculus. It is crucial that the latter modifies the classical formula of total probability (FTP). Probability inference based on the quantum version of FTP leads to constructive interference of probabilities, increasing the probabilities of some events. We also stress the basic feature that distinguishes genuine quantum superposition from classical wave superposition: the generation of discrete events in measurements on superposition states. Finally, the problem of the superiority of quantum computations is coupled with the quantum measurement problem and the linearity of the dynamics of the quantum state update.

Linking of Financial Data with Non-Financial Information on CSR of Companies Listed on the Stock Exchange in Poland – Polish Case Study
Małgorzata Anna Węgrzyńska
Subject: Social Sciences, Accounting
Keywords: CSR; non-financial reporting; non-financial disclosures
Reporting on CSR activities has become the essence of reporting for modern business entities. In this regard, particular attention is paid to public interest companies. Therefore, the following paper aims to answer the question of whether there are differences in the linguistic structure of the studied CSR reports in three selected industry indices on the Warsaw Stock Exchange (WSE) in Poland, i.e. the WIG-energy, WIG-fuel and WIG-mining indices, and their relationship with the performance of the selected companies. The study was conducted on a purposely selected sample of companies between 2013 and 2018.
A total of 138 CSR reports and 138 annual separate financial statements prepared in accordance with international balance sheet law were collected. The study was carried out based on a panel regression model. It was found that CSR reports contained similar average percentages of parts of speech such as nouns and adjectives. When linking the economic performance of companies, expressed with selected indices, to the information on the implementation of CSR concepts, it was revealed that the results are more likely to describe business performance when it is satisfactory.

An Investigation of Experimental Reports on the Relativistic Relation for Doppler Shift
James McKelvie
Subject: Physical Sciences, Other
Keywords: Doppler; relativistic; non-relativistic
An exhaustive list of thirteen reports claiming experimental confirmation of the relativistic Doppler relation is examined. For those involving longitudinal Doppler, the non-relativistic relation is seen to be confirmed, within the reported experimental accuracies, to the same degree as the standard relativistic relation. Higher values of the speed of the emitter would be required to examine the claimed confirmations further. For those reports involving saturation spectroscopy, there is much confusion over the appropriate Doppler relation to be used, together with some serious analytical flaws. For the two cases that involve transverse Doppler, there are either serious faults in the theoretical part or intrusion from the first-order effect. Therefore, the reported conclusions, namely that the experimental results confirm the relativistic SR relation, cannot be justified by any of the experimental works.

Non-Commutative Key Exchange Protocol
Luis Adrián Lizama-Pérez, José Mauricio López Romero
Subject: Mathematics & Computer Science, Computational Mathematics
Keywords: Non-commutative; matrix; cryptography
We introduce a novel key exchange protocol based on non-commutative matrix multiplication defined in $\mathbb{F}_p^{n \times n}$. The security of our method does not rely on computational problems such as integer factorization or the discrete logarithm, whose difficulty is conjectured. We show that the public, secret and channel keys become indistinguishable to the eavesdropper under matrix multiplication. Remarkably, for achieving a 512-bit security level, the public key is 1024 bits and the private key is 768 bits, making them the smallest keys among post-quantum key exchange algorithms. We also discuss how to achieve key authentication, interdomain certification and Perfect Forward Secrecy (PFS). Therefore, Lizama's algorithm becomes a promising candidate to establish shared keys and secret communication between (IoT) devices in the quantum era.
Non-Archimedean Welch Bounds and Non-Archimedean Zauner Conjecture
K. Mahesh Krishna
Subject: Mathematics & Computer Science, Analysis
Keywords: Non-Archimedean valued field; non-Archimedean Hilbert space; Welch bound; Zauner conjecture
Let $\mathbb{K}$ be a non-Archimedean (complete) valued field satisfying \begin{align*} \left|\sum_{j=1}^{n}\lambda_j^2\right|=\max_{1\leq j \leq n}|\lambda_j|^2, \quad \forall \lambda_j \in \mathbb{K}, 1\leq j \leq n, \forall n \in \mathbb{N}. \end{align*} For $d\in \mathbb{N}$, let $\mathbb{K}^d$ be the standard $d$-dimensional non-Archimedean Hilbert space. Let $m \in \mathbb{N}$ and $\text{Sym}^m(\mathbb{K}^d)$ be the non-Archimedean Hilbert space of symmetric $m$-tensors. We prove the following result. If $\{\tau_j\}_{j=1}^n$ is a collection in $\mathbb{K}^d$ satisfying $\langle \tau_j, \tau_j\rangle =1$ for all $1\leq j \leq n$ and the operator $\text{Sym}^m(\mathbb{K}^d)\ni x \mapsto \sum_{j=1}^n\langle x, \tau_j^{\otimes m}\rangle \tau_j^{\otimes m} \in \text{Sym}^m(\mathbb{K}^d)$ is diagonalizable, then \begin{align}\label{WELCHNONABSTRACT} \max_{1\leq j,k \leq n, j \neq k}\{|n|, |\langle \tau_j, \tau_k\rangle|^{2m} \}\geq \frac{|n|^2}{\left|{d+m-1 \choose m}\right| }. \end{align} We call Inequality (\ref{WELCHNONABSTRACT}) the non-Archimedean version of the Welch bounds obtained by Welch [\textit{IEEE Transactions on Information Theory, 1974}]; the classical bound is recalled for comparison in the note following the next entry. We also formulate the non-Archimedean Zauner conjecture.

Non-Ionizing Millimeter Waves Non-thermal Radiation of Saccharomyces Cerevisiae – Insights and Interactions
Ayan Barbora, Sailendra Rajput, Konstantin Komoshvili, Jacob Levitan, Asher Yahalom, Stella Liberman-Aronov
Subject: Life Sciences, Biochemistry
Keywords: Non-ionizing Radiation; Millimeter waves; Novel biomedical applications; Yeast; Non-invasive devices
Non-ionizing millimeter waves (MMW) interact with cells in a variety of ways. Here the inhibited cell division effect was investigated using 85-105 GHz MMW irradiation within the ICNIRP (International Commission on Non-Ionizing Radiation Protection) non-thermal 20 mW/cm2 safety standards. Irradiation at a power density of about 1.0 mW/cm2 over 5-6 hours on 50 cells/μl samples of the Saccharomyces cerevisiae model organism resulted in a 62% growth rate reduction compared to the control (sham). The effect was specific to the 85-105 GHz range, and was energy and cell density dependent. Irradiation of wild-type and Δrad52 (DNA damage repair gene) deleted cells showed no differences in colony growth profiles, indicating that non-thermal MMW treatment does not cause permanent genetic alterations. Dose versus response relations studied using a standard horn antenna (~1.0 mW/cm2) and compared to those of a compact waveguide (17.17 mW/cm2) for increased power delivery resulted in complete termination of cell division via non-thermal processes, as supported by temperature rise measurements. We have shown that non-thermal MMW radiation has potential for future use in the treatment of yeast-related diseases and other targeted biomedical outcomes.
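For comparison with the non-Archimedean inequality in the Welch-bound entry above: the classical bound of Welch [IEEE Transactions on Information Theory, 1974], in its higher-order form for unit-norm vectors $\{\tau_j\}_{j=1}^n$ in the complex Hilbert space $\mathbb{C}^d$, states that whenever $n \geq {d+m-1 \choose m}$, \begin{align*} \max_{1\leq j,k \leq n, j \neq k}|\langle \tau_j, \tau_k\rangle|^{2m} \geq \frac{1}{n-1}\left[\frac{n}{{d+m-1 \choose m}}-1\right]. \end{align*} This is a standard fact recalled here only for orientation: in the non-Archimedean version, the absolute values are taken in the valued field $\mathbb{K}$ and the binomial coefficient enters through $\left|{d+m-1 \choose m}\right|$.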
Mechanisms of the Non-Thermal Exposure Effects of Non-Ionizing Millimeter Waves Radiation on Eukaryotic Cells for Improving Technological Precision Enabling Novel Biomedical Applications
Ayan Barbora, Shailendra Rajput, Konstantin Komoshvili, Jacob Levitan, Asher Yahalom, Stella Liberman-Aronov
Subject: Life Sciences, Biophysics
Keywords: non-ionizing radiation; millimeter waves; novel biomedical applications; yeast; non-invasive devices
Non-ionizing millimeter waves (MMW) are reported to interact with cells in a variety of ways. Possible mechanisms of the inhibited cell division effect were investigated using 85-105 GHz MMW irradiation within the ICNIRP (International Commission on Non-Ionizing Radiation Protection) non-thermal 20 mW/cm2 safety standards. Exposure at ~1.0 mW/cm2 over 5-6 hours of treatment on 50 cells/μl samples of the Saccharomyces cerevisiae model organism resulted in a 62% growth rate reduction compared to the control (sham). The effect was specific to the 85-105 GHz range and was energy-dose and cell-density dependent. Irradiation of wild-type and Δrad52 (DNA damage repair gene) deletion cells showed no differences in colony growth profiles, indicating that non-thermal MMW treatment does not cause genetic alterations. Dose versus response relations studied using a standard horn antenna (~1.0 mW/cm2) and compared to those of a compact waveguide (17.17 mW/cm2) for increased power delivery resulted in complete termination of cell division via non-thermal processes, as supported by temperature rise measurements. Combinations of MMW-mediated Structure Resonant Energy Transfer (SRET), membrane modulations eliciting signaling effects, and energetic resonance with biomolecules were indicated to be responsible for the observations reported. Our results provide novel mechanistic insights enabling innovative applications of non-ionizing radiation procedures for eliciting targeted biomedical outcomes.

Non-invertible Public Key Certificates
Luis Lizama-Pérez, J. Mauricio López
Subject: Mathematics & Computer Science, Information Technology & Data Management
Keywords: Non-invertible; cryptography; certificate; PKI
Post-quantum public cryptosystems introduced so far do not define a scalable public key infrastructure for the quantum era. We demonstrate here a public certification system based on Lizama's non-invertible Key Exchange Protocol, which can be used to implement a public key infrastructure (PKI) that is secure, scalable, interoperable and efficient. We show the functionality of certificates across different certification domains. Finally, we discuss how non-invertible certificates can exhibit Perfect Forward Secrecy (PFS).

Public Debt and Economic Growth Nexus: Evidence from South Asia
Saira Saeed, Tanweer Islam
Subject: Social Sciences, Economics
Keywords: endogeneity, non-linearity, threshold, FMOLS
It is well established in the literature that public debt and economic growth bear a positive and non-linear relationship. However, in recent literature, evidence of no causal relationship is found when endogeneity is accounted for in the case of advanced economies (Panizza & Presbitero, 2014). Chudik, Mohaddes, Pesaran and Raissi (2017) analyse data on forty countries and find no evidence of a universally applicable threshold effect in the relationship between debt and growth.
These advancements in the debt-growth literature provide the motivation to re-explore the relationship between public debt and economic growth under non-linearity and endogeneity in the context of the developing economies of South Asia, including Pakistan, India, Bangladesh and Sri Lanka, for the period 1980-2014. There exists a significant, positive but non-linear relationship between public debt and economic growth for the selected set of developing countries when endogeneity and non-linearity are accounted for. A negative association between public debt and economic growth is found for the SAARC region when the debt level is higher than 61% of GDP, a threshold considerably lower than that of developed economies (90% of GDP). Individual threshold levels for the debt-to-GDP ratio reveal that Sri Lanka, Pakistan and India need to control their public borrowing, as their current debt levels are above and/or around the respective threshold levels.

A Duality Principle and a Concerning Convex Dual Formulation Suitable for Non-convex Variational Optimization
Fabio Botelho
Subject: Mathematics & Computer Science, Applied Mathematics
Keywords: Convex dual variational formulation; duality principle for non-convex optimization; model in non-linear elasticity
This article develops a duality principle and a related convex dual formulation suitable for a large class of models in physics and engineering. The results are based on standard tools of functional analysis, the calculus of variations and duality theory. In particular, we develop applications to a model in non-linear elasticity.

Influence of Material-Dependent Damping on Brake Squeal in the Specific Disc Brake System
Juraj Úradníček, Miloš Musil, Ľuboš Gašparovič, Michal Bachratý
Subject: Engineering, Automotive Engineering
Keywords: brake squeal; dissipation induced instability; non-proportional damping; non-conservative system; complex eigenvalue analysis
The connection of two phenomena, non-conservative friction forces and dissipation-induced instability, can lead to many interesting engineering problems. The paper studies the influence of general material-dependent damping on the dynamical instability of disc brake systems, which leads to brake squeal. The effect of general damping is demonstrated on a minimal and on a complex model of a disc brake. A complex system including material-dependent damping is defined in commercial finite element software. The finite element model, validated by experimental data on a brake-disc test bench, is used to compute the influence of pad and disc damping variations on system stability by complex eigenvalue analysis. The analyses show a significant sensitivity of the experimentally verified unstable mode of the system to the ratio of the damping between the disc and the friction material components.

Application of the Incremental Modal Analysis for Bridges (IMPAb) Subjected to Near-Fault Ground Motions
Alessandro Vittorio Bergami, Gabriele Fiorentino, Davide Lavorato, Bruno Briseghella, Camillo Nuti
Subject: Engineering, Civil Engineering
Keywords: near field; pulse like ground motions; bridge; non-linear static analysis; non-linear dynamic analysis
Near-fault ground motions can cause severe damage to civil structures, including bridges. Safety assessment of these structures for near-fault ground motion is usually performed through non-linear dynamic analyses, although faster methods are often used in practice.
IMPAb (Incremental Modal Pushover Analysis for Bridges) makes it possible to investigate the seismic response of a bridge by considering the effects of higher modes, which are often relevant for bridges. In this work, IMPAb is applied to a bridge case study considering near-fault pulse-like ground motion records. The records were analyzed and selected from the European Strong Motion Database, and the pulse parameters were evaluated. In the paper, results from standard pushover procedures and from IMPAb are compared with nonlinear response-history analysis (NRHA), also considering the vertical component of the motion, and with incremental dynamic analysis (IDA) as benchmark solutions. Results from the case study demonstrate that the vertical seismic action has a minor influence on the structural response of the bridge. Therefore IMPAb, which can be applied considering vertical motion, remains very effective while conserving the original formulation of the procedure, and can be considered a well-performing procedure for near-fault events as well.

A Deep Dive into Genome Assemblies of Non-vertebrate Animals
Nadège Guiglielmoni, Ramón Rivera-Vicéns, Romain Koszul, Jean-François Flot
Subject: Life Sciences, Genetics
Keywords: genome assembly; sequencing; non-vertebrate animals
Non-vertebrate species represent about 95% of known metazoan (animal) diversity. They remain to this day relatively unexplored genetically, but understanding their genome structure and function is pivotal for expanding our current knowledge of evolution, ecology and biodiversity. Following the continuous improvements and decreasing costs of sequencing technologies, many genome assembly tools have been released, leading to a significant number of genome projects being completed in recent years. In this review, we examine the current state of genome projects of non-vertebrate animal species. We present an overview of available sequencing technologies and assembly approaches, as well as pre- and post-processing steps, genome assembly evaluation methods, and their application to non-vertebrate animal genomes.

Transcriptome-wide Association Study of Blood Cell Traits in African Ancestry and Hispanic/Latino Populations
Jia Wen, Munan Xie, Bryce Rowland, Jonathan D. Rosen, Quan Sun, Amanda L. Tapia, Huijun Qian, Madeline H. Kowalski, Yue Shan, Kristin L. Young, Marielisa Graff, Maria Argos, Christy L. Avery, Stephanie A. Bien, Steve Buyske, Jie Yin, Hélène Choquet, Myriam Fornage, Chani J. Hodonsky, Eric Jorgenson, Charles Kooperberg, Ruth J.F. Loos, Yongmei Liu, Jee-Young Moon, Kari E. North, Stephen S. Rich, Jerome I. Rotter, Jennifer A. Smith, Wei Zhao, Lulu Shang, Tao Wang, Xiang Zhou, Alexander P. Reiner, Laura M. Raffield, Yun Li
Subject: Life Sciences, Biochemistry
Keywords: TWAS; non-European; blood cell traits
Background: Thousands of genetic variants have been associated with hematological traits, though target genes remain unknown at most loci. Moreover, limited analyses have been conducted in African ancestry and Hispanic/Latino populations; hematological-trait-associated variants more common in these populations have likely been missed. Methods: To derive gene expression prediction models, we used ancestry-stratified datasets from the Multi-Ethnic Study of Atherosclerosis (MESA, including N=229 African American and N=381 Hispanic/Latino participants, monocytes) and the Depression Genes and Networks study (DGN, N=922 European ancestry participants, whole blood).
We then performed a transcriptome-wide association study (TWAS) for platelet count, hemoglobin, hematocrit, and white blood cell count in African (N = 27,955) and Hispanic/Latino (N = 28,324) ancestry participants. Results: Our results revealed 24 suggestive signals (p < 1×10^(-4)) that were conditionally distinct from known GWAS-identified variants, and we successfully replicated these signals in European ancestry subjects from UK Biobank. We found modestly improved correlation of predicted and measured gene expression in an independent African American cohort (the Genetic Epidemiology Network of Arteriopathy (GENOA) study (N=802), lymphoblastoid cell lines) using the larger DGN reference panel; however, some genes were well predicted using MESA but not DGN. Conclusions: These analyses demonstrate the importance of performing TWAS and other genetic analyses across diverse populations and of balancing sample size and ancestry background matching when selecting a TWAS reference panel.

Design of Tunnel Drier for the Non-centrifugal Sugar Industry
S.P. Raj, B Sravya, Morapakala Srinivas, K.S Reddy
Subject: Engineering, Mechanical Engineering
Keywords: Non-centrifugal sugar; drying; tunnel dryer
The quality and shelf-life of NCS (non-centrifugal sugar) depend mainly on its moisture content. NCS formed by the current practice of open sun drying contains moisture substantially greater than the acceptable level of 3%. This paper presents the work taken up to design a tunnel dryer that attains the required moisture content in granular NCS under various load conditions. Initially, an experimental investigation was carried out on a laboratory-scale dryer to achieve the required moisture content (< 3%) for various load conditions. These experimental data were used to validate two drying models, and one of the models was found best suited for designing an industrial-scale dryer. For various load conditions on each tray and dryer exit temperatures, nine different cases were arrived at. The number of trucks, the number of trays, the drying time and the energy requirements were computed using the suitable theoretical model. A tunnel dryer with a length of 18 m, a height of 1.2 m, a width of 1 m, 18 trucks and 24 trays on each truck was found to be the suitable dryer to dry 1 tonne of NCS, based on the minimum energy requirement of 176.49 MJ and a minimum drying time of 68 minutes.

Functional RNA Structures in the 3'UTR of Tick-Borne, Insect-Specific and No-Known-Vector Flaviviruses
Roman Ochsenreiter, Ivo L. Hofacker, Michael T. Wolfinger
Subject: Life Sciences, Virology
Keywords: Flavivirus; non-coding RNA; secondary structure
Untranslated regions (UTRs) of flaviviruses contain a large number of RNA structural elements involved in mediating the viral life cycle, including cyclisation, replication, and encapsidation. Here we report on a comparative genomics approach to characterize evolutionarily conserved RNAs in the 3'UTR of tick-borne, insect-specific and no-known-vector flaviviruses in silico. Our data support the wide distribution of previously experimentally characterized exoribonuclease-resistant RNAs (xrRNAs) within tick-borne and no-known-vector flaviviruses and provide evidence for the existence of a cascade of duplicated RNA structures within insect-specific flaviviruses.
On a broader scale, our findings indicate that viral 3'UTRs represent a flexible scaffold on which evolution can generate novel xrRNAs.

CybIQ: Secure Authentication Method
Raghavendra Devidas, Hrushikesh Srinivasachar
Subject: Mathematics & Computer Science, General & Theoretical Computer Science
Keywords: Pattern based access; Graphical password; safe password; non-intuitive password; non-static password; visually encrypted password
With increased vulnerabilities and vast technology landscapes, it is extremely critical to build systems that are highly resistant to cyber-attacks seeking to break into systems and exploit them. It is almost impossible to build 100% secure authentication and authorization mechanisms merely through standard passwords or PINs (with all combinations of special characters, numbers and upper/lower-case letters, or by using any of the graphical password mechanisms). The immense computing capacity and the many hacking methods in use make almost every authentication method susceptible to cyber-attacks in one way or another. The only proven system that is not vulnerable, in spite of highly sophisticated computing power, is the human brain. In this paper, we present a new method of authentication that combines a computer's computing ability with human intelligence. In fact, this human intelligence is personalized, making the overall security method more secure. Text-based passwords are easy to crack [6], and there is an increased need for alternative, more complex authentication and authorization methods. Some of the methods [7][8] in the category of graphical passwords can be susceptible when shoulder surfing, cameras or spy devices are used.

A New R Package for Categorizing Coding and Non-Coding Genes
Masroor Bayati, Narges Rezaie, Mehrab Hamidi, Maedeh Sadat Tahaei, Hamid Rabiee
Subject: Biology, Other
Keywords: somatic point mutations; non-coding RNA; biomarker discovery; driver genes; non-coding RNAs prioritization; health data analytics
Previous studies demonstrate the critical importance of non-coding RNAs interfacing with chromatin-modifying machinery, resulting in promoter-enhancer-based gene regulation, and raise the possibility that many other enhancer-like RNAs may operate via similar mechanisms. Critically, more than 80% of the disease-linked variations identified in genome-wide studies are located in the non-coding regions of genomes, especially in non-coding RNA, suggesting that non-coding RNAs are relevant to disease. Thus, a critical path forward for understanding the role of non-coding RNAs, especially long non-coding RNAs, is to understand the transcriptional regulation of genomic regions, especially non-coding regions. Here, we developed a user-friendly R package called SomaGene for studying and identifying enhancer-like non-coding RNAs with enriched somatic mutations in the cancer genome. SomaGene accepts different genomic variants (whole-genome/exome somatic point mutations, structural variations, copy number variations) to identify those RNAs that are significantly mutated in diseases (e.g., cancer). It then uses multiple publicly available genomics and epigenetics datasets, including ENCODE epigenomics annotations, FANTOM5 tissue-specific expression profiles, disease-associated genome-wide association SNPs, and tissue-specific eQTL pairs, to identify those RNAs with potential enhancer function. As a powerful R package, SomaGene provides cancer scientists with the opportunity to study the roles of non-coding RNAs in different cancer genomes.
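To make the kind of mutation-enrichment test described in the entry above concrete, here is a minimal sketch in Python. SomaGene itself is an R package, and its actual statistics, inputs and interface may differ; the uniform background model, the region names and the thresholds below are illustrative assumptions only.

from math import comb

def binom_sf(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p), via the complement of the CDF."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

def enriched_regions(regions, total_mutations, genome_length, alpha=0.05):
    """regions: list of (name, length_bp, observed_count) tuples.
    Flags regions whose somatic mutation count is improbably high
    under a uniform per-base-pair background mutation model."""
    hits = []
    for name, length, observed in regions:
        p_region = length / genome_length  # chance a single mutation lands here
        pval = binom_sf(observed, total_mutations, p_region)
        if pval * len(regions) < alpha:  # crude Bonferroni correction
            hits.append((name, observed, pval))
    return sorted(hits, key=lambda hit: hit[2])

# Toy run: 10,000 somatic mutations on a 3 Mb toy genome; only the region
# with far more mutations than its size predicts should be flagged.
regions = [("lncRNA-A", 2_000, 15), ("lncRNA-B", 5_000, 3)]
print(enriched_regions(regions, total_mutations=10_000, genome_length=3_000_000))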
New Concept of Factorials and Combinatorial Numbers and its Consequences for Algebra and Analysis
Mohamed Elmansour Hassani
Subject: Mathematics & Computer Science, Algebra & Number Theory
Keywords: factorials; binomial coefficients; combinatorial numbers; non-classical hypergeometric orthogonal polynomials; non-classical second-order hypergeometric linear DEs
In this article, the usual factorials and binomial coefficients are generalized and extended to the negative integers. Based on this generalization and extension, a new kind of polynomials is proposed, which leads directly to non-classical hypergeometric orthogonal polynomials and non-classical second-order hypergeometric linear DEs. The resulting polynomials can be used in non-relativistic and relativistic QM, particularly in the case of the Schrödinger and Dirac equations for an electron in a Coulomb potential field.

Non-Equilibrium Phase Behavior of Hydrocarbons in Compositional Simulations and Upscaling
Ilya M. Indrupskiy, Olga A. Lobanova, Vadim R. Zubov
Subject: Engineering, Energy & Fuel Technology
Keywords: non-equilibrium phase behavior; compositional flow simulations; phase transitions; upscaling; hydrocarbon mixtures; non-equilibrium constant volume depletion
Numerical models widely used for hydrocarbon phase behavior and compositional flow simulations are based on the assumption of thermodynamic equilibrium. However, it is not uncommon for oil and gas-condensate reservoirs to exhibit essentially non-equilibrium phase behavior, e.g., in processes of secondary recovery after pressure depletion below the saturation pressure, during gas injection, or for condensate evaporation at low pressures. In many cases the ability to match field data with an equilibrium model depends on the simulation scale. The only method of accounting for non-equilibrium phase behavior adopted by the majority of flow simulators is the option of a limited rate of gas dissolution (condensate evaporation) in black oil models. For compositional simulations, no practical yet thermodynamically consistent method has been presented so far, except for some upscaling techniques in gas injection problems. Previously reported academic non-equilibrium formulations share a common drawback: they double the number of flow equations and unknowns compared to the equilibrium formulation. In this paper, a unified thermodynamically consistent formulation for compositional flow simulations with a non-equilibrium phase behavior model is presented. The same formulation and a special scale-up technique can be used for upscaling an equilibrium or non-equilibrium model to a coarse-scale non-equilibrium model. A number of test cases for real oil and gas-condensate mixtures are given. Model implementation specifics in a flow simulator are discussed and illustrated with test simulations. A non-equilibrium constant volume depletion algorithm is presented to simulate condensate recovery at low pressures in gas-condensate reservoirs. Results of satisfactory model matching to field data are reported and discussed.

Characteristics of Real-world Non-exhaust Particulates from Vehicles
Sunhee Mun, Hwansoo Chong, Jongtae Lee, Yunsung Lim
Subject: Engineering, Automotive Engineering
Keywords: Non-exhaust particle; Rubber; Vinylcyclohexene; Dipentene; Metal
The need to regulate non-exhaust particulate emissions from vehicles has been discussed worldwide due to their toxicity to the human body as well as to the atmosphere.
In-depth studies have been conducted on the precise analysis of non-exhaust particulates, in particular the accurate measurement of tire, brake and road wear particles and their proportion in the atmosphere. In this study, the influence of tire and road wear particles (TRWP) on particulate matter (PM) in the atmosphere was investigated through tire and PM samples. PM samples were collected from the atmosphere using a high-volume sampler equipped with a quartz filter. Additionally, polycyclic aromatic hydrocarbons (PAHs) and heavy metals in tire rubber were analyzed as markers by pyrolysis-gas chromatography/mass spectrometry (pyrolysis-GC/MS), GC/MS, and inductively coupled plasma/mass spectrometry (ICP/MS). Among the markers measured in samples from tires fitted to vehicles driving on the road where the high-volume sampler was installed, more vinylcyclohexene than dipentene was detected, while more dipentene was detected in the total suspended particle (TSP) samples. Among the PAHs measured in tire samples, pyrene exhibited the highest concentration; in TSP samples, benzo(b)fluoranthene showed the highest concentration. Among the heavy metals, zinc exhibited the highest concentration in all tire samples, and calcium in TSP samples.

A Survey of GNN in Bioinformation Data
Zhenyi Zhu
Subject: Mathematics & Computer Science, Other
Keywords: Graph Neural Networks; Non-Euclidean Data; Bioinformation
With the development of data science, more and more machine learning technologies have been designed to solve complicated and challenging real-world tasks involving large volumes of data, and many significant real-world datasets take the form of networks or graphs. Graph Neural Networks (GNNs) are powerful machine learning tools well suited to processing large amounts of non-Euclidean data. Because most biological data in bioinformatics lie in the non-Euclidean domain, GNNs can be applied directly to solve problems in bioinformatics. Much research has been done in the field of GNNs, and there are also several surveys related to GNNs and their applications; however, little of this work focuses on GNNs in bioinformatics, where they could be better utilized in the future. This literature review therefore takes a comprehensive look at GNNs and their applications in the field of bioinformatics. We first introduce state-of-the-art GNN models, then survey their applications to bioinformation data, and finally suggest future directions for GNNs in bioinformatics.

New Bounds for the Hausdorff Dimension of a Dynamically Defined Cantor Set
Fernando José Sánchez-Salas
Subject: Mathematics & Computer Science, Algebra & Number Theory
Keywords: non-conformal repellers; dimension theory; thermodynamic formalism
In this paper we use the additive thermodynamic formalism to obtain new bounds on the Hausdorff and box-counting dimensions of certain non-conformal hyperbolic repellers defined by piecewise smooth expanding maps on a $d$-dimensional smooth manifold $M$.
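For context on the preceding entry, a standard fact recalled here for orientation (this is the classical conformal result, not the preprint's new bounds): for a conformal $C^1$ expanding repeller $J$ of a map $f$, the Hausdorff dimension is given by Bowen's equation, \begin{align*} \dim_H J = s^{*}, \quad \text{where } s^{*} \text{ is the unique root of } P\left(-s\log \|Df\|\right)=0, \end{align*} with $P$ the topological pressure. For non-conformal maps, where the derivative expands different directions at different rates, equality generally fails, and the thermodynamic formalism instead yields upper and lower bounds of the kind pursued above.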
Nicholas of Autrecourt: A Forerunner of Paraconsistent Logics
Arturo Tozzi
Subject: Arts & Humanities, Anthropology & Ethnography
Keywords: non-classical logic; scholasticism; theology; epistemology; condemnation
We suggest that the 14th-century scholar Nicholas of Autrecourt can be regarded as a precursor of the paraconsistent logics developed around 1950. We show how the Sorbonne licentiatus in theology provided, in his few extant writings, a refutation of both the principle of explosion and the law of non-contradiction, in accordance with the tenets of paraconsistent logics. This paves the way to the most advanced theories of truth in natural language and quantum dynamics.

The Nonrelativistic Quantum-Mechanical Hamiltonian with Diamagnetic Current-Current Interaction
Ladislaus Alexander Bányai
Subject: Physical Sciences, Acoustics
Keywords: diamagnetism; current-current interaction; non-relativistic QED
We extend the standard solid-state quantum mechanical Hamiltonian, which contains only Coulomb interactions between the charged particles, by including the (transverse) current-current diamagnetic interaction, starting from non-relativistic QED restricted to the states without photons and neglecting the retardation in the photon propagator. This derivation is supplemented with a derivation of an analogous result along the lines of the old, non-rigorous classical Darwin-Landau-Lifshitz argumentation within the physical Coulomb gauge.

The Role of Non-Canonical Hsp70s (Hsp110/Grp170) in Cancer
Graham Chakafana, Addmore Shonhai
Subject: Life Sciences, Biochemistry
Keywords: Hsp110; Grp170; non-canonical Hsp70; chaperone; cancer
Although cancers account for over 16% of all global deaths annually, at present no reliable therapies exist for most types of the disease. As protein folding facilitators, heat shock proteins (Hsps) play an important role in cancer development. Not surprisingly, Hsps are among the leading anticancer drug targets. Generally, Hsp70s are divided into two main subtypes: the canonical Hsp70s (E. coli Hsp70/DnaK homologues) and the non-canonical (Hsp110 and Grp170) members. These two main Hsp70 groups are delineated from each other by distinct structural and functional specifications. Non-canonical Hsp70s are considered holdase chaperones, while canonical Hsp70s are refoldases. This characteristic feature is mirrored by the distinct structural features of the two groups of chaperones: Hsp110/Grp170 members are larger, as they possess an extended acidic insertion in their substrate-binding domains. While the role of canonical Hsp70s in cancer has received a fair share of attention, the roles of non-canonical Hsp70s in cancer development have received less attention in comparison. In the current review, we discuss the structure-function features of the non-canonical Hsp70 members and how these features shape their role in cancer development. We further map out their interactome and discuss the prospects of targeting these proteins in cancer therapy.

Influence of non-puddled transplanting and residues of previous mustard on rice (Oryza sativa L.)
Mohammad Mobarak Hossain, Mahfuza Begum, Md. Moshiur Rahman, Abul Hashem, Richard W. Bell, Enamul Haque
Subject: Biology, Agricultural Sciences & Agronomy
Keywords: crop residues; non-puddled; strip tillage; yield
On-farm research was conducted in Gouripur sub-district of Mymensingh district, Bangladesh, during the boro (mid-November to June) season in 2013-14 and 2014-15 to evaluate the performance of non-puddled rice cultivation with and without crop residue retention. The rice variety BRRI dhan28 was transplanted under two tillage practices, viz. puddled conventional tillage (CT) and non-puddled strip tillage (ST), and at two levels of mustard residue, i.e., no residue (R0) and 50% residue (R50). The experiment was designed in a randomized complete block design with four replications. There were no significant yield differences between tillage practices and residue levels in 2013-14. In the following year, however, ST yielded 9% more grain than CT, leading to a 22% higher benefit-cost ratio (BCR). Retention of 50% residue increased yield by 3% compared to no residue, which contributed to a 10% higher BCR. ST combined with 50% residue retention gave the highest grain yield (5.81 t ha-1), which in turn produced the highest BCR (1.06).

General Equilibrium Theory in Economics and Beyond
Mohamad Rilwan, Agra T. Wijeratne
Subject: Social Sciences, Accounting
Keywords: Energy; Equilibrium; Gradients; Non-Equilibrium Thermodynamics; Entropy
General equilibrium theory tries to show how and why all free markets tend toward equilibrium in the long run. What is meant by equilibrium in this paper, however, is taken more from a thermodynamic point of view. In order to understand the actual situation, it is necessary to study open systems, which are complex. In physics, such behavior in a complex system can be explained using non-equilibrium thermodynamics. A system is able to self-organize and sustain itself away from equilibrium, and economic systems may fluctuate around a particular point. To sustain itself far from the equilibrium state, a system needs to degrade more energy and materials. In this study, the energy consumption patterns of Sri Lanka and the USA are discussed. The pattern for Sri Lanka is close to the model proposed here, whereas the energy consumption pattern of the USA is more complicated due to external factors.

Impact of High Non-performing Loan Ratios on Bank Lending Trends and Profitability
Eric Jing
Subject: Social Sciences, Economics
Keywords: NPL Ratio; non-performing loans; economic recovery
The goal of this paper is to explore the relationship between a bank's non-performing loan ratio (NPL ratio) and the corresponding impact on its profitability and lending behavior. It also investigates the macroeconomic impact on economies with excessively high NPL ratios, as well as the efficacy of the alleviation measures used by banks and governments around the world to help facilitate a decrease in high NPL ratios. The possible implications and effects of the COVID-19 pandemic on NPL ratios are also addressed. It is found that when excessively high NPL ratios go unaddressed, the economy tends to suffer. On the other hand, this study shows that when measures are taken to reduce or eliminate the high NPL ratios, economic performance improves, and the reduction has a clear positive impact on the economy.
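To fix the central quantity of the preceding entry: the NPL ratio is the share of a bank's gross loan book that is non-performing, with 90 or more days past due being the common criterion. A toy computation in Python follows; the sample portfolio and the 10% alert level are illustrative assumptions, not values from the paper.

from dataclasses import dataclass

@dataclass
class Loan:
    principal: float
    days_past_due: int

def npl_ratio(portfolio):
    """Non-performing loans (>= 90 days past due) over total gross loans."""
    total = sum(loan.principal for loan in portfolio)
    npl = sum(loan.principal for loan in portfolio if loan.days_past_due >= 90)
    return npl / total if total else 0.0

portfolio = [
    Loan(100_000, 0),   # performing
    Loan(50_000, 120),  # non-performing
    Loan(250_000, 10),  # performing
    Loan(75_000, 200),  # non-performing
]
ratio = npl_ratio(portfolio)
print(f"NPL ratio: {ratio:.1%}")  # 26.3% for this toy loan book
if ratio > 0.10:  # illustrative alert level, not a regulatory constant
    print("Elevated NPL ratio: the paper links this to weaker lending and profitability.")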
Quantum Correction for Newton's Law of Motion
Timur Kamalov
Subject: Physical Sciences, General & Theoretical Physics
Keywords: quantum correction; Non-Inertial Mechanics; Dark Metric
A description of motion in non-inertial reference frames by means of the inclusion of higher time derivatives is studied. It is noted that incompleteness of the description of physical reality is a problem for any theory, both quantum mechanics and classical physics. The "stability principle" is put forward. We also provide macroscopic examples of non-inertial mechanics and verify the use of higher-order derivatives as non-local hidden variables, based on the equivalence principle, where acceleration is equal to the gravitational field. Acceleration in this case is a function of the higher derivatives with respect to time. The definition of Dark Metrics for matter and energy is presented to replace the standard notions of Dark Matter and Dark Energy.

Evaluating Human Photoreceptoral Inputs from Night-Time Lights Using RGB Imaging Photometry
Alejandro Sánchez de Miguel, Salvador Bará, Martin Aubé, Nicolás Cardiel, Carlos E. Tapia, Jaime Zamorano, Kevin J. Gaston
Subject: Physical Sciences, Applied Physics
Keywords: light pollution; vision; non-vision; DSLRs; ISS
Night-time lights interact with human physiology through different pathways starting at the retinal layers of the eye, from the signals provided by the rods, the S-, L- and M-cones, and the intrinsically photosensitive retinal ganglion cells (ipRGC). These individual photic channels combine in complex ways to modulate important physiological processes, among them the daily entrainment of the neural master oscillator that regulates circadian rhythms. Evaluating the relative excitation of each type of photoreceptor generally requires full knowledge of the spectral power distribution of the incoming light, information that is not easily available in many practical applications. One such instance is wide-area sensing of public outdoor lighting; present-day radiometers onboard Earth-orbiting platforms with sufficient nighttime sensitivity are generally panchromatic and lack the required spectral discrimination capacity. In this paper we show that RGB imagery acquired with off-the-shelf digital single-lens reflex cameras (DSLRs) can be a useful tool to evaluate, with reasonable accuracy and high angular resolution, the photoreceptoral inputs associated with a wide range of lamp technologies. The method is based on linear regressions of these inputs against optimum combinations of the associated R, G, and B signals, built for a large set of artificial light sources by means of synthetic photometry. Given the widespread use of RGB imaging devices, this approach is expected to facilitate the monitoring of the physiological effects of light pollution, from ground and space alike, using standard imaging technology.

Length Distortions of a Spindle under the Influence of Temperature Rise during Machining and a Compensation Solution
Cao-Sang Tran, Wen-Yuh Jywe, Tung-Hsien Hsieh
Subject: Engineering, Industrial & Manufacturing Engineering
Keywords: length compensation; non-bar system; machine tools
Dealing with the errors of a product after machining is a major issue in this field. Such errors affect product quality in unpredictable ways when products are in use, particularly in applications such as medical facilities.
In this study, the precision of machine tools is improved by monitoring the temperature at multiple points and measuring the differential positions of the spindle. A temperature measurement tool and software running on the Windows platform were developed and combined with the Non-bar system in order to support the analysis of temperature rise against the change in spindle length, and to find a compensation solution based on eccentricity and length distortion. All tests in this investigation were performed on a UX300 five-axis CNC machine tool, and the errors after applying the compensation function were reduced by at least 70% with respect to the errors after machining without the compensation function.

A New and Stable Estimation Method of Country Economic Fitness and Product Complexity
Vito D. P. Servedio, Paolo Buttà, Dario Mazzilli, Andrea Tacchella, Luciano Pietronero
Subject: Social Sciences, Economics
Keywords: economic complexity; non-linear map; bipartite networks
We present a new method of estimating the fitness of countries and the complexity of products by exploiting a non-linear, non-homogeneous map applied to publicly available information on the goods exported by a country. The non-homogeneous terms guarantee both convergence and stability. After a suitable rescaling of the relevant quantities, the non-homogeneous terms are eventually set to zero, so that this new method is parameter-free. This new map reproduces the findings of the method proposed by Tacchella et al. [1], and allows for an approximate analytic solution in the case of actual binarized matrices based on the Revealed Comparative Advantage (RCA) indicator. This solution is connected with a new quantity describing the neighborhood of nodes in bipartite graphs, representing in this work the relations between countries and exported products. Moreover, we define a new indicator of country net-efficiency, quantifying how efficiently a country invests in capabilities able to generate innovative, complex, high-quality products. Finally, we demonstrate analytically the local convergence of the algorithm.

Mathematical Modeling of Rogue Waves: A Survey of Recent and Emerging Mathematical Methods and Solutions
Sergio Manzetti
Subject: Physical Sciences, Fluids & Plasmas
Keywords: rogue; wave; models; KdV; NLSE; non-local; ocean; optics
Anomalous waves and rogue events are closely associated with irregularities and unexpected events occurring at various levels of physics, such as in optics, in oceans and in the atmosphere. Mathematical modeling of rogue waves is a highly active field of research, which has evolved over the last decades into a specialized part of mathematical physics. The applications of the mathematical models for rogue events are directly relevant to technology development for the prediction of rogue ocean waves, and to signal processing in quantum units. In this survey, a comprehensive perspective of the most recent developments in methods for representing rogue waves is given, along with a discussion of the devised forms and solutions. The standard nonlinear Schrödinger equation, the Hirota equation, the MMT equation and other models are discussed, and their properties highlighted. This survey shows that the most recent advancements in modeling rogue waves give models that can be used to establish methods for the prediction of rogue waves in open seas, which is important for the safety and activity of marine vessels and installations.
The study further puts emphasis on the differences between the methods, and on how the resulting models form a basis for representing rogue waves in various forms, solitary or on a wave background. This review also has a pedagogic component directed towards students and interested non-experts, and forms a complete survey of the most conventional and emerging methods published until recently.

Black Hole Entropy from Non-Dirichlet Sectors, and Bounce Solution
Inyong Park
Subject: Physical Sciences, Acoustics
Keywords: black hole entropy; non-Dirichlet boundary condition; bounce
In a series of recent works, the relevance of gravitational boundary degrees of freedom and their dynamics in gravity quantization and black hole information has been explored. In this work we further that progress by focusing keenly on the boundary degrees of freedom as the origin of black hole entropy. Wald's entropy formula is scrutinized, and the reason that Wald's formula correctly captures the entropy of a black hole is examined. Afterwards, the limitations of Wald's method are discussed, and a coherent view of entropy based on boundary dynamics is presented. The discrepancy observed in the literature between holographic and Wald's entropies is addressed. We generalize the entropy definition so as to handle a time-dependent black hole. Large gauge symmetry plays a pivotal role. Non-Dirichlet boundary conditions and gravitational analogues of Coleman-De Luccia bounce solutions are central to identifying the microstates and to differentiating the origins of the entropies associated with different classes of solutions. The result of the present work leads to a view that black hole entropy is entanglement entropy in a thermodynamic setup.

Long Non-coding RNAs, Extracellular Vesicles and Inflammation in Alzheimer's Disease
Ania Canseco-Rodriguez, Valeria Masola, Vicenza Aliperti, Maria Meseguer-Beltran, Aldo Donizetti, Ana María Sanchez-Perez
Subject: Life Sciences, Genetics
Keywords: Alzheimer's disease; inflammation; non-coding RNAs; exosome vesicles
Alzheimer's Disease (AD) currently has no effective treatment; however, preventive measures can significantly delay the onset and progression of the disease. Thus, accurate and early prediction of risk is an important strategy for alleviating the AD burden. Neuroinflammation is a major factor prompting the onset of the disease. Inflammation exerts its toxic effect via multiple mechanisms; among others, it affects gene expression via the modulation of non-coding RNAs (ncRNAs), such as miRNAs. Recent evidence supports that inflammation can also affect long non-coding RNA (lncRNA) expression. While the association between miRNAs and inflammation in AD has been extensively studied, the role of lncRNAs in neurodegenerative diseases has been less explored. In this review, we focus on lncRNAs and inflammation in the context of AD. Furthermore, since plasma-isolated extracellular vesicles (EVs) are increasingly recognized as an effective monitoring strategy for brain pathologies, we have focused on the studies reporting dysregulated lncRNAs in EVs isolated from AD patients and controls. The reviewed literature shows a positive association between pro-inflammatory lncRNAs and AD. However, the reports evaluating lncRNA alterations in EVs isolated from the plasma of patients and controls, although still limited, confirm the value of specific AD-associated lncRNAs as reliable biomarkers.
This is an emerging field that will open new avenues for improving risk prediction and patient stratification, and it may lead to the discovery of potential novel therapeutic targets for AD.

Tetracyclines - An Important Therapeutic Tool for Dermatologists
Małgorzata Orylska-Ratyńska, Waldemar Placek, Agnieszka Owczarczyk-Saczonek
Subject: Medicine & Pharmacology, Dermatology
Keywords: tetracyclines; doxycycline; lymecycline; minocycline; pleiotropy; non-antibiotic properties
Tetracyclines are a group of antibiotics whose first representative was discovered over 70 years ago. Since then, they have been of great interest in dermatology. In addition to their antibacterial activity, they are able to inhibit metalloproteinases and exhibit anti-inflammatory, anti-apoptotic and antioxidant effects. Their side effects have been thoroughly studied over the years; the most characteristic and important in daily dermatological practice are phototoxicity, hyperpigmentation, onycholysis, photo-onycholysis, induced lupus erythematosus and idiopathic intracranial hypertension. In this article, we summarize the use of tetracyclines in infectious diseases and inflammatory dermatoses, and further discuss indications for which the efficacy and safety of tetracyclines have been highlighted over the past few years.

Is Quantum Theory Falsified by Loophole-free Bell Experiments?
Ghenadie N. Mardari
Subject: Physical Sciences, General & Theoretical Physics
Keywords: Bell's theorem; EPR paradox; quantum entanglement; non-locality
Quantum theory predicts a whole class of non-local phenomena, observable via the coincident detection of EPR-type systems. An important feature of these observations is their non-signaling character. Technically, non-local phenomena should only be observable for post-selected sub-ensembles, rather than for complete projections. Otherwise, superluminal telegraphy becomes possible. Yet a couple of recent Bell experiments reported the observation of quantum non-locality for 100% of the detected events. Does it follow that signaling non-locality is possible? If so, was quantum theory falsified? This puzzle is solved by revisiting the interpretation of the spin projection operator, with special focus on its dual nature (combining spectral decomposition with spectral transformation). "Component switching" is not a loophole, but rather a requirement of quantum mechanics in this context, because sharp spin projections are partial (as well as partially overlapping). Surprisingly, it is possible to pre-select incompatible statistical sub-ensembles with heralded detection and to reveal the same behavior as in post-selected observations. Therefore, Bell experiments confirm the predictions of quantum theory without violating the non-signaling principle.

The Technique and Advantages of Contrast-Enhanced Ultrasound in the Diagnosis and Follow-up of Traumatic Abdomen Solid Organ Injuries
Marco Di Serafino, Francesca Iacobellis, Maria Laura Schillirò, Roberto Ronza, Francesco Verde, Dario Grimaldi, Giuseppina Dell'aversano Orabona, Martina Caruso, Vittorio Sabatino, Chiara Rinaldo, Luigia Romano
Subject: Medicine & Pharmacology, Other
Keywords: CEUS; blunt trauma; non-operative management; follow-up
Trauma is one of the most common causes of death or permanent disability in young people, so a timely diagnostic approach is crucial.
In polytrauma patients, CEUS has been shown to be more sensitive than US for the detection of solid organ injuries, improving the identification and grading of traumatic abdominal lesions with levels of sensitivity and specificity similar to those seen with MDCT. CEUS is recommended for the diagnostic evaluation of hemodynamically stable patients with isolated blunt moderate-energy abdominal traumas and for the diagnostic follow-up of conservatively managed abdominal traumas. In this pictorial review we illustrate the advantages and disadvantages of CEUS and the procedure details, with tips and tricks, during investigation of blunt moderate-energy abdominal trauma as well as during follow-up in non-operative management. Towards a Physically Consistent Phase-Field Model for Alloy Solidification Peter Bollada, Peter K Jimack, Andrew M Mullis Subject: Engineering, Industrial & Manufacturing Engineering Keywords: phase-field; solidification; non-equilibrium thermodynamics; crystal formation We summarise contributions made to the computational phase-field modelling of alloy solidification from the University of Leeds spoke of the LiME project. We begin with a general introduction to phase-field, and then reference the numerical issues that arise from the solution of the model, before detailing each contribution to the modelling itself. These latter contributions range from controlling and developing interface-width independent modelling; controlling morphology in both single and multiphase settings; generalising from single to multi-phase models; and creating a thermodynamically consistent framework for modelling entropy flow, thereby postulating a temperature field consistent with the concepts of, and applicable in, multiphase and density-dependent settings. The Costs and Potential Benefits of Introducing the "I Don't Know" Answer in Binary Classification Settings Damjan Krstajic Subject: Mathematics & Computer Science, Probability And Statistics Keywords: non-applicability domain; binary classification; ignorance; decision-making We are of the opinion that during the design of a binary classifier one ought to consider adding an "I don't know" answer. We provide the case for the introduction of this third category when a human needs to make a decision based on the answer from a binary classifier. We discuss the costs and potential benefits of its introduction. Colloquially, we have used the term "I don't know", but formally we refer to it as NotAvailable. A procedure to define NotAvailable predictions in binary classifiers, called all leave-one-out models (ALOOM), is presented as a proof of concept. Furthermore, we discuss the potential benefits of applying ALOOM in real-life applications. Ontology of a Wavefunction from the Perspective of an Invariant Proper Time Salim Yasmineh Subject: Physical Sciences, Acoustics Keywords: proper time; non-locality; simultaneity; wavefunction; measurement; ontology All the arguments of a wavefunction are defined at the same instant, implying a notion of simultaneity. In a somewhat related matter, certain phenomena in quantum mechanics seem to have non-local causal relations. Both concepts are in contradiction with special relativity. We propose to define the wavefunction with respect to the invariant proper time of special relativity instead of standard time.
Moreover, we shall adopt the original idea of Schrödinger suggesting that the wavefunction represents an ontological cloud-like object that we shall call 'individual fabric', which has a finite density amplitude vanishing at infinity. Consequently, measurement can be assimilated to a confining potential that triggers an inherent non-local mechanism within the individual fabric. It is formalised by multiplying the wavefunction by a localising Gaussian, as in the GRW theory, but in a deterministic manner. Matrix Product State Simulations of Non-equilibrium Steady States and Transient Heat Flows in the Two-Bath Spin-Boson Model at Finite Temperatures Angus Dunnett, Alex W. Chin Subject: Physical Sciences, Acoustics Keywords: Open quantum systems, Tensor networks, non-equilibrium dynamics Simulating the non-perturbative and non-Markovian dynamics of open quantum systems is a very challenging many-body problem, due to the need to evolve both the system and its environments on an equal footing. Tensor network and matrix product states (MPS) have emerged as powerful tools for open system models, but the numerical resources required to treat finite temperature environments grow extremely rapidly and limit their applications. In this study we use time-dependent variational evolution of MPS to explore the striking theory of Tamascelli et al. that shows how finite-temperature open dynamics can be obtained from zero-temperature, i.e. pure wave function, simulations. Using this approach, we produce a benchmark data set for the dynamics of the Ohmic spin-boson model across a wide range of couplings and temperatures, and also present a detailed analysis of the numerical costs of simulating non-equilibrium steady states, such as those emerging from the non-perturbative coupling of a qubit to baths at different temperatures. Despite ever growing resource requirements, we find that converged non-perturbative results can be obtained, and we discuss a number of recent ideas and numerical techniques that should allow wide application of MPS to complex open quantum systems. Concept Paper: Towards an Interdisciplinary Framework about Intelligence Nicolas Palanca-Castan, Beatriz Sanchez-Tajadura, Rodrigo Cofré Keywords: Theoretical framework; Artificial Intelligence; Philosophy; Non-human intelligence In recent years, advances in science, technology, and the way in which we view our world have led to an increasingly broad use of the term "intelligence". As we learn more about biological systems, we find more and more examples of complex and precise adaptive behavior in animals and plants. Similarly, as we build more complex computational systems, we recognize the emergence of highly sophisticated structures capable of solving increasingly complex problems. These behaviors show characteristics in common with the sort of complex behaviors and learning capabilities we find in humans, and therefore it is common to see them referred to as "intelligent". These analogies are problematic as the term "intelligence" is inextricably associated with human-like capabilities. While these issues have been discussed by leading researchers of AI and renowned psychologists and biologists, highlighting the commonalities and differences between AI and biological intelligence, there have been few rigorous attempts to create an interdisciplinary approach to the modern problem of intelligence.
This article proposes a comparative framework to discuss what we call "purposeful behavior", a characteristic shared by systems capable of gathering and processing information from their surroundings and modifying their actions in order to fulfill a series of implicit or explicit goals. Our aim is twofold: on the one hand, the term purposeful behavior allows us to describe the behavior of these systems without using the term "intelligence", avoiding the comparison with human capabilities. On the other hand, we hope that our framework encourages interdisciplinary discussion to help advance our understanding of the relationships among different systems and their capabilities. Export Product Diversification, Poverty and Tax Revenue in Developing Countries Sena Kimm Gnangnon Subject: Social Sciences, Economics Keywords: Export Product diversification; Poverty; Non-resource tax revenue. The current paper has examined the effect of both export product diversification and poverty on non-resource tax revenue in developing countries. The analysis has used an unbalanced panel dataset of 111 countries over the period 1980-2014. Based on the Blundell and Bond two-step system Generalized Method of Moments technique, the empirical analysis has shown interesting findings. Export product concentration and poverty negatively influence non-resource tax revenue over the full sample, but this effect varies across countries in the sample. Furthermore, the effect of export product diversification on non-resource tax revenue performance depends on the level of poverty. It appears that export product diversification positively influences non-resource tax revenue performance in countries that experience lower poverty rates. From a policy perspective, these findings show that policies in favour of diversifying export product baskets and reducing poverty would contribute to enhancing non-resource tax revenue performance in developing countries. Granulomatous Pneumonia in a Nile Crocodile (Crocodylus niloticus) Caused by a Member of the Mycobacterium Chelonae/Abscessus Group Marco Gobbi, Sara Corneli, Nicoletta D'Avino, Elisabetta Manuali, Antonella Di Paolo, Carla Sebastiani, Marcella Ciullo, Michele Tentellini, Maria Lodovica Pacciarini, Martina Sebastianelli, Silvia Pavone, Piera Mazzone Subject: Medicine & Pharmacology, Veterinary Medicine Keywords: Mycobacterium chelonae/abscessus; crocodile; mycobacteriosis; Non-tuberculous mycobacteria A 40-year-old male Nile crocodile (Crocodylus niloticus) was diagnosed with pulmonary mycobacteriosis caused by a member of the Mycobacterium chelonae/abscessus group. Post-mortem examination showed severe systemic visceral granulomatous involvement, with lesions in the lungs, heart, liver, spleen and kidneys. Histopathological examination of lung, spleen, heart and liver revealed multifocal to coalescing granulomas showing heterophils in the central zone and an outer rim of epithelioid histiocytes, multinucleated giant cells and lymphocytes. Ziehl–Neelsen histological staining revealed rare vacuoles containing numerous alcohol-acid-resistant bacteria. Mycobacterial infection was confirmed by culture and PCR targeting the 16S rRNA gene. Sequence analysis of the DNA amplicon revealed 100% homology with the M. chelonae/abscessus group. Even though the classification of the members of this group is still being updated, to the best of our knowledge, this is the first report of infection by a M. chelonae/abscessus group member in the Nile crocodile species.
Marital Domestic Violence and Maternal Health in Nigeria: Evidence from the Demographic and Health Survey Atsiya Pius Amos, Hauwa Mohammed Maimona, Godiya Atsiya Pius Subject: Social Sciences, Economics Keywords: Non-cooperative Household Model; Domestic Violence; Maternal Health There is increasing evidence that non-cooperative models describe household structures in developing countries more succinctly than the unitary model. Domestic violence against women, which is pervasive in Nigeria even though likely to be under-reported, needs to be understood within the framework of a non-cooperative relationship between couples. In this study, we identify factors of domestic violence against women within couples who were currently in marital or cohabiting partnerships. Also, we investigate whether domestic violence influences the decision of women to terminate pregnancies. We use data from the 2018 Nigeria Demographic and Health Survey (NDHS). Multivariate logistic regressions were used to model the predictors of domestic violence, and its influence on the decision to terminate pregnancies among married women. Of the 8,910 married women interviewed for domestic violence, 35.33% had ever experienced a form of domestic violence. We find that women with higher education, women who are not poor, and women who reside in urban areas have 44%, 18% and 15% reductions in the odds of experiencing domestic violence, respectively. On the other hand, women who are employed, who own land, whose husbands/partners are employed in the agricultural sector, and whose husbands/partners drink alcohol have 1.16, 1.2, 2.07, and 2.8 times increased odds of experiencing domestic violence, respectively. Also, we find that currently married women experiencing domestic violence have 1.25 times increased odds of terminating pregnancies compared with their counterparts who are not experiencing domestic violence. Effectively, poverty, low levels of education, residing in rural areas, the drinking habits of husbands/partners, employment, marital capital, and the land ownership status of women are risk factors for domestic violence against married women in Nigeria, but these can be affected by policies and programmes. Importantly, public actions to contain domestic violence in order to improve maternal health should be implemented in the context of the dynamics of the non-cooperative relationship existing between married couples. Proteins Nsp12 and 13 of SARS-CoV-2 Have Mitochondrial Recognition Signal: A Connection with Cellular Mitochondrial Dysfunction and Disease Manifestation Upasana Ray, Feroza Begum Subject: Life Sciences, Virology Keywords: SARS-CoV-2; ARDS; non-structural proteins; mitochondria Mitochondria are classically termed the powerhouse of the mammalian cell. Most of the cellular chemical energy in the form of adenosine triphosphate (ATP) is generated by mitochondria, and dysregulation of mitochondrial functions can thus be potentially fatal to cellular homeostasis and health. Acute respiratory distress has previously been linked to mitochondrial dysfunction. Severe SARS-CoV-2 infection leads to acute respiratory distress syndrome (ARDS) and can be fatal. We investigated a possible connection between SARS-CoV-2, ARDS and mitochondria. Here, we report the identification of SARS-CoV-2 non-structural proteins (particularly Nsp12 and 13) that have recognition sequences for mitochondrial entry.
We also report that these proteins can potentially shuttle between the cytoplasm and mitochondria based on their localization signals and help in the downstream maintenance of the virus. Their ability to use ATP for enzymatic activities may cause ATP scavenging, allowing viral RNA functions at the expense of host cell health. Information Length as a Useful Index to Understand Variability in the Global Circulation Eun-jin Kim, James Heseltine, Hanli Liu Subject: Physical Sciences, Fluids & Plasmas Keywords: variabilities; modeling; non-equilibrium; turbulence; gravity waves; PDFs With improved measurement and modelling technology, variability has emerged as an essential feature in non-equilibrium processes. While traditionally mean values and variance have been heavily used, they are not appropriate in describing extreme events where a significant deviation from mean values often occurs. Furthermore, stationary Probability Density Functions (PDFs) miss crucial information about the dynamics associated with variability. It is thus critical to go beyond a traditional approach and deal with time-dependent PDFs. Here, we consider atmospheric data from the Whole Atmosphere Community Climate Model (WACCM) and calculate time-dependent PDFs and the information length from these PDFs, which is the total number of statistically different states that a system passes through in time. Time-dependent PDFs are shown to be non-Gaussian in general, and the information length calculated from these PDFs offers a new perspective for understanding variability and the correlations among different variables and regions. Specifically, we calculate time-dependent PDFs and information length and show that the information length tends to increase with altitude, albeit in a complex form. This tendency is more robust for flows/shears than for temperature. Also, much more similarity in the information length is found among flows and shears than with the temperature. This means a stronger correlation among flows/shears because of a strong coupling through gravity waves in this particular WACCM model. We also find an increase of the information length with latitude and an interesting hemispheric asymmetry for flows/shears/temperature, with a stronger anti-correlation (correlation) between flows/shears and temperature at high (low) latitudes. These results also suggest the importance of high latitudes/altitudes in the information budget in the Earth's atmosphere, and the spatial gradient of the information as a useful proxy for the transport of physical quantities. Development and Differentiation in Monobodies Based on the Fibronectin Type 3 Domain Peter G. Chandler, Ashley M. Buckle Subject: Life Sciences, Biotechnology Keywords: adnectin; biosensor; Fibronectin; monobody; non-antibody scaffold; therapeutic As a non-antibody scaffold, monobodies based on the fibronectin type III (FN3) domain overcome antibody size and complexity while maintaining analogous binding loops. However, antibodies and their derivatives remain the gold standard for the design of new therapeutics. In response, clinical therapeutic proteins based on the FN3 domain are beginning to use native fibronectin function as a point of differentiation. The small and simple structure of monomeric monobodies confers increased tissue distribution and reduced half-life, whilst the absence of disulphide bonds improves stability in cytosolic environments.
Where multi-specificity is challenging with an antibody format that is prone to mis-pairing of chains, FN3 domains in the fibronectin assembly already interact with a large number of molecules. As such, multiple monobodies engineered for interaction with therapeutic targets are being combined in a similar beads-on-a-string assembly, which improves both efficacy and pharmacokinetics. Furthermore, full-length fibronectin is able to fold into multiple conformations as part of its natural function, and a greater understanding of how mechanical forces allow for the transition between states will lead to advanced applications that truly differentiate the FN3 domain as a therapeutic scaffold. A New Paradox in Quantum Mechanics Subject: Physical Sciences, General & Theoretical Physics Keywords: Bell's theorem; EPR paradox; quantum entanglement; non-locality The EPR paradox is known as an interpretive problem, as well as a technical discovery in quantum mechanics. It defined the basic features of two-quantum entanglement, as needed to study the relationships between two non-commuting variables. In contrast, four variables are observed in a typical Bell experiment. This is no longer the same problem. The full complexity of this process can only be captured by the analysis of four-quantum entanglement. Indeed, a new paradox emerges in this context, with straightforward consequences. Either quantum behavior is capable of signaling non-locality, or it is local. Both alternatives appear to contradict existing knowledge. Still, one of them has to be true, and the final answer can be obtained conclusively with a four-quantum Bell experiment. Changes of Conformation in Albumin Protein with Temperature Piotr Weber, Piotr Bełdowski, Krzysztof A. Domino, Adam Gadomski Subject: Physical Sciences, Other Keywords: conformation of protein; albumin protein; non-gaussian chain We study the conformation of an albumin protein in the temperature range of 300 K-315 K, i.e. in the physiological range of temperature. Using simulations, we calculate the values of two backbone angles that carry most of the information about the positioning of the protein chain in conformational space. Given these, we calculate the energy components of the protein. Further, using the Flory theory, we determine the temperature at which the investigated albumin chain model is closest to the freely jointed chain model. Near the Flory temperature, we study the energy components and the conformational entropy, both derived from the two angles that reflect most of the chain dynamics in conformational space. We show that the conformational entropy is an oscillating function of time in the considered range of temperature. Our finding is that the only regular oscillation pattern appears near the Flory temperature. Thermodynamic, non-extensive, or turbulent quasi equilibrium for space plasma environment Peter Yoon Subject: Physical Sciences, Astronomy & Astrophysics Keywords: non-extensive entropic principle; plasma turbulence; quasi equilibrium The Boltzmann-Gibbs (BG) entropy has been used in a wide variety of problems for more than a century. It is well known that BG entropy is extensive, but for certain systems such as those dictated by long-range interactions, the entropy must be non-extensive. Tsallis entropy possesses non-extensive characteristics, parametrized by a variable q (q = 1 being the classic BG limit), but unless q is determined from microscopic dynamics, the model remains but a phenomenological tool.
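For reference, a standard form of the Tsallis entropy for a discrete distribution {p_i} (textbook notation, not quoted from the abstract) is S_q = k(1 − Σ_i p_i^q)/(q − 1), which recovers the Boltzmann-Gibbs form S = −k Σ_i p_i ln p_i in the limit q → 1.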
To date, very few examples have emerged in which q can be computed from first principles. This paper shows that the space plasma environment, which is governed by long-range collective electromagnetic interaction, represents a perfect example for which the q parameter can be computed from micro-physics. By taking into account the electron velocity distribution function measured in the heliospheric environment, and considering it to be in a quasi-equilibrium state with the electrostatic turbulence known as the quasi-thermal noise, it is shown that the value q = 9/13 = 0.6923 may be deduced. This prediction is verified against observations made by spacecraft, and it is shown to be in excellent agreement. Polysome-Associated lncRNAs during Cardiomyogenesis of hESCs Isabela Tiemy Pereira, Lucia Spangenberg, Guillermo Cabrera, Bruno Dallagiovanna Subject: Life Sciences, Molecular Biology Keywords: long non-coding RNA; hESC; cardiomyocyte; RNA-seq Long non-coding RNAs (lncRNAs) have been found to be involved in many biological processes, including the regulation of cell differentiation, but a complete characterization of lncRNAs is still lacking. Additionally, there is evidence that lncRNAs interact with ribosomes, raising questions about their functions in cells. Here, we used a developmentally staged protocol to induce cardiogenic commitment of hESCs and then investigated the differential association of lncRNAs with polysomes. Our results identified lncRNAs in both the ribosome-free and polysome-bound fractions during cardiogenesis and showed a very well-defined temporal lncRNA association with polysomes. Clustering of lncRNAs was performed according to the gene expression patterns during the five timepoints analyzed. In addition, differential lncRNA recruitment to polysomes was observed when comparing the differentially expressed lncRNAs in the ribosome-free and polysome-bound fractions or when calculating the polysome-bound vs ribosome-free ratio. The association of lncRNAs with polysomes could represent an additional cytoplasmic role of lncRNAs, e.g., in the translational regulation of mRNA expression. On the Model of Hyperrational Numbers with Selective Ultrafilter Armen Grigoryants Subject: Mathematics & Computer Science, General Mathematics Keywords: Hyperrational number, selective ultrafilter, non-standard analysis, ultrapower In the standard construction of hyperrational numbers using an ultrapower, we assume that the ultrafilter is selective. This makes it possible to assign a real value to any finite hyperrational number. So, we can consider hyperrational numbers with a selective ultrafilter as an extension of the traditional real numbers. We also prove the existence of a strictly monotonic or stationary representing sequence for any hyperrational number. Market Risk and Financial Performance of Non-financial Companies Listed on the Moroccan Stock Exchange Diby François Kassi, Dilesha Nawadali Rathnayake, Akadje Jean Roland Edjoukou Subject: Social Sciences, Finance Keywords: Market risk, Financial performance, Non-financial firms, Morocco This study examines the effect of market risk on the financial performance of 31 non-financial companies listed on the Casablanca Stock Exchange (CSE) over the period 2000-2016. We utilize three alternative variables to assess financial performance, namely return on assets, return on equity and profit margin. Next, we use the degree of financial leverage, the book-to-market ratio, and the gearing ratio as market risk variables.
Additionally, we employ the pooled OLS model, the fixed effects model, the random-effects model, the difference GMM and the system GMM models. The results show that market risk indicators have a negative and significant influence on the companies' financial performance. The elasticities are greater for the book-to-market ratio than for the degree of financial leverage and the gearing ratio, respectively. In most cases, firm size, the tangibility ratio, and the cash holdings ratio have a positive effect on financial performance, whereas the firms' age, the debt-to-income ratio, stock turnover, and leverage hurt the performance of these non-financial companies. Therefore, decision-makers and managers should mitigate market risk through appropriate risk management strategies, such as derivatives and insurance techniques. NiFe Alloy Reduced on Graphene Oxide for Electrochemical Non-Enzymatic Glucose Sensing Zhe-Peng Deng, Yu Sun, Yong-Cheng Wang, Jian-De Gao Subject: Chemistry, Analytical Chemistry Keywords: NiFe alloy; graphene oxide; glucose; non-enzymatic sensor A NiFe alloy nanoparticle/graphene oxide hybrid (NiFe/GO) was prepared for electrochemical glucose sensing. The as-prepared NiFe/GO hybrid was characterized by transmission electron microscopy (TEM) and X-ray diffraction (XRD). The results indicated that NiFe alloy nanoparticles can be successfully deposited on GO. The electrochemical glucose sensing performance of the as-prepared NiFe/GO was studied by cyclic voltammetry and amperometric measurement. Results showed that the NiFe/GO-modified glassy carbon electrode had a sensitivity of 173 μA mM−1 cm−2 for glucose sensing with a linear range up to 5 mM, which was superior to commonly used Ni nanoparticles. Furthermore, high selectivity for glucose detection can be achieved by NiFe/GO. All the results demonstrated that the NiFe/GO hybrid is promising for use in electrochemical glucose sensing. Characterization of Keratinophilic Fungal Species and Other Non-Dermatophytes in Riyadh, Saudi Arabia Suaad Alwakeel Subject: Biology, Other Keywords: keratinophilic fungi, non-dermatophytes, fungal flora, hair, nails Background: The presence of fungal species on the skin surface and hair is a known finding in many mammalian species, and humans are no exception. Superficial fungal infections are sometimes a chronic and recurring condition that affects approximately 10-20% of the world's population. However, most species that are isolated from humans tend to occur as co-existing flora. This study was conducted to determine the diversity of fungal species isolated from the hair and nails of workers in the central region of Saudi Arabia, where there are few observational studies on these mycological species. Materials and Methods: Male workers from Riyadh, Saudi Arabia were recruited for this study, and samples were obtained from their nails and hair for mycological analysis, which was done using Sabouraud's agar and sterile wet soil. Fungal isolates were examined microscopically. Results: Twenty-four hair samples yielded a total of 26 species from 19 fungal genera. Chaetomium globosum was the most commonly isolated fungal species, followed by Emericella nidulans, Cochliobolus neergaardii, and Penicillium oxalicum. Three fungal species were isolated from nail samples, namely Alternaria alternata, Aureobasidium pullulans, and Penicillium chrysogenum.
Most of the isolated fungal species (17 of the 26, or 65.38%) have been neither thoroughly characterised nor morphologically classified. Conclusion: This study demonstrates the presence of previously undescribed fungal species that contribute to the normal flora of the skin and its appendages and may have a role in their pathogenesis. Experimental Investigation on Condensation of Vapor in the Presence of Non-Condensable Gas on a Vertical Tube Shengjun Zhang, Feng Shen, Xu Cheng, Xianke Meng, Dandan He Subject: Engineering, Energy & Fuel Technology Keywords: condensation; non-condensable gas; experimental study; containment cooling According to the operating conditions of the time-unlimited passive containment heat removal system (TUPAC), a separate-effect experimental facility was established to investigate the heat transfer performance of steam condensation in the presence of non-condensable gas. The effects of wall subcooling temperature, total pressure and the air mass fraction on the heat transfer process were analyzed. A heat transfer model was also developed. The results showed that the heat transfer coefficient decreased with rising subcooling temperature and with decreasing total pressure and air mass fraction. It was revealed that Dehbi's correlation predicted the heat transfer coefficient conservatively, especially in the low-pressure and low-temperature region. A novel correlation was fitted to the data obtained in the following ranges: 0.20-0.45 MPa in pressure, 20%-80% in mass fraction, and 15 °C-45 °C in temperature. The discrepancy between the correlation and the experimental data was within ±20%. Investigation of Brain Vascular Territories in Stroke Patients with Non-Valvular Atrial Fibrillation Detected as an Etiological Factor Mustafa Karaoglan, Serkan Demir Subject: Medicine & Pharmacology, Clinical Neurology Keywords: brain vessel; ischemic stroke; non-valvular atrial fibrillation Objective: The aim was to investigate the cerebral vascular territories in stroke patients with NVAF as an etiologic factor. Material and Methods: A total of 104 patients over 55 years of age who were referred to our hospital between January 2015 and September 2016, had a standard ECG or Holter ECG record documented in their medical history, and were diagnosed with stroke were included. Our study was designed as a retrospective analysis of prospective data. Detailed history, physical examination and electrocardiography (ECG) evaluations of the patients were performed. Descriptive statistics were used for the detection of findings, and the t-test, Pearson chi-square test and Fisher's exact test were used for the analysis of differences. Results: 53.8% (N = 56) of the patients were male and 46.2% (N = 48) were female. The mean age was 73.5. The MCA was the most common site of vascular involvement in NVAF-dependent strokes. In the MCA vascular territory, ischemic infarcts were detected most frequently in the upper and lower divisions. The SCA and PCA followed the MCA. Approximately 64% of the NVAF-related strokes were anterior circulation infarctions (ASE) and 22% were posterior circulation infarctions (PSE). There was a significant difference in age and past stroke history factors in favor of ASE (p<0.05). There was no significant difference between ASE and PSE in HT, cardiac history and DM factors (p>0.05).
Conclusion: It was emphasized that the vascular territory that underwent ischemia in the acutely displayed infarcts, and the etiological factor for this vascular territory, could be predicted. The Application of a Non-Invasive Apoptosis Detection Sensor (IADS) to Histone Deacetylase Inhibitor (HDACi)-Induced Breast Cancer Cell Death Kai-Wen Hsu, Chien-Yu Huang, Ka-Wai Tam, Chun-Yu Lin, Li-Chi Huang, Ching-Ling Lin, Wen-Shyang Hsieh, Wei-Ming Chi, Yu-Jia Chang, Po-Li Wei, Chia-Hwa Lee Subject: Life Sciences, Molecular Biology Keywords: non-invasive apoptosis detection sensor; breast cancer; HDACi Breast cancer is the most common malignancy in women and the second leading cause of cancer death in women. The triple negative breast cancer (TNBC) subtype is a breast cancer subset without ER, PR and HER2 expression, limiting treatment options and presenting a poorer survival rate. Thus, we investigated whether HDACi could be used as a potential anti-cancer therapy for breast cancer cells. In this study, we found that TNBC and HER2-enriched breast cancers are extremely sensitive to the HDACi panobinostat and belinostat, as shown by cell viability assays, apoptotic marker identification and flow cytometry measurements. In addition, we developed a bioluminescence-based live-cell non-invasive apoptosis detection sensor (IADS) system to provide quantitative and kinetic analyses of apoptotic cell death induced by HDACi treatment of breast cancer cells. Furthermore, the use of HDACi may be combined with a chemotherapeutic agent such as doxorubicin to synergise drug sensitivity in TNBC cells (MDA-MB-231), but not in normal breast epithelial cells (MCF-10A), providing therapeutic benefits against breast tumours in the clinic. Bottom-Up Synthesis of Porous NiMo Alloy for Hydrogen Evolution Reaction Kailong Hu, Samuel Jeong, Mitsuru Wakisaka, Jun-ichi Fujita, Yoshikazu Ito Subject: Materials Science, Nanotechnology Keywords: nanoporous; NiMo; non-noble metal catalyst; hydrogen evolution The bottom-up synthesis of porous NiMo alloy reduced from NiMoO4 nanofibers was systematically investigated to fabricate non-noble metal porous electrodes for hydrogen production. Annealing NiMoO4 nanofibers at different temperatures under a hydrogen atmosphere reveals that a 950 °C annealing temperature is key to producing a bicontinuous porous NiMo alloy without oxide phases. The porous NiMo alloy as a cathode in electrical water splitting demonstrates not only almost identical catalytic activity to commercial Pt/C in 1.0 M KOH solution, but also superb stability for 12 days at an electrode potential of −200 mV (vs. RHE). A Logic Framework for Non-Conscious Reasoning Felipe Lara-Rosano Subject: Behavioral Sciences, General Psychology Keywords: non-conscious reasoning; fuzzy logic; linguistic truth values Human non-conscious reasoning is one of the most successful procedures developed to solve everyday problems in an efficient way. This is why the field of artificial intelligence should analyze, formalize and emulate the multiple ways of non-conscious reasoning with the purpose of applying them in knowledge-based systems, neurocomputers and similar devices for aiding people in the problem-solving process. In this paper, a framework for those non-conscious ways of reasoning is presented based on object-oriented representations, fuzzy sets and multivalued logic. Closed-form expression for the dynamic dispersion coefficient in Hagen-Poiseuille flow Lichun Wang, M.
Bayani Cardenas Subject: Earth Sciences, Environmental Sciences Keywords: solute transport, dispersion, Hagen-Poiseuille flow, non-Fickian We present an exact expression for the upscaled dynamic dispersion coefficient (D) for one-dimensional transport by Hagen-Poiseuille flow, which is the basis for modeling transport in porous media idealized as capillary tubes. The theoretical model is validated by comparing the breakthrough curves (BTCs) from a 1D advection-dispersion model with dynamic D to those from direct numerical solutions utilizing a 2D advection-diffusion model. Both Taylor dispersion theory and our new theory are good predictors of D in the lower Peclet number (Pe) regime, but gradually fail to capture most parts of the BTCs as Pe increases. However, our model generally predicts the mixing and spreading of solutes better than Taylor's theory, since it covers all transport regimes from molecular diffusion, through anomalous transport, to Taylor dispersion. The model accurately predicts D based on the early part of BTCs even in a relatively high Pe regime (~62) where Taylor's theory fails. Furthermore, the model allows for calculation of the time scale that separates Fickian from non-Fickian transport. Therefore, our model can readily be used to calculate dispersion through short tubes of arbitrary radii, such as the pore throats in a pore network model. Environmental Impacts of Mining of Mineral Resources Vladimír Lapčík, Andrea Kaločajová, Petr Novák Subject: Earth Sciences, Environmental Sciences Keywords: mining; non-energy mineral resources; environmental impact assessment The article focuses on the mining of non-energy mineral resources with minimum environmental impacts. It draws on the research results of the project Competence Centre for Effective and Ecological Mining of Mineral Resources, implemented at the Faculty of Mining and Geology at VŠB-Technical University of Ostrava, the Czech Geological Survey, the company Watrad Ltd., the state enterprise Diamo, the company RPS Ostrava plc and the company Sedlecký kaolin plc. The paper starts with a partial analysis of the existing legal norms related to the mining and processing of mineral resources. Next, it analyses mineral resource mining options free of negative environmental impacts. The fundamental tool to assess the potential environmental impacts of mining is the implementation of the Environmental Impact Assessment (EIA) process for a given mineral resource. In the Czech Republic, environmental impact assessment is carried out under Act 100/2001 Coll. Its important amendment is Act 39/2015 Coll., stating, inter alia, that the environmental impact assessment is rigidly connected with other permits and procedures, such as the zoning process and building construction permits. The article describes the environmental impacts of mining of non-energy mineral resources, including the following factors: appropriation of land; impacts on surface water, ground water and soil; noise; influence on the landscape character; and air pollution. The paper also includes a case study summarizing information on the environmental factors that may play a role in potential underground mining of graphite in the deposit Český Krumlov - Městský Vrch and the deposit Lazec - Křenov. A Study about Non-Volatile Memories Dileep Kumar Subject: Mathematics & Computer Science, Information Technology & Data Management Keywords: Non-volatile Memories; NAND Flash Memories; Storage Memories This paper presents an overview of upcoming non-volatile memories (NVM).
Non-volatile memory devices are electrically programmable and erasable; they store charge in a location within the device and retain that charge when the voltage supply to the device is disconnected. A non-volatile memory is typically a semiconductor memory comprising thousands of individual transistors configured on a substrate to form a matrix of rows and columns of memory cells. Non-volatile memories are used in digital computing devices for the storage of data. In this paper we give an introduction, including a brief survey of upcoming NVMs such as FeRAM, MRAM, CBRAM, PRAM, SONOS, RRAM, racetrack memory and NRAM. In the future, non-volatile memory may eliminate the need for comparatively slow forms of secondary storage, such as hard disks.
Symmetry, Integrability and Geometry: Methods and Applications (SIGMA) SIGMA 10 (2014), 048, 11 pages arXiv:1310.1688 https://doi.org/10.3842/SIGMA.2014.048 Multi-Hamiltonian Structures on Spaces of Closed Equicentroaffine Plane Curves Associated to Higher KdV Flows Atsushi Fujioka a and Takashi Kurose b a) Department of Mathematics, Kansai University, Suita, 564-8680, Japan b) Department of Mathematical Sciences, Kwansei Gakuin University, Sanda, 669-1337, Japan Received October 11, 2013, in final form April 16, 2014; Published online April 22, 2014 Higher KdV flows on spaces of closed equicentroaffine plane curves are studied and it is shown that the flows are described as certain multi-Hamiltonian systems on the spaces. Multi-Hamiltonian systems describing higher mKdV flows are also given on spaces of closed Euclidean plane curves via the geometric Miura transformation. Key words: motions of curves; equicentroaffine curves; KdV hierarchy; multi-Hamiltonian systems.
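For context, common textbook forms of the equations behind these flows (sign and scaling conventions vary, and these are not quoted from the paper) are the KdV equation u_t − 6uu_x + u_xxx = 0 and the mKdV equation v_t − 6v²v_x + v_xxx = 0, related by the Miura transformation u = v² + v_x, which maps mKdV solutions to KdV solutions.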
Genetic analysis of novel phenotypes for farm animal resilience to weather variability Enrique Sánchez-Molano (ORCID: orcid.org/0000-0003-2734-8560), Vanessa V. Kapsona, Joanna J. Ilska, Suzanne Desire, Joanne Conington, Sebastian Mucha & Georgios Banos Climate change is expected to have a negative impact on food availability. While most efforts have been directed to reducing greenhouse gas emissions, complementary strategies are necessary to control the detrimental effects of climate change on farm animal performance. The objective of this study was to develop novel animal resilience phenotypes using reaction norm slopes, and examine their genetic and genomic parameters. A closely monitored dairy goat population was used for this purpose. Individual animals differed in their response to changing atmospheric temperature and a temperature-humidity index. Significant genetic variance and heritability estimates were derived for these animal resilience phenotypes. Furthermore, some resilience traits had a significant unfavourable genetic correlation with animal performance. Genome-wide association analyses identified several candidate genes related to animal resilience to environment change. Heritable variation exists among dairy goats in their production response to fluctuating weather variables. Results may inform future breeding programmes aimed at ensuring efficient animal performance under changing climatic conditions. According to the United Nations Intergovernmental Panel on Climate Change, human activities since the pre-industrial times have had a strong impact on climate. Agriculture is believed to contribute to climate change mostly due to greenhouse gas emissions through the use of fertilisers, methane production by livestock and nitrous oxide emissions from soils [1]. Furthermore, indirect consequences of the agricultural industrialisation such as deforestation [2], intensive monoculture leading to a reduction in variation [3], and the improper use of water irrigation and industrial machinery have also contributed to climate change. The substantial rise in global atmospheric temperature has been particularly steep over the past few decades (0.17 °C/decade), and has been largely noticeable in the northern hemisphere during spring and winter and more uniform throughout the year in the southern hemisphere. In Europe, in addition to the gradual increase in temperature, climate change has also been manifesting in alterations in intra-seasonal and inter-annual variability, with decreasing variability of winter mean temperatures and increased variability of summer mean temperatures [4]. An increase in temperature variability is also predicted for tropical countries [5]. Additional modifications in precipitation and humidity patterns are also evident, with increased and decreased annual precipitation in northern and southern Europe, respectively, and fluctuations in precipitation in central Europe [6]. With regard to agriculture and livestock farming, the main focus to date has been on mitigating the effects of methane and other greenhouse gas emissions [7, 8]. At the same time, there is a growing concern that climate change may adversely affect the quality and quantity of both plant [9] and livestock [10] products leading to reduced food availability as well as increased frequency and severity of disease [11].
Therefore, there is a recognised need to address the current detrimental effects of environmental degradation on animal and plant production, and to develop additional strategies to mitigate the problem [10, 12, 13]. Selective breeding for enhanced animal resilience to environmental variation may offer a novel strategy to address the impact of climate change on livestock production [14, 15]. The few genetic studies conducted to date have focussed on extreme directional climate challenges such as heat stress from very high temperatures [16,17,18,19]. While these considerations are appropriate in specific geomorphological areas, climate change leading to increased seasonal variability in weather conditions may also affect animal performance [10, 12], even within the moderate temperature range. Animal resilience must be properly defined in order to derive appropriate phenotypes across the range of prevailing and expected environmental conditions [20,21,22]. These phenotypes could be included in selective breeding programmes aiming at sustainable animal production levels in the presence of environmental (climate) perturbations. Different theoretical frameworks have been used to model resilience to environmental changes and its effects on animal performance. Recent studies have shown the potential use of genotype by environment interaction (GxE) to estimate resilience phenotypes for animal production traits [23,24,25]. In this context, individual phenotypes can be described as a continuous function of an environmental variable using random regression model approaches [26]. Reaction norm functions can then be used to express resilience as a phenotypic response of animal performance to a changing environment (for example, weather). The objectives of the present study were to (i) derive novel animal resilience phenotypes based on milk production changes in response to weather variability and (ii) investigate the genetic and genomic architecture of these newly derived animal phenotypes. We deployed reaction norm functions to derive resilience phenotypes, mixed models for statistical analyses to estimate relevant genetic parameters, and genome-wide association studies to detect molecular markers and candidate genes controlling resilience. We used data from a well-monitored dairy goat population, but our approach is applicable to any livestock species. Animal performance records and weather measurements Descriptive statistics of animal performance and weather records are presented in Table 1. Daily milk yield, temperature and a temperature-humidity index (THI) reflected averages of a 10-day period. Table 1 Descriptive statistics of animal performance and weather records The prevailing weather conditions in the geographic region during the time of the study are illustrated in Additional file 1: Figure S1. These conditions are concordant with other weather reports in the UK [27], with average temperatures of 17-20 °C in July–August and 3-4 °C in January–February. Individual animal resilience phenotypes Descriptive statistics of animal resilience phenotypes are shown in Table 2. These phenotypes reflect the change of individual animal daily performance (milk yield) in response to changing weather (temperature and THI). Values of individual phenotypes were both positive, suggestive of increased milk production at higher values of the weather measurement, and negative, reflecting decreased animal performance at higher values of the weather measurement.
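The exact formula used to compute the THI is given in the study's reference [29] and is not reproduced here; as an illustrative assumption only, a commonly used temperature-humidity index for livestock is THI = (1.8 T + 32) − (0.55 − 0.0055 RH)(1.8 T − 26), where T is the dry-bulb temperature in °C and RH is the relative humidity in percent.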
An additional phenotype was the absolute value of these records, indicating stable (values close to zero) versus volatile milk production response to weather change. Table 2 Descriptive statistics of resilience phenotypes expressed as milk production change (kg) per unit increase in weather variables Genetic parameters of resilience phenotypes Variance components and genetic parameter estimates for animal resilience phenotypes are shown in Table 3. All estimates were significantly greater than zero (P < 0.01). Genetic correlations of total lifetime milk yield with the resilience phenotypes related to absolute slopes (volatility phenotypes) were also significantly positive (P < 0.01). The latter implies an unfavourable correlation where animals with high milk yield potential are also more likely to have their milk production affected by changing weather. Table 3 Genetic parameters of resilience phenotypes expressed as milk production change per unit increase in weather variables Genomic association analysis Population structure was not detected and values of the inflation factor λ ranged from 0.996 to 1.001 for all analysed phenotypes. Several single nucleotide polymorphisms (SNPs) were found to be significantly associated with total lifetime milk yield and resilience phenotypes either at genome- or chromosome-wide levels (Table 4, Additional file 2: Figure S2). Table 4 summarises these results and includes annotated genes found in the respective genomic regions. One genome-wide significant SNP was detected on chromosome 19 for total lifetime milk yield and resilience phenotypes based on absolute slopes, which was also significant at chromosome-wide level for all the other resilience phenotypes. Another two genome-wide significant SNPs were detected for total lifetime milk yield on chromosomes 8 and 13, with no effect on resilience traits. Other chromosome-wide associations were detected on chromosomes 3, 4, 13, 14, 19, 20 and 24. All significant SNPs on chromosome 19 span a region of 1289 kb, representing a relatively high linkage disequilibrium block (Fig. 1). Table 4 Single Nucleotide Polymorphisms significantly associated with goat milk yield and resilience phenotypes at genome-* and chromosome-wide level Fig. 1 Linkage disequilibrium structure on chromosome 19 spanning the region between significant SNPs for resilience phenotypes; significant SNPs are marked in red Climate change is expected to affect future livestock performance due not only to directional changes such as rising atmospheric temperature but also to increased volatility in weather conditions. Selective breeding for enhanced animal resilience to weather changes may contribute to the mitigation of the problem, leading to stable animal performance that is unaffected by weather variability. The present study set out to identify novel phenotypes of animal resilience and address their potential use in breeding schemes by estimating genetic parameters and identifying potential candidate genes. Results would determine how amenable animal resilience might be to improvement through genetic selection and how to inform relevant breeding programmes. The use of linear slopes derived from reaction norm functions fitted to random regression models provided an assessment of the response of individual animal performance to changing weather including atmospheric temperature and THI.
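To make the phenotype definition concrete, here is a minimal sketch of deriving per-animal reaction-norm slopes as ordinary least-squares regressions of daily milk yield on temperature, with the absolute slope as the volatility phenotype. It is illustrative only: the column names are assumptions, and the slopes in the paper come from a random regression (reaction norm) model that also fits genetic and permanent environmental effects, not from plain per-animal OLS.

import numpy as np
import pandas as pd

def reaction_norm_slopes(records: pd.DataFrame) -> pd.DataFrame:
    # records: one row per animal-day with illustrative columns
    # 'animal_id', 'temperature' (10-day average, deg C) and 'milk_kg'.
    rows = []
    for animal, grp in records.groupby("animal_id"):
        x = grp["temperature"].to_numpy(dtype=float)
        y = grp["milk_kg"].to_numpy(dtype=float)
        # An OLS slope needs at least two records with distinct temperatures.
        if len(x) < 2 or np.ptp(x) == 0.0:
            continue
        slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)  # cov(x, y) / var(x)
        rows.append({"animal_id": animal,
                     "slope_kg_per_degC": slope,   # directional resilience phenotype
                     "abs_slope": abs(slope)})     # volatility phenotype
    return pd.DataFrame(rows)

Under this definition, animals with absolute slopes close to zero would be the most resilient, since their production barely responds to a change in temperature.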
While the assumption of linearity was valid for the available range of weather measurements in the present study, other weather measurements and/or different value ranges of the same measurements in other geomorphological regions of the world might warrant investigation of non-linear models. In the latter case, the methodology presented here would still be relevant, as slopes at specific ranges in the weather measurement trajectory may be derived and used as distinct resilience phenotypes. For example, in areas where climate change is expected to lead to increased temperatures beyond the heat stress threshold (around 35 °C for dairy goats [28]), slopes of performance traits below and above this threshold could be treated as separate phenotypes in a multi-trait breeding index. When considering a range of temperatures below the heat stress threshold, as was the case in the present study, low temperatures are associated with lower average animal performance. Under cold stress (temperatures below 10 °C), animals' feed intake is mostly directed towards maintaining their thermal balance requirements at the cost of producing less milk. Under higher temperatures, but still below the heat stress threshold, thermal balance requirements will be reduced, leading to better animal performance. Indeed, population curves from the reaction norm in the present study revealed a favourable impact of rising temperature and THI on performance, manifested as increased daily milk production. The effect of THI almost mirrored that of temperature, partly because of the formula used to estimate THI [29] and partly because a relatively wider range of temperature values was observed in our data compared to humidity. The individual animal resilience phenotypes derived in the present study exhibited significant phenotypic variation. Thus, the same weather change invoked positive or negative responses in different individuals while, for others, production was not affected at all (zero slopes). The latter individuals could be considered the most resilient to climate change. Furthermore, a significant proportion of the observed phenotypic variation among animals was genetic and heritable. Heritability estimates for resilience phenotypes ranged from 0.09 to 0.11, which is within the range of estimates for other fitness-related traits previously reported in goats [27], cattle [28] and sheep [29]. Although relatively low, these estimates are significantly greater than zero, meaning that animal resilience to weather change is amenable to improvement via selective breeding. Since the outcome of selective breeding is cumulative, it is recommended that relevant programmes be put in place immediately in order to gradually and systematically enhance animal resilience to weather conditions as climate change becomes more pronounced. When resilience was defined as the absolute value of the slope, reflecting volatility of animal performance with changing weather, a significant antagonistic genetic correlation with the actual level of milk production was estimated. This correlation suggests that animals with the genetic capacity for high milk yield will also be genetically predisposed to less stable milk production when challenged with changing weather. Although our range of temperatures is outside the heat stress interval, this result is also in agreement with previous studies on heat stress, where high-merit animals were found to be more susceptible to environmental change [30, 31].
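For reference, the genetic parameters discussed here carry their standard quantitative-genetics definitions (textbook forms, not restated in the paper): the narrow-sense heritability h² = σ²a / σ²p is the ratio of additive genetic variance to total phenotypic variance, and the genetic correlation between two traits is rg = σa(1,2) / (σa1 · σa2), the additive genetic covariance scaled by the two genetic standard deviations. A positive rg between milk yield and the absolute slope is what makes the association unfavourable here.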
Given these potentially unfavourable correlations, careful consideration should be given to resilience phenotypes when including them in the breeding goal. Selection index theory can be used to effectively combine genetically antagonistic traits, leading to overall genetic improvement in livestock [32,33,34,35]. Furthermore, index weights will need to be re-estimated every few generations, in order to account for new genetic correlations between traits emanating from changes in linkage disequilibrium due to selection.

Our genome-wide analyses identified several genomic markers associated with resilience phenotypes, particularly on chromosome 19. Although the significant SNPs identified on this chromosome were in mid to high linkage disequilibrium, only three of them, positioned within less than 0.1 Mb of each other, defined an actual haplotype with an overall squared correlation greater than 0.8 [36]. In this haplotype, one genome-wide significant SNP was detected affecting milk production level and the resilience phenotypes based on absolute slopes; this SNP was also significant at chromosome-wide level for all the other resilience phenotypes. A previous study in goats [37] showed a genome-wide significant association of milk yield with another SNP in the same region, located within 32 kb of the SNP identified here. Our SNP was found in exon 3 of the RNASEK gene, which encodes the ribonuclease K protein. While the particular function of the latter is unknown, other ribonuclease pathways have previously been shown to be related to milk production [38], as well as to host defence tissues and secretions in cattle [39]. Furthermore, ribonucleases are often involved in the arrest of protein synthesis to conserve energy under stress conditions [40]. Other chromosome-wide significant SNP associations for slopes on temperature and THI were also detected in this haplotype on chromosome 19, close to the genes ALOX12 and ASGR2. ALOX12 encodes arachidonate 12-lipoxygenase, previously linked to goat milk and protein yield [37, 41] and to the development and maintenance of the skin barrier [42]. ASGR2 encodes a subunit of the asialoglycoprotein receptor, associated with udder attachment in goats and cattle [37, 43]. Additional SNPs affecting resilience phenotypes were found on chromosome 19 outside the defined haplotype. Of particular interest is the association close to ALOXE3, which encodes arachidonate lipoxygenase 3, a protein implicated in skin differentiation. In humans, mutations of this gene cause congenital ichthyosis, a skin disease whose symptoms include intolerance to heat and humidity [44]. This protein is also involved in the development of fat cells, and has previously been linked to udder depth in goats [37] and to the pathway of arachidonic acid, a polyunsaturated fatty acid present in mammalian milk [45].

Furthermore, several SNPs were detected in the present study that significantly affected total lifetime milk yield without any significant association with resilience. Among these, a genome-wide significant SNP was detected on chromosome 8, previously associated with goat milk production [37] and close to the genes SLC1A1 and GLIS3. Another genome-wide significant SNP was detected on chromosome 13, close to the genes BMP7 and TFAP2C, the latter (transcription factor AP-2 gamma) having previously been associated with mammary development and several milk traits in sheep [46].
Other chromosome-wide significant SNPs for milk yield were detected on chromosomes 3, 4, 13, 14 and 20. The region on chromosome 4, in particular, lies between exons 1 and 2 of gene HECW1, previously associated with vitamin B-12 content in cow milk [47]. Significant SNPs detected in the present study were generally located close to genes encoding proteins related to lipid metabolism, skin differentiation and stress response. Particular genetic variants segregating in the studied population could be related to variation in tolerance to temperature and humidity through a combination of direct effects on metabolism and indirect effects on stress and discomfort, even within the range of thermoneutral temperatures. However, the identified SNPs explained only a small proportion of the trait variances, potentially indicating a polygenic architecture underlying resilience of milk production to weather change. These results therefore support the hypothesis of a complex underlying genetic link between animal production and environmental comfort involving multiple biological networks.

Additional considerations are warranted when addressing the impact of climate change on breeding schemes. Of particular importance are the strength and direction of the expected changes. In the case of the UK, temperatures are expected to rise by about 2 °C by 2100, with a potential 4.2 °C increase in summer temperatures in southern England by 2080 [48]. Under the same scenario, winters will become wetter by up to 23% by 2080 and summers drier by up to 24%, with more frequent and severe droughts [48]. These changes will bring greater weather instability and pose a threat to animal performance if resilience is not considered in breeding schemes. Therefore, selective breeding schemes should include resilience phenotypes based on absolute values of slopes, in order to select animals that are more resilient to short- and medium-term changes. In other cases, however, the direction of the individual animal production change might matter more than the effect of increased variability. Resilience phenotypes based on actual slopes rather than their absolute values might then prove more useful, allowing selection of animals whose performance increases in the direction of the expected climate change. Example scenarios where such schemes would be useful are countries where weather changes are expected to shift seasonal conditions consistently in one direction. Selective breeding schemes could then be informed by multiple resilience phenotypes measured at different values of weather measurements (for example, temperature), creating an animal index that combines directional increase in performance up to the inflection (stress) threshold with stability of performance thereafter. Furthermore, an economic assessment of the reaction norm as a novel animal phenotype is important, particularly when combining multiple traits in selection indices (a toy numerical example follows below). Previous studies [49, 50] have shown that the economic values of phenotypes derived with reaction norms depend on the trait whose stability across conditions is measured, as well as on the diversity of environments in which progeny of the selected animals will be raised. This consideration was, however, outside the scope of this study, and further research needs to be conducted within the context of particular breeding schemes.
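As a toy illustration of combining genetically antagonistic traits in a selection index, as discussed above, the sketch below computes classical (Hazel) index weights b = P⁻¹Gv from a phenotypic (co)variance matrix P, a genetic (co)variance matrix G and economic weights v. All numbers are invented for demonstration; they are not estimates from this study.

```python
import numpy as np

# Toy 2-trait example: trait 1 = lifetime milk yield, trait 2 = volatility
# (absolute slope). P = phenotypic (co)variance matrix, G = genetic
# (co)variance matrix, v = economic weights (negative on volatility,
# since less volatile is better). All numbers are invented.
P = np.array([[1.00, 0.15],
              [0.15, 1.00]])
G = np.array([[0.30, 0.06],
              [0.06, 0.10]])
v = np.array([1.0, -0.5])

# Classical (Hazel) index weights: b = P^{-1} G v
b = np.linalg.solve(P, G @ v)
print("index weights:", b)
# An animal's index value is I = b[0]*yield_phenotype + b[1]*volatility_phenotype;
# candidates are ranked on I, balancing yield against stability.
```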
Finally, the use of reaction norms to derive resilience phenotypes can be applied not only to production traits, as shown in the present study, but also to other animal traits related to health and reproduction. While previous studies of fitness traits have not detected large genotype-by-environment interactions in dairy cattle [51, 52], studies in beef cattle have revealed a significant impact of such interactions on animal reproduction [53]. It is therefore important to consider a possible genetic basis of resilience in all biological functions of interest, and its potential inclusion in selection indices for breeding schemes.

Conclusions

The present study has demonstrated the applicability of reaction norms for deriving resilience phenotypes of animal performance under weather variability. The phenotypes obtained exhibited significant heritable genetic variance and can be used to underpin selective breeding schemes aiming to enhance animal performance and production stability in varying weather conditions. Candidate genes were detected for several resilience phenotypes, including genes related to stress response, lipid metabolism and skin development. These results can be used to further improve the accuracy of selective breeding. Non-linear models and a more extensive range of environments should be considered in future studies to account for variation outside the range studied here.

Methods

Daily milk production records of individual animals were obtained from two UK dairy goat farms located at latitudes 53° and 54° north. Strong genetic connectedness existed between the two farms as a result of an inter-farm breeding programme [54]. Animals on these farms are kept in sheds without climate-controlled conditions. Because of specific management practices on these farms, each daily milking record obtained was actually an average over 10 consecutive days. Only records from the first 720 days of lactation were kept for the present study. Data were limited to goats that kidded from 2007 onwards, at a kidding age between 9 and 89 months, and with at least three valid milk records. In addition, animal records with a lifetime estimate of average daily milk yield outside three standard deviations from the mean were removed. The final dataset consisted of 980,689 milk records for 20,546 goats. The animal pedigree was extracted from the farm database and comprised 46,825 animals spanning 19 generations, including 524 sires and 20,205 dams.

Weather data were obtained from the nearest weather station (less than 20 miles from the farms) and included average daily temperature and humidity. A temperature-humidity index (THI) was then calculated using the National Research Council formula [29]:

$$ \mathrm{THI} = (1.8 \cdot T + 32) - (0.55 - 0.0055 \cdot \mathrm{RH}) \cdot (1.8 \cdot T - 26) $$

where THI is the temperature-humidity index, T the average daily temperature (°C) and RH the average daily relative humidity (%). Consistent with the definition of animal performance, the weather measurements used in the study were averages over the same 10-day periods corresponding to each milk production record.
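The NRC formula above translates directly into code. A minimal sketch follows; the function name and example values are illustrative:

```python
def thi(temp_c, rh_percent):
    """Temperature-humidity index, NRC (1971) formula as used in the study [29]."""
    return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rh_percent) * (1.8 * temp_c - 26)

# Example: 20 deg C at 70% relative humidity gives a THI just above 66
print(thi(20.0, 70.0))
```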
Individual resilience phenotypes

A theoretical random regression model including a reaction norm function is:

$$ y_{ij} = X + f(\beta, X_j) + f_i(a_i, X_j) + e_{ij} $$

where $y_{ij}$ is the performance record of individual animal i in a given environment j, X is a set of fixed effects describing all environments, $f(\beta, X_j)$ is a function (population reaction norm) describing the relationship between average animal performance and environment j, $f_i(a_i, X_j)$ is a function (individual animal reaction norm) describing the relationship between individual animal i and environment j, expressed as a deviation from the population reaction norm, and $e_{ij}$ is the residual. This model was fitted to milk yield records and corresponding temperature and THI values using second degree Legendre polynomials for the reaction norm function and the BLUPF90 suite of programs [55]. Initial exploration revealed a relatively linear behaviour for both weather measurements and animal performance (Fig. 2). Therefore, further analyses were conducted using the following simplified model with first degree Legendre polynomials [23]:

$$ y_{ij} = X + \mu + \mu_i + (s + s_i) \cdot X_{ij} + e_{ij} $$

where $\mu$ is the population average intercept, $\mu_i$ the intercept of animal i expressed as a deviation from the population intercept, and $s$ and $s_i$ the population and individual (deviation) slopes on the fixed environmental effect; all other terms are as in the model above. The population reaction norm is then $\mu + sX_{ij}$ and the individual reaction norm $\mu_i + s_iX_{ij}$, the latter expressed as a deviation from the population reaction norm.

Fig. 2 Population reaction norms: daily milk yield in response to temperature (T) and temperature-humidity index (THI) variability; reaction norms were modelled with second degree Legendre polynomials

Pedigree information was not included in this model. Fixed effects were farm, interaction of calendar year and season of kidding, age at most recent kidding prior to milking date, number of days in milk, interaction between farm and date of record, and lactation (milking period) number. Subsequently, individual reaction norms were computed per animal by adding the overall population norm to the corresponding individual animal deviation. Slopes of these individual reaction norms were estimated using derivatives, indicating the change in animal performance (milk yield) in response to weather fluctuations. These slopes were taken as the animal resilience phenotypes (see the illustrative sketch below). Furthermore, absolute values of the estimated individual slopes were considered as additional resilience phenotypes, reflecting the stability/volatility of animal performance in relation to weather change, with values closer to zero representing more stable (resilient) animals.

Variance components and heritability estimates of all animal resilience phenotypes were derived from mixed models including the available pedigree information, using the ASReml software [56]. The distribution of resilience phenotypes based on the absolute value of slopes was normalised by applying a square root transformation. Fixed effects in these mixed models were total number of milking days, farm, total number of lactations, age at first kidding (onset of productive life) and the interaction between calendar year and season of first kidding.
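As a conceptual stand-in for the slope-derivation step described above, the sketch below fits a per-animal ordinary least squares line of milk yield on a weather variable and returns the slope and its absolute value as the two resilience phenotypes. Unlike the study's BLUPF90 random-regression model, it ignores fixed effects and applies no shrinkage toward the population norm; it is for illustration only.

```python
import numpy as np
from collections import defaultdict

def reaction_norm_slopes(records):
    """records: iterable of (animal_id, weather_value, milk_yield) tuples.
    Returns {animal_id: (slope, abs_slope)} from a per-animal least-squares
    fit of milk yield on the weather variable. The slope is the resilience
    phenotype; its absolute value is the volatility phenotype."""
    by_animal = defaultdict(list)
    for animal, x, y in records:
        by_animal[animal].append((x, y))
    phenotypes = {}
    for animal, points in by_animal.items():
        if len(points) < 3:  # mirror the study's minimum of three valid records
            continue
        x, y = np.array(points, dtype=float).T
        slope, _intercept = np.polyfit(x, y, deg=1)
        phenotypes[animal] = (slope, abs(slope))
    return phenotypes
```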
Univariate analyses were conducted for each resilience phenotype separately to estimate its additive genetic variance and heritability. Bivariate analyses of resilience with total milk produced throughout the animal's productive life (square root transformed to normalise) were also conducted to estimate genetic correlations.

Genomic association analysis of resilience phenotypes

A total of 10,620 animals with resilience phenotypes had been genotyped with the Illumina Caprine 50K BeadChip containing 53,347 single nucleotide polymorphisms (SNPs). Marker quality assurance removed SNPs on the sex chromosomes, as well as autosomal SNPs with Illumina GC score < 0.6, call rate < 95%, minor allele frequency < 0.05 or deviation from Hardy-Weinberg equilibrium (Bonferroni corrected threshold of $10^{-7}$). Sample quality was also assessed, and samples with call rates > 90% were kept. These quality assurance edits resulted in a final set of 10,620 animals and 44,280 SNPs across all 29 autosomes, with positions based on the most recent goat genome assembly ARS1 [57]. Association analyses were performed using the multi-locus mixed model algorithm [58] implemented in Golden Helix SNP & Variation Suite v8.8.3. The following model was used:

$$ \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{a} + \mathbf{e} $$

where y is the vector of animal phenotypes for the analysed trait; β is a vector of coefficients for the SNP effects and other fixed effects (the same as described for the estimation of genetic parameters); a is the vector of random animal polygenic effects; e is the vector of random residual effects; and X and Z are incidence matrices relating observations to fixed and random animal effects, respectively. The random animal effects a and residual effects e in the model above were assumed to follow normal distributions with $\mathbf{a} \sim N(\mathbf{0}, \mathbf{G}\sigma_a^2)$ and $\mathbf{e} \sim N(\mathbf{0}, \mathbf{I}\sigma_e^2)$, where G is the genomic relationship matrix, I the identity matrix, and $\sigma_a^2$ and $\sigma_e^2$ the genetic and residual variances, respectively. The covariance between a and e was assumed to be zero. The genomic relationship matrix G was calculated following VanRaden [59]:

$$ \mathbf{G} = \frac{\mathbf{S}\mathbf{S}'}{2\sum_{i=1}^{N} p_i (1 - p_i)} $$

where S is a centred incidence matrix of SNP genotypes, N is the number of SNP markers, and $p_i$ is the allele frequency of marker i (see the code sketch at the end of this section). Statistical significance of SNPs was assessed using Wald tests. Following a forward-backward stepwise regression [58], once the algorithm had performed an initial scan testing each marker, additional genome scans were performed with the model adjusted to account for the most significant SNPs of the initial scan. Significance thresholds were set at both genome- and chromosome-wide levels using Bonferroni corrections for multiple marker testing at a significance level of P < 0.05. This resulted in a genome-wide significance threshold of $-\log_{10}(P) = 5.95$. For significant markers, the proportion of explained phenotypic variance (pve) was estimated as:

$$ \mathrm{pve} = \frac{\mathrm{mrss}_{h0} - \mathrm{mrss}_k}{\mathrm{mrss}_{h0}} $$

where $\mathrm{mrss}_{h0}$ is the Mahalanobis root sum of squares for the null hypothesis and $\mathrm{mrss}_k$ is the Mahalanobis root sum of squares for marker k. The animal genotypes are commercially sensitive; for more information, please contact GB.
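Two of the quantities above are easy to verify in code: VanRaden's genomic relationship matrix and the genome-wide Bonferroni threshold. A minimal sketch follows (the genotype coding and variable names are assumptions):

```python
import numpy as np

def vanraden_G(M):
    """M: (n_animals x n_snps) genotype matrix coded 0/1/2.
    Returns VanRaden's genomic relationship matrix
    G = S S' / (2 * sum_i p_i (1 - p_i)), where S is the genotype
    matrix centred by twice the allele frequency of each SNP."""
    p = M.mean(axis=0) / 2.0          # allele frequency per SNP
    S = M - 2.0 * p                   # centred incidence matrix
    return (S @ S.T) / (2.0 * np.sum(p * (1.0 - p)))

# Genome-wide Bonferroni threshold quoted in the text:
n_snps = 44280
print(-np.log10(0.05 / n_snps))       # ~5.95, matching -log10(P) = 5.95
```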
Abbreviations

SNP: single nucleotide polymorphism; THI: temperature-humidity index

References

1. Agovino M, Casaccia M, Ciommi M, Ferrara M, Marchesano K. Agriculture, climate change and sustainability: the case of EU-28. Ecol Indic. 2018;105:525.
2. Lawrence D, Vandecar K. Effects of tropical deforestation on climate and agriculture. Nat Clim Chang. 2014;5:27.
3. Lin BB. Resilience in agriculture through crop diversification: adaptive management for environmental change. BioScience. 2011;61(3):183–93.
4. Rowell DP. A scenario of European climate change for the late twenty-first century: seasonal means and interannual variability. Clim Dyn. 2005;25(7):837–49.
5. Bathiany S, Dakos V, Scheffer M, Lenton TM. Climate models predict increasing temperature variability in poor countries. Sci Adv. 2018;4(5):eaar5809.
6. Ruosteenoja K, Räisänen P. Seasonal changes in solar radiation and relative humidity in Europe in response to global warming. J Clim. 2013;26(8):2467–81.
7. Moran D, MacLeod M, Wall E, Eory V, McVittie A, Barnes A, Rees RM, Topp CFE, Pajot G, Matthews R, et al. Developing carbon budgets for UK agriculture, land-use, land-use change and forestry out to 2022. Clim Chang. 2011;105(3):529–53.
8. Ross SA, Chagunda MGG, Topp CFE, Ennos R. Effect of cattle genotype and feeding regime on greenhouse gas emissions intensity in high producing dairy cows. Livest Sci. 2014;170:158–71.
9. Lobell DB, Schlenker W, Costa-Roberts J. Climate trends and global crop production since 1980. Science. 2011;333(6042):616–20.
10. Thornton PK, van de Steeg J, Notenbaert A, Herrero M. The impacts of climate change on livestock and livestock systems in developing countries: a review of what we know and what we need to know. Agric Syst. 2009;101(3):113–27.
11. Chowdhury FR, Ibrahim QSU, Bari MS, Alam MMJ, Dunachie SJ, Rodriguez-Morales AJ, Patwary MI. The association between temperature, rainfall and humidity with common climate-sensitive infectious diseases in Bangladesh. PLoS One. 2018;13(6):e0199579.
12. Rojas-Downing MM, Nejadhashemi AP, Harrigan T, Woznicki SA. Climate change and livestock: impacts, adaptation, and mitigation. Clim Risk Manag. 2017;16:145–63.
13. Department for Environment Food and Rural Affairs (DEFRA). UK Climate Change Risk Assessment 2017. Synthesis report: priorities for the next five years; 2017.
14. Weindl I, Lotze-Campen H, Popp A, Müller C, Havlík P, Herrero M, Schmitz C, Rolinski S. Livestock in a changing climate: production system transitions as an adaptation strategy for agriculture. Environ Res Lett. 2015;10(9):094021.
15. Shields S, Orme-Evans G. The impacts of climate change mitigation strategies on animal welfare. Animals. 2015;5(2):361–94.
16. Nguyen TTT, Bowman PJ, Haile-Mariam M, Pryce JE, Hayes BJ. Genomic selection for tolerance to heat stress in Australian dairy cattle. J Dairy Sci. 2016;99(4):2849–62.
17. Boonkum W, Duangjinda M. Estimation of genetic parameters for heat stress, including dominance gene effects, on milk yield in Thai Holstein dairy cattle. Anim Sci J. 2015;86(3):245–50.
18. Carabaño MJ. The challenge of genetic selection for heat tolerance: the dairy cattle example. Adv Anim Biosci. 2016;7(2):218–22.
19. Ravagnolo O, Misztal I. Effect of heat stress on nonreturn rate in Holstein cows: genetic analyses. J Dairy Sci. 2002;85(11):3092–100.
20. Pryce J, Yd H. Genetic selection for dairy cow welfare and resilience to climate change. In: Webster J, editor. Achieving sustainable production of milk. Cambridge: Burleigh Dodds Science Publishing Limited; 2017.
21. Colditz IG, Hine BC. Resilience in farm animals: biology, management, breeding and implications for animal welfare. Anim Prod Sci. 2016;56(12):1961–83.
22. Berghof TVL, Poppe M, Mulder HA. Opportunities to improve resilience in animal breeding programs. Front Genet. 2019;9:692.
23. Bryant J, López-Villalobos N, Holmes C, Pryce J. Simulation modelling of dairy cattle performance based on knowledge of genotype, environment and genotype by environment interactions: current status. Agric Syst. 2005;86(2):121–43.
24. Mulder HA. Genomic selection improves response to selection in resilience by exploiting genotype by environment interactions. Front Genet. 2016;7:178.
25. Hayes BJ, Carrick M, Bowman P, Goddard ME. Genotype × environment interaction for milk production of daughters of Australian dairy sires from test-day records. J Dairy Sci. 2003;86(11):3736–44.
26. Martin JGA, Nussey DH, Wilson AJ, Réale D. Measuring individual differences in reaction norms in field and experimental studies: a power analysis of random regression models. Methods Ecol Evol. 2011;2(4):362–74.
27. UK Department for Business, Energy and Industrial Strategy. Monthly average daily temperatures in the United Kingdom (UK) from 2013 to 2018 (in degrees Celsius). https://www.statista.com/statistics/322658/monthly-average-daily-temperatures-in-the-united-kingdom-uk. Accessed 15 Jan 2018.
28. Salama AAK, Caja G, Hamzaoui S, Badaoui B, Castro-Costa A, Façanha DAE, Guilhermino MM, Bozzi R. Different levels of response to heat stress in dairy goats. Small Rumin Res. 2014;121(1):73–9.
29. National Research Council (NRC). A guide to environmental research on animals. Washington, DC: National Academy of Science; 1971.
30. West JW. Effects of heat-stress on production in dairy cattle. J Dairy Sci. 2003;86(6):2131–44.
31. Das R, Sailo L, Verma N, Bharti P, Saikia J, Imtiwati, Kumar R. Impact of heat stress on health and performance of dairy animals: a review. Vet World. 2016;9(3):260–8.
32. MacNeil MD. Genetic evaluation of an index of birth weight and yearling weight to improve efficiency of beef production. J Anim Sci. 2003;81(10):2425–33.
33. Dekkers JCM. Prediction of response to marker-assisted and genomic selection using selection index theory. J Anim Breed Genet. 2007;124:331–41.
34. van der Werf J. Genomic selection in animal breeding programs. In: Gondro C, van der Werf J, Hayes B, editors. Genome-wide association studies and genomic prediction. Totowa, NJ: Humana Press; 2013. p. 543–61.
35. Dekkers JCM, Van der Werf JH. Breeding goals and phenotyping programs for multi-trait improvement in the genomics era. In: Proceedings of the World Congress on Genetics Applied to Livestock Production. Vancouver: WCGALP; 2014. p. 8.
36. Gu S, Pakstis AJ, Li H, Speed WC, Kidd JR, Kidd KK. Significant variation in haplotype block structure but conservation in tagSNP patterns among global populations. Eur J Hum Genet. 2007;15:302.
37. Mucha S, Mrode R, Coffey M, Kizilaslan M, Desire S, Conington J. Genome-wide association study of conformation and milk yield in mixed-breed dairy goats. J Dairy Sci. 2018;101(3):2213–25.
38. Raven L-A, Cocks BG, Pryce JE, Cottrell JJ, Hayes BJ. Genes of the RNASE5 pathway contain SNP associated with milk production traits in dairy cattle. Genet Sel Evol. 2013;45(1):25.
39. Gupta SK, Haigh BJ, Wheeler TT. Abundance of RNase4 and RNase5 mRNA and protein in host defence related tissues and secretions in cattle. Biochem Biophys Rep. 2016;8:261–7.
40. Sorrentino S. The eight human "canonical" ribonucleases: molecular diversity, catalytic properties, and special biological actions of the enzyme proteins. FEBS Lett. 2010;584(11):2194–200.
41. Martin P, Palhière I, Maroteau C, Bardou P, Canale-Tabet K, Sarry J, Woloszyn F, Bertrand-Michel J, Racke I, Besir H, et al. A genome scan for milk production traits in dairy goats reveals two new mutations in Dgat1 reducing milk fat content. Sci Rep. 2017;7(1):1872.
42. Mashima R, Okuyama T. The role of lipoxygenases in pathophysiology: new insights and future perspectives. Redox Biol. 2015;6:297–310.
43. Schrooten C, Bovenhuis H, Coppieters W, Van Arendonk JAM. Whole genome scan to detect quantitative trait loci for conformation and functional traits in dairy cattle. J Dairy Sci. 2000;83(4):795–806.
44. Ullah R, Ansar M, Durrani ZU, Lee K, Santos-Cortez RLP, Muhammad D, Ali M, Zia M, Ayub M, Khan S, et al. Novel mutations in the genes TGM1 and ALOXE3 underlying autosomal recessive congenital ichthyosis. Int J Dermatol. 2016;55(5):524–30.
45. Idamokoro EM, Muchenje V, Afolayan AJ, Hugo A. Comparative fatty-acid profile and atherogenicity index of milk from free grazing Nguni, Boer and non-descript goats in South Africa. Pastoralism. 2019;9(1):4.
46. Gutierrez-Gil B, Arranz JJ, Pong-Wong R, Garcia-Gamez E, Kijas J, Wiener P. Application of selection mapping to identify genomic regions associated with dairy production in sheep. PLoS One. 2014;9(5):e94623.
47. Rutten MJM, Bouwman AC, Sprong RC, van Arendonk JAM, Visker MHPW. Genetic variation in vitamin B-12 content of bovine milk and its association with SNP along the bovine genome. PLoS One. 2013;8(4):e62382.
48. UK Meteorological Office. UK Climate Projections: UKCP09 dataset. 2009.
49. Hermesch S, Amer P. Deriving economic values for reaction norms of growth in pigs. In: Twentieth Conference of the Association for the Advancement of Animal Breeding and Genetics. Australia: AAABG; 2013. p. 475–8.
50. Kolmodin R, Bijma P. Response to mass selection when the genotype by environment interaction is modelled as a linear reaction norm. Genet Sel Evol. 2004;36(4):435–54.
51. Haile-Mariam M, Carrick MJ, Goddard ME. Genotype by environment interaction for fertility, survival, and milk production traits in Australian dairy cattle. J Dairy Sci. 2008;91(12):4840–53.
52. Streit M, Reinhardt F, Thaller G, Bennewitz J. Reaction norms and genotype-by-environment interaction in the German Holstein dairy cattle. J Anim Breed Genet. 2012;129(5):380–9.
53. Morris CA, Baker RL, Hickey SM, Johnson DL, Cullen NG, Wilson JA. Evidence of genotype by environment interaction for reproductive and maternal traits in beef cattle. Anim Sci. 2010;56(1):69–83.
54. Mucha S, Mrode R, MacLaren-Lee I, Coffey M, Conington J. Estimation of genomic breeding values for milk yield in UK dairy goats. J Dairy Sci. 2015;98(11):8201–8.
55. Misztal I, Tsuruta S, Strabel T, Auvray B, Druet T, Lee D. BLUPF90 and related programs. In: Proceedings of the 7th World Congress on Genetics Applied to Livestock Production, vol. 743; 2002.
56. Gilmour AR, Gogel B, Cullis BR, Thompson R. ASReml user guide release 3.0. Hemel Hempstead: VSN International Ltd; 2009.
57. Bickhart DM, Rosen BD, Koren S, Sayre BL, Hastie AR, Chan S, Lee J, Lam ET, Liachko I, Sullivan ST, et al. Single-molecule sequencing and chromatin conformation capture enable de novo reference assembly of the domestic goat genome. Nat Genet. 2017;49:643.
58. Segura V, Vilhjálmsson BJ, Platt A, Korte A, Seren Ü, Long Q, Nordborg M. An efficient multi-locus mixed-model approach for genome-wide association studies in structured populations. Nat Genet. 2012;44:825.
59. VanRaden PM. Efficient methods to compute genomic predictions. J Dairy Sci. 2008;91:4414–23.

Acknowledgements

Data were kindly made available by the Yorkshire Dairy Goats company. The authors thank Dr. Clara Díaz, Dr. Maria Jesús Carabaño and Dr. Manuel Ramon Fernández of the Instituto Nacional de Investigación y Tecnología Agraria y Alimentaria (INIA, Spain) for helpful feedback on the methodology and results.

Funding

Funding for this study was provided mainly by the Horizon 2020 project iSAGE (grant 679302), which covered the design and implementation of the work. Additionally, animal genotyping, pertinent data collection and the contributions of SM and JC were supported by the UK Biotechnology and Biological Sciences Research Council (BBSRC) and Agritech Strategy BB/M02833X/1 project 102093 (2015–2018); BBSRC/Agritech Strategy BB/M027570/1 project 102129 (2015–2018); and Technology Strategy Board (TSB) project 101072 (2011–2015). The contribution of ESM was supported by BBSRC ISP3 (BBS/E/D/30002275). The contribution of GB was partly supported by the Rural & Environment Science & Analytical Services Division of the Scottish Government.

Author information

Vanessa V. Kapsona and Joanna J. Ilska contributed equally to this work. The Roslin Institute and R(D)SVS, University of Edinburgh, Easter Bush, Edinburgh, EH25 9RG, UK: Enrique Sánchez-Molano, Joanna J. Ilska, Suzanne Desire. Scotland's Rural College, The Roslin Institute Building, Easter Bush, Edinburgh, EH25 9RG, UK: Vanessa V. Kapsona, Joanna J. Ilska, Suzanne Desire, Joanne Conington, Sebastian Mucha, Georgios Banos. Poznan University of Life Sciences, 33 Wolynska, 60-637 Poznan, Poland: Sebastian Mucha. Correspondence to Enrique Sánchez-Molano.

Authors' contributions

ESM, VK, JI, SD and SM prepared and performed the data analysis. Reaction norm phenotypes and analyses were produced by ESM, VK and JJI. Genetic analyses were performed by VK, ESM and SD. The genomic analysis was performed by SM. ESM, GB, JC and SM interpreted the results. ESM drafted the manuscript and all co-authors provided comments. GB and JC were responsible for the inception, funding, study design and implementation of the project. All authors have read and approved the final version of this manuscript.

Ethics declarations

All authors declare that animal samples were obtained in compliance with local/national laws in force at the time of sampling. Data exchange was in accordance with national and international regulations, and approved by the owners.

Additional files

Additional file 1: Figure S1. Monthly averages of weather measurements: daily temperature (T) and temperature-humidity index (THI); standard deviations are shown as bars. Additional file 2: Figure S2. Manhattan plots and QQ-plots: total milk yield (A), performance change due to daily temperature change (B), performance change due to THI change (C), absolute value of performance change due to daily temperature change (D), absolute value of performance change due to THI change (E).

Citation: Sánchez-Molano E, Kapsona VV, Ilska JJ, et al. Genetic analysis of novel phenotypes for farm animal resilience to weather variability. BMC Genet. 2019;20:84. https://doi.org/10.1186/s12863-019-0787-z

Keywords: animal resilience; selective breeding; candidate genes
radius of a cylinder

A cylinder is a three-dimensional solid with two parallel, usually circular bases connected by a curved (lateral) surface. The radius R of a cylinder is the radius of its circular base; the diameter, the distance across the cylinder through its centre, is 2R. The radius R and the length L are the defining properties of a cylinder; if the cylinder is standing upright, the "length" is usually called the "height".

Radius from the volume. Since the volume of a cylinder is V = πr²h, the radius is

$$ r = \sqrt{\frac{V}{\pi h}} $$

where V is the volume and h the height. To compute it by hand: divide the volume by the height to get the area of the circular base, divide that by π, and take the square root. Worked example: a canned food container has a volume of 100 cm³ and a height of 5 cm, so its radius is r = √(100/(π · 5)) ≈ 2.523 cm. (Useful conversions: 1 cm³ = 1 mL, 1000 cm³ = 1 L, 1 m³ = 1000 L.)

Radius from the circumference. Since C = πD = 2πr, divide the circumference by 2π. For a circumference of 15, r = 15/(2 · 3.14) ≈ 2.39. Equivalently, divide by π to get the diameter and halve it: for C = 16 cm, D = 16/3.14 ≈ 5.0955 cm and r ≈ 2.548 cm.

Radius from the lateral surface area. The lateral (curved) surface area is L = 2πrh, so r = L/(2πh). Example: if the lateral surface area is 1106 cm² and the height is 11 cm, then r = 1106/(2 · 3.14 · 11) ≈ 16 cm.

Volume and surface area for a given radius. For r = 3 and h = 5 (with π ≈ 3.14): volume V = πr²h = 141.3; lateral surface area L = 2πrh = 94.2; top or bottom surface area T = B = πr² = 28.26 each; total surface area of the closed cylinder A = 2πrh + 2πr² = 2πr(h + r) = 150.72. The lateral area computed this way covers only the outer cylinder wall; for a hollow cylinder with outer radius r₁, inner radius r₂ and height h, the volume is V = πh(r₁² − r₂²).

Scaling questions. Because V = πr²h and the curved surface area is 2πrh:
- If the radius is doubled and the height stays the same, the volume becomes four times as large.
- If the radius is doubled and the height is halved, the volume is doubled, while the curved surface area stays the same.
- If the radius is halved and the height is doubled, the volume is halved.
- If the radius is tripled and the height stays the same, the volume becomes nine times as large.
- If the radius is doubled and the curved surface area is to remain unchanged, the height must be halved.

Related problems. Classic exercises built on these formulas include: finding the maximum volume of a right circular cylinder inscribed in a sphere of radius R (for example, via the AM-GM inequality); finding the radius and height, with the radius between 1 and 5, that minimise the surface area of a cylinder of volume 64π; steady-state heat conduction through a cylinder of radius R and thermal conductivity K1 surrounded by a coaxial shell of outer radius 2R and conductivity K2, with the two ends held at different temperatures and no heat loss across the curved surface; and the electric flux through a closed cylinder of radius R and height 25R/12 due to a point charge +q on its axis. In MATLAB, [X,Y,Z] = cylinder returns the x-, y- and z-coordinates of a unit cylinder (radius 1, with 20 equally spaced points around its circumference and bases parallel to the xy-plane) without drawing it.
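The formulas above can be bundled into a small calculator. A minimal Python sketch follows, using exact π rather than the 3.14 of the worked examples, so results differ slightly:

```python
import math

def radius_from_volume(volume, height):
    """r = sqrt(V / (pi * h))."""
    return math.sqrt(volume / (math.pi * height))

def radius_from_circumference(circumference):
    """r = C / (2 * pi)."""
    return circumference / (2 * math.pi)

def cylinder_volume(radius, height):
    """V = pi * r^2 * h."""
    return math.pi * radius**2 * height

def cylinder_total_area(radius, height):
    """Closed cylinder: A = 2*pi*r*h + 2*pi*r^2 = 2*pi*r*(h + r)."""
    return 2 * math.pi * radius * (height + radius)

# Worked examples from the text:
print(radius_from_volume(100, 5))     # ~2.523 cm (canned food container)
print(radius_from_circumference(15))  # ~2.39
print(cylinder_volume(3, 5))          # ~141.37 (text gets 141.3 with pi = 3.14)
print(cylinder_total_area(3, 5))      # ~150.80 (text gets 150.72 with pi = 3.14)
```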
A cylinder is a three-dimensional solid that has two parallel bases, usually circular, connected by a curved surface. A cylinder of radius r and height h has volume πr²h and total surface area 2πr² + 2πrh, the sum of the areas of the two circular ends and of the curved surface, which when opened out represents a rectangular shape. The diameter, or the distance across a cylinder that passes through the center of the cylinder, is 2r (twice the radius), so divide the diameter by 2 to get the radius. The value of π is approximately 3.14159 and can be rounded down to 3.14.

To calculate the radius of a cylinder from its volume, you do not have to put in any extra effort: rearrange the formula and substitute the given values. Since V = πr²h, divide the volume by πh and then take the square root; the result is the radius. The top or bottom surface area is T = πr²; for example, with r = 3 this gives T = 3.14 × 3 × 3 = 28.26. The height of the cylinder plays no role in the end area. Note also that at fixed volume, doubling the radius means dividing the height by four, while at fixed lateral area 2πrh, doubling the radius means the height must be halved. The radius of gyration, by contrast, is not a geometric radius but a general measure of the distribution of an object's mass.

Typical exercises of this kind include:
- find the radius and height of a cylinder with volume 64π and radius r between 1 and 5 that has the smallest possible surface area;
- a canned food container has a cylindrical shape; entering its measurements gives a radius of 2.523 cm;
- a cylindrical silo 30 meters tall with a base radius of 5 meters;
- a cylinder whose curved surface area is 1106 cm², or whose radius is 7 cm and height is 11 cm, with the remaining quantities to be calculated;
- a steady-state conduction problem in which there is no loss of heat across the cylindrical surface and the two ends of the cylinder are maintained at two different temperatures;
- a point charge +q placed on the axis of a cylinder, with the electric flux coming out from the curved surface to be found;
- if the pressure of a gas is tripled (at constant temperature), what will happen to its volume;
- a cylinder of radius 0.121 m is replaced by one with the same mass but a different moment of inertia; a hanging mass then moves a distance 0.4390 m in 0.470 s, and the task is to find Icm of the cylinder.

In software, MATLAB's cylinder function, as in [X, Y, Z] = cylinder, returns the x-, y-, and z-coordinates of a cylinder with a radius equal to 1, 20 equally spaced points around its circumference, and bases parallel to the xy-plane. In Unity, the radius read from a Capsule Collider stays fixed at 0.5 even if the scaling is changed, so it cannot be used directly for this purpose. In Maya, a beginner learning the basics of modelling can add a cylinder to a scene and increase its subdivisions without increasing the radius.
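A minimal Python sketch of these rearrangements (the function names are mine; the closed-form optimum for the 64π problem follows from setting dS/dr = 0):

```python
import math

def radius_from_volume(volume, height):
    # V = pi * r**2 * h  =>  r = sqrt(V / (pi * h))
    return math.sqrt(volume / (math.pi * height))

def end_area(radius):
    # Top or bottom surface area of a cylinder: T = pi * r**2
    return math.pi * radius ** 2

def total_surface_area(radius, height):
    # Two circular ends plus the unrolled rectangular side: 2*pi*r**2 + 2*pi*r*h
    return 2 * math.pi * radius * (radius + height)

print(radius_from_volume(64 * math.pi, 4))  # 4.0: volume 64*pi with height 4 gives radius 4
print(end_area(3))                          # ~28.27 (28.26 when pi is rounded to 3.14)

# Smallest-surface-area cylinder with volume 64*pi: minimizing
# S(r) = 2*pi*r**2 + 128*pi/r gives r = 32**(1/3) ~ 3.17, inside the
# required interval 1 <= r <= 5, with h = 64 / r**2 (which equals 2*r).
r_opt = 32 ** (1 / 3)
print(r_opt, 64 / r_opt ** 2)
```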
CommonCrawl
Bulletin of Lviv Polytechnic

MMC / Volume 10, Number 1, 2023
An adaptive wavelet shrinkage based accumulative frame differencing model for motion segmentation

Law / Volume 9, Number 4(36), 2022
Conducting a search during the investigation of illegal traffic of poisonous or strong medicinal products
Engagement of a specialist in the interrogation of juvenile victims of violent crimes
Construction of linear codes over $\mathfrak{R}=\sum_{s=0}^{4} v_{5}^{s}\mathcal{A}_{4}$

DG / Issue 2(30), 2022
Labor market amid the crisis and ways to improve its governmental regulation
Implementation of the customs policy of Ukraine amid improvement of its regulatory and legal support
Innovative aspects of development of digitalization of public governance in the USA
The paradigm of nonlinearity and the aggression of the russian federation against Ukraine
Regulatory and legal ensuring optimization of local government bodies competencies
CommonCrawl
doi: 10.3934/dcdss.2020091

The hypoelliptic Robin problem for quasilinear elliptic equations

Kazuaki Taira
Institute of Mathematics, University of Tsukuba, Tsukuba 305–8571, Japan

Dedicated to Professor Angelo Favini on the occasion of his 70th birthday

Received November 2017, Revised May 2018, Published June 2019.

This paper is devoted to the study of a hypoelliptic Robin boundary value problem for quasilinear, second-order elliptic differential equations depending nonlinearly on the gradient. More precisely, we prove an existence and uniqueness theorem for the quasilinear hypoelliptic Robin problem in the framework of Hölder spaces under the quadratic gradient growth condition on the nonlinear term. The proof is based on the comparison principle for quasilinear problems and the Leray–Schauder fixed point theorem. Our result extends earlier theorems due to Nagumo, Akô and Schmitt to the hypoelliptic Robin case, which includes as particular cases the Dirichlet, Neumann and regular Robin problems.

Keywords: Quasilinear elliptic equation, hypoelliptic Robin problem, Nagumo condition, comparison principle, Leray–Schauder fixed point theorem.

Mathematics Subject Classification: Primary: 35J62; Secondary: 35H10, 35R25.

Citation: Kazuaki Taira. The hypoelliptic Robin problem for quasilinear elliptic equations. Discrete & Continuous Dynamical Systems - S. doi: 10.3934/dcdss.2020091
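For orientation, the class of problems treated can be sketched as follows; this schematic is assembled from the abstract and the Figure 1 caption, and its exact hypotheses are illustrative rather than quoted from the paper. The unknown $u$ solves a quasilinear equation in $\Omega$ with a Robin condition on $\partial\Omega$ involving the conormal derivative, while the nonlinear term obeys a Nagumo-type quadratic gradient growth bound:

$$
\begin{cases}
-\displaystyle\sum_{i,j=1}^{N} a^{ij}(x)\,\frac{\partial^{2} u}{\partial x_i \partial x_j} = f(x, u, \nabla u) & \text{in } \Omega,\\[4pt]
\mu(x')\,\dfrac{\partial u}{\partial \boldsymbol{\nu}} + \gamma(x')\,u = 0 & \text{on } \partial\Omega,
\end{cases}
\qquad
|f(x, u, p)| \le c\left(1 + |p|^{2}\right),
$$

where $\boldsymbol{\nu}$ is the conormal of Figure 1 and the coefficients $\mu$, $\gamma$ are allowed to degenerate, so that the Dirichlet and Neumann problems appear as particular cases, in line with the abstract's remark.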
[1] R. A. Adams and J. J. F. Fournier, Sobolev Spaces, Pure and Applied Mathematics, Second edition, Elsevier/Academic Press, Amsterdam, 2003.
[2] K. Akô, On the Dirichlet problem for quasi-linear elliptic differential equations of the second order, J. Math. Soc. Japan, 13 (1961), 45-62. doi: 10.2969/jmsj/01310045.
[3] H. Amann, Existence and multiplicity theorems for semi-linear elliptic boundary value problems, Math. Z., 150 (1976), 281-295. doi: 10.1007/BF01221152.
[4] H. Amann and M. G. Crandall, On some existence theorems for semi-linear elliptic equations, Indiana Univ. Math. J., 27 (1978), 779-790. doi: 10.1512/iumj.1978.27.27050.
[5] J.-M. Bony, Principe du maximum dans les espaces de Sobolev, C. R. Acad. Sci. Paris, Sér. A, 265 (1967), 333-336.
[6] K. C. Chang, Methods in Nonlinear Analysis, Springer Monogr. Math., Springer-Verlag, Berlin, 2005.
[7] M. A. del Pino, Positive solutions of a semilinear equation on a compact manifold, Nonlinear Analysis TMA, 22 (1994), 1423-1430. doi: 10.1016/0362-546X(94)90121-X.
[8] P. Drábek and J. Milota, Methods of Nonlinear Analysis, Applications to Differential Equations, Birkhäuser Advanced Texts: Basler Lehrbücher, Second edition, Birkhäuser/Springer Basel AG, Basel, 2013. doi: 10.1007/978-3-0348-0387-8.
[9] A. Friedman, Partial Differential Equations, Dover Publications Inc., Mineola, New York, 1969/2008.
[10] D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, Classics in Mathematics, Reprint of the 1998 edition, Springer-Verlag, New York Berlin Heidelberg Tokyo, 2001.
[11] O. A. Ladyzhenskaya and N. N. Ural'tseva, Linear and Quasilinear Elliptic Equations, Translated from the Russian by Scripta Technica, Inc., Academic Press, New York London, 1968.
[12] J. M. Lee and T. H. Parker, The Yamabe problem, Bull. Amer. Math. Soc. (N.S.), 17 (1987), 37-91. doi: 10.1090/S0273-0979-1987-15514-5.
[13] M. Nagumo, On principally linear elliptic differential equations of the second order, Osaka Math. J., 6 (1954), 207–229. https://projecteuclid.org/euclid.ojm/1200688553.
[14] T.-C. Ouyang, On the positive solutions of semilinear equations $\Delta u+\lambda u-hu^{p} = 0$ on the compact manifolds, Trans. Amer. Math. Soc., 331 (1992), 503-527. doi: 10.2307/2154124.
[15] T. Runst and W. Sickel, Sobolev Spaces of Fractional Order, Nemytskij Operators, and Nonlinear Partial Differential Equations, De Gruyter Series in Nonlinear Analysis and Applications, Vol. 3, Walter de Gruyter & Co., Berlin New York, 1996. doi: 10.1515/9783110812411.
[16] K. Schmitt, Boundary value problems for quasilinear second-order elliptic equations, Nonlinear Anal. TMA, 2 (1978), 263-309. doi: 10.1016/0362-546X(78)90019-6.
[17] K. Taira, The Yamabe problem and nonlinear boundary value problems, J. Differential Equations, 122 (1995), 316-372. doi: 10.1006/jdeq.1995.1151.
[18] K. Taira, Boundary value problems for elliptic integro-differential operators, Math. Z., 222 (1996), 305-327. doi: 10.1007/BF02621868.
[19] K. Taira, Existence and uniqueness theorems for semilinear elliptic boundary value problems, Adv. Differential Equations, 2 (1997), 509-534.
[20] K. Taira, Bifurcation theory for semilinear elliptic boundary value problems, Hiroshima Math. J., 28 (1998), 261-308. doi: 10.32917/hmj/1206126761.
[21] K. Taira, Semigroups, Boundary Value Problems and Markov Processes, Springer Monogr. Math., Second edition, Springer-Verlag, Berlin Heidelberg New York, 2014. doi: 10.1007/978-3-662-43696-7.
[22] K. Taira, Analytic Semigroups and Semilinear Initial-Boundary Value Problems, London Mathematical Society Lecture Note Series, Vol. 434, Second edition, Cambridge University Press, Cambridge, 2016. doi: 10.1017/CBO9781316729755.
[23] K. Taira, D. K. Palagachev and P. R. Popivanov, A degenerate Neumann problem for quasilinear elliptic equations, Tokyo J. Math., 23 (2000), 227-234. doi: 10.3836/tjm/1255958817.
[24] F. Tomi, Über semilineare elliptische Differentialgleichungen zweiter Ordnung, Math. Z., 111 (1969), 350-366. doi: 10.1007/BF01110746.
[25] G. M. Troianiello, Elliptic Differential Equations and Obstacle Problems, The University Series in Mathematics, Plenum Press, New York, 1987. doi: 10.1007/978-1-4899-3614-1.

Figure 1. The unit outward normal $\mathbf{n}$ and the conormal $\boldsymbol\nu$ to $\partial \Omega$
Figure 2. The open subset $\Omega^{+}$ with boundary $\partial \Omega^{+}$
CommonCrawl
Effects of doping and annealing on properties of ZnO films grown by atomic layer deposition

Aiji Wang1, Tingfang Chen1, Shuhua Lu1,3, Zhenglong Wu2, Yongliang Li2, He Chen1 & Yinshu Wang1

Undoped and Al-doped ZnO films were synthesized by atomic layer deposition at 150°C and then annealed at 350°C in different atmospheres. Effects of doping and annealing on the film growth mode and properties were investigated. The undoped film has strong UV emission and weak Zn interstitial emission. Annealing introduces O vacancies, decreases Zn interstitials, and results in weakening and blue-shifting of the UV emission, which is sensitive to the annealing atmosphere. Al doping induces the film to grow with its c-axis parallel to the substrate surface. It also introduces non-radiative centers and weakens the UV emission. Al doping widens the film bandgap, which has a quadratic dependence on Al content. Al doping decreases the film resistivity to 5.3 × 10^−3 Ω·cm. Annealing has little effect on the photoluminescence of the doped films, but it degrades the conductivity of undoped and doped ZnO films dramatically, and the degradation depends on the annealing ambient.

Transparent conducting oxide (TCO) plays a significant role in transparent devices, such as solar cell panels, flat panel displays, and organic light-emitting diodes [1]. So far, indium tin oxide (ITO) is the typical commercial TCO. It yields a low resistivity of 10^−4 Ω·cm, has a transmittance higher than 85%, and possesses good etch-ability [2]. However, the scarcity and toxicity of indium and the instability of ITO have stimulated researchers to explore alternative TCO materials [3,4]. ZnO is a wide bandgap semiconductor, which has potential applications in the fields of ultraviolet light emitters, photosensitizers, optoelectronics, gas sensors, etc. [5]. In recent years, ZnO films doped with group-III elements have attracted considerable attention as a candidate for TCO [6-13]. Among them, Al-doped ZnO is one of the most promising alternative candidates, since Al is abundant and nontoxic [4]. Various methods such as spray pyrolysis [9], atomic layer deposition (ALD) [10], magnetron sputtering [11], chemical vapor deposition [12], and pulsed laser deposition [13] have been adopted to deposit Al-doped ZnO films. The qualities of the films are sensitive to the growth technique and parameters. Compared with other techniques, ALD can deposit uniform and conformal films over large areas at low growth temperature. In addition, the film thickness can be controlled accurately. Effects of ALD process parameters such as growth temperature, purge length, and precursor exposure time on the properties of Al-doped ZnO films have been reported [10,14-18]. However, there are few reports on the thermal stability and property evolution of Al-doped ZnO films grown by ALD after post annealing. Post annealing and the annealing atmosphere are crucial for the film properties [19-21]. Kim et al. [19] observed an increase of carrier concentration in Al-doped ZnO films grown by magnetron sputtering after annealing in vacuum. Lin et al. [20] observed a decrease in carrier concentration of heavily Al-doped ZnO films grown by a similar method after annealing in N2 and O2 atmosphere. Zhou et al. [21] observed an improvement of conductivity of Al-doped ZnO films grown by magnetron sputtering after annealing in a mixture of N2 and O2. The results of different groups are contradictory, and the related mechanisms are still unclear.
Furthermore, ZnO films grown by different methods would show different property evolution when annealed under the same conditions. The stability of Al-doped ZnO films is also important for the technology of electronic and optoelectronic devices, and it needs to be investigated further. In this work, undoped and Al-doped ZnO films were deposited on glass substrates by ALD. The films were annealed at 350°C in Ar, N2, and air atmosphere, separately. Effects of doping and post annealing on the film growth mode, bandgap evolution, and optical and electrical properties were investigated in detail.

Undoped and Al-doped ZnO films were deposited on glass slides in a SUMALE™ ALD R200 reactor. Precursors for Zn, Al, Mg, and oxygen were diethylzinc (DEZ), trimethylaluminum (TMA), magnesocene (MS), and H2O, respectively. High purity nitrogen (N2) was used as both the carrier and purge gas. DEZ-H2O cycles were used for depositing ZnO films, and TMA-H2O and MS-H2O cycles for Al and Mg doping. All pulse times for DEZ, TMA, MS, and H2O were kept at 0.1 s, and the purging time was kept at 6 s. The growth temperature was kept at 150°C. To achieve the desired compositions, a single TMA-H2O cycle or MS-H2O cycle was inserted after a set number (n) of DEZ-H2O cycles. n was chosen to be 48, 24, 16, and 12 for the Al-doped films. The Al concentration in the film is quoted as the ideal concentration, calculated as reported in reference [10]. The thickness of all films was controlled at about 100 nm by choosing the total number of cycles. Annealing of the films was performed in a quartz tube furnace at 350°C in Ar, N2, and air atmosphere, respectively. The structures of the films were analyzed by an X-ray diffractometer (SHIMADZU XRD-6000) with Cu-Kα radiation. Surface morphologies of the films were observed by a scanning electron microscope (SEM, HITACHI S-4800). The absorption spectra were measured by a UV-1900 spectrometer. Photoluminescence (PL) spectra of the films were recorded by a Jobin-Yvon micro-Raman spectrometer using a 325-nm He-Cd laser as the excitation source. The electrical properties of the films were measured on a SCS-4200 system, utilizing a four-point Van der Pauw contact configuration. All measurements were performed at room temperature.
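As a rough orientation only (an assumption for illustration, not the paper's calculation): if every ALD cycle deposited one cation layer, the supercycle ratio alone would fix the Al cation fraction at 1/(n + 1). The paper instead quotes "ideal concentrations" computed with the corrected formula of reference [10], which accounts for the different growth per cycle of the two oxides, so the 1-4 at.% labels used below need not coincide with this naive estimate:

```python
# Naive cycle-fraction estimate of the Al content set by the ALD supercycle:
# one TMA-H2O cycle inserted after n DEZ-H2O cycles, every cycle assumed to
# deposit one cation layer, giving Al/(Al + Zn) = 1/(n + 1).
for n in (48, 24, 16, 12):
    print(f"n = {n:2d}: Al cation fraction ~ {100 / (n + 1):.1f} at.%")
# n = 48: ~2.0, n = 24: ~4.0, n = 16: ~5.9, n = 12: ~7.7
```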
Structure and morphology evolution

XRD patterns of the as-grown undoped and Al-doped ZnO films are shown in Figure 1, together with the standard spectrum of ZnO (JCPDS card no. 79-0206). Diffraction peaks are observed at 31.8°, 34.5°, 36.0°, and 56.5° in the undoped film, which can be indexed as diffractions of the (100), (002), (101), and (110) planes of wurtzite-structured ZnO with lattice constants a = 0.325 nm and c = 0.521 nm. Compared with the standard spectrum of ZnO, the diffraction intensity of the (002) planes is much stronger. This suggests that the crystal c-axis of the undoped ZnO film is inclined to be perpendicular to the substrate surface. Once the film is doped with Al, the diffraction from (100) planes is enhanced dramatically. With an increase in Al content, the diffraction from (002) planes is no longer observed and only the strong diffraction peak of (100) planes and the weak diffraction peak of (110) planes remain. This suggests that Al doping affects the growth mode of ZnO films. Similar Al doping effects on the growth mode have been reported in references [10,14,15]. Banerjee et al. [10] attributed the enhanced diffraction of (100) planes to the preferential growth of (100) planes, due to the disturbance of the charge neutrality of (100) planes induced by substitution of Zn2+ by Al3+ ions. To investigate whether it is the disturbance of the charge neutrality that affects the growth mode, a 3 at.% Mg-doped ZnO film was also grown at the same temperature using MS and H2O as doping precursors. The XRD spectra of undoped and Mg- and Al-doped films are shown in Figure 2. Similar to that observed in Al-doped films, the dominant diffraction of the Mg-doped film is also from (100) planes, yet substitution of Zn2+ by Mg2+ would not affect the charge neutrality of the (100) planes. The surface free energy of (002) planes of wurtzite-structured ZnO is the lowest [22]; therefore, ZnO usually grows preferentially along the c-axis. The decomposition temperature of TMA and MS is much higher than that of DEZ. During the growth of undoped films at 150°C, DEZ can decompose easily and the redundant clusters are removed efficiently. ZnO nuclei can then adsorb the precursor molecules for further growth, and the grains grow preferentially with the c-axis inclined to be perpendicular to the surface. Once TMA and MS are introduced for Al or Mg doping, the adsorbed TMA and MS molecules cannot release their redundant clusters efficiently. Further growth is disturbed and the growth rate is lower [23]. Then, the grains grow with the c-axis parallel to the substrate surface and the diffraction from (100) planes is enhanced. In addition, it can be seen from Figures 1 and 2 that the (100) diffraction peak of the doped films shifts to a higher angle. This means that Zn2+ ions are replaced by the smaller Al3+ ions, resulting in shrinkage of the lattice.

Figure 1. XRD patterns of undoped and Al-doped ZnO films.
Figure 2. XRD patterns of undoped, 3 at.% Al-doped, and 3 at.% Mg-doped ZnO films.

The morphology of the films was observed by SEM. Typical images of undoped and 3 at.% Al-doped films are shown in Figure 3a,b. All films are composed of uniform elongated grains. Some grains of the undoped ZnO film have an inclination angle with the substrate surface. In the Al-doped films, however, the grain sizes are much smaller and the elongated grains lie mainly parallel to the substrate surface. This demonstrates that Al doping results in ZnO grains growing preferentially with the c-axis parallel to the substrate surface, consistent with the XRD spectra in Figure 1.

Figure 3. SEM images. (a) As-grown ZnO and (b) 3 at.% Al-doped ZnO films, with high resolution images inserted; (c) undoped and (d) 3 at.% Al-doped films annealed in air; (e) undoped and (f) 3 at.% Al-doped ZnO films annealed in Ar.

To investigate the thermal stability of the films, undoped and Al-doped ZnO films were annealed at 350°C in Ar, N2, and air ambient for 20 min, separately. The SEM images of the undoped and 3 at.% Al-doped films after annealing are shown in Figure 3c-f. No obvious variation of the grain sizes is observed after annealing, similar to that reported by Lin et al. [20]. This indicates that the grains do not grow by coalescence or coarsening during annealing.
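The indexing quoted above can be checked against Bragg's law for a hexagonal lattice. A small sketch (the Cu-Kα wavelength 0.15406 nm is an assumed standard value; the text only names the radiation):

```python
import math

WAVELENGTH = 0.15406  # nm, Cu K-alpha (assumed standard value)
A, C = 0.325, 0.521   # nm, lattice constants quoted in the text

def two_theta(h, k, l):
    # Hexagonal ZnO: 1/d^2 = (4/3)*(h^2 + h*k + k^2)/a^2 + l^2/c^2, then Bragg's law
    inv_d2 = (4 / 3) * (h * h + h * k + k * k) / A**2 + l * l / C**2
    d = 1 / math.sqrt(inv_d2)
    return 2 * math.degrees(math.asin(WAVELENGTH / (2 * d)))

for hkl in [(1, 0, 0), (0, 0, 2), (1, 0, 1), (1, 1, 0)]:
    print(hkl, round(two_theta(*hkl), 1))
# -> roughly 31.8, 34.4, 36.2, 56.6 degrees, matching the (100)/(002)/(101)/(110) indexing above
```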
The XRD spectra of the undoped and 3 at.% Al-doped films before and after annealing in different atmospheres are shown in Figure 4a,b, respectively. Whatever the annealing atmosphere is, the diffraction intensity of the undoped films increases slightly, and the diffraction intensity of (002) planes becomes slightly stronger than that of (100) planes after annealing. This means that the grains in undoped ZnO films recrystallize with the c-axis inclined to be perpendicular to the substrate surface. Maeng et al. [15] reported that an amorphous phase existed in undoped and Al-doped ZnO films grown by ALD at 60°C to 250°C. Thus, the increase in diffraction intensity of (002) planes in Figure 4a can be attributed to the recrystallization of the amorphous phase in the as-grown films. Different from that observed in undoped ZnO films, the diffraction intensity of (100) planes of Al-doped films decreases slightly after annealing. This indicates that annealing results in the formation of defects or induces local segregation of Al oxide in the doped films, which weakens the diffraction from (100) planes.

Figure 4. XRD patterns of films annealed in different ambients. (a) Undoped and (b) 3 at.% Al-doped ZnO.

Optical properties of the films

The absorption spectra of the as-grown undoped and Al-doped films are shown in Figure 5a. Both undoped and Al-doped films have strong absorption in the UV region and good transmittance (>90%) in the visible region. The absorption intensity of the undoped ZnO film rises quickly at the absorption edge and the absorption peak is obvious, while the absorption intensity of Al-doped films rises slowly at the absorption edge and the absorption peaks become less distinct. This indicates that Al doping degrades the film crystal quality. Moreover, Al doping leads to a blue shift of the film absorption edge, and the shift increases monotonically with Al concentration. This is similar to that reported in [9-12,20]. The Al3+ radius is smaller than that of Zn2+ [24], so substitution of lattice Zn2+ by Al3+ widens the ZnO bandgap. The blue shift of the absorption edge of the doped films indicates that the doped Al3+ ions occupy lattice sites and form Zn1−xAlxO alloys. For a direct-bandgap semiconductor, the optical bandgap Eg can be estimated from the optical absorption spectra using Tauc's relationship [5]:

$$ \alpha h\nu = A\,(E - E_g)^{1/2} \qquad (1) $$

where α is the absorption coefficient, E is the photon energy, and A is a constant. The derived optical bandgaps of the undoped and Al-doped ZnO films are shown in Figure 5b. The film bandgap widens from 3.27 to 3.53 eV as the Al concentration increases from 0 to 4 at.%. The optical bandgap of Al2O3 is 8.7 eV and that of ZnO is 3.27 eV [25] at room temperature. The bandgap of Al-doped ZnO films calculated by Vegard's law is also shown in Figure 5b; the calculated bandgaps are not consistent with those derived from the absorption spectra. Disorder in the alloying can produce aperiodicity in the compound lattice, which gives a bowing effect of the bandgap. The bandgap of a semiconductor alloy (Eg) then has a quadratic dependence on the atomic fraction of one compound (x), described as [26]:

$$ E_g = a + bx + cx^{2} \qquad (2) $$

where a, b, and c are constants. The bandgap of Al-doped ZnO films fitted by Equation 2 is also shown in Figure 5b.

Figure 5. Absorption spectra and optical bandgap of as-grown films. (a) Absorption spectra of as-grown undoped and Al-doped ZnO films; (b) the variation of the ZnO optical bandgap with doped Al concentration derived by Tauc's relationship, calculated by Vegard's law, and fitted by a polynomial function.

The fitted curve agrees well with that derived from the optical spectra. The dependence of the bandgap (Eg) of Al-doped films on Al concentration can be described as:

$$ E_g = 3.27 + 3.06x + 93.50x^{2} \qquad (3) $$
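A minimal sketch evaluating the quadratic bowing fit of Equation 3 (variable names are mine; the end point agrees with the value read off Figure 5b to within rounding):

```python
def bandgap_eV(x):
    # Quadratic bowing fit, Equation 3: x is the Al atomic fraction (valid here up to ~0.04)
    return 3.27 + 3.06 * x + 93.50 * x ** 2

for pct in (0, 1, 2, 3, 4):
    print(f"{pct} at.% Al -> Eg ~ {bandgap_eV(pct / 100):.2f} eV")
# 0 -> 3.27 eV and 4 -> 3.54 eV, close to the 3.27-3.53 eV range quoted above
```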
The emission spectra of the as-grown undoped and Al-doped films are shown in Figure 6a. The emission of the undoped ZnO film consists of a strong UV emission peak at 378 nm and a weak blue emission peak as a shoulder at 424 nm. The UV emission is usually ascribed to near-band-edge emission and the blue emission to Zn interstitials [27]. O vacancy emission from 510 to 550 nm is almost unobservable, indicating that the O vacancy concentration in undoped ZnO film grown by ALD is very low. After doping with Al atoms, the UV emission peaks of the films broaden and show an obvious blue shift. Moreover, the UV emission intensity decreases dramatically as the Al concentration increases from 0 to 3 at.%; with a further increase in Al concentration, the emission remains almost unchanged. This suggests that Al doping widens the film bandgap and also introduces non-radiative recombination centers or defects. To see the evolution of the blue shift and defect emission clearly, the normalized PL spectra are shown in Figure 6b. Apart from the blue shift of the UV emission, the Zn interstitial emission at 424 nm is enhanced and an additional emission at 526 nm is observed in the doped films. The emission at 526 nm is attributed to O vacancies [28]. The relative intensity of the emissions at 424 and 526 nm increases with Al concentration. This indicates that Al doping introduces Zn interstitials and O vacancies. Kim et al. [29] also reported that O vacancies and Zn interstitials can form simultaneously in Al-doped ZnO films.

Figure 6. PL spectra of as-grown films. (a) PL spectra; (b) the normalized PL spectra of as-grown undoped and Al-doped ZnO films.

To investigate the effects of post-annealing on the optical properties of the films, the absorption and PL spectra of the films after annealing in different atmospheres were also measured. The typical absorption spectra of the undoped ZnO film before and after annealing are shown in Figure 7a. Compared with the spectrum of the as-grown film, the absorption intensity around the absorption peak is enhanced markedly after annealing. Furthermore, the absorption intensity depends on the annealing atmosphere: it is highest after annealing in air, followed by annealing in N2, and finally by annealing in Ar. The absorption intensity is proportional to the electron state density of the valence and conduction bands, which is proportional to the total grain volume. The grain sizes are almost unchanged after annealing (Figure 3), so the enhancement of the absorption after annealing can be attributed to structural evolution. During the annealing process, the amorphous components recrystallize; the improvement of the film crystal quality leads to an increase in electron state density and hence increases the absorption intensity around the absorption peak. When the undoped film is annealed in air, O in the atmosphere can be adsorbed on the film surface, which suppresses the production of O vacancy defects and improves the crystal quality. When the undoped film is annealed in Ar atmosphere, O vacancy defects are introduced and the absorption is lower, as shown in Figure 7a. The emission spectra of undoped films after annealing are shown in Figure 7b.
Compared with that of the as-grown film, the UV emission of the undoped films weakens and shows a blue shift after annealing. The weakening and shift are sensitive to the annealing atmosphere: annealing in air has the least effect on the weakening and blue shift of the UV emission, while annealing in Ar has the most obvious effect. Moreover, the Zn interstitial emission at 424 nm disappears and the relative intensity of the O vacancy emission around 526 nm increases after annealing. This indicates that Zn interstitials in the as-grown undoped ZnO film can be annealed out, but O vacancies are introduced simultaneously. The theoretical annealing temperatures for Zn interstitials and oxygen vacancies with 2+ charge states calculated by Janotti et al. are 216 and 655 K, respectively [30]. The annealing temperature in this work is 350°C (623 K), so Zn interstitials can be annealed out easily. Zn interstitials are shallow donors: a high density of interstitial defects in the as-grown film narrows the optical bandgap through overlapping of the defect band and the conduction band, and annealing out the Zn interstitials separates the conduction band and the defect band. The UV emission energy is therefore higher after annealing. In addition, part of the deficient surface oxygen is compensated by O in the atmosphere when the film is annealed in air, while O vacancies are introduced when the film is annealed in Ar. This results in the differences in UV emission energy and intensity of the films annealed in different atmospheres. To further investigate the effects of annealing on the variation of defects, the normalized PL spectra of the undoped films after annealing in Ar for different times are shown in Figure 7c. As the annealing time increases, the relative intensity of the emission at 526 nm increases, indicating that the concentration of O vacancies increases with annealing time. The normalized PL spectra of films annealed in Ar or air are inserted in Figure 7c; the relative intensity of the emission at 526 nm of the film annealed in Ar is stronger than that of the film annealed in air. This further indicates that O vacancies are introduced more easily when the film is annealed in Ar.

Figure 7. Optical spectra of undoped ZnO films before and after annealing. (a) Absorption spectra; (b) PL spectra of undoped ZnO films before and after annealing in Ar, N2, and air ambient; (c) normalized PL spectra of the undoped films after annealing in Ar for different times; normalized PL spectra annealed in Ar or air for 2 h are also inserted.

The absorption spectra of the 3 at.% Al-doped film before and after annealing in different atmospheres are shown in Figure 8a. Similar to that observed in the undoped film, the absorption intensity around the absorption peak increases obviously after annealing, which can also be ascribed to the recrystallization of the amorphous parts. Moreover, the absorption peak shows a red shift, indicating that the bandgap of Al-doped films shrinks after annealing. Lattice Al concentrations in the doped films after annealing were calculated from the absorption edge values derived from Figure 8a using Equation 3 and are listed in Table 1. The Al concentration in the 3 at.% Al-doped film decreases to 2.3 at.% after annealing in Ar and to 2.0 at.% after annealing in air.
The shrinking of the bandgap is due to the decrease in lattice Al concentration, and the lower lattice Al concentration after annealing in air can be attributed to the easy formation of a metastable Al oxide phase during annealing. The typical PL spectra of 3 at.% Al-doped films before and after annealing in different atmospheres are shown in Figure 8b. Different from that observed in undoped ZnO films, the UV emission peaks are almost unchanged, and the UV emission is pinned at 360 nm. This demonstrates that the UV emission of the doped films at high Al concentration is related to localized shallow traps and that the defect states in Al-doped films are much more stable.

Figure 8. Optical spectra of 3 at.% Al-doped ZnO films before and after annealing. (a) Absorption spectra; (b) PL spectra of 3 at.% Al-doped ZnO films before and after annealing in Ar, N2, and air ambient.

Table 1. Lattice Al concentration before and after annealing in different ambients

Electrical properties of the films

The resistivity, mobility, and carrier concentration of the as-grown undoped and Al-doped ZnO films are shown in Table 2. The carrier concentration increases from 1.4 × 10^19 to 2.1 × 10^20 cm^−3 as the Al concentration increases from 0 to 3 at.%. With a further increase in Al concentration, the carrier concentration decreases. The bandgap of the 4 at.% Al-doped film is much wider than that of the 3 at.% Al-doped film (Figure 5), which means that the Al atoms are still in lattice sites in the 4 at.% Al-doped film. The decrease of carrier concentration in the 4 at.% Al-doped film can therefore be attributed to the production of carrier traps rather than to the formation of a metastable Al oxide phase, as reported in [3,31]. The carrier mobility decreases from 16.4 cm2/Vs in the undoped ZnO film to 4.6 cm2/Vs in the 4 at.% Al-doped film. A similar dependence of carrier mobility on Al concentration has been reported by other groups [10,15,21]. The mobility of carriers is closely related to scattering by ionized impurities and grain boundaries [32]. Amorphous components exist in the as-grown films (Figures 4, 7, and 8). It can be supposed that the crystalline grains are surrounded by amorphous components and that there are no definite grain boundaries in the as-grown films. Thus, ionized impurity scattering and the grain size effect play the dominant roles in the carrier mobility. Once Al is doped into ZnO, the Al atoms occupy Zn lattice sites, where they ionize and serve as ionized scattering centers. In addition, the grain sizes of Al-doped ZnO films are smaller than that of the undoped ZnO film (Figure 3). All these lead to the decrease of carrier mobility in Al-doped ZnO films. It can also be seen from Table 2 that the resistivity of the films decreases from 2.7 × 10^−2 Ω·cm for the undoped ZnO film to 6.1 × 10^−3 Ω·cm for the 1 at.% Al-doped film. With a further increase in Al content to 3 at.%, the resistivity decreases slightly to 5.3 × 10^−3 Ω·cm. As the Al content increases to 4 at.%, the resistivity increases. The minimum resistivity of 5.3 × 10^−3 Ω·cm is still larger than the 2.8 × 10^−4 Ω·cm of a 2 at.% Al-doped ZnO film grown by PLD [13]. Growth of ZnO films by ALD is driven mainly by self-saturated surface reactions; in addition, deposition by ALD is usually performed at temperatures much lower than those used in PLD. The films deposited by ALD therefore have a lower intrinsic defect density and larger resistivity.
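A quick consistency check of these numbers against ρ = 1/(q·n·μ); the 3 at.% mobility below is inferred from the quoted ρ and n, since the Table 2 entries themselves are not reproduced in the text:

```python
Q_E = 1.602e-19  # elementary charge, C

def resistivity_ohm_cm(n_cm3, mu_cm2_Vs):
    # rho = 1/(q*n*mu); with n in cm^-3 and mu in cm^2/(V s), rho comes out in ohm*cm
    return 1 / (Q_E * n_cm3 * mu_cm2_Vs)

# Undoped film: n = 1.4e19 cm^-3, mu = 16.4 cm^2/Vs -> ~2.7e-2 ohm*cm, as quoted
print(resistivity_ohm_cm(1.4e19, 16.4))

# 3 at.% Al: n = 2.1e20 cm^-3 with the quoted rho = 5.3e-3 ohm*cm implies
# mu ~ 5.6 cm^2/Vs, between the quoted 16.4 (undoped) and 4.6 (4 at.%) values
print(1 / (Q_E * 2.1e20 * 5.3e-3))
```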
Table 2. Resistivity, carrier concentration, and mobility of the films before and after annealing in different ambients

To reveal the influences of annealing and of the annealing atmosphere on the film properties, the electrical properties of the films after annealing were also measured; the relevant data are listed in Table 2. The conductivity of both undoped and Al-doped ZnO films degrades dramatically after annealing, and the degradation is also very sensitive to the annealing atmosphere. Annealing in air increases the resistivity of both undoped and doped films by about four orders of magnitude, and the carrier concentration and mobility cannot be measured within the instrument resolution limit. Although the resistivity of the undoped ZnO film annealed in Ar is lower than that annealed in air or N2 ambient, it is still too high (60 Ω·cm) to detect the carrier concentration and mobility. The conductivity of the Al-doped films, however, is much superior to that of the undoped ZnO film after annealing. The conductivity of films annealed in Ar is superior to that of films annealed in N2, which in turn is much better than that of films annealed in air. Moreover, the carrier concentration of Al-doped films after annealing in Ar is about one third to one quarter of that of the as-grown films, while the mobilities of Al-doped films with different Al contents are very close. The dependence of carrier concentration on Al concentration shows a similar evolution after annealing in N2, but the carrier concentration and mobility are lower than those annealed in Ar (Table 2). The low carrier concentration after annealing can be attributed to the decrease of Al concentration in lattice sites (Table 1). In addition, nitrogen would adsorb at the grain boundaries during annealing and act as electron traps [33]. All these result in a decrease in the carrier concentration in the films annealed in N2. The close mobilities of Al-doped films with different Al contents indicate that scattering of carriers by ionized centers in the annealed films is not the dominant factor. The amorphous parts of the films undergo recrystallization during annealing, so scattering of carriers by grain boundaries becomes important after annealing. According to Seto's model [31], the effective mobility μeff at grain boundaries can be described as:

$$ \mu_{\mathrm{eff}} = L e \left(2\pi m_e^{*} k T\right)^{-1/2} e^{-E_b/kT} \qquad (4) $$

where L is the lateral size of the grain, $m_e^{*}$ is the electron effective mass, T is the film temperature, and k is the Boltzmann constant. $E_b$ is the energy barrier height, which can be expressed as [31]:

$$ E_b = \frac{e^2 Q_t^2}{8 \varepsilon \varepsilon_0 n} \quad \text{for } Ln > Q_t \qquad (5) $$

$$ E_b = \frac{e^2 L^2 n}{8 \varepsilon \varepsilon_0} \quad \text{for } Ln < Q_t \qquad (6) $$

where n is the carrier concentration, $Q_t$ is the trap density, and $\varepsilon_0$ and $\varepsilon$ are the permittivities of free space and of the film. When the concentration of carriers within a grain is greater than the density of the traps at the grain boundary, the barrier $E_b$ for carrier transport is low and is described by Equation 5; otherwise, $E_b$ is described by Equation 6. The carrier concentration in undoped ZnO is very low after annealing, so the energy barrier for carriers to transport through the grain boundaries is high and the mobility is too low to be detected. For Al-doped films annealed in Ar ambient, the carrier concentrations are still around 10^19 cm^−3 after annealing; the energy barrier height is then low, and the mobility increases with carrier concentration (Equations 4 and 5). The measured mobility in Table 2 is consistent with that predicted by Equation 4. The carrier concentration of Al-doped films annealed in N2 is lower than that of films annealed in Ar ambient; in addition, N2 adsorbed at grain boundaries would act as traps. These effects increase the energy barrier height for carriers to transport through the grain boundaries, so the mobility is lower. The decrease in carrier concentration and mobility gives rise to the increase in the resistivity.
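A sketch of Equations 4-6 with illustrative inputs (the grain size, trap density, effective mass, and permittivity below are assumptions for illustration, not values from Table 2):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
Q_E = 1.602e-19      # elementary charge, C
M_E = 9.109e-31      # electron rest mass, kg
EPS0 = 8.854e-12     # vacuum permittivity, F/m

def seto_mobility(L, n, Qt, eps_r, T=300.0, m_rel=0.28):
    """Grain-boundary-limited mobility of Equations 4-6 (SI units in, cm^2/(V s) out)."""
    if L * n > Qt:                       # grains not fully depleted -> Equation 5
        Eb = Q_E**2 * Qt**2 / (8 * eps_r * EPS0 * n)
    else:                                # fully depleted grains -> Equation 6
        Eb = Q_E**2 * L**2 * n / (8 * eps_r * EPS0)
    mu = L * Q_E / math.sqrt(2 * math.pi * m_rel * M_E * K_B * T) * math.exp(-Eb / (K_B * T))
    return mu * 1e4                      # m^2/(V s) -> cm^2/(V s)

# Illustrative inputs: 30 nm grains, n = 1e19 cm^-3 = 1e25 m^-3, trap density
# 7e12 cm^-2 = 7e16 m^-2, with eps_r ~ 8.5 and m*/m ~ 0.28 assumed for ZnO
print(seto_mobility(30e-9, 1e25, 7e16, 8.5))   # -> ~4 cm^2/(V s), the order of the annealed films
```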
Undoped and Al-doped ZnO films were grown by ALD at 150°C and then annealed at 350°C in Ar, N2, and air atmosphere, respectively. The film properties are sensitive to the Al concentration and the annealing atmosphere. The as-grown films have amorphous components, and annealing induces undoped ZnO to recrystallize preferentially with the c-axis perpendicular to the surface. Al doping induces the ZnO film to grow with the c-axis parallel to the substrate surface. It also widens the ZnO bandgap, which has a quadratic dependence on lattice Al concentration up to 4 at.%. The O vacancy concentration is low in the undoped film, and the film has strong UV emission and weak Zn interstitial emission. Annealing decreases the Zn interstitial concentration, introduces O vacancies and non-radiative centers, and induces a blue shift of the undoped film UV emission, which is sensitive to the annealing ambient. Annealing results in a decrease of lattice-site Al concentration, but it has little effect on the UV emission of Al-doped films. The ZnO film resistivity can be decreased to 5.3 × 10^−3 Ω·cm by Al doping. The conductivity of both undoped and Al-doped ZnO films degrades dramatically after annealing, and the degradation is very sensitive to the annealing ambient. The conductivity of Al-doped films annealed in Ar is superior to that of films annealed in N2 atmosphere, which in turn is much better than that of films annealed in air.

TCO: transparent conducting oxide; ITO: indium tin oxide; DEZ: diethylzinc; TMA: trimethylaluminum; MS: magnesocene; XRD: X-ray diffractometer; PL: photoluminescence

Facchetti A, Marks T. Transparent Electronics: From Synthesis To Applications. In: Facchetti A, Marks TJ, editors. Preface. Chichester, UK: Wiley; 2010.
Ellmer K. Past achievements and future challenges in the development of optically transparent electrodes. Nat Photon. 2012;6:809–17.
Noh J-Y, Kim H, Kim Y-S, Park CH. Electron doping limit in Al-doped ZnO by donor-acceptor interactions. J Appl Phys. 2013;113:153703.
Lin YC, Jian YC, Jiang JH. A study on the wet etching behavior of AZO (ZnO:Al) transparent conducting film. Appl Surf Sci. 2008;254:2671–7.
Janotti A, Van de Walle CG. Fundamentals of zinc oxide as a semiconductor. Rep Prog Phys. 2009;72:126501.
Leenheer A, Perkins J, van Hest M, Berry J, O'Hayre R, Ginley D. General mobility and carrier concentration relationship in transparent amorphous indium zinc oxide films. Phys Rev B. 2008;77:115215.
Bhosle V, Tiwari A, Narayan J. Metallic conductivity and metal-semiconductor transition in Ga-doped ZnO. Appl Phys Lett. 2006;88:032106.
Steinhauser J, Faÿ S, Oliveira N, Vallat-Sauvain E, Ballif C. Transition between grain boundary and intragrain scattering transport mechanisms in boron-doped zinc oxide thin films. Appl Phys Lett. 2007;90:142107.
Hung-Chun Lai H, Basheer T, Kuznetsov VL, Egdell RG, Jacobs RMJ, Pepper M, et al. Dopant-induced bandgap shift in Al-doped ZnO thin films prepared by spray pyrolysis. J Appl Phys. 2012;112:083708. Banerjee P, Lee W-J, Bae K-R, Lee SB, Rubloff GW. Structural, electrical, and optical properties of atomic layer deposition Al-doped ZnO films. J Appl Phys. 2010;108:043504. Bikowski A, Welzel T, Ellmer K. The impact of negative oxygen ion bombardment on electronic and structural properties of magnetron sputtered ZnO:Al films. Appl Phys Lett. 2013;102:242106. Kim D, Yun I, Kim H. Fabrication of rough Al doped ZnO films deposited by low pressure chemical vapor deposition for high efficiency thin film solar cells. Curr Appl Phys. 2010;10:S459–62. Liu Y, Li Q, Shao H. Optical and photoluminescent properties of Al-doped zinc oxide thin films by pulsed laser deposition. J Alloys Compd. 2009;485:529–31. Elam JW, George SM. Growth of ZnO/Al2O3 alloy films using atomic layer deposition techniques. Chem Mater. 2003;15:S1020–8. Maeng WJ, Lee J-w, Lee JH, Chung K-B, Park J-S. Studies on optical, structural and electrical properties of atomic layer deposited Al-doped ZnO thin films with various Al concentrations and deposition temperatures. J Phys D Appl Phys. 2011;44:445305. Wójcik A, Godlewski M, Guziewicz E, Minikayev R, Paszkowicz W. Controlling of preferential growth mode of ZnO thin films grown by atomic layer deposition. J Crys Growth. 2008;310:284–9. Yamada A, Sang B, Konagai M. Atomic layer deposition of ZnO transparent conducting oxides. Appl Surf Sci. 1997;112:216–22. Lee D-J, Kim H-M, Kwon J-Y, Choi H, Kim S-H, Kim K-B. Structural and electrical properties of atomic layer deposited Al-doped ZnO films. Adv Funct Mater. 2011;21:448–55. Kim Y, Lee W, Jung D-R, Kim J, Nam S, Kim H, et al. Optical and electronic properties of post-annealed ZnO:Al thin films. Appl Phys Lett. 2010;96:171902. Lin S-S, Huang J-L, Šajgalik P. The properties of heavily Al-doped ZnO films before and after annealing in the different atmosphere. Surf Coat Tech. 2004;185:254–63. Zhou Y, Kelly PJ, Postill A, Abu-Zeid O, Alnajjar AA. The characteristics of aluminium-doped zinc oxide films prepared by pulsed magnetron sputtering from powder targets. Thin Solid Films. 2004;447–448:33–9. Morinaga Y, Sakuragi K, Fujimura N, Ito T. Effect of Ce doping on the growth of ZnO thin films. J Crys Growth. 1997;174:691–5. Sun Y, Fox NA, Riley DJ, Ashfold MNR. Hydrothermal growth of ZnO nanorods aligned parallel to the substrate surface. J Phys Chem C. 2008;112:9234–9. Park KC, Ma DY, Kim KH. The physical properties of Al-doped zinc oxide films prepared by RF magnetron sputtering. Thin Solid Films. 1997;305:201–9. Wilk GD, Wallace RM, Anthony JM. High-κ gate dielectrics: current status and materials properties considerations. J Appl Phys. 2001;89:5243. Van Vechten JA, Bergstresser TK. Electronic structures of semiconductor alloys. Phys Rev B. 1970;1:3351–8. Ahn CH, Kim YY, Kim DC, Mohanta SK, Cho HK. A comparative analysis of deep level emission in ZnO layers deposited by various methods. J Appl Phys. 2009;105:013502. Baraki R, Zierep P, Erdem E, Weber S, Granzow T. Electron paramagnetic resonance study of ZnO varistor material. J Phys Condens Matter. 2014;26:115801. Kim Y-S, Tai W-P. Electrical and optical properties of Al-doped ZnO thin films by sol-gel process. Appl Surf Sci. 2007;253:4911–6. Janotti A, Van de Walle CG. Native point defects in ZnO. Phys Rev B. 2007;76:165202. Park H-W, Chung K-B, Park J-S, Ji S, Song K, Lim H, et al. 
Electronic structure of conducting Al-doped ZnO films as a function of Al doping concentration. Ceramics Int. 2015;41:1641–5. Owen JI. Growth, Etching, and Stability of Sputtered ZnO: AI for Thin-Film Silicon Solar Cells. In: Owen JI, editor. Fundamentals. Germany: Forschungszentrum Jülich; 2011. Major S, Banerjee A, Chopra KL. Annealing studies of undoped and indium-doped films of zinc oxide. Thin Solid Films. 1984;122:31–43. This work is supported by NSFC (Project 10974017) and the Fundamental Research Funds for the Central Universities. Department of Physics, Beijing Normal University, Beijing, 100875, China Aiji Wang, Tingfang Chen, Shuhua Lu, He Chen & Yinshu Wang Analytical and Testing Center, Beijing Normal University, Beijing, 100875, China Zhenglong Wu & Yongliang Li School of Police Information Engineering, People's Public Security University of China, Beijing, 100038, China Shuhua Lu Aiji Wang Tingfang Chen Zhenglong Wu Yongliang Li He Chen Yinshu Wang Correspondence to Yinshu Wang. AJW carried out the experiments and drafted the manuscript. TFC, YLL, ZLW, SHL, and HC were involved in the SEM and PL measurement analysis of films. YSW supervised the experiments and revision of the article. All authors read and approved the final manuscript. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Wang, A., Chen, T., Lu, S. et al. Effects of doping and annealing on properties of ZnO films grown by atomic layer deposition. Nanoscale Res Lett 10, 75 (2015). https://doi.org/10.1186/s11671-015-0801-y ZnO films Al doping Annealing atmospheres
CommonCrawl
March 2021, 29(1): 1803-1818. doi: 10.3934/era.2020092

Combinatorics of some fifth and sixth order mock theta functions

Meenakshi Rana 1 and Shruti Sharma 1,2
1. School of Mathematics, Thapar Institute of Engineering and Technology, Patiala-147004, India
2. Yadavindra College of Engineering, Punjabi University Guru Kashi Campus, Talwandi Sabo-151302, India

* Corresponding author: Meenakshi Rana

Received February 2020, Revised July 2020, Published March 2021, Early access September 2020.

Fund Project: The first author is supported by SERB project Ref no. MTR/2019/000123.

The goal of this paper is to provide a new combinatorial meaning to two fifth order and four sixth order mock theta functions. Lattice paths of Agarwal and Bressoud with certain modifications are used as a tool to study these functions.

Keywords: Mock theta functions, lattice paths, $n$–color partitions.

Mathematics Subject Classification: Primary: 05A17, 05A19, 11P81.

Citation: Meenakshi Rana, Shruti Sharma. Combinatorics of some fifth and sixth order mock theta functions. Electronic Research Archive, 2021, 29 (1): 1803-1818. doi: 10.3934/era.2020092
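For context, a classical fifth order mock theta function of Ramanujan, stated here only for orientation and not quoted from this paper, is

$$ f_0(q) = \sum_{n=0}^{\infty} \frac{q^{n^{2}}}{(-q;\,q)_{n}}, \qquad (a;\,q)_{n} = \prod_{k=0}^{n-1}\left(1 - a q^{k}\right), $$

and an $n$–color partition, in the sense of Agarwal and Andrews [4], is a partition in which a part of size $k$ may occur in $k$ distinct colors $k_1, k_2, \dots, k_k$; the lattice paths of Agarwal and Bressoud [5] are the standard bijective tool for such partitions.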
[1] A. K. Agarwal, Partitions with $N$ copies of $N$, Combinatoire énumérative (Montreal, Que., 1985/Quebec, Que., 1985), 1–4, Lecture Notes in Math., 1234, Springer, Berlin, 1986. doi: 10.1007/BFb0072504.
[2] A. K. Agarwal, $n$–color partition theoretic interpretations of some mock theta functions, Electron. J. Combin., 11 (2004), Note 14, 6 pp. doi: 10.37236/1855.
[3] A. K. Agarwal, Lattice paths and mock theta functions, in Proceedings of the Sixth International Conference of SSFA, (2005), 95–102.
[4] A. K. Agarwal and G. E. Andrews, Rogers–Ramanujan identities for partitions with "$n$ copies of $n$", J. Combin. Theory Ser. A, 45 (1987), 40-49. doi: 10.1016/0097-3165(87)90045-8.
[5] A. K. Agarwal and D. M. Bressoud, Lattice paths and multiple basic hypergeometric series, Pacific J. Math., 136 (1989), 209-228. doi: 10.2140/pjm.1989.136.209.
[6] A. K. Agarwal and G. Narang, Generalized Frobenius partitions and mock-theta functions, Ars Combin., 99 (2011), 439-444.
[7] A. K. Agarwal and M. Rana, Two new combinatorial interpretations of a fifth order mock theta function, J. Indian Math. Soc. (N.S.), 2007 (2008), 11-24.
[8] G. E. Andrews, Enumerative proofs of certain $q$-identities, Glasg. Math. J., 8 (1967), 33-40. doi: 10.1017/S0017089500000057.
[9] G. E. Andrews, Partitions with initial repetitions, Acta Math. Sin. (Engl. Ser.), 25 (2009), 1437–1442. doi: 10.1007/s10114-009-6292-y.
[10] G. E. Andrews, A. Dixit and A. J. Yee, Partitions associated with the Ramanujan/Watson mock theta functions $\omega (q)$, $\nu (q)$ and $\phi (q)$, Res. Number Theory, 1 (2015), Paper No. 19, 25 pp. doi: 10.1007/s40993-015-0020-8.
[11] G. E. Andrews and F. G. Garvan, Ramanujan's "lost" notebook VI: The mock theta conjectures, Adv. Math., 73 (1989), 242-255. doi: 10.1016/0001-8708(89)90070-4.
[12] G. E. Andrews and A. J. Yee, Some identities associated with mock theta functions $\omega(q)$ and $\nu(q)$, Ramanujan J., 48 (2019), 613-622. doi: 10.1007/s11139-018-0028-5.
[13] B. C. Berndt and S. H. Chan, Sixth order mock theta functions, Adv. Math., 216 (2007), 771-786. doi: 10.1016/j.aim.2007.06.004.
[14] W. H. Burge, A correspondence between partitions related to generalizations of the Rogers–Ramanujan identities, Discrete Math., 34 (1981), 9-15. doi: 10.1016/0012-365X(81)90017-0.
[15] W. H. Burge, A three-way correspondence between partitions, European J. Combin., 3 (1982), 195-213. doi: 10.1016/S0195-6698(82)80032-2.
[16] Y.-S. Choi and B. Kim, Partition identities from third and sixth order mock theta functions, European J. Combin., 33 (2012), 1739-1754. doi: 10.1016/j.ejc.2012.04.005.
[17] N. J. Fine, Basic Hypergeometric Series and Applications, Amer. Math. Soc., Providence, RI, 1988. doi: 10.1090/surv/027.
[18] B. Gordon and R. J. McIntosh, A survey of classical mock theta functions, in Partitions, $q$-series, and Modular Forms, Springer, New York, 23 (2012), 95–144. doi: 10.1007/978-1-4614-0028-8_9.
[19] D. Hickerson, A proof of the mock theta conjectures, Invent. Math., 94 (1988), 639-660. doi: 10.1007/BF01394279.
[20] F. Z. K. Li and J. Y. X. Yang, Combinatorial proofs for identities related to generalizations of the mock theta functions $\omega(q)$ and $\nu(q)$, Ramanujan J., 50 (2019), 527-550. doi: 10.1007/s11139-018-0094-8.
[21] R. J. McIntosh, Modular transformations of Ramanujan's sixth order mock theta functions, preprint.
[22] S. Ramanujan, The Lost Notebook and Other Unpublished Papers, Narosa Publishing House, New Delhi, 1988.
[23] J. K. Sareen and M. Rana, Combinatorics of tenth-order mock theta functions, Proc. Indian Acad. Sci. Math. Sci., 126 (2016), 549-556. doi: 10.1007/s12044-016-0305-4.
[24] S. Sharma and M. Rana, Combinatorial interpretations of mock theta functions by attaching weights, Discrete Math., 341 (2018), 1903-1914. doi: 10.1016/j.disc.2018.03.017.
[25] S. Sharma and M. Rana, On mock theta functions and weight-attached Frobenius partitions, Ramanujan J., 50 (2019), 289-303. doi: 10.1007/s11139-018-0054-3.
[26] S. Sharma and M. Rana, Interpreting some fifth and sixth order mock theta functions by attaching weights, J. Ramanujan Math. Soc., 34 (2019), 401-410.
[27] S. Sharma and M. Rana, A new approach in interpreting some mock theta functions, Int. J. Number Theory, 15 (2019), 1369-1383. doi: 10.1142/S1793042119500763.
Kruglov Vasilii Igorevich: scientific publications

1. Kruglov V. I., "A goodness-of-fit Lempel-Ziv test for equiprobable binary sequences", Computer Data Analysis and Modeling: Stochastics and Data Science, Proceedings of the XIII International Conference (Minsk, September 6-10, 2022), Publishing Center of BSU, Minsk, 2022, 100-103
2. V. G. Mikhailov, V. I. Kruglov, "Asymptotic normality of number of multiple coincidences of chains in complete $q$-ary trees and forests with randomly marked vertices", Prikl. Diskr. Mat. Suppl., 2022, no. 15, 8-11
3. V. G. Mikhailov, V. I. Kruglov, Mat. Vopr. Kriptogr., 13:3 (2022), 93-106
4. V. I. Kruglov, "Limit theorems for number of block $C_N$-equivalent pairs of chains in random equiprobable sequence", Review of Applied and Industrial Mathematics, 29:2 (2022), 61-64
5. V. G. Mikhailov, V. I. Kruglov, "On the asymptotic normality in the problem on the tuples repetitions in a marked complete tree", Mat. Vopr. Kriptogr., 12:4 (2021), 59-64
6. V. G. Mikhailov, V. I. Kruglov, "Asymptotical normality of number of pairs of identically labeled chains in $q$-ary tree with randomly labeled vertices", Review of Applied and Industrial Mathematics, 28:2 (2021), 142-145
7. V. I. Kruglov, V. G. Mikhailov, "On the rank of random matrix over prime field consisting of independent rows with given numbers of nonzero elements", Mat. Vopr. Kriptogr., 11:3 (2020), 41-52
8. Zubkov A. M., Kruglov V. I., "Number of pairs of identically marked templates in $q$-ary tree", Computer Data Analysis and Modeling: Stochastics and Data Science, Proceedings of the XII International Conference (Minsk, September 18-22, 2019), Publishing Center of BSU, Minsk, 2019, 348-351
9. Kruglov V. I., Mikhailov V. G., "Inequalities for the rank of a random binary matrix with independent rows of given weights" [in Russian], Probabilistic Methods in Discrete Mathematics, X International Petrozavodsk Conference (May 22-26, 2019, Petrozavodsk, Russia), KarNTs RAN, Petrozavodsk, 2019, 93-95
10. V. I. Kruglov, V. G. Mikhailov, "On the rank of random binary matrix with fixed weights of independent rows", Mat. Vopr. Kriptogr., 10:4 (2019), 67-76
11. A. M. Zubkov, V. I. Kruglov, "On quantiles of minimal codeword weights of random linear codes over $\mathbf{F}_p$", Mat. Vopr. Kriptogr., 9:2 (2018), 99-102
12. V. I. Kruglov, "On coincidences of tuples in a $q$-ary tree with random labels of vertices", Discrete Math. Appl., 28:5 (2018), 293-307
13. A. M. Zubkov, V. I. Kruglov, "Unextendable to the root coincidences of tuples in a $q$-ary tree with randomly labelled vertices", Review of Applied and Industrial Mathematics, 25:3 (2018), 249-251
14. A. M. Zubkov, V. I. Kruglov, "The number of pairs of identically labeled occurrences of a given subtree in a $q$-ary tree with random vertex labels" [in Russian], Analytical and Computational Methods in Probability Theory and its Applications (Moscow, October 23-27, 2017), ed. A. V. Lebedev, RUDN, Moscow, 2017, 735-740
15. Vasiliy Kruglov, Andrey Zubkov, "Number of Pairs of Template Matchings in $q$-ary Tree with Randomly Marked Vertices", Analytical and Computational Methods in Probability Theory, Lecture Notes in Comput. Sci., 10684, Springer, 2017, 336-346
16. A. M. Zubkov, V. I. Kruglov, "On coincidences of tuples in a binary tree with randomly labelled vertices", Computer Data Analysis and Modeling.
Theoretical and Applied Stochastics, Proceedings of the XI International Conference (Minsk, September 6-10, 2016), Publishing Center of BSU, Minsk, 2016, 189-192
17. A. M. Zubkov, V. I. Kruglov, "On coincidences of tuples in a binary tree with random labels of vertices", Discrete Math. Appl., 26:3 (2016), 145-153
18. A. M. Zubkov, V. I. Kruglov, "Statistical characteristics of weight spectra of random linear codes over $\mathrm{GF}(p)$", Mat. Vopr. Kriptogr., 5:1 (2014), 27-38
19. A. M. Zubkov, V. I. Kruglov, "Probabilistic characteristics of weight spectra of random linear subcodes over $\mathrm{GF}(p)$", Prikl. Diskr. Mat. Suppl., 2014, no. 7, 118-121
20. A. M. Zubkov, V. I. Kruglov, "On the vectors' weights in random linear spaces over $\mathrm{GF}(p)$", Review of Applied and Industrial Mathematics, 21:4 (2014), 366-368
21. A. M. Zubkov, V. I. Kruglov, "Weight spectra of random linear codes", Computer Data Analysis and Modeling, Proceedings of the Tenth International Conference (Minsk, September 10-14, 2013), v. 2, Publishing Center of BSU, Minsk, 2013, 20-22
22. V. I. Kruglov, "Poisson approximation for the distribution of the number of "parallelograms" in a random sample from $\mathbb{Z}_N^q$", Mat. Vopr. Kriptogr., 3:2 (2012), 63-78
23. A. M. Zubkov, V. I. Kruglov, "On distributions of weight spectra for random linear binary codes", Prikl. Diskr. Mat. Suppl., 2012, no. 5, 10-11
24. V. I. Kruglov, "Poisson approximation for the distribution of the number of "parallelograms" in a random sample from $\mathbb{Z}_N^q$", International Conference "Probability Theory and its Applications" in Commemoration of the Centennial of B. V. Gnedenko, Abstracts (Moscow), 2012, 49-50
25. A. Zubkov, V. Kruglov, "On the Distribution of Weight Spectra of Random Linear Binary Codes", Proceedings of the Workshop on Current Trends in Cryptology (Nizhny Novgorod), 2012, 29-31
26. A. M. Zubkov, V. I. Kruglov, "Moments of codeword weights in random binary linear codes", Mat. Vopr. Kriptogr., 3:4 (2012), 55-70
27. A. M. Zubkov, V. I. Kruglov, "On the distribution of weight spectra of random linear subspaces", Review of Applied and Industrial Mathematics, 19:4 (2012), 564-566
28. V. I. Kruglov, "A goodness-of-fit test for uniform distribution on a finite group", Computer Data Analysis and Modeling, Proceedings of the Ninth International Conference (Minsk, Belarus, September 7-11, 2010), v. 2, Publishing Center of BSU, Minsk, 2010, 40-42
Price/Earnings-to-Growth (PEG) Ratio
By Will Kenton. Reviewed by Margaret James.

What Is the Price/Earnings-to-Growth (PEG) Ratio?

The price/earnings-to-growth ratio (PEG ratio) is a stock's price-to-earnings (P/E) ratio divided by the growth rate of its earnings for a specified time period. The PEG ratio is used to determine a stock's value while also factoring in the company's expected earnings growth, and it is thought to provide a more complete picture than the more standard P/E ratio.

The PEG ratio enhances the P/E ratio by adding expected earnings growth into the calculation. The PEG ratio is considered to be an indicator of a stock's true value, and, similar to the P/E ratio, a lower PEG may indicate that a stock is undervalued. The PEG for a given company may differ significantly from one reported source to another, depending on which growth estimate is used in the calculation, such as one-year or three-year projected growth.

How to Calculate the PEG Ratio

$$\text{PEG Ratio} = \frac{\text{Price/EPS}}{\text{EPS Growth}}$$

where EPS is the earnings per share.

To calculate the PEG ratio, an investor or analyst needs to either look up or calculate the P/E ratio of the company in question. The P/E ratio is calculated as the price per share of the company divided by the earnings per share (EPS). Once the P/E is calculated, find the expected growth rate for the stock in question, using analyst estimates available on financial websites that follow the stock. Plug the figures into the equation and solve for the PEG ratio.

As with any ratio, the accuracy of the PEG ratio depends on the inputs used. When considering a company's PEG ratio from a published source, it is important to find out which growth rate was used in the calculation. Yahoo! Finance, for example, calculates PEG using a P/E ratio based on current-year data and a five-year expected growth rate. Using historical growth rates, for example, may provide an inaccurate PEG ratio if future growth rates are expected to deviate from a company's historical growth. The ratio can be calculated using one-year, three-year, or five-year expected growth rates. To distinguish between calculation methods using future growth and historical growth, the terms "forward PEG" and "trailing PEG" are sometimes used.

What Does the Price/Earnings-to-Growth Ratio Tell You?
While a low P/E ratio may make a stock look like a good buy, factoring in the company's growth rate to get the stock's PEG ratio may tell a different story. The lower the PEG ratio, the more the stock may be undervalued given its future earnings expectations. Adding a company's expected growth into the ratio helps to adjust the result for companies that may have a high growth rate and a high P/E ratio.

The degree to which a PEG ratio result indicates an overpriced or underpriced stock varies by industry and by company type. As a broad rule of thumb, some investors feel that a PEG ratio below one is desirable. According to well-known investor Peter Lynch, a company's P/E and expected growth should be equal, which denotes a fairly valued company and supports a PEG ratio of 1.0. When a company's PEG exceeds 1.0, it is considered overvalued, while a stock with a PEG of less than 1.0 is considered undervalued.

Example of How to Use the PEG Ratio

The PEG ratio provides useful information to compare companies and see which stock might be the better choice for an investor's needs, as follows. Assume the following data for two hypothetical companies, Company A and Company B:

Company A:
- Price per share = $46
- EPS this year = $2.09
- EPS last year = $1.74

Company B

Given this information, the following data can be calculated for each company.

Company A:
- P/E ratio = $46 / $2.09 = 22
- Earnings growth rate = ($2.09 / $1.74) - 1 = 20%
- PEG ratio = 22 / 20 = 1.1

Many investors may look at Company A and find it more attractive since it has the lower P/E ratio of the two companies. But compared to Company B, it doesn't have a high enough growth rate to justify its P/E. Company B is trading at a discount to its growth rate, and investors purchasing it are paying less per unit of earnings growth.
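The arithmetic in this example is easy to check in code. Here is a minimal Python sketch (my own illustration; the function name is not from the article) using the Company A figures:

```python
def peg_ratio(price: float, eps: float, eps_prior: float) -> float:
    """PEG = (P/E) / earnings growth rate, with growth expressed in percent."""
    pe = price / eps                           # P/E ratio
    growth_pct = (eps / eps_prior - 1) * 100   # year-over-year EPS growth, in %
    return pe / growth_pct

# Company A from the example: price $46, EPS $2.09 this year, $1.74 last year
print(round(peg_ratio(46, 2.09, 1.74), 1))  # -> 1.1  (P/E = 22, growth = 20%)
```

The same function applies to a forward PEG by swapping the historical growth rate for an analyst's expected growth estimate.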
Article sources:
- Yahoo! Finance. "How To Find P/E And PEG Ratios." Accessed 4, 2020.
- California State University Long Beach. "The Peter Lynch Approach to Investing in 'Understandable' Stocks." Accessed August 5, 2020.

Related terms:

Price-to-Earnings Ratio (P/E Ratio): The price-to-earnings ratio (P/E ratio) is a ratio for valuing a company that measures its current share price relative to its per-share earnings.

Stalwart: Stalwart is a description of companies that have large capitalizations and provide investors with slow but steady and dependable growth prospects.

Sustainable Growth Rate (SGR): The sustainable growth rate (SGR) is the maximum rate of growth that a company can sustain without raising additional equity or taking on new debt.

Earnings Per Share (EPS): Earnings per share (EPS) is the portion of a company's profit allocated to each outstanding share of common stock. Earnings per share serve as an indicator of a company's profitability.

Price/Growth Flow: Price-growth flow is a measure of a company's earnings power and R&D expenditures compared to its current market value.

Forward Price-to-Earnings (Forward P/E): Forward price-to-earnings (forward P/E) is a measure of the P/E ratio using forecasted earnings for the P/E calculation. While the earnings used in this formula are an estimate and are not as reliable as current or historical earnings data, there is still a benefit to estimated P/E analysis.
Mean Li-Yorke chaotic set along polynomial sequence with full Hausdorff dimension for $\beta$-transformation

Yuanfen Xiao
Department of Mathematics, University of Science and Technology of China, Hefei, 230026, Anhui, China

Received December 2019; Revised April 2020; Published July 2020
Fund Project: The author is supported by NNSF of China grant 11871228

We construct a mean Li-Yorke chaotic set along polynomial sequences (the degree of this polynomial is not less than three) with full Hausdorff dimension and full topological entropy for the $\beta$-transformation. An uncountable subset $C$ is said to be a mean Li-Yorke chaotic set along a sequence $\{a_n\}$ if both
$$\liminf_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N} d\big(f^{a_j}(x), f^{a_j}(y)\big) = 0 \quad\text{and}\quad \limsup_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N} d\big(f^{a_j}(x), f^{a_j}(y)\big) > 0$$
hold for any two distinct points $x$ and $y$ in $C$.

Keywords: $\beta$-transformation, mean Li-Yorke chaos, sequence version of chaos, Hausdorff dimension, topological entropy.
Mathematics Subject Classification: Primary: 54H20, 37C45, 37D45; Secondary: 37B40.
Citation: Yuanfen Xiao. Mean Li-Yorke chaotic set along polynomial sequence with full Hausdorff dimension for $\beta$-transformation. Discrete & Continuous Dynamical Systems - A, 2021, 41(2): 525-536. doi: 10.3934/dcds.2020267
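To make the definition concrete, here is a minimal numerical sketch (my own illustration, not from the paper) that estimates the two averages above for the standard $\beta$-transformation $T_\beta(x) = \beta x \bmod 1$ along the cubic sequence $a_j = j^3$. Floating-point round-off grows quickly under a chaotic map, so the output is only indicative of how the empirical averages behave:

```python
import math

def iterate_beta(x: float, beta: float, n: int) -> float:
    """n-th iterate of the beta-transformation T(x) = beta*x mod 1."""
    for _ in range(n):
        x = (beta * x) % 1.0
    return x

def mean_orbit_distance(x: float, y: float, beta: float, N: int) -> float:
    """Average of d(T^{a_j} x, T^{a_j} y) along the cubic sequence a_j = j^3."""
    total = 0.0
    for j in range(1, N + 1):
        a = j ** 3
        total += abs(iterate_beta(x, beta, a) - iterate_beta(y, beta, a))
    return total / N

beta = (1 + math.sqrt(5)) / 2  # an illustrative choice of beta > 1 (golden mean)
for N in (5, 20, 50):
    print(N, mean_orbit_distance(0.2, 0.2 + 1e-9, beta, N))
```

A pair belonging to a mean Li-Yorke chaotic set would make this average dip arbitrarily close to zero for some N while staying bounded away from zero for others.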
Extreme sediment fluxes in a dryland flash flood

J. M. Hooke (ORCID: orcid.org/0000-0002-8367-3010)

A flash flood on 28th September, 2012, rose to a peak discharge of 2357 m³ s⁻¹ from zero within one hour in the ephemeral Nogalte channel in SE Spain. Channel morphology and sediment sizes were measured at existing monitored sites before and after the flood, and peak flow hydraulics were calculated from surveyed floodmarks and cross-sections. Maximum peak sediment fluxes were calculated as ~600 kg s⁻¹ m⁻¹, exceeding maximum published, measured dryland channel values by 10 times and common perennial stream fluxes by 100 times. These high fluxes fit the established simple bedload flux-shear stress relations for dryland channels very well, but now extend them over a much wider data range. The high sediment fluxes are corroborated by deposits at >1 m height in a channel-side tank, with 90 mm diameter sediment carried in suspension, by transport of large blocks, and by massive net aggradation as extensive, structureless channel bars. Very high sediment supply and rapid hydrograph rise and recession produced the conditions for these exceptional sediment dynamics. The results demonstrate the extreme sediment loads that may occur in dryland flash floods and have major implications for catchment and channel management.

Flash floods in semi-arid areas can be very hazardous and damaging, arising from water flow and inundation but also from the sediment dynamics and impacts of sediment movement. Effects include damage to infrastructure such as roads, bridges, checkdams and embankments, infilling of reservoirs, and occurrence of muddy flows in settlements, all posing major challenges for catchment and channel management. Flow in dryland channels is usually ephemeral and impacts are highly episodic. The large flash floods are also important geomorphologically in producing morphological and sedimentological changes and contributing to longer-term landscape evolution. Here we show the very high sediment fluxes that can occur in such events, the calculated values exceeding previous published recorded values by 10 times.
Process rates of both soil erosion and channel dynamics tend to be high in dryland catchments, and sediment yields are amongst the highest globally, particularly in Mediterranean mountainous regions [1,2]. Some sediment flux data on different magnitude events in such channels have been captured from direct measurements at instrumented sites [3,4,5] and from incidental observations and post-flood measurements of individual events [6,7]. The common distinctive characteristics of sediment flux in semi-arid, ephemeral channels are: lack of channel bed armour, high sediment supply, and equal mobility of sediment sizes [8]. Sediment fluxes tend to be much higher in ephemeral channels than in humid-region, perennial streams, and sediment flux increases very rapidly and simply with shear stress [3,4,5,9,10,11,12,13,14]. However, due to the sporadic nature of the flows, their high impacts, and commonly low population density, detailed data on extreme dryland events are still sparse.

To increase understanding of the dynamics of flow events and to provide data for validation of modelling of morphological and sedimentological responses to hydrological variations in ephemerally flowing channels [15], a series of monitored reaches was established on several channels in the semi-arid region of southeast Spain in 1996/7 [16,17]. A major flash flood event, extreme in some aspects, occurred on one of these monitored channels, the Nogalte (Fig. 1), on 28th September, 2012 [18]. Prior data on morphology, sedimentology, vegetation and infrastructure state had been measured at the monitored sites [16,17,19], and repeat surveys were made immediately after the event, with additional data collected on the flood characteristics, processes and channel changes [18]. This paper analyses the evidence and data on sediment dynamics and processes of this high-magnitude event and quantifies the sediment flux and flow. Comparisons with worldwide data provide a perspective on characteristics that can occur in such events. Existing formulations are applied to test the extent to which they predict the observed behaviour and sediment transport. This analysis is particularly important because the calculations here of peak flow sediment fluxes have produced record-breaking data.

Figure 1. Location and characteristics of field sites: (a) location in SE Spain, (b) location of measurement sites in Nogalte channel, (c) morphological maps of the three study reaches, (d) photographs of each of the study reaches.

The Nogalte is tributary to the Guadalentín (Fig. 1), with a catchment area of 137 km² at Puerto Lumbreras where a Confederación Hidrográfica del Segura (CHS) stream gauge is located (604913, 41577776) (Fig. 1). The catchment is set entirely within phyllite schist lithology, with some fluvial gravel terraces. The land use or cover is mainly almond cultivation on the steep valley sides, some grazing of goats, and semi-natural vegetation. Retama sphaerocarpa bushes are present in much of the channel. Rainfall averaged 268 mm a⁻¹ over the past 10 years. The Nogalte is typical of many gravel ephemeral streams, with a braided pattern along much of its course. Three long-term monitored sites where detailed measurements are made comprise 150-200 m length reaches, located in the upper, middle and lower parts of the main channel (Fig. 1). All have braided morphology, but the upper site (Nog1) is much narrower than the middle and lower sites (Nog2 and NogMon) (Fig. 1).

Event characteristics

The rainfall event of 28 September 2012 affected much of SE Spain [20].
Within the Guadalentín basin, the most intense and highest rainfalls were over the Nogalte headwaters and adjacent catchments and resulted in several fatalities as well as severe damage to roads, bridges, bank protection, and irrigational and agricultural structures along the channel edges; regional costs of damage were ~€120 million [21]. Intense rainfall took place after a very hot, dry summer. Total rainfall in the storm was measured as 161 mm over a few hours at Puerto Lumbreras (Fig. 2a), but could have approached 250 mm in the upper Nogalte based on radar images [22,23] and exceeded 313 mm in Almeria province [24]. Peak rainfall intensities reached 81 mm h⁻¹ for an hour at Puerto Lumbreras, and the CHS reported a maximum daily intensity of 179 l/m², with a peak intensity of 17 l/m² in five minutes [25]. The stream gauge recorded a rise to peak of 2357 m³ s⁻¹ in one hour and duration to negligible flow of four hours (Fig. 2a) [26]. This exceeds the peak of a devastating flood in 1973 that probably reached 2000 m³ s⁻¹ [27], but implies a flow recurrence interval nearer to 50 years than the 100-year rainfall estimate [23]. The 2012 event peak discharges were calculated from floodmarks surveyed at cross-sections within the monitored reaches and down the whole main Nogalte channel soon after the event (Fig. 2b). Calculations using Manning's roughness coefficient of n = 0.04, adjusted for high Froude number [28], provide consistency of values downstream and with the CHS gauged flow at the downstream end [18] (Fig. 2b). Specific discharges (runoff rates) on the Nogalte attained values comparable with other extreme flash floods in Europe [22,29], exceeding 100 m³ s⁻¹ km⁻² in upper parts of the catchment; the discharge plots above the regional peak discharge against catchment area curve [30]. Flow was continuous throughout the Nogalte system, exhibiting high runoff connectivity.

Figure 2. (a) Rainfall per hour and discharge measured at the CHS gauge at the downstream end of the Nogalte in the event on 28 September, 2012; (b) relationship between calculated peak discharge at surveyed cross-sections and catchment area, and gauged peak flow at the downstream end of the catchment (Puerto Lumbreras).

Established cross-sections in each of the three monitored reaches (Fig. 1), topography and multiple floodmarks were resurveyed, using RTK-GPS accurate to ±2 cm. Additional cross-sections were surveyed within the reaches and throughout the catchment main channel (Fig. 2b). All floodmarks were on ground rather than vegetation, and convergent values were used to establish the most likely maximum heights. Surveyed floodmarks down the reach on both sides were used to calculate the water surface gradient of the flow. Velocity was calculated using the Manning equation and values of n = 0.04, adjusted in cases of high velocities and Froude number [28]. The velocity-area method was used to calculate peak discharge, which was compared with surveys at intervening points down the valley to establish consistency of results and with the gauged outflow. Shear stress, stream power and unit stream power were derived from the measurements (Table 1) and applied in sediment transport and competence equations. All calculations were made using the pre-flood morphology (eight cross-sections in reaches), since large net aggradation took place (with one exception - see Supplementary methods), probably late in the event, in most cross-sections [18].
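The peak-discharge calculation described above can be sketched in a few lines of Python (my own illustration; the cross-section geometry below is a placeholder, not a surveyed value): velocity from the Manning equation, discharge by the velocity-area method, and specific discharge by dividing by catchment area.

```python
def manning_velocity(R: float, S: float, n: float = 0.04) -> float:
    """Mean velocity (m/s) from the Manning equation: v = R^(2/3) * S^(1/2) / n."""
    return (R ** (2.0 / 3.0)) * (S ** 0.5) / n

def peak_discharge(area: float, R: float, S: float, n: float = 0.04) -> float:
    """Velocity-area method: Q = v * A, with A the cross-section flow area (m^2)."""
    return manning_velocity(R, S, n) * area

# Placeholder cross-section: flow area 400 m^2, hydraulic radius 2.0 m, slope 0.02
Q = peak_discharge(400.0, 2.0, 0.02)
print(f"Q = {Q:.0f} m^3/s; specific discharge = {Q / 137:.1f} m^3/s/km^2")
```

With these placeholder values the sketch gives Q of roughly 2200 m³ s⁻¹, the same order as the gauged peak, which is the kind of consistency check applied between surveyed cross-sections and the gauge.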
The MPM (Meyer-Peter-Müller) and Bagnold equations, as the most applicable to these kinds of channels [31], were used to quantify sediment flux and were tested with a range of particle sizes. Long-established sediment quadrats, 3-5 in each reach, were photographed before and after the flood. Maximum diameter, largest 10 particles (Max 10), and 25 regular grid sampled points (average grid) were the parameters measured in each quadrat photograph; representative bulk samples were analysed for each site. DEMs and differences of topography were calculated to produce net change in each reach [18].

Table 1. Calculated peak flow hydraulics at site cross-sections and peak sediment flux calculated using the Bagnold and Meyer-Peter-Müller equations.

The floodmark heights and water surface gradients are based on direct and accurate field measurements of position. Uncertainty in likely flood heights measured at each cross-section ranges from 0.13 m to 1.02 m (Table 2). Water surface slope was calculated from the floodmarks, testing a range of distance around each cross-section. Most probable values were selected from convergent values in lengths and both sides of the channel and consistency of discharge within and between reaches. Possible ranges are indicated (Table 2); uncertainty varies between cross-sections and reaches, ranging up to 33% in likely values. The biggest uncertainty is associated with the choice of Manning's n value [28], but 0.04 is consistent with much guidance and mostly produces calculations consistent with the measured flow at the downstream gauge. However, adjustment to 0.05 has been made where the Froude number was >1.2, using the method suggested by Lumbroso and Gaume [28]. (Uncertainty associated with the choice of Manning n value is indicated in Fig. 8 in Supplementary Methods.) Sediment flux was calculated for a range of sediment sizes (d = 5, 10 and 20 mm), but conservative values of hydraulics and of sediment flux (d = 20 mm) are quoted in the results and discussion below. It is suggested from corroborating evidence that these are realistic values for the event.

Table 2. Ranges of likely uncertainty for field-measured values of flood height and water surface gradient and for derived velocity and discharge values.

Sediment characteristics of deposits

The major geomorphological effect was massive deposition as large, flat, unstructured bars in much of the main channel [18]. The channel vegetation was largely destroyed or buried. Net aggradation occurred in all three monitored reaches, to a maximum depth of 0.9 m. Sediment sizes in quadrats, classified by site and by type of deposit (bar or channel) (Fig. 3), indicate maxima of 88 mm, 86 mm, and 96 mm at Nog1, Nog2 and NogMon respectively; average maximum 10 particle size was in the range 14.1-47.5 mm at all three sites, with an overall average of 28.2 mm. The average grid sample size was 7.3 mm, with a range of 2.4-13.4 mm. Bulk samples were only of the finer deposits and give d50 ranging from 1.8-7 mm over the three sites. The d84 size was 6-9 mm in most samples, and the overall averages of d50 and d84 of the bulk samples are 3.6 mm and 12.7 mm respectively. In the bulk samples 90% or more of the fine fraction is sand, so very little cohesive material is present in this system. No clear size distinction is apparent between bar deposits and channels. Ranges and sizes are remarkably consistent between the three reaches, with no downstream fining evident. Sediment was also sampled in a large tank/reservoir at the side of the channel which acted as a sediment trap (Fig. 4a)
but had comparable deposits to those in the channel (Fig. 3b). In places in the channel, larger particles of 100-150 mm diameter were deposited within the vegetation (Fig. 4b). Some very large concrete blocks, exceeding 3 m diameter (Fig. 4c), were also moved a minimum of 250 m into the centre of the channel, at a site upstream of NogMon (Fig. 1).

Figure 3. Sediment characteristics of the flood deposits: (a) particle size distribution from bulk samples, (b) particle sizes from quadrat samples at four sites, (c) particle sizes from quadrats at the three monitored sites, classified according to position.

Figure 4. (a) Sediment deposited in a channel-side water tank, 1 m above the channel bed; (b) coarse particles trapped in vegetation; (c) large concrete block transported into the centre of the channel from a wall >250 m away.

Sediment flux

Sediment flux was calculated for each cross-section using the MPM and Bagnold sediment transport equations (Table 1), for pre-flood morphology and d50 sediment sizes of 5, 10 and 20 mm. Results for d = 20 mm, coarser material than the d50 of the bulk samples, are presented in order to give conservative, minimal flux estimates. Calculated peak flow fluxes at Nog1 are 60-127 kg s⁻¹ m⁻¹ (0.023-0.048 m² s⁻¹) using Bagnold and 53-121 kg s⁻¹ m⁻¹ (0.02-0.046 m² s⁻¹) using MPM; at Nog2 they are 285-327 kg s⁻¹ m⁻¹ (0.108-0.124 m² s⁻¹) and 204-242 kg s⁻¹ m⁻¹ (0.077-0.091 m² s⁻¹) for Bagnold and MPM respectively; and at NogMon they are 427-573 kg s⁻¹ m⁻¹ (0.161-0.216 m² s⁻¹) using Bagnold and 339-427 kg s⁻¹ m⁻¹ (0.128-0.162 m² s⁻¹) using MPM (Table 1). Maximum differences between the Bagnold and MPM estimates are 33%, and for d = 0.02 m the difference is 31%. The maximum flux (at NogMon) is equivalent to 0.57 tonnes s⁻¹ m⁻¹, or 74 tonnes s⁻¹ total flux. These fluxes amount to between 1.1% and 3.5% of the total volume of flow, or peak concentrations of 10000-35000 ppm.

These maximum load values, of the order of 60-600 kg s⁻¹ m⁻¹ from the upstream to downstream sites, even using coarse material, exceed by an order of magnitude the previous measured maximum of 60 kg s⁻¹ m⁻¹, which is quoted as the published maximum, directly measured bedload flux in ephemeral streams [12]. They exceed values from perennial streams and flumes, commonly used to calibrate sediment transport equations, by two or more orders of magnitude. The results are highly significant in showing what sediment fluxes may occur in a channel, given suitable hydraulic and sediment supply conditions. If the data calculated here are added to the relations established for instrumented sites in Israel, at which the published maxima were measured [11,12,13], then the peak sediment fluxes fit those relations to shear stress very well (r² > 0.96), confirming the simple relation but extending its application by an order of magnitude (Fig. 5) (though the power relations fit slightly better than the linear, in contrast to Cohen et al. [13]). The dimensionless rates are higher than the record relations published hitherto [13,32]. The Nogalte Bagnold data combined with the Eshtemoa power relation for that data range produce a correlation coefficient of r² = 0.99. The calculated Nogalte MPM values incorporate shear stress, so a strong relation is expected, but the Bagnold calculations use unit stream power and velocity. The Bagnold relations produce higher correlations than those of MPM, both for the Nogalte data and the combined Eshtemoa relation.
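A quick check of the unit conversions behind these figures (my own sketch; the grain density of 2650 kg m⁻³ is the standard assumption implied by the quoted volumetric rates):

```python
RHO_S = 2650.0  # sediment grain density, kg/m^3 (standard assumption)

mass_flux = 573.0             # kg s^-1 m^-1, maximum Bagnold flux at NogMon
vol_flux = mass_flux / RHO_S  # volumetric flux per metre width, m^2/s
print(f"volumetric flux = {vol_flux:.3f} m^2/s")   # ~0.216, as quoted above

total_flux_t = 74.0           # tonnes/s total flux at NogMon
width = total_flux_t * 1000.0 / mass_flux
print(f"implied active width = {width:.0f} m")     # ~129 m of active channel
```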
Figure 5. Comparison of calculated Nogalte sediment flux data with published relations of bedload flux to shear stress: (a) Nogalte Bagnold and MPM calculated fluxes and shear stress with best-fit linear and power relations, and comparison with linear and power relations for Eshtemoa data [13]; (b) dimensionless Nogalte Bagnold and MPM calculated fluxes and dimensionless shear stress with linear relations, compared with dimensionless relations of Cohen et al. [13] and Liébault et al. [32]; (c) best-fit linear relations of bedload flux to shear stress from combined Nogalte data and Eshtemoa curve [13]; (d) best-fit power relations of bedload flux to shear stress from combined Nogalte data and Eshtemoa curve [13].

Competence, mode of transport and sediment budget

The sizes of sediment deposited can be used to assess the competence and transport mechanisms in the event and applied to competence formulations. Using both the Hjulström velocity curve [33] and Shields shear stress [34] to calculate competence of the flows, the values at peak flow at all sites exceed the predicted thresholds for movement of all quadrat and bulk sample particle sizes (Fig. 6). The ratios of actual to critical values of velocity range up to 3.6 for the d84 of samples at sites. The ratio of actual shear stress to Shields critical values, using 0.03 for a loose bed, ranged between 52 for 4 mm particles and 1.4 for 150 mm particles at Nog1 and at Nog2, and ranged between 170 for 4 mm particles and 1.7 for a 400 mm particle at NogMon. Values exceed the 4.5× critical Shields shear stress identified for equal mobility [35] for all quadrat sample sizes at all sites. Calculations for surveyed cross-sections elsewhere in the channel course (Fig. 1) indicate all exceeded the sediment movement thresholds, so transport of all sizes took place throughout the channel system.

Evidence of the mode and mechanism of transport is provided by the Shields values for d = 10, 20 and 32 mm (as conservative values) for the various sediment size parameters in the quadrats at each site (Fig. 7) [36]. Assuming d50 bed material even up to 20 mm, at Nog2 and NogMon all sizes of material would be in suspension, and at Nog1 all coarse material would be moving as bedload with fine material in suspension. For a bed material of 32 mm, all material would be moving as bedload; however, 32 mm is in the range of the maximum 10 particles sampled by the quadrats, so it is at the extreme of the sediment sizes. The pre-flood bed was not armoured by these very large particles, and the post-flood bed was very loose and unstructured.

Figure 6. Ratios of actual to critical values at each site for movement of average sizes of sediment measured in quadrats and bulk samples.

Figure 7. Calculated Shields entrainment function using d = 10 mm and d = 20 mm for each site for measured sediment parameters: maximum size in a quadrat (Max), average of 10 largest particles in a quadrat (Max 10), average of regular grid sample of size in quadrat (grid). The ranges indicate the values measured at the multiple quadrat sample points at each site. Source of base graph: Embleton and Thornes [35].
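The Shields-type competence check reported above can be reproduced with a short sketch (my own illustration; θ_cr = 0.03 is the loose-bed threshold used in the text, and the peak shear stress of 330 Pa is a placeholder back-calculated to reproduce the NogMon ratios quoted, not a surveyed value):

```python
RHO, RHO_S, G = 1000.0, 2650.0, 9.81  # water/grain densities (kg/m^3), gravity (m/s^2)

def critical_shear(d: float, theta_cr: float = 0.03) -> float:
    """Critical Shields shear stress (Pa) for grain diameter d (m)."""
    return theta_cr * (RHO_S - RHO) * G * d

tau_peak = 330.0  # Pa, placeholder peak bed shear stress for illustration
for d_mm in (4, 20, 150, 400):
    ratio = tau_peak / critical_shear(d_mm / 1000.0)
    print(f"d = {d_mm:3d} mm: actual/critical = {ratio:.1f}")
```

With this placeholder stress the sketch returns ~170 for 4 mm and ~1.7 for 400 mm, matching the NogMon ratios reported above, so even boulder-sized material sits above the entrainment threshold.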
Additional evidence of the size of material transported during the event is from deposits in a large, 45 × 15 m water storage tank/reservoir located at the channel side in the confined, middle part of the course, between sites Nog2 and NogMon (Fig. 1), and beyond a wall 1-1.3 m high above the channel bed, which acted as a sediment trap in the event (empty beforehand) (Fig. 4a). The sizes deposited are exactly comparable with the other channel deposits (Fig. 3), with a maximum sampled quadrat size of 35 mm, but particles of 82 mm were present on the tank deposit surface, 1.3 m above the channel bed. The values of the Shields parameter for the tank indicate that the material was easily carried in suspension (Fig. 6). Elsewhere throughout the channel, many particles of 100-150 mm diameter were deposited within the vegetation, indicating movement at height (Fig. 4b). In addition, some very large blocks were moved in a few locations, including two concrete blocks found in the centre of the channel upstream of NogMon (Fig. 1), which were 7.2 m long, with b-axis of 3-4 m (Fig. 4c). These concrete blocks must have come from channel walls ~300 m away. Many competence formulae fail to predict movement of blocks of this size under these hydraulic conditions. Overall, the competence measurements corroborate the evidence of high sediment flux and sediment mobility.

Deposition was extensive over most of the active channel area, which occupied much of the valley floor at all three sites. Using the difference of pre- and post-flood DEMs, the net change is the equivalent of 108 m³ of sediment added in the Nog1 reach, 1548 m³ in Nog2, and 5648 m³ in NogMon [18]. Sediment budget calculations indicate that this material must have been derived directly from slope sources as a net addition during the event. Comparison of six bulk samples of the bare soil in the almond groves on the slopes (4 samples) and under semi-natural vegetation (2 samples) with seven bulk samples of channel sediment shows little coarsening of the channel deposits: 58.9% gravel compared with 62% in the soil, 38.2% sand compared with 31.5% in the soil, and 3% silt-clay compared with 6.5% in the soil, confirming slope erosion as the probable direct source.

Wider comparison and implications

Little change in the size of surface material occurred as a result of the flood, in spite of high mobility and completely new surfaces at a different elevation created over much of the active channel area. The sediment deposits are extremely loose and lack resistant or armoured surfaces, and the sediment lacks fine and cohesive material. The similarity of deposits throughout the system implies a similarity of both supply and hydraulics, a lack of sorting, and rapid deposition; the channel material is similar in particle size to the sampled slope materials. Coarse material (>100 mm diameter) is sparse. The schistose bedrock breaks up very easily to relatively homogeneous gravel-sized particles and is easily transported. The sediment budget calculations indicate a large addition of material to the main channel system to account for the amount of net deposition, which was almost continuous along the channel. If the net deposition gain rate at Nog2 of 700 m³ per 100 m length is applied to the 5 km length of channel with similar, wide aggrading morphology between Nog2 and NogMon, then the net gain of material is of the order of 40,000 m³ (65,500 tonnes). If that is coming from the catchment area between Nog2 and NogMon (60 km²), the yield becomes 1091 tonnes km⁻² or 661 m³ km⁻². This is equivalent to 0.66 mm of erosion over the whole surface in one event of a few hours duration, and equivalent to some of the highest published annual sediment yields globally [32].
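The yield arithmetic in the preceding paragraph can be verified directly (a sketch; the bulk density of ~1.65 t m⁻³ is implied by the quoted tonnage):

```python
gain_per_100m = 700.0   # m^3 net deposition per 100 m of channel (Nog2 rate)
length_m = 5000.0       # channel length with similar morphology, Nog2 to NogMon
area_km2 = 60.0         # contributing catchment area between the sites

volume = gain_per_100m * length_m / 100.0  # 35,000 m^3, "of the order of 40,000"
mass_t = 40000.0 * 1.65                    # ~66,000 t (65,500 t quoted in the text)

print(f"net volume ~ {volume:.0f} m^3")
print(f"yield ~ {mass_t / area_km2:.0f} t/km^2, {40000.0 / area_km2:.0f} m^3/km^2")
# -> ~1100 t/km^2 and ~667 m^3/km^2, cf. 1091 t/km^2 and 661 m^3/km^2 quoted
```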
Overall, the evidence is of an event with an extremely high rate of hydrograph rise, in which all the deep, loose gravel channel material across nearly the whole valley floor was mobilised immediately. Supply of sediment from the slopes was very high due to the fissile, schistose material, the high unit runoff, and the dominantly bare, steep slopes under almond groves throughout the valley. This produced very large sediment loading, which prevented erosion taking place early in the event; the rapid recession, still with high sediment loads, meant little net erosion occurred in the late stages. The massive, unsorted load was rapidly deposited on the sharp recession, leaving large, flat planar bars. The liability to high sediment fluxes is corroborated by the catchment authority's (CHS) assessment of the catchment risks and dangers associated with floods, which rates the level of sediment transport risk at the very highest level of 5/5 [37]. The calculations here confirm that assessment.

This event was a major hazard to human life and produced significant infrastructure damage. The natural channels are well adapted to carrying these fluxes, and most problems of structural damage occurred where channels were constrained by walls and embankments. Management strategies must allow room for the flow and channel mobility and for these possible sediment fluxes. The large sediment flux also has major implications for the filling of checkdams; the flux occurred in spite of many small checkdams being present in the catchment, though some were full. Since the flood event, many more large checkdams have been constructed as the major strategy for reducing flooding downstream, but their capacity and longevity could be reduced quickly if similar events occur. Future climate scenarios could exacerbate these problems. The data presented here indicate the magnitude of sediment fluxes that are possible in ephemeral flash floods, given high sediment supply and intense hydraulic conditions.

Evidence and field measurements

Routine measurements at the monitored sites include: peak flow by crest stage recorder; topography by detailed RTK-GPS surveys of cross-sections, thalweg, all channel edges, and points across the channel and floodplain; sediment size and surface characteristics by photography of established 0.5 m quadrats; and vegetation cover, state, species and height by survey of established 3 m quadrats [16]. Changes are measured by comparison of repeat surveys after flows [16,17], from which DEMs have been constructed and the DoDs for each reach calculated for changes in major events [18,38]. Peak discharge and hydraulics (velocity, shear stress, stream power and unit stream power) of the event have been calculated from surveyed floodmarks in each of the three monitored reaches, combined with the surveyed cross-sections before and after the event, and surveyed points throughout the reach [18] (Fig. 2b). RTK-GPS points are measured to an accuracy of ±2 cm. Hourly discharge data during the event (Fig. 2a) were also available from the CHS website [26] for the gauge at Puerto Lumbreras town, further downstream (Fig. 1). (No higher temporal resolution data are available, but the rapidity of the recession means that the peak is unlikely to have been much higher than this validated value.) Hydraulics have been calculated for 3-5 post-flood cross-sections in each reach (Fig. 1) using the velocity-area method and testing with a range of Manning's n values and uncertainty in gradient and floodmark levels.
Convergent values have been selected as most probable. Hydraulics have been calculated for both pre- and post-flood morphology for those cross-sections surveyed immediately before the flood, and pre-flood hydraulics are used here (Table 1) because of the high sedimentation at most locations, which probably occurred post-peak (with the exception of Nog1 X10, where the prior survey was some time beforehand and the section was erosional, so post-flood morphology was used). The floodmark heights and water surface gradients are based on direct and accurate field measurements of position. Uncertainty in likely flood heights measured at each cross-section ranges from 0.13 m to 1.02 m (Table 2). Water surface slope was calculated from the floodmarks throughout the reach to gain water surface profiles, testing a range of distance around each cross-section. Most probable values were selected from convergent values in lengths and both sides of the channel and consistency of discharge within and between reaches. Possible ranges are indicated; uncertainty varies between cross-sections and reaches, ranging up to 33% in likely values. The biggest uncertainty is associated with the choice of Manning's n value, but 0.04 is considered suitable as an initial test and mostly produces calculations consistent with the measured flow at the downstream gauge for post-flood morphology [18] and for intervening cross-sections measured throughout the system [18] (Fig. 2b); for Froude number values of >1.2, adjustments were made (three cases) in accordance with analysis in the HYDRATE project [28]. The uncertainty arising from the selection of Manning n value is illustrated in Fig. 8 for velocity, discharge and Bagnold sediment flux values, using a range of n from 0.03 to 0.07. Overall, all the evidence combined and the calculations of uncertainty indicate that values in the event are unlikely to be lower than those used, and thus the calculated sediment flux is probably a conservative estimate.

Figure 8. Values of velocity, discharge and Bagnold sediment flux for cross-sections, indicating uncertainty associated with values of Manning's n from 0.03 (maximum values plotted) to 0.07 (minimum values plotted).

The long-established sediment quadrats, in fixed horizontal position, 3-5 in each reach, were photographed before and after the flood, with no intervening flow taking place between pre- and post-flood measurements. From the images, several size parameters have been measured in each quadrat - maximum, largest 10 particles (Max 10), and mean and standard deviation of 25 regular grid sampled points; changes in each from pre- to post-flood were also calculated. In addition, some representative bulk samples were taken and analysed in the laboratory, using sieving for particles >2 mm and a Coulter Counter for <2 mm sizes. Bulk samples were also taken of soil material on the slopes to assess characteristics of potential supply. At the most downstream site, NogMon, pits were dug in the channel bed after the event to examine evidence of stratigraphy, layering and grading, and of the depth of the active layer in the event.

In addition, cross-sections at other locations through the whole channel system were surveyed after the flood to calculate peak discharge and hydraulics [18]. Observations and mapping of channel features, evidence of erosion and deposition, sizes of sediment, and the nature of tributary supply were conducted along the whole 25 km length of the channel. This included hydraulic and sediment measurements at a site where a tank or small reservoir had acted as a sediment trap in the channel, and a location where some very large blocks had been deposited (Fig. 1). These data provide corroboration for some inferences about the dynamics of the event.

Volumes of sediment eroded and deposited and the net sediment budget within the three monitored reaches have been calculated from the pre- and post-flood DEMs constructed from the detailed topographic surveys. Calculations of the difference of DEMs (DoDs), using ArcGIS (10.4.1) with the TIN algorithm and the 'Geomorphological Change Detection' plug-in (GCD 6) procedures attached to ArcGIS, were used to calculate net sediment volume changes and uncertainties. The volumes derived from the DoDs have been used to calculate the net sediment flux using the morphological method [39,40].
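The DEM-differencing step is conceptually simple; here is a minimal numpy sketch of the net-volume calculation (my own illustration, not the GCD plug-in itself):

```python
import numpy as np

def net_volume_change(dem_pre: np.ndarray, dem_post: np.ndarray, cell: float) -> float:
    """Net volume change (m^3) from two co-registered DEMs with square cells (m)."""
    dz = dem_post - dem_pre  # DEM of difference (DoD): positive = deposition
    return float(dz.sum()) * cell * cell

# Toy 3x3 DEMs with 1 m cells: uniform 0.5 m of aggradation -> +4.5 m^3
pre = np.zeros((3, 3))
post = pre + 0.5
print(net_volume_change(pre, post, cell=1.0))
```

The real workflow additionally propagates survey uncertainty cell by cell before summing, which is what the GCD procedures handle.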
Derived calculations and modelling of sediment dynamics

The following aspects of the sediment dynamics have been analysed and modelled: sediment flux, flow competence, mode of transport, sediment yield and sediment budget. For sediment flux, or the amount of sediment transported, a wide number of equations have been developed, very many from flume experiments using homogeneous sediment, and most of the other field-derived relations are from very controlled, perennial flow channels. Most have encompassed relatively small ranges of shear stress values. It is notorious that use of the various sediment equations produces very wide variations in estimates of load, especially bedload, often of one or more orders of magnitude [40]. Some equations are for bedload only, others for suspended sediment, and some for total (combined) load; some use mixed sediment sizes [41]. Reid and associates [8] have investigated the applicability of various equations in relation to field measurement data from the Israeli instrumented ephemeral channels and have found that the Meyer-Peter and Müller [42] equation fits the Nahal Yatir and Eshtemoa bedload data collected over several years reasonably well. Gomez and Church [43] state that stream power equations provide the most straightforward scale correlation of flow and sediment transport and should be used when information on channel hydraulics is limited, and Graf (1988, p. 149) [44] advocated the use of the Bagnold [45] total load equation by geomorphologists. The Bagnold equations are based on stream power and were used, for example, by Graf [46] in his model of sediment movement in the ephemeral streams near Los Alamos, USA. The Bagnold equation also worked well in a model developed to simulate morphological, sediment and vegetation processes for the channels studied here in SE Spain [16]. In the present study, various sediment transport equations were tested, including MPM loose [42], MPM Parker [47], the MPM Wilson [48] variation, and Bagnold total load [45], but results are focused on the Bagnold (equation 1) and MPM (equation 2) [48] equations, which gave reasonably convergent results (Table 1) and which have been found to be highly applicable to measured loads in ephemeral stream floods [31].

Equations: Bagnold [45]

$$i = \omega\left(\frac{e_b}{\tan\alpha} + 0.01\,\frac{u}{v_{ss}}\right) \tag{1}$$

where $i$ is total load transport (kg s⁻¹), $\omega$ is unit stream power (N m⁻¹ s⁻¹), $e_b$ is a bedload efficiency factor, $\tan\alpha$ is the coefficient of friction, $u$ is flow velocity (m s⁻¹), and $v_{ss}$ is the settling velocity of particles of a given size (m s⁻¹). Efficiency and $\tan\alpha$ values are as tested in Hooke et al. [15].
MPM49
$$q_b = \Phi \left( (\rho_s - \rho)\, g\, d^3 \right)^{0.5}$$
$$\Phi = \left( \frac{4\tau}{(\rho_s - \rho)\, g\, d} - 0.188 \right)^{1.5}$$
where $\rho_s$ is sediment density (kg m−3), $\rho$ is fluid density (kg m−3), $g$ is gravitational acceleration (m s−2), $d$ is grain diameter (m), and $\tau$ is shear stress (N m−2)49.

All calculations have used a porosity of 0.7, or a density of 1.65, to convert weight to volume. An efficiency value of 0.15 and fall-velocity values of 0.3–0.9 m s−1 were used for the Bagnold equation (1). All calculations have been made for d50 = 10 mm and 20 mm, to give conservative quantities, and for d = 5 mm, which is near the d50 of the bulk samples (these being biased towards fine material). Results are presented using the Bagnold total load equation (1) and the MPM equation (2). Uncertainty in the Bagnold estimates is affected by velocity (and thus discharge and unit stream power), and the effect of Manning's n values is indicated in Fig. 8. The data derived from the Bagnold and MPM calculations of peak sediment flux in each cross-section were compared with the published relations between bedload flux and shear stress11,12,13, which show a simple relation, though for smaller magnitude values. The Nogalte values have been plotted with best-fit linear and power relations, together with the separate curves established by Cohen et al.12 for the whole Eshtemoa data set (Fig. 5a). The Nogalte values and best-fit relations have also been plotted as dimensionless values together with published relationships12,32 (Fig. 5b). In addition, best-fit linear and power relationships have been calculated for the combined Nogalte data and Cohen et al. curves (using calculated points, not the original whole Eshtemoa dataset) (Fig. 5c,d). All calculations were made using Excel. The sediment budget has been calculated from the sediment continuity (input − output = change in storage) model, using calculations of flux as above and evidence of amounts of erosion, deposition and net change in a reach from the DoDs and cross-sections. The sediment yield produced by this event has also been calculated from the data on flux and net storage. No direct sediment load measurements (suspended or bedload) are available for the gauging station or elsewhere. In such an event, standard measurement techniques for suspended load are unlikely to be meaningful anyway, since the evidence is of very coarse particles being carried in suspension. It is suggested that the detailed field measurements made here, combined with the methods of calculation of fluxes and the corroborating evidence, give estimates that are realistic for the event.

Many equations and relations are available for calculating competence, and Buffington and Montgomery50 reviewed analyses to that date. Hooke et al.15, in developing a simulation model of morphological, sedimentological and vegetation changes in these channels, tested various formulations in terms of critical velocity, critical shear stress, critical discharge, critical depth and critical power. Following this review, they used the values derived from the Hjulström graph for entrainment, and the lower velocities at which sedimentation takes place for any size, though the Baker and Ritter51 equation was also considered. More recently, Billi52,53 has tested several critical shear stress equations on ephemeral channels in Ethiopia, including for prediction of the entrainment of large boulders.
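To make the kind of entrainment criterion being tested concrete, a minimal Shields-type check is sketched below in Python. The critical Shields parameter θc = 0.045, the densities and the hydraulic inputs are conventional textbook assumptions, not values calibrated in this study.

RHO, RHO_S, G = 1000.0, 2650.0, 9.81  # fluid and sediment densities (kg/m^3), gravity (m/s^2)

def critical_shear_stress(d, theta_c=0.045):
    # Shields criterion: tau_c = theta_c * (rho_s - rho) * g * d
    return theta_c * (RHO_S - RHO) * G * d

def largest_mobile_grain(tau, theta_c=0.045):
    # Invert the criterion: the largest diameter d (m) entrained by shear stress tau
    return tau / (theta_c * (RHO_S - RHO) * G)

tau = RHO * G * 1.5 * 0.012            # tau = rho*g*D*S for an assumed depth and slope
for d in (0.005, 0.010, 0.020):        # the d = 5, 10 and 20 mm sizes used in the calculations
    ratio = tau / critical_shear_stress(d)  # actual/critical ratio, cf. Fig. 6
    print(f"d = {d*1000:.0f} mm: tau/tau_c = {ratio:.1f}")
print(f"largest mobile grain ~ {largest_mobile_grain(tau)*1000:.0f} mm")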
Thompson and Croke54 also reviewed and tested eight competence equations. In assessing the competence of later, very large floods in perennial rivers in Queensland55, they tested Costa's56 critical unit stream power regression model, Dreg, and the empirical lower envelope curve for critical stream power, Dlec, as well as the Shields entrainment function, and found that the Dreg equation did not predict entrainment of the 2 m boulders transported but the Dlec equation did. In the current analysis a range of equations and relations for competence were tested33,34,52,57,58,59,60,61,62. A calculator created by Mecklenburg and Ward49 was also used to check the Shields entrainment function. The competence values were calculated for the actual hydraulics at each cross-section to predict the size that could be moved. The hydraulics necessary for mobilisation of the mean and maximum sizes, and for d = 5 mm, d = 10 mm and d = 20 mm, have also been calculated. In addition, the competence and required hydraulics (velocity and shear stress) to transport some very large blocks were calculated. Ratios of actual hydraulic values in each section to critical values for the range of sizes found were calculated (Fig. 6)35,59.

The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.

References

1. Langbein, W. B. & Schumm, S. A. Yield of sediment in relation to mean annual precipitation. Transactions, American Geophysical Union 39, 1076–1084 (1958).
2. Dedkov, A. P. & Mozzherin, V. I. Erosion and sediment yield in mountain regions of the world. In: Walling, D. E., Davies, T. R. & Hasholt, B. (Eds), IAHS Publication 209, 29–36 (1992).
3. Laronne, J. B. & Reid, I. Very high rates of bedload sediment transport by ephemeral desert rivers. Nature 366(6451), 148–150 (1993).
4. Reid, I., Laronne, J. B. & Powell, D. M. The Nahal Yatir bedload database: sediment dynamics in a gravel-bed ephemeral stream. Earth Surface Processes and Landforms 20, 845–857 (1995).
5. Nichols, M. H., Stone, J. J. & Nearing, M. A. Sediment database, Walnut Gulch Experimental Watershed, Arizona, United States. Water Resources Research 44(5) (2008).
6. Schick, A. P. & Lekach, J. An evaluation of two 10-year sediment budgets, Nahal Yael, Israel. Physical Geography 14, 225–238 (1993).
7. Greenbaum, N. & Bergman, N. Formation and evacuation of a large gravel-bar deposited during a major flood in a Mediterranean ephemeral stream, Nahal Me'arot, NW Israel. Geomorphology 77, 169–186 (2006).
8. Reid, I. Sediment dynamics of ephemeral channels. In: Bull, L. J. & Kirkby, M. J. (Eds), Dryland Rivers. John Wiley & Sons, Chichester, 107–128 (2002).
9. Lekach, J. & Schick, A. P. Suspended sediment in desert floods in small catchments. Israel Journal of Earth Sciences 31, 144–156 (1982).
10. Hassan, M. A. Scour, fill, and burial depth of coarse material in gravel bed streams. Earth Surface Processes and Landforms 15, 341–356 (1990).
11. Reid, I., Laronne, J. B. & Powell, D. M. Flash-flood and bedload dynamics of desert gravel-bed streams. Hydrological Processes 12(4), 543–557 (1998).
12. Cohen, H. & Laronne, J. B. High rates of sediment transport by flashfloods in the Southern Judean Desert, Israel. Hydrological Processes 19(8), 1687–1702 (2005).
13. Cohen, H., Laronne, J. B. & Reid, I. Simplicity and complexity of bed load response during flash floods in a gravel bed ephemeral river: a 10 year field study. Water Resources Research 46 (2010).
14. Powell, D. M., Brazier, R., Parsons, A., Wainwright, J. & Nichols, M. Sediment transfer and storage in dryland headwater streams. Geomorphology 88(1–2), 152–166 (2007).
15. Hooke, J. M., Brookes, C. J., Duane, W. & Mant, J. M. A simulation model of morphological, vegetation and sediment changes in ephemeral streams. Earth Surface Processes and Landforms 30(7), 845–866 (2005).
16. Hooke, J. M. Monitoring morphological and vegetation changes and flow events in dryland river channels. Environmental Monitoring and Assessment 127, 445–457 (2007).
17. Hooke, J. & Mant, J. Morphological and vegetation variations in response to flow events in rambla channels of SE Spain. In: Dykes, A. P., Mulligan, M. & Wainwright, J. (Eds), Monitoring and Modelling Dynamic Environments (A Festschrift in memory of Professor John B. Thornes). John Wiley & Sons, Chichester, 61–98 (2015).
18. Hooke, J. M. Geomorphological impacts of an extreme flood in SE Spain. Geomorphology 263, 19–38 (2016).
19. Hooke, J. M. Morphological impacts of flow events of varying magnitude on ephemeral channels in a semiarid region. Geomorphology, 128–143 (2015).
20. BBC News. http://www.bbc.co.uk/news/world-europe-19767627 (2012).
21. AON Benfield. September 2012 Global Catastrophe Recap. http://thoughtleadership.aonbenfield.com/Documents/20121004_if_global_cat_recap_september.pdf (2012).
22. Kirkby, M. et al. Hydrological impacts of floods in SE Spain, September 2012. Poster paper at IAG International Conference, Paris (2013).
23. Smith, M. W., Carrivick, J. L., Hooke, J. & Kirkby, M. J. Reconstructing flash flood magnitudes using 'Structure-from-Motion': a rapid assessment tool. Journal of Hydrology 519, 1914–1927 (2014).
24. Riesco Martin, J., Mora García, M., de Pablo Davila, F. & Rivas Soriano, L. Regimes of intense precipitation in the Spanish Mediterranean area. Atmospheric Research 137, 66–79 (2014).
25. CHS Noticias, 08/04/2013. La riada de San Wenceslao demuestra la importancia de las presas en la defensa contra grandes inundaciones. https://www.chsegura.es/chs/informaciongeneral/comunicacion/noticias/noticia_1024.html (2013).
26. CHS hydrological data, Station 05O01Q01. Available at https://www.chsegura.es/chs/cuenca/redesdecontrol/SAIH/visorsaih/visorjs.html. Last accessed November 2018.
27. Conesa-García, C. Torrential flow frequency and morphological adjustments of ephemeral channels in south-east Spain. In: Hickin, E. J. (Ed.), River Geomorphology. Wiley, Chichester, 171–192 (1995).
28. Lumbroso, D. & Gaume, E. Reducing the uncertainty in indirect estimates of extreme flash flood discharges. Journal of Hydrology 414–415, 16–30 (2012).
29. Gaume, E. et al. A compilation of data on European flash floods. Journal of Hydrology 367, 70–78 (2009).
30. Lopez-Bermudez, F., Conesa-Garcia, C. & Alonso-Sarria, F. Floods: magnitude and frequency in ephemeral streams of the Spanish Mediterranean region. In: Bull, L. & Kirkby, M. (Eds), Dryland Rivers: Hydrology and Geomorphology of Semi-arid Channels. John Wiley & Sons, Chichester, 229–350 (2002).
31. Reid, I., Powell, D. M. & Laronne, J. B. Prediction of bed-load transport by desert flash floods. Journal of Hydraulic Engineering 122(3), 170–173 (1996).
32. Liébault, F., Jantzi, H., Klotz, S., Laronne, J. B. & Recking, A. Bedload monitoring under conditions of ultra-high suspended sediment concentrations. Journal of Hydrology 540, 947–958 (2016).
33. Hjulström, F. Studies of the morphological activity of rivers as illustrated by the River Fyris. Bulletin of the Geological Institute of the University of Uppsala 25, 221–527 (1935).
34. Shields, A. Anwendung der Ähnlichkeitsmechanik und der Turbulenzforschung auf die Geschiebebewegung. Mitteilungen der Preussischen Versuchsanstalt für Wasserbau und Schiffbau, Heft 26, Berlin (1936).
35. Powell, D. M., Reid, I. & Laronne, J. B. Evolution of bed load grain size distribution with increasing flow strength and the effect of flow duration on the caliber of bed load sediment yield in ephemeral gravel bed rivers. Water Resources Research 37(5), 1463–1474 (2001).
36. Embleton, C. & Thornes, J. Process in Geomorphology. Arnold, London (1979).
37. CHS. Plan de Gestión del Riesgo de Inundación. https://www.chsegura.es/export/descargas/planificacionydma/planriesgoinundaciones/docsdescarga/02_PGRI_ANEJOS.pdf (2015).
38. Hooke, J. M. & Mant, J. M. Geomorphological impacts of a flood event on ephemeral channels in SE Spain. Geomorphology 34, 163–180 (2000).
39. Ham, D. G. & Church, M. Bed-material transport estimated from channel morphodynamics: Chilliwack River, British Columbia. Earth Surface Processes and Landforms 25, 1123–1142 (2000).
40. Martin, Y. & Ham, D. Testing bedload transport formulae using morphologic transport estimates and field data: lower Fraser River, British Columbia. Earth Surface Processes and Landforms 30(10), 1265–1282 (2005).
41. Wilcock, P. R. & Crowe, J. C. Surface-based transport model for mixed-size sediment. Journal of Hydraulic Engineering 129(2), 120–128 (2003).
42. Meyer-Peter, E. & Müller, R. Formulas for bed-load transport. Proceedings of the 2nd Meeting IAHR, Stockholm, 39–64 (1948).
43. Gomez, B. & Church, M. An assessment of bed-load sediment transport formulas for gravel bed rivers. Water Resources Research 25(6), 1161–1186 (1989).
44. Graf, W. L. Fluvial Processes in Dryland Rivers. Springer, Berlin, 346 pp (1988).
45. Bagnold, R. A. An approach to the sediment transport problem from general physics. Physiographic and hydraulic studies of rivers. US Geological Survey Professional Paper 422-I, 1–20 (1966).
46. Graf, W. L. Plutonium and the Rio Grande: Environmental Change and Contamination in the Nuclear Age. Oxford University Press, New York (1994).
47. Wong, M. & Parker, G. Reanalysis and correction of bed-load relation of Meyer-Peter and Müller using their own database. Journal of Hydraulic Engineering 132, 1159–1168 (2006).
48. Wilson, K. C. Bed-load transport at high shear stress. Journal of the Hydraulics Division 92, 49–59 (1966).
49. Mecklenburg, D. & Ward, A. Sediment Equations version 4.0. https://water.ohiodnr.gov/portals/soilwater/data/xls/Sediment_Equations_4_0.xls. Last accessed November 2018.
50. Buffington, J. M. & Montgomery, D. R. A systematic analysis of eight decades of incipient motion studies, with special reference to gravel-bedded rivers. Water Resources Research 33(8), 1993–2029 (1997).
51. Baker, V. R. & Ritter, D. F. Competence of rivers to transport coarse bedload material. Geological Society of America Bulletin 86, 975–978 (1975).
52. Billi, P. Bedforms and sediment transport processes in the ephemeral streams of Kobo basin, Northern Ethiopia. Catena 75, 5–17 (2008).
53. Billi, P. Flash flood sediment transport in a steep sand-bed ephemeral stream. International Journal of Sediment Research 26(2), 193–209 (2011).
54. Thompson, C. & Croke, J. Channel flow competence and sediment transport in upland streams in southeast Australia. Earth Surface Processes and Landforms 33(3), 329–352 (2008).
55. Thompson, C. & Croke, J. Geomorphic effects, flood power, and channel competence of a catastrophic flood in confined and unconfined reaches of the upper Lockyer valley, southeast Queensland, Australia. Geomorphology 197, 156–169 (2013).
56. Costa, J. E. Paleohydraulic reconstruction of flash-flood peaks from boulder deposits in the Colorado Front Range. Geological Society of America Bulletin 94, 986–1004 (1983).
57. Church, M. Palaeohydrological reconstructions from a Holocene valley fill. In: Miall, A. D. (Ed.), Fluvial Sedimentology, Vol. 5. Canadian Society of Petroleum Geologists, 743–772 (1978).
58. Williams, G. P. Paleohydrological methods and some examples from Swedish fluvial environment, I - cobble and boulder deposits. Geografiska Annaler 65A, 227–243 (1983).
59. Davis, L. Sediment entrainment potential in modified alluvial streams: implications for re-mobilization of stored in-channel sediment. Physical Geography 30(3), 249–268 (2009).
60. Carling, P. A. Threshold of coarse sediment transport in broad and narrow natural streams. Earth Surface Processes and Landforms 8(1), 1–18 (1983).
61. Ferguson, R. I. Estimating critical stream power for bedload transport calculations in gravel-bed rivers. Geomorphology 70(1–2), 33–41 (2005).
62. Ferguson, R. I. River channel slope, flow resistance, and gravel entrainment thresholds. Water Resources Research 48 (2012).

Acknowledgements: The author is grateful to Robert Perry for field assistance.

Author information: Department of Geography and Planning, School of Environmental Sciences, University of Liverpool, Roxby Building, Liverpool, L69 7ZT, UK. J. M. Hooke. J.M.H. undertook the research design, fieldwork and analysis and wrote the paper. Correspondence to J. M. Hooke. The author declares no competing interests.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Hooke, J. M. Extreme sediment fluxes in a dryland flash flood. Sci Rep 9, 1686 (2019). https://doi.org/10.1038/s41598-019-38537-3
Hardy-Ramanujan Journal

Volume 41 - Special Commemorative volume in honour of S. Srinivasan - 2018

S. Srinivasan, A Biographical Sketch. By C. S. Aravinda and K. Srinivas
My memories of Srinivasan, by M. Ram Murty
Remembering Srinivasan...

1. A Theorem of Fermat on Congruent Number Curves
Lorenz Halbeisen; Norbert Hungerbühler.
A positive integer $A$ is called a congruent number if $A$ is the area of a right-angled triangle with three rational sides. Equivalently, $A$ is a congruent number if and only if the congruent number curve $y^2 = x^3 - A^2 x$ has a rational point $(x, y) \in \mathbb{Q}^2$ with $y \ne 0$. Using a theorem of Fermat, we give an elementary proof of the fact that congruent number curves do not contain rational points of finite order.

2. On Pillai's problem with Pell numbers and powers of 2
Mohand Ouamar Hernane; Florian Luca; Salah Rihane; Alain Togbé.
In this paper, we find all integers $c$ having at least two representations as a difference between a Pell number and a power of 2.

3. Fluctuation of the primitive of Hardy's function
Matti Jutila.
We consider oscillatory properties of the primitive of Hardy's function, including a certain Gibbs phenomenon due to M. A. Korolev.

4. On the Wintner-Ingham-Segal summability method
S. Kanemitsu; T. Kuzumaki; Y. Tanigawa.
The aim of this note is to establish a subclass, as wide as possible, of the class $\mathcal{F}$ considered by Segal, of functions for which Ingham-Wintner summability implies $\mathcal{F}$-summability. The subclass is subject to the estimate for the error term of the prime number theorem. We shall make good use of Stieltjes integration, which elucidates previous results obtained by Segal.

5. On an identity of Ramanujan
Yoichi Motohashi.
Proofs published so far in articles and books, of the Ramanujan identity presented in this note, which depend on Euler products, are essentially the same as Ramanujan's original proof. In contrast, the proof given here is short and independent of the use of Euler products.

6. On $l$-Regular Bipartitions Modulo $m$
D. D. Somashekara; K. N. Vidya.
Let $b_l(n)$ denote the number of $l$-regular partitions of $n$ and $B_l(n)$ denote the number of $l$-regular bipartitions of $n$. In this paper, we establish several infinite families of congruences satisfied by $B_l(n)$ for $l \in \{2, 4, 7\}$. We also establish a relation between $b_9(2n)$ and $B_3(n)$.

7. When are Multiples of Polygonal Numbers again Polygonal Numbers?
Jasbir Chahal; Michael Griffin; Nathan Priddis.
Euler showed that there are infinitely many triangular numbers that are three times other triangular numbers. In general, it is an easy consequence of the Pell equation that for a given square-free $m > 1$, the relation $\Delta = m\Delta'$ is satisfied by infinitely many pairs of triangular numbers $\Delta$, $\Delta'$. After recalling what is known about triangular numbers, we shall study this problem for higher polygonal numbers. Whereas there are always infinitely many triangular numbers which are fixed multiples of other triangular numbers, we give an example showing that this is false for higher polygonal numbers. However, as we will show, if there is one such solution, there are infinitely many. We will give conditions which conjecturally assure the existence of a solution. But due to the erratic behavior of the fundamental unit of $\mathbb{Q}(\sqrt{m})$, finding such a solution is exceedingly difficult.
Finally, we also show in this paper that, given $m > n > 1$ with obvious exceptions, the system of simultaneous relations $P = mP'$, $P = nP''$ has only finitely many possibilities, not just for triangular numbers, but for triplets $P$, $P'$, $P''$ of polygonal numbers, and we give examples of such solutions.

8. An elementary property of correlations
Giovanni Coppola.
We study the "shift-Ramanujan expansion" to obtain a formula for the shifted convolution sum $C_{f,g}(N,a)$ of general functions $f$, $g$ satisfying the Ramanujan Conjecture; here, the shift-Ramanujan expansion is with respect to a shift factor $a > 0$. Assuming the Delange Hypothesis for the correlation, we get the "Ramanujan exact explicit formula", a kind of finite shift-Ramanujan expansion. A noteworthy case is when $f = g = \Lambda$, the von Mangoldt function; then $C_{\Lambda,\Lambda}(N, 2k)$, for natural $k$, corresponds to $2k$-twin primes; under the assumption of the Delange Hypothesis, we easily obtain the proof of the Hardy-Littlewood Conjecture for this case.

9. The Zeta Mahler measure of $(z^n - 1)/(z - 1)$
Arunabha Biswas; M. Ram Murty.
We consider the $k$-higher Mahler measure $m_k(P)$ of a Laurent polynomial $P$ as the integral of $\log^k |P|$ over the complex unit circle, and the zeta Mahler measure as the generating function of the sequence $\{m_k(P)\}$. In this paper we derive a few properties of the zeta Mahler measure of the polynomial $P_n(z) := (z^n - 1)/(z - 1)$ and propose a conjecture.

10. On certain sums over ordinates of zeta-zeros II
Andriy Bondarenko; Aleksandar Ivić; Eero Saksman; Kristian Seip.
Let $\gamma$ denote the imaginary parts of complex zeros $\rho = \beta + i\gamma$ of $\zeta(s)$. The problem of analytic continuation of the function $G(s) := \sum_{\gamma > 0} \gamma^{-s}$ to the left of the line $\Re s = -1$ is investigated, and its Laurent expansion at the pole $s = 1$ is obtained. Estimates for the second moment on the critical line $\int_{1}^{T} |G(\frac{1}{2} + it)|^2\, dt$ are revisited. This paper is a continuation of work begun by the second author in [Iv01].

11. Two applications of number theory to discrete tomography
Rob Tijdeman.
Tomography is the theory behind scans, e.g. MRI-scans. Most common is continuous tomography, where an object is reconstructed from numerous projections. In some cases this is not applicable, because the object changes too quickly or is damaged by making hundreds of projections (by X-rays). In such cases discrete tomography may apply, where only a few projections are made. The present paper shows how number theory helps to provide insight into the application and structure of discrete tomography.

12. Hybrid level aspect subconvexity for GL(2) × GL(1) Rankin-Selberg L-Functions
Keshav Aggarwal; Yeongseong Jo; Kevin Nowland.
Let $M$ be a squarefree positive integer and $P$ a prime number coprime to $M$ such that $P \sim M^{\eta}$ with $0 < \eta < 2/5$. We simplify the proof of subconvexity bounds for $L(\frac{1}{2}, f \otimes \chi)$ when $f$ is a primitive holomorphic cusp form of level $P$ and $\chi$ is a primitive Dirichlet character modulo $M$. These bounds are attained through an unamplified second moment method using a modified version of the delta method due to R. Munshi. The technique is similar to that used by Duke-Friedlander-Iwaniec, save for the modification of the delta method.

13. Set Equidistribution of subsets of $(\mathbb{Z}/n\mathbb{Z})^*$
Jaitra Chattopadhyay; Veekesh Kumar; R. Thangadurai.
In 2010, Murty and Thangadurai [MuTh10] provided a criterion for the set equidistribution of residue classes of subgroups in $(\mathbb{Z}/n\mathbb{Z})^*$.
In this article, using similar methods, we study set equidistribution for some classes of subsets of $(\mathbb{Z}/n\mathbb{Z})^*$. In particular, we study the set equidistribution modulo 1 of cosets, of complements of subgroups of the cyclic group $(\mathbb{Z}/n\mathbb{Z})^*$, and of the subset of elements of fixed order, whenever the size of the subset is sufficiently large.

14. A remark on cube-free numbers in Segal-Piatetski-Shapiro sequences
Jean-Marc Deshouillers.
Using a method due to G. J. Rieger, we show that for $1 < c < 2$ one has, as $x$ tends to infinity,
$$\mathrm{Card}\{n \leq x : \lfloor n^c \rfloor \text{ is cube-free}\} = \frac{x}{\zeta(3)} + O\left(x^{(c+1)/3} \log x\right),$$
thus improving on a recent result by Zhang Min and Li Jinjiang.

15. A note on some congruences involving arithmetic functions
József Sándor.
We consider some congruences involving arithmetical functions. For example, we study the congruences $n\psi(n) \equiv 2 \pmod{\varphi(n)}$, $n\varphi(n) \equiv 2 \pmod{\psi(n)}$, $\psi(n)d(n) - 2 \equiv 0 \pmod{n}$, where $\varphi(n)$, $\psi(n)$, $d(n)$ denote Euler's totient, Dedekind's function, and the number of divisors of $n$, respectively. Two duals of the Lehmer congruence $n - 1 \equiv 0 \pmod{\varphi(n)}$ are also considered.

16. Integral points on circles
A. Schinzel; M. Skalba.
Sixty years ago the first named author gave an example of a circle passing through an arbitrary number of integral points. Now we shall prove: the number $N$ of integral points on the circle $(x-a)^2 + (y-b)^2 = r^2$ with radius $r = \frac{1}{n}\sqrt{m}$, where $m, n \in \mathbb{Z}$, $m, n > 0$, $\gcd(m, n^2)$ squarefree and $a, b \in \mathbb{Q}$, does not exceed $r(m)/4$, where $r(m)$ is the number of representations of $m$ as the sum of two squares, unless $n \mid 2$ and $n \cdot (a, b) \in \mathbb{Z}^2$; then $N \leq r(m)$.

17. Explicit abc-conjecture and its applications
Chi Chim Kwok; Saranya G. Nair; T. N. Shorey.
We state the well-known abc-conjecture of Masser-Oesterlé and its explicit version, popularly known as the explicit abc-conjecture, due to Baker. Laishram and Shorey derived from the explicit abc-conjecture that (1.1) implies $c < N^{1.75}$. We give a survey of improvements of this result and its consequences. Finally we prove that $c < N^{1.7}$ and apply this estimate to an equation related to a conjecture of Hickerson that a factorial is not a product of factorials non-trivially.

18. The Barban-Vehov Theorem in Arithmetic Progressions
V. Kumar Murty.
A result of Barban-Vehov (and independently Motohashi) gives an estimate for the mean square of a sequence related to Selberg's sieve. This upper bound was refined to an asymptotic formula by S. Graham in 1978. In 1992, I made the observation that Graham's method can be used to obtain an asymptotic formula when the sum is restricted to an arithmetic progression. This formula immediately gives a version of the Brun-Titchmarsh theorem. I am taking the occasion of a volume in honour of my friend S. Srinivasan to revisit and publish this observation in the hope that it might still be of interest.
Most harmful heuristic?

What's the most harmful heuristic (towards proper mathematics education) you've seen taught/accidentally taught/were taught? When did handwaving inhibit proper learning?

soft-question big-list

In view of many of the answers to this question, it might help to have in the statement a definition of heuristic as it is applied to mathematics. – Pete L. Clark Apr 26 '10 at 3:56
In fact, the harmful entity in most answers is not a heuristic at all! – Victor Protsak May 22 '10 at 15:07

Not the most harmful, but a fun example (credit due to Tony Varilly):

"You can't add apples and oranges." False. You can in the free abelian group generated by an apple and an orange. As Patrick Barrow says, "A failure of imagination is not an insight into necessity."

Dave Penneys

This almost belongs in the mathematical jokes question. ;) – GMRA Oct 25 '09 at 2:13
Two apples plus three oranges equals five pieces of fruit. What's the problem? – Gerry Myerson Aug 19 '10 at 5:50
Indeed. Take the free abelian group A generated by the set of all types of fruit and consider the natural homomorphism onto the free abelian group generated by {Fruit} induced by sending each generator of A to the single generator of <Fruit> ... – Steven Gubkin Oct 2 '10 at 23:59
Just occurred to me to wonder whether we shouldn't be adding apples and oranges in the free abelian grape. – Gerry Myerson Feb 15 '15 at 4:59
Isn't the saying "you can't compare apples and oranges"? I'm not aware of a natural order structure on the free abelian group generated by an apple and an orange. – Paul Siegel May 29 '15 at 2:29

This isn't really a heuristic, but I hate "functions are formulas." It takes a lot of students a really long time to think of a function as anything other than an algebraic expression, even though natural algorithmic examples are everywhere. For example, some students won't think of f(n) = {1 if n is even, -1 if n is odd} as a function until you write it as f(n) = (-1)^n.

Qiaochu Yuan

I'm a high school student and I can safely say that most of my peers just don't get what a function is. The only ones who do seem to have learned from programming. Then again, all the really mathematically talented students in my very small school also program... Functions seem to get slipped in somewhere along the line without a proper introduction, and then it is assumed that students know them from there on in. – Christopher Olah Apr 25 '10 at 22:21
Actually I still have a lot of trouble going back the other way, to "functions are polynomial formulas, not maps" in algebraic geometry and/or combinatorics. – Elizabeth S. Q. Goodman Jan 27 '12 at 8:31
Moreover, even for the best of mathematicians in the 18th and much of the 19th century, "function" meant "analytic function"... – Michael May 28 '15 at 21:06
That's precisely the Euler view: in his works, "continuous functions" were those you may write with a single analytic expression. So 1/x was continuous, while "0 for $x<0$, $x$ for $x\ge 0$" wasn't. – Kolya Ivankov Oct 20 '16 at 9:56
Much worse than this heuristic is the "official" definition of a function $X\to Y$ as a subset of $X\times Y$ satisfying various axioms.
– Amritanshu Prasad Oct 20 '16 at 18:05

A tensor is a multidimensional array of numbers that transforms in the following way under a change of coordinates...

I saw that for years, and I never understood it until I saw the real definition of a tensor.

[Clarification] Sorry, I did leave that very vague. A tensor is a multilinear function mapping some product of vector spaces $V_1\times \cdots \times V_n$ to another vector space. In the context of differential geometry, we're really talking about a tensor field, which assigns a tensor to every point that acts on the tangent and/or cotangent spaces at the point. A more abstract definition is possible by considering tensor products of vector spaces, but the definition using multilinear functions is (to me) extremely intuitive and general enough for a first encounter. It also leads naturally enough to the abstract concepts anyway, as soon as you start thinking about the set of all tensors of a particular rank and its structure.

The "multidimensional array" definition suffers from conflating object and representation. The array is an encoding of the underlying multilinear function, and it's perfectly reasonable if understood in that way (to partially reply to Scott Aaronson's comment). Unfortunately, the encoding depends on an arbitrary choice (coordinate system), while the underlying function obviously doesn't, so it gets very confusing if you try to use it as the definition.

Regarding accessibility (also referring to Scott Aaronson's comment): I don't really agree: I think multilinear functions are pretty accessible. Assuming a familiarity with vector spaces and linear transformations, multilinear functions are a natural and very tangible extension of those ideas. And since multilinearity is the key concept underlying tensors, if you're going to deal with tensors, you should really just bite the bullet and deal with the concept.

Darsh Ranjan

What's "the real definition of a tensor"? Element of a tensor product? A section of a tensor bundle? – Victor Protsak May 22 '10 at 15:09
I second this wholeheartedly!!!!!! @Victor Protsak: For me, "the real definition of a tensor" is something like the following. "Let M be a smooth manifold; let FM be the space of smooth functions on M, let VM be the space of smooth vector fields on M, and let V*M be the space of smooth covector fields on M. A (k,l) tensor is a multilinear map from (V*M)^k x (VM)^l to FM." There might be more abstract and versatile definitions, but this one seems to work pretty well in the context of general relativity, which is where the definition Darsh Ranjan quoted tends to show up (in my experience). – Vectornaut May 22 '10 at 22:01
I second, third and fourth that, Darsh. That particular definition of tensor set back my understanding of differential geometry by at least a year. – Cosmonut Oct 3 '10 at 3:39
The trouble I have is that none of the alternative definitions on offer seem accessible to someone first learning about tensors! Related to that (in my mind), they don't make clear how one would actually represent a tensor on a computer (e.g., how many degrees of freedom are there, and what do we do with them?). So, is there a way to explain what tensors are that satisfies those constraints but also leads to fewer wrong intuitions? – Scott Aaronson Apr 18 '13 at 5:26
I agree with Scott Aaronson.
In fact, the physicist way of defining tensors as things that change correctly under coordinates gives a nice way to define tensor fields on manifolds (simply a smooth collection of multi-index beasts on different open sets such that on the intersection they are related by an appropriate transformation (the transition functions of the tensor bundle)). I am not sure if this "heuristic" actually gives rise to wrong intuitions. – Vamsi May 29 '15 at 0:24

Along the same lines as Qiaochu's and Zach's responses, the commonly taught heuristics pertaining to functions, differentiability and integration are a pet hate of mine. I certainly left school thinking of functions as formulas involving combinations of elementary functions and having a very poor understanding of the relevance and correct relationship between integration and differentiation, the worst manifestation of which, now that I'm a bit older, seems to have been that

Differentiation is a nice, computable operation and tells you about functions; integration is hard and tells you about areas under curves. Areas under curves never seemed interesting.

As an analyst, my personal feelings towards them are now almost entirely reversed and I think of integration as my friend and differentiation as the enemy.

Differentiation uses up regularity; integration smooths.

That's because on formulas differentiation is nice and integration is hard, but on computable functions differentiation is hard and integration is nice. In theory, we have a denotational semantics between formulas and functions that should transport these notions back and forth, but we really really don't. There are tons and tons of papers in computer algebra which basically boil down to this massive gulf between abstract analysis (the study of functions given by properties) and concrete analysis (the study of functions given by formulas). – Jacques Carette Mar 13 '10 at 3:50
I'm upvoting this partially because I agree, but mostly because you used the term "pet hate" as opposed to "pet peeve". – Jamie Weigandt Apr 26 '10 at 0:51
@Jacques: that's really well-phrased! I had an "a-ha" moment reading your comment. – Neel Krishnaswami Apr 26 '10 at 9:40
I'm reading this as a second year undergraduate student and I didn't go through this kind of reversal yet. I'd be glad if someone would give a short explanation in layman terms why it's the other way around for computable functions! – Lenar Hoyt Aug 21 '13 at 1:26
@user8823741, in general, numerical differentiation is an unstable process. Think about how the derivative is defined; you are in effect subtracting two nearly equal quantities to get a tiny result, and then dividing that tiny result by another tiny value to get a result that is often far from tiny. That's a lot of opportunities for a computer to slip up. – J. M. is not a mathematician Jun 9 '16 at 4:12

The "FOIL" (first+outside+inside+last) mnemonic for multiplying two binomials is terrible. It suppresses what is really going on (three applications of the distributive property) in favor of an algorithm. In other words, it is teaching a human being to behave like a computer. The legacy of FOIL is clear when you ask your students to multiply three binomials, or two trinomials. Students usually either have no idea what to do, attempt it but get lost in the algebra, or succeed but complain about the arduousness of the task.
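A quick illustration, using Python's sympy (assumed available) rather than anything from the answer itself: the distributive property scales to any number of factors, which is exactly where the FOIL mnemonic stalls.

from sympy import symbols, expand

a, b, c, d, e, f = symbols('a b c d e f')
print(expand((a + b) * (c + d)))            # a*c + a*d + b*c + b*d -- the four FOIL terms
print(expand((a + b) * (c + d) * (e + f)))  # eight terms, one per choice of summand from each factor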
I can't stand FOIL! It seems to indicate to students that order matters here. I don't see what FOIL adds, but it certainly detracts from the idea of just multiplying all the pairs and adding. Instead of teaching the idea (which they'll never forget), they now have something memorized (easy to forget). And I once had a student erase their correct work because they accidentally did FLOI or something and rewrite the same thing in a different order. – Matt Apr 25 '10 at 18:43
As much as I dislike teaching mathematics "algorithmically", there is a reason why FOIL is taught as such: by forcing the user to adopt an algorithm, you can minimize mistakes. Doing things "in order" is a good habit, which should be encouraged. It is unfortunate the trend where "educators" take good practices and distil from them something all but unrecognizable... – Willie Wong Apr 25 '10 at 21:18
As a high school teacher, I usually encountered students after their first exposure to FOIL, so I made a point to revisit the process and introduce "Super-FOILing" (which, of course, was just applying the distributive property to two polynomials of any length). Yes, yes: I hammered proper terminology and all the conceptual stuff, too, but starting off with "Ah, so you can FOIL ... but can you SuperFOIL?" really made the ears perk right up! In a way, prior exposure to FOIL was helpful to me, providing an accessible object lesson that math is always "bigger" than any of us are ever taught. – Blue Apr 25 '10 at 21:23
Todd: Ask students on an exam to solve an equation such as $(x-1)(x-2)(x-3)(x-4)=0$. I've done this a couple of times. A very common attempt at solution was to expand things out (often making mistakes along the way), contemplate the new, messy equation, and declare, "It can't be factored!". Sad... – Pedro Teixeira Apr 17 '13 at 21:35
@PedroTeixeira Surely this is the way to proceed! Let $y = x - 2.5$; then $x = y + 2.5$, and the equation becomes: $0 = (y + 1.5)(y + 0.5)(y - 0.5)(y - 1.5) = (y^2 - 2.25)(y^2 - 0.25)$, which holds when $y = \pm\sqrt{2.25}$ or $y = \pm\sqrt{0.25}$. For each square root we obtain two possible $y$-values; add back the $2.5$ to each to get the four possible $x$-values. A similar approach can be found by observing $(x-1)(x-4) = (x-2)(x-3) - 2$; now denote the LHS by $y$ so that the original equation becomes $y(y+2) = 0$; solve for $y$ using the quadratic equation, etc. – Benjamin Dickman Aug 19 '14 at 6:44

Two-column proofs

Usually the only proofs that students see upon graduating from high school are the geometry "two-column" proofs, and trying to convince them that the essence of mathematical proof lies not in the form but in the logical deductive argument takes a lot of convincing.

Anna Varvak

Do students even see the two-column proofs any more? From some things I've read I've gotten the impression that those have been pushed aside in favor of just not proving anything at all. – Michael Lugo Oct 28 '09 at 23:11
They certainly do. – Akhil Mathew Oct 29 '09 at 0:06
If students are taught that two-column proofs are the only kind there is, then I agree that they could be harmful.
However, I think the framework of two-column proofs can be extremely helpful in teaching students to think through the underlying structure of a proof before trying to write it out in paragraph form, because it helps them avoid vague hand-waving arguments. When I teach undergrads how to do proofs, I have them write two-column proofs first, and then explain that "This is what the proof looks like naked. But to take it out in public, you need to put clothes on it." – Jack Lee Aug 18 '10 at 17:30
...what is a two-column proof? – Piero D'Ancona Aug 18 '10 at 19:57
A two-column proof is a proof arranged as a series of numbered statements, with the statements in the left-hand column and corresponding justifications in the right-hand column. This used to be the way proofs were universally taught in US high-school geometry courses. They're still taught this way, but somewhat less universally, I think. – Jack Lee Aug 18 '10 at 20:39

"Stacks are schemes with groups attached to points."

I don't know how much damage this has caused, but I never understood how it was actually helpful to anybody. Not only is it hand-wavy (which is okay for a heuristic), but it's hand-wavy in a way that can't really be corrected (because it's false). My feeling is that people who adopt this heuristic are trapped. If they use the heuristic to come up with a result, it's very hard to sharpen the reasoning to turn it into a proof. You have to just start from scratch and not use the heuristic.

Anton Geraschenko

How do I up-vote answers multiple times?!?! – Kevin H. Lin Oct 24 '09 at 23:02
By leaving a comment explaining that the answer is so great others just have to upvote it. You convinced me, by the way, to give my last daily vote :). – Ilya Nikokoshev Oct 24 '09 at 23:50
Anton: ok, the heuristic of "groups attached to points" is very incomplete, but... so how do you (heuristically) imagine a stack, you really think of it as a forest of objects and arrows over the category of schemes?? [*/G]? Orbifolds? Orbifold curves? Gerbes? – Qfwfq Apr 25 '10 at 19:14
@unknown: How do you (heuristically) imagine schemes? It's fine to use terminology like "fat point" so long as you keep in mind that the "fatness" of a point is not all the information there is: Spec(k[ε]/ε³) is different from Spec(k[x,y]/(x²,xy,y²)), even though they're both "fat points of order 3". Similarly, points of stacks do indeed have automorphism groups, but it is important not to think that that's all there is to it. I guess my point was that I feel like too many people take this heuristic as the definition, so they are not sufficiently mindful of its limitations. – Anton Geraschenko Apr 25 '10 at 22:56
This seems to me to be one of those heuristics which is very useful as a first approximation, but very misleading if one starts to think of it as the whole story. – Peter LeFanu Lumsdaine Sep 27 '10 at 18:21

"Generalization for the sake of generalization is a waste of time"

I think that generalization for the sake of generalization can be rather fruitful.

Gil Kalai

Whoever first said that had in mind one or two specific examples of empty or shallow generalizations, and generalized based on those examples, purely for the sake of generalization. – Tracy Hall Aug 18 '10 at 22:18
I'm not sure if this statement can be generalized ...
– Hagen von Eitzen Dec 17 '14 at 15:55

Linear algebra purely as row manipulations. I've written about this here:

Students stuck in a rut of thinking of matrices as a clever way to arrange numbers will get lost and confused; I know this because I was one of those students. I had to "de-program" what I was taught in high school before I could grasp what was going on.

Jason Dyer

Agreed. It's really hard to internalize what all those intermediate steps in a row reduction actually mean. – Qiaochu Yuan Oct 24 '09 at 21:11
I had no idea why matrices would exist until beginning the linear algebra class I'm currently in. They seemed perverse and nonsensical. They really don't belong in high school math, frankly. I didn't even remember how to multiply them until I refreshed myself recently. – DoubleJay Oct 25 '09 at 16:47
By the time I got to linear algebra last year, I had already totally forgotten how to multiply matrices. Luckily, for proofs, the definition of matrix multiplication is a better way to prove something than drawing out (with ...'s) a big nxn matrix. – Harry Gindi Apr 25 '10 at 15:43
I didn't come across matrices until university, but I wholeheartedly agree that linear algebra should not begin with matrices and their operations. I didn't get a proper view of linear algebra (especially the determinant, which was basically taught by giving the definition and making the students calculate the determinant of a general four-by-four matrix by hand) until I read Sheldon Axler's "Linear Algebra Done Right". There the pedagogical idea was to begin with linear mappings and to note as a side remark how they can be presented with these funny squares of numbers etc... – Rami Luisto Apr 17 '13 at 20:13
Picking a basis in a vector space is the root of much evil – Hagen von Eitzen Dec 17 '14 at 15:52

"Truth is binary. If a theorem has been proven once, there is no need for a second proof."

darij grinberg

Is this a quote? If yes, whom do you quote? – eins6180 Dec 2 '14 at 21:23
I am not quoting anything. I am merely trying to clarify that both of my sentences are part of the false heuristic, rather than the first being the false heuristic and the second being its refutation. Maybe I should have used parentheses, but I don't want to be that guy. – darij grinberg Dec 2 '14 at 22:44
A genuinely better proof, in the sense that you feel pretty certain your argument is easier to follow/more intuitive and perhaps even shorter, is ALWAYS of value. And we should not discourage people from publishing their work when they chance upon such an improvement. It makes the field more accessible to newcomers and speeds up advancement. – Michael Cotton Oct 6 '17 at 4:34

Similar to Tom's answer, a vector is a mathematical quantity with both a magnitude and a direction. Useful for distinguishing between speed and velocity but little else.

The above is a typical definition from a physics textbook I had on the shelf; here in British Columbia, vectors are introduced in high school physics but not high school math. By the time students get to linear algebra in first- or second-year university, it can be hard to convince them that a real number (much less a polynomial) can be a vector.
Usually, you have to resort to "a real number does too have a direction: positive or negative" and even then they don't believe you, because a scalar is a mathematical quantity with a magnitude and no direction, and so if real numbers are vectors, how can they be scalars? Don't even ask about function spaces.

Ross Churchley

My mother had an old "Advanced Calculus" book lying around when I was in high school. It mentioned this old chestnut and commented that it is a poor definition because some things are vectors but have neither magnitude nor direction (like scalars) and some things have both but are not vectors (like trains). – Ryan Reich Nov 19 '11 at 7:06
+1: it's just wrong for so many reasons. For one thing, it sounds sort of like a reduction of math to physics or something. For another, you need something like an inner product to make sense of it. But worst of all, it's totally ass-backwards when it comes to abstract mathematics, because "vector" has no independent meaning. Rather, a "vector" just means an element of some given vector space, which is a set equipped with ... so it's the concept of vector space which is primary, not vector! Paul Halmos had a similar rant in his automathography. – Todd Trimble♦ Aug 25 '12 at 19:54
If you are trying to say that $\mathbb R$ is a real vector space, do people really object that $-3$ and $+3$ only have magnitudes, and not directions? I prefer an actual definition over a misleading characterization, but I don't think this one leads to big problems. – Douglas Zare Apr 17 '13 at 20:47
I recently heard someone joke that a movie must be a vector, since it has both length and direction. – Gerry Myerson Oct 20 '16 at 22:45
The magnitude-and-direction definition doesn't even really work in physics. In relativity, you can't define a vector by its magnitude and direction, because a nonzero vector can have a zero magnitude. – Ben Crowell Jan 18 '17 at 13:51

One extremely harmful heuristic I held until fairly recently: identifying math with algebraic manipulation. When asked to prove an identity or an inequality I would often dive straight into algebraic manipulation of the relations that I knew, wasting many many hours of my time. I have found that it is much more useful to try and test statements against examples I already know, and to try and rephrase identities and inequalities in terms of a statement in natural language that I have some intuition for.

Kevin Teh

"Categories can be specified by objects alone."

It's easy to get this impression, because people who are familiar with the categories in question already know the morphism structure, and don't bother to specify it. There is a related heuristic concerning the composition law, but it doesn't seem to burn people as often.

S. Carnahan

Similar abuses of language include naming a model category by its fibrant objects ("the model category of quasicategories") or a 2-category by its 1-morphisms ("the 2-category of spans"). – Reid Barton Oct 24 '09 at 22:03
yet nobody is brave enough to name categories from the name of the arrows, as if we said "category of continuous mappings" for Top, etc. – Pietro Majer Jun 25 '10 at 10:55
@Pietro With the exception of Ehresmann and his school.
:-) – Robert K Mar 13 '11 at 15:24
I'd like to hear a convincing example where this has really been a problem. Usually there's a default notion of morphism (think of the category of sets, for instance), and in my experience, when anyone departs from the default, they make a point of it (e.g., the category or bicategory of sets and relations -- see, I didn't specify the 2-cells just now!). I hope Thierry can remember the details of his tale. – Todd Trimble♦ Aug 25 '12 at 19:40
Ironically, I just had an example the other day (linear codes) where it wasn't completely clear to me what the correct notion of isomorphism should be!! So this is me answering my former (August 25 2012) self. – Todd Trimble♦ Apr 18 '13 at 15:24

That there is something weird and unsavory about field extensions that are not separable and that serious contemplation of such things should be put off to the indefinite future. (In fact, much of the richness and "pathology" of geometry in characteristic p is easily understood once one has a firm grasp of how field extensions behave.)

Pete L. Clark

Moreover, the heuristic that there is something weird about the "theory of the automorphism groups" of inseparable extensions. Rather, the automorphisms that do exist are perfectly fine; it's just that inseparable extensions are more rigid, so there are fewer of them. – Jay Apr 27 '10 at 2:36
@Jay True in one sense, false in another. I remember in grad school several of us got interested in computing the group scheme of automorphisms of an inseparable extension. Its length is more than the degree, although all of that length is nilpotent, so you don't see it in the actual automorphisms. – David E Speyer Apr 11 '11 at 12:16

"A continuous function is one you can draw without raising the pencil"

This has terrible disadvantages when generalizing functions defined on a real interval to non-connected sets, non-compact sets and general topological spaces.

Bruno Stonek

oh and I heard of a student claiming that "x+1" is not continuous because you need to raise the pencil at least twice when you write it. – Pietro Majer May 22 '10 at 16:57
@Pietro: Se non è vero, è ben trovato! ("If it's not true, it's well invented!") – Victor Protsak May 23 '10 at 7:06
Victor: compliments, very good knowledge of Italian -and Italians – Pietro Majer May 23 '10 at 22:35
Pietro, that's just too funny (albeit in a sad way). For that matter, $x$ is discontinuous, unless you're in the habit of making your $x$'s look like $\alpha$'s. – Todd Trimble♦ Aug 25 '12 at 19:58
If $x+1$ is discontinuous because you need to raise the pencil, does it even pass the vertical line test? – Joe Berner Oct 20 '16 at 13:26

In elementary school, there are false principles which take a lot of effort to overcome:

Math problems have one answer.
There is one right method.

These may be ok (though the second is debatable) when you are working on $1+2$, but not when you are supposed to isolate a variable, to graph a function, to recognize how you can apply the chain rule, to solve a complicated word problem, or to prove something. Many students don't think math is a place to experiment or to apply creativity. They are afraid to take incorrect steps even when it is no longer convenient or possible to say what the right first step is. There is an interesting app called Dragonbox.
It is very popular in Norway. When children think of algebra as a puzzle or game, they feel free to experiment, and they quickly learn to do things like isolate variables, which usually give algebra students trouble. See also Terry Tao's blog posts on gamifying algebra. Students can learn to solve the problems, but have difficulty because these incorrect principles get in the way.

Douglas Zare

The opposite of Qiaochu's dictum is just as misleading - "formulas are functions". There are a lot of non-denoting expressions! It's just that mathematicians don't tend to write non-denoting terms very often. Of course, there's a good reason for that - you can't prove anything interesting about non-denoting terms (or rather, way too much). But then students never get the intuition that there are expressions which are 'junk', nor tools to prove that something is 'junk'. My favourite 'junk' expression is $$1/\frac{1}{\left( x - x \right)}$$ Lest you think this is not very important, try to "teach" first-year calculus to a computer, and you'll see how these non-denoting terms are most troublesome.

Jacques Carette

"Vectors are directed line segments." When worded this way, this utterance is only acceptable if the student is satisfied with getting on his or her bicycle at the end of class and never returning to mathematics again.

Well...in principle, you could define a vector of, say, R^2 to be an equivalence class of "directed line segments". – Qfwfq May 11 '10 at 12:26
That's verbatim how I learned the definition of vector. But the "equivalence class" part of it changes everything (and did not go over too well with many of the other students; it was junior high after all...) – Thierry Zell Aug 18 '10 at 22:06
This was (more or less) the definition I heard when I was 7 or 8. I think it's great for a seven or eight-year-old, but probably not so great for an undergraduate mathematics major. :) – apnorton Oct 17 '14 at 19:41
Could you say in more detail what's wrong with this one? In an affine space, a directed line segment is indeed the same thing as a tangent vector. And there's no need for equivalence classes: line segments based at different points live in different tangent spaces, so they shouldn't be identified (although all the tangent spaces are canonically isomorphic through translation). I certainly agree that it's harmful to give the impression that all vectors are directed line segments, but I think it's very true and useful to point out that all directed line segments are vectors. – Vectornaut Feb 15 '15 at 4:06

Not sure if this qualifies exactly, but I can never remember which theorems of group theory apply to finite groups, and which ones apply to groups in general. Anytime I remember a result, I have this sinking feeling that it appears in a textbook preceded by "for the remainder of this section, let G be a finite group." I'm not sure how well-founded this fear is (other than the theorems that obviously don't make sense for infinite groups, like the Sylow theorems).

Gabe Cunningham

By the way, the Sylow theorems make sense (and are true, I think) for infinite groups if you make a few modifications. A p-Sylow subgroup is a maximal subgroup which is a p-group. The first theorem (existence) is obvious by Zorn's lemma. The second (that all p-Sylows are conjugate) is interesting. The third is interesting if the index of a p-Sylow is finite or if the number of p-Sylows is finite.
$\endgroup$ – Anton Geraschenko Oct 24 '09 at 21:47 $\begingroup$ There are also profinite Sylow theorems, yielding the existence of a maximal pro-p subgroup. The proofs are relatively straightforward extensions of the finite proofs. $\endgroup$ – S. Carnahan♦ Oct 24 '09 at 21:54 $\begingroup$ This got me in a lot of trouble in my first-year graduate algebra class. I also had a habit of forgetting that infinite groups even exist, which is the same sort of thing. $\endgroup$ – Michael Lugo Oct 24 '09 at 22:06 $\begingroup$ @ML: right. I don't think textbooks can be fairly construed to be confusing about which results apply only to finite groups. BUT most undergraduate algebra textbooks I have seen certainly give the impression that finite groups are more important, more natural, and more studied than infinite groups, when many if not most mathematicians would say that the reverse is true. $\endgroup$ – Pete L. Clark Mar 1 '10 at 0:08 $\begingroup$ I was tempted to try adding something like this to the false beliefs question. At a higher level, the same becomes true with the properties "finitely generated" or "residually finite". $\endgroup$ – Jonathan Kiehlmann May 9 '11 at 8:35 Almost any heuristic can be "most harmful" if used by a teacher in a situation when the audience does not know why it makes sense, and without an explanation. This is especially dangerous in the frequent case that the heuristic does not actually seem reasonable to a person seeing it for the first time, since it makes sense only in some ways but not others. It might require months of experience for an uninitiated person to understand how and why it applies. For example, the heuristic of schemes as manifolds is such -- every algebraic geometer understands it, but it actually is harmful to a person who is seeing schemes for the first time (such a person would very likely interpret this heuristic as saying that affine schemes are trivial to understand). The same applies to "integration is the inverse of differentiation", and to some of the other answers to this question. Of course, these heuristics are also the most useful ones, once you (and any audience you might have) actually understand them. The whole point of learning math is to gain more such heuristics, and to make the ones you have more precise. For this reason, it seems to me that the use of such heuristics on an unprepared audience is the most common problem in the lectures by the very best mathematicians. A related problem is an abundance of statements that are not strictly true, but "correct in spirit". Again, this may be very useful in research or when talking to a person of appropriate sophistication, but it is very bad for students if such statements are used carelessly and without explanation. P.S. This whole answer is generalization for the sake of generalization. Was it a waste of time, I wonder? Ilya Grigoriev Also not really a heuristic, but "differentiation is easy," as encoded in the following two sub-heuristics: Differentiation is just repeated application of the product and chain rules, and Most functions are differentiable most of the time. Edit: Someone doesn't seem to like this answer, so I'll expand.
Students who leave calculus with this impression enter analysis with a disadvantage: differentiation is not a property that "most" functions have in any reasonable sense, not even continuous ones, and to compute the derivative of a function that isn't given as a sum of compositions of "elementary" functions requires an entirely different mindset than the one that values the product and chain rule. $\begingroup$ I think your argument is more effective against a slogan like, "all interesting functions are differentiable". In my (limited) experience, differentiation tends to be algorithmic in practice, although it can be unstable in numerical applications. This is in contrast to integrals, which exist much more often and tolerate numerical error well, but are generally very difficult to compute. $\endgroup$ – S. Carnahan♦ Oct 24 '09 at 22:38 $\begingroup$ Somewhat related is the assertion that "differentiation is more fundamental", since it is "easier" and usually taught first. Not only is this misguided for the reasons you and Scott cite, but following Roger Penrose we can also turn the argument upside down in the complex plane by using Cauchy's theorem to define the derivative of a function by means of a contour integral. I've always hoped there was some alien civilization in another spacetime where derivatives were actually introduced this way. $\endgroup$ – jvkersch Mar 1 '10 at 12:39 I wish to point attention to Pete Clark's very relevant initial comment. The term heuristic is often taken as synonymous with non-rigorous method, based only on intuition or experience. I personally dislike this sense of the word in mathematics, and I suspect it is not even historically correct (now I'm curious to check the use of it in the classic authors). The etymology of the adjective, from the verb εὑρίσκω (to find, discover), means "aimed to find". As I see it, it is exactly the method we follow when looking for a solution of a problem: using all implications of being a solution in order to identify a candidate solution. Of course, the heuristic is only half the job, and it is only rigorous if followed by part 2: checking the solution. But there's a very smart idea in it. For instance: solving an equation, transform it, but do not check the equivalence of each single step, just follow a chain of implications. So, what is harmful is not the heuristic method, but leaving out the (often less creative) part 2. That said, here's my example: let F be a smooth function bounded below (or a functional) with only one critical point. Then one would argue: Any minimum point $x$ of F satisfies F'(x)=0, whose only solution is $x_0$. Hence, $x_0$ is the minimizer. False!, if one does not check that F($x_0$)≤F(x) for all x ("direct method in Calculus of Variations") or if one has not proved the existence of a minimizer (indirect method). Many students make this mistake... but not only them! Pietro Majer $\begingroup$ That example isn't a heuristic though. Just a false idea. $\endgroup$ – Michael Cotton Oct 6 '17 at 5:29 Any attempt to draw a fat Cantor set is a bad heuristic in my opinion. I saw such a diagram as an undergrad and believed for a while that there were intervals contained in the fat Cantor set. I don't think it's possible to express in a picture that a fat Cantor set has positive Lebesgue measure and has empty interior.
$\begingroup$ I'm upvoting because until now I'd only ever heard of fat Cantor sets in passing, and if you hadn't said this, I probably would have been misled in exactly the same way you were. $\endgroup$ – Vectornaut Feb 15 '15 at 4:11 $\begingroup$ An animation would be better (now possible with computers). Zoom in on it and see that the seemingly-"interval" areas have holes, then zoom in on the seeming-"interval" areas there, and so forth, until one "gets the point". $\endgroup$ – The_Sympathizer Feb 15 '15 at 6:45 $\begingroup$ Understandable. They often do a bad job explaining that any Cantor set is just an embedding of $2^\omega$ into the space. And it's not that hard to show later that this is essentially all your proper closed subsets. :/ $\endgroup$ – Michael Cotton Oct 6 '17 at 5:27 A natural (iso)morphism is one that is "canonical", or defined without making "choices", or that is defined "in the same way" for all objects. This is a heuristic I found in every introductory text on category theory I can remember reading (and usually followed with the single/double dual of a vector space as an example) and it took me quite a while to realize that this is not only inaccurate, but just plainly wrong. Explanation of "wrongness": A natural morphism is a morphism between two functors. That is, a morphism in the category of functors between two categories. And as such, it should be thought of, as usual, as mapping the "data" in a way that preserves the "structure", and choices have really nothing to do with it. For example, thinking of a group $G$ as a one object category, functors from it to the category of sets form the category of $G$-sets. A morphism of $G$-sets is a map of sets preserving the action of $G$ and not a map of sets that "does not involve choices". Same goes for other familiar categories of functors (representations, sheaves etc.) Another example is the category of functors from the one object category $G$ again to itself. To give a natural map (isomorphism) from the identity functor of $G$ to itself is just to pick an element of the center of $G$. I don't imagine anyone describing it as doing something that "doesn't involve choices". Moreover, every category $C$ is the category of functors from the terminal one-object-one-morphism category to $C$. Hence, every morphism in any category is a "natural morphism between functors", so there is really no point in specifying a heuristic for when a morphism is "natural". This is utterly meaningless. In the other direction, it is easy to write down "canonical" object-wise maps between two functors that fail to be natural in the technical sense. Consider the category of infinite well-ordered sets with weakly monotone functions. The "successor function" is definitely defined "in the same way" for all objects, but is not a natural endomorphism of the identity functor in the technical sense. Explanation of "harmfulness": Well I guess it is clear that a completely wrong heuristic is a bad one, but I'll just point out one specific example that is perhaps not so important, but shows clearly the problem. When showing that every category is equivalent to a skeletal category there is a very "non-canonical" construction of the natural isomorphisms. I saw several people get seriously confused about this. Some thoughts: One might argue that this heuristic was advanced by the very people who invented category theory (like Mac Lane) and thus, it is perhaps a bit presumptuous to declare it as "plainly wrong".
My guess is that at the time people were considering mainly large categories (like all sets, all spaces, all groups etc.) as both domain and codomain of functors and were focusing on natural isomorphisms. In such situations it is unlikely that the functor will have non-trivial automorphisms (or it will have very few and "uninteresting" ones) and therefore a natural isomorphism will be in fact unique, so maybe this is the origin of the heuristic (It is just a guess, I am not an expert on the history of category theory). This relates to the point that by definition, if specifying an object does not involve choices, then it is unique (this is a tautology). So when we say that an isomorphism is "canonical" we usually mean that given enough restrictions, it is unique (and not just natural in the technical sense). For example, the reason we identify the set $A\times (B \times C)$ with the set $(A\times B)\times C$ is not because there is a natural isomorphism between them, but because if we consider the product sets with the projections to $A,B$ and $C$, then there is a unique isomorphism between them. And this is in line with the general philosophy of identifying objects when (and only when) they are isomorphic in a unique way. In contrast, we don't identify two elements of a group $G$, just because they are conjugate (This is "naturally isomorphic" viewed as functors of one object categories $\mathbb{Z}\to G$) precisely because this natural isomorphism is not unique. Well, I did not intend this to get so lengthy... I was just anticipating some "hostile" responses defending this heuristic, so I tried to be as convincing as possible! KotelKanim $\begingroup$ I think there is a version of this heuristic that is mostly accurate and useful: almost any "canonical" construction is functorial or a natural transformation. As you point out, this isn't always the case (and the converse certainly isn't the case in general), but in my experience the exceptions that arise in practice are quite rare and it is not difficult to get an intuition for detecting the rare cases when it fails. A special case of this that is actually literally always correct is that any canonical construction is functorial/natural with respect to isomorphisms. $\endgroup$ – Eric Wofsey May 29 '15 at 13:34 Writing a proof as a chain of expressions connected by equals signs whether they are appropriate or not. SixWingedSeraph $\begingroup$ That's not really a heuristic, that's a misunderstanding of the equals sign. $\endgroup$ – Anna Varvak Oct 28 '09 at 15:46 "Teach the subject before its applications." Some important constructions seem quite pointless until you understand the rationale for them. For example, I recall finding the lectures in freshman linear algebra on constructing Jordan Normal Form extremely boring and pointless until JNF came up in the context of solving linear ODEs a year later. "That's what Jordan Normal Form is for!" - I thought - "I wish I knew that a year ago!" $\begingroup$ As a counterpoint, I never understood Jordan normal form until I learned that it was a special case of the classification of finitely generated modules over a PID. In other words, my difficulty with Jordan normal form came from teaching this application of representation theory before the subject! $\endgroup$ – Vectornaut May 28 '15 at 22:14 $\begingroup$ Both of your points are true.
It is a good idea to bring up the Jordan normal form before the theory of modules over a PID, but it is not at all necessary to teach its proof and the algorithm before the general case of a PID. $\endgroup$ – darij grinberg May 29 '15 at 0:12 $\begingroup$ Well, I think most good teaching is either motivated theory or theoretically sound applications, because these two things should almost never live without each other. $\endgroup$ – Juan Sebastian Lozano Oct 20 '16 at 20:38 "Differentiation and integration are inverse operations." To many calculus students, this is their conception of the fundamental theorem. There's truth to this heuristic, of course, but one needs to be constantly informed by a much deeper understanding of integration (and differentiation) in order to properly wield this correspondence in most situations beyond those encountered in a first course in calculus. Zach Conn $\begingroup$ Generalizing differentiation and integration leads us to see that they differ as left- or right-sided inverses. One side generalizes to the Lebesgue differentiation theorem; the other side generalizes to bounded variation and absolute continuity. $\endgroup$ – user2529 Apr 25 '10 at 14:41 $\begingroup$ I disagree with this: I think it is a fantastic heuristic, indeed the single most important heuristic of first year calculus. To argue against it is mostly to say "I don't like heuristics", it seems to me. $\endgroup$ – Pete L. Clark Aug 2 '12 at 8:21 $\begingroup$ Well, I didn't really have first year calculus in mind when I wrote this answer. Sure, it's a great heuristic at that level, but it's not so great later on. I guess the lesson here is that you can't really talk about a heuristic without talking about the context as well. My answer was less about the heuristic being bad, and more about it being bad to cling onto a heuristic as you transition into territory where it ceases to be so fantastically useful. $\endgroup$ – Zach Conn Nov 18 '12 at 5:21 $\begingroup$ It sounds like Zach is saying that some unlearning has to take place if they go on in math. That's true, but at the same time there are so many viewpoints on what differentiation "is" (see for example Thurston's list in the beginning of his Proofs and Progress paper) that it's hard to get more than just a few across in a semester or even year-long course, so I suppose some unlearning will have to take place anyway. The inversion heuristic has an advantage of being memorable. $\endgroup$ – Todd Trimble♦ Oct 27 '18 at 11:29 From Keith Devlin's article http://www.maa.org/devlin/devlin_06_08.html "Multiplication is repeated addition." This is true when multiplying natural numbers, but is a special case of a scaling operation in the reals. We know it is also a rotation in the complexes, but that should probably be left out at the beginning, although it might be interesting to think about how one would include them at the beginning. Devlin also mentions "exponentiation is repeated multiplication." Anthony Pulido $\begingroup$ It's an incomplete heuristic, one that does work only for very special cases. But does this mean it is a bad heuristic? The only case where I can imagine getting bitten by it is when defining a linear map, forgetting the $f\left(\lambda x\right)=\lambda f\left(x\right)$ condition. On the other hand, here is a much more malign heuristic: Lie brackets are commutators. Very dangerous when you consider the tensor algebra of a Lie algebra.
$\endgroup$ – darij grinberg Apr 10 '11 at 21:20 $\begingroup$ On the other hand, "the exponential map is an infinitely repeated infinitesimal multiplication" is a very good heuristic to have, particularly in Lie groups... $\endgroup$ – Terry Tao Dec 13 '11 at 19:20 $\begingroup$ But this rule has such a nice direct application: it shows that all rings (with unit) admit a map from $\mathbb Z$. $\endgroup$ – Elizabeth S. Q. Goodman Jan 27 '12 at 8:48 "you'll need a computer for that". $\begingroup$ I don't think Zeilberger would disagree with that "heuristic"/advice! $\endgroup$ – Quadrescence Oct 3 '10 at 2:15 Two bad principles that taste worse together: Decimals are the true numbers. Rounding makes no difference. Since students learn about decimals after they've learned about whole numbers and fractions, they might assume that decimals are always the preferred way to represent real numbers, and so everything should be converted to decimals. Meanwhile, since in general one cannot be expected to write out an infinite decimal expansion, they might assume that stopping after two decimal places makes no difference. I'm not saying that approximations are bad. But it's bad to approximate if you have no sense of your error tolerance, or even of the fact that you're introducing an error at all. Here are two perverse outcomes. Imagine a problem whose answer is, say, $\pi/4$, and a solution that ends like this: $$\text{blah blah blah} = \pi/4 = 3.14/4 = .785.$$ I'm sure that there are some situations where it's important to know that your answer is between $.78$ and $.79$. But much of the time, conversion to decimals obscures what's going on. (Small sample size alert!) About half of my calculus students will, on the first day of class, mark the equation $\frac{1}{3} = 0.33$ as ``true''. Jeff Adler $\begingroup$ What fraction of your students do you want to mark $1/3=0.33$ as true? There are different conventions people use, like the way mathematicians use "if" to mean "iff" in definitions like "$x$ is even if there is some integer $k$ so that $x=2k$." It's perfectly reasonable to say $1/3=0.33$ in some contexts. It looks strange because we don't usually use the $=$ sign to mean that, but others do, such as in the $f(n) = O(g(n))$ notation. $\endgroup$ – Douglas Zare Oct 21 '16 at 5:31 $\begingroup$ As you will know because I've told you this in person, I frequently encounter students who think that $\sqrt2 \approx 1.41$ but $\sqrt2 = 1.41421356$ (since it's all the digits displayed on the calculator). $\endgroup$ – LSpice Nov 28 '17 at 20:10
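A quick numerical aside (our own addition, in GNU Octave): even a two-decimal truncation is visible immediately, which is exactly the answer's point about error tolerance.

x = pi/4                 # 0.785398...
y = 3.14/4               # 0.785, the "convert to decimals first" version
err = abs(x - y)         # about 4e-4, before any further computation
disp(1/3 == 0.33)        # prints 0: the equation 1/3 = 0.33 is false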
Discrete probability distribution calculator

A discrete random variable takes its values in a list that has either a finite number of members or is at most countable. Its probability distribution is often denoted by $p_X(\cdot)$; in general, $P(X=x)=p_X(x)$, and $p_X$ can often be written as a formula.

The discrete uniform distribution, as the name says, is a simple discrete probability distribution that assigns equal (uniform) probabilities to all values that the random variable can take. A discrete random variable $X$ is said to have a uniform distribution with parameters $a$ and $b$ if its probability mass function (pmf) is given by $$f(x; a,b) = \frac{1}{b-a+1}; \quad x=a,a+1,a+2,\cdots,b,$$ with distribution function $$P(X\leq x) = F(x) = \frac{x-a+1}{b-a+1}; \quad a\leq x\leq b.$$ The expected value and variance of a discrete uniform random variable $X$ are $$E(X)=\frac{a+b}{2}, \qquad V(X)=\frac{(b-a+1)^2-1}{12}.$$ The discrete uniform distribution calculator can help you to determine the probability and cumulative probabilities for a discrete uniform distribution with parameters $a$ and $b$. Below are a few solved examples with a step-by-step guide on how to find probabilities and the mean or variance of a discrete uniform distribution.

1. Roll a six-faced fair die, and let $X$ denote the number appearing on the top of the die. Then $X$ takes the values $1,2,3,4,5,6$ and follows a $U(1,6)$ distribution, with pmf $$P(X=x)=\frac{1}{6-1+1}=\frac{1}{6}; \quad x=1,2,\cdots,6.$$ The probability that the number on top is less than 3 is $$P(X<3)=P(X=1)+P(X=2)=\frac{2}{6}=0.3333,$$ and the mean is $E(X)=\frac{1+6}{2}=3.5$.

2. Let the random variable $X$ have a discrete uniform distribution on the integers $0\leq x\leq 5$, and let $Y=20X$; determine the mean and variance of $Y$. The pmf of $X$ is $$P(X=x)=\frac{1}{5-0+1}=\frac{1}{6}; \quad x=0,1,2,3,4,5,$$ so $$E(X)=\sum_{x=0}^{5}x\cdot\frac{1}{6}=\frac{15}{6}=2.5, \qquad V(X)=\frac{6^2-1}{12}\approx 2.92$$ (for the variance one can also compute $E(X^2)$ directly). Hence $$E(Y)=E(20X)=20\times 2.5=50, \qquad V(Y)=V(20X)=20^2\times 2.92=1168.$$

3. A telephone number is selected at random, and $X$ denotes its last digit, so that all the numbers $0,1,2,\cdots,9$ are equally likely. a. The probability that the last digit is 6 is $P(X=6)=\frac{1}{10}=0.1$. b. The probability that the last digit is less than 3 is $P(X<3)=P(X=0)+P(X=1)+P(X=2)=0.3$. c. The probability that the last digit is greater than or equal to 8 is $P(X\geq 8)=P(X=8)+P(X=9)=0.2$.

4. A random variable $X$ has probability mass function $P(X=x)=k$ for $x=4,5,6,7,8$, where $k$ is constant. a. Since the probabilities must add to one, $k=\frac{1}{5}$. b. The mean is $E(X)=\frac{4+8}{2}=6$ and the variance is $V(X)=\frac{(8-4+1)^2-1}{12}=\frac{24}{12}=2$. c. The probability that $X$ is less than or equal to 6 is $P(X\leq 6)=P(X=4)+P(X=5)+P(X=6)=\frac{3}{5}=0.6$.

5. Let $X$ be uniform on $\{9,10,11\}$, so $P(X=x)=\frac{1}{3}$. Then $$E(X)=\frac{9+10+11}{3}=10, \qquad E(X^2)=\frac{81+100+121}{3}=100.67, \qquad V(X)=E(X^2)-[E(X)]^2=0.67.$$

Hope you like this article on the discrete uniform distribution. You can refer to the recommended articles below for the theory of the discrete uniform distribution, with a step-by-step guide on the mean of the discrete uniform distribution and a proof of its variance. You can also use the step-by-step calculator to get the mean ($\mu$) and standard deviation ($\sigma$) associated to a discrete probability distribution. A related geometric distribution calculator finds geometric probabilities from the total number of occurrences and the probability of success, and can also solve for the number of trials required; classical geometric probability covers examples such as the chance of three random points on a plane forming an acute triangle, or the mean area of the polygonal region formed by randomly oriented lines over a plane. Related: Continuous Uniform Distribution Calculator; Weibull Distribution Examples - Step by Step Guide; Karl Pearson coefficient of skewness for grouped data.
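The computations in these examples are mechanical enough that a short script reproduces them. Here is a minimal GNU Octave sketch (our own addition; the variable names are not from the original page), checked against example 5:

a = 9; b = 11;                     # support {a, a+1, ..., b}
x = a:b;                           # the values X can take
p = ones(size(x)) / (b - a + 1);   # uniform pmf
EX  = sum(x .* p);                 # mean: (a+b)/2 = 10
EX2 = sum(x.^2 .* p);              # second moment: 100.67
VX  = EX2 - EX^2;                  # variance: ((b-a+1)^2 - 1)/12 = 0.67
printf("E(X) = %g, V(X) = %g\n", EX, VX);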
What is the meaning of writing a state in its Bloch representation? What is the meaning of writing a state $|\psi\rangle$ in its Bloch representation? Would I be correct in saying it's just writing out its Bloch vector? notation bloch-sphere bhapi $\begingroup$ The short answer is "yes." Read Sanchayan's answer for a complete understanding :) $\endgroup$ – Will Yes. The Bloch sphere formalism is used for geometrically representing pure and mixed states of two-dimensional quantum systems, a.k.a. qubits. Any pure state $|\Psi\rangle$ of a qubit can be written in the form: $$|\Psi\rangle = \cos\frac{\theta}{2}|0\rangle + e^{i\phi}\sin\frac{\theta}{2}|1\rangle$$ where $0\leq \theta\leq \pi$ and $0\leq \phi\leq 2\pi$. This $|\Psi\rangle$ can be represented on the Bloch sphere as the point with polar angle $\theta$ and azimuthal angle $\phi$. The Bloch vector $\vec{a}\in \Bbb R^3$ is basically $(\sin\theta \cos\phi, \sin\theta\sin\phi, \cos \theta) = (a_1,a_2,a_3)$. To represent mixed states you need to consider the corresponding density operator $\rho$. The set of states of a single qubit can be described in terms of $2\times 2$ density matrices, and as $\{I,X,Y,Z\}$ forms a basis for the vector space of $2\times 2$ Hermitian matrices, you can write the density operator as $$\rho = a_0I+a_1X+a_2Y+a_3Z.$$ As density matrices always have trace $1$, and here $\mathrm{tr}(\rho)=2a_0$, $a_0$ is necessarily $\frac{1}{2}$; rescaling the remaining coefficients so that they form the Bloch vector gives $$\rho = \frac{1}{2}\left(I+a_1X+a_2Y+a_3Z\right) = \frac{1}{2}\begin{pmatrix}1+a_3 & a_1-ia_2 \\ a_1+ia_2 & 1-a_3\end{pmatrix}.$$ So from here you can find the Bloch coordinates $(a_1,a_2,a_3)$ of any mixed state after performing the Pauli decomposition of the density matrix. If you're wondering what ensures that $|\vec{a}|\leq 1$, it's the positive semidefiniteness! The two eigenvalues of $\rho$ are $\frac{1}{2}(1+|\vec{a}|)$ and $\frac{1}{2}(1-|\vec{a}|)$. Thus, to ensure that the second eigenvalue is non-negative, $|\vec{a}|\leq 1$. The three properties of density matrices which you should drill into your brain are: self-adjointness, positive-semidefiniteness and unit trace; prove them as an exercise. Once you determine the values $a_1,a_2$ and $a_3$ from the density operator, you can easily find the location of the qubit state $(\sin\theta \cos\phi, \sin\theta\sin\phi, \cos \theta)$ inside the Bloch sphere. Let me emphasize this point: pure states lie on the Bloch sphere (i.e. $|\vec{a}|=1$) whereas mixed states lie inside the Bloch sphere (i.e. $|\vec{a}|<1$). If you're mathematically inclined, you'll also love to think about the Bloch sphere in terms of stereographic projections; it's excellently summarized in this Physics SE answer. I'll attach the image from there, which is originally from this blogpost (the article is in French, sorry :). Here are a few "further readings" for you: Understanding the Bloch sphere; Density matrices for pure states and mixed states; Why do Bloch sphere wavefunctions have half angles?; Purity of mixed states as a function of radial distance from origin of Bloch ball; Alternative to Bloch sphere to represent a single qubit; Can the Bloch sphere be generalized to two qubits?; Why is an entangled qubit shown at the origin of a Bloch sphere? Essentially, go through the bloch-sphere tag; you'll find several interesting questions and answers, which should clarify most of your beginner confusions about the Bloch sphere formalism. Sanchayan Dutta
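To make the Pauli-decomposition recipe concrete, here is a minimal GNU Octave sketch (our own illustration, not part of the answer above) that recovers the Bloch coordinates from a density matrix via $a_i = \mathrm{tr}(\rho\,\sigma_i)$:

theta = pi/3; phi = pi/4;                         # angles of a pure state
psi = [cos(theta/2); exp(i*phi)*sin(theta/2)];    # the state |Psi>
rho = psi * psi';                                 # density matrix |Psi><Psi|
X = [0 1; 1 0]; Y = [0 -i; i 0]; Z = [1 0; 0 -1]; # Pauli matrices
a = real([trace(rho*X), trace(rho*Y), trace(rho*Z)])
# expected: [sin(theta)*cos(phi), sin(theta)*sin(phi), cos(theta)], with |a| = 1

For a mixed state the same three traces give a point strictly inside the ball, $|\vec{a}|<1$.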
The standard PDEs on a manifold We state and solve the standard second-order linear PDEs on a compact Riemannian manifold: the potential, diffusion, diffraction and wave equations. $\newcommand{\R}{\mathbf{R}}$ $\newcommand{\Z}{\mathbf{Z}}$ 1. The Laplace-Beltrami spectrum Let $M$ be a compact Riemannian manifold (with or without boundary), and let $\Delta$ be its Laplace-Beltrami operator, defined as $\Delta=*d*d$, where $d$ is the exterior derivative (which is independent of the metric) and $*$ is the Hodge duality between $p$-forms and $d-p$-forms (which is defined using the metric). The following are standard results in differential geometry (see e.g. Warner's book chapter 6 https://link.springer.com/content/pdf/10.1007\%2F978-1-4757-1799-0_6.pdf) (1) There is a sequence of $\mathcal{C}^\infty(M)$ functions $\varphi_n$ and positive numbers $\lambda_n\to\infty$ such that $$\Delta\varphi_n=-\lambda_n\varphi_n$$ (2) The functions $\varphi_n$, suitably normalized, are an orthonormal basis of $L^2(M)$. These results generalize Fourier series to an arbitrary smooth manifold $M$. Any square-integrable function $f:M\to\R$ is written uniquely as $$f(x)=\sum_nf_n\varphi_n(x)$$ and the coefficients $f_n$ are computed by $$f_n=\int_Mf\varphi_n.$$ Some particular cases are the habitual Fourier and sine bases (but not the cosine basis), Bessel functions for the disk, and spherical harmonics for the surface of a sphere:
interval $[0,2\pi]$: $\varphi_n=\sin\left(\frac{nx}{2}\right)$, $\lambda_n=n^2/4$
circle $S^1$: $\varphi_n=\sin(n\theta),\ \cos(n\theta)$, $\lambda_n=n^2$
square $[0,2\pi]^2$: $\varphi_{n,m}=\sin\left(\frac{nx}{2}\right)\sin\left(\frac{my}{2}\right)$, $\lambda_{n,m}=\frac{n^2+m^2}{4}$
torus $(S^1)^2$: $\varphi_{n,m}=\sin(nx)\sin(my),\ldots$, $\lambda_{n,m}=n^2+m^2$
disk $|r|\le1$: $\varphi_{n,m}=J_n(\rho_{m,n}r)\,\{\sin,\cos\}(n\theta)$, $\lambda_{n,m}=\rho_{m,n}^2$, where the $\rho_{m,n}$ are the roots of $J_n$
sphere $S^2$: $\varphi_{l,m}=Y^m_l(\theta,\varphi)$, $\lambda_l=l^2+l$
The eigenfunctions $\varphi_n$ are called the vibration modes of $M$, and the eigenvalues $\lambda_n$ are called the (squared) fundamental frequencies of $M$. Several geometric properties of $M$ can be interpreted in terms of the Laplace-Beltrami spectrum. For example, if $M$ has $k$ connected components, the first $k$ eigenfunctions will be supported successively on each connected component. On a connected manifold $M$, the first vibration mode can be taken to be positive $\varphi_1\ge0$, thus all the other modes have non-constant signs (because they are orthogonal to $\varphi_1$). In particular, the sign of $\varphi_2$ cuts $M$ in two parts in an optimal way: it is the Cheeger cut of $M$, minimizing the perimeter/area ratio of the cut. The zeros of $\varphi_n$ are called the nodal curves (or nodal sets) of $M$, or also the Chladni patterns. If $M$ is a subdomain of the plane, these patterns can be found by cutting an object in the shape of $M$, pouring a layer of sand over it, and letting it vibrate by high-volume sound waves at different frequencies. For most frequencies, the sand will not form any particular pattern, but when the frequency coincides with a $\sqrt{\lambda_n}$, the sand will accumulate over the set $[\varphi_n=0]$, which is the set of points of the surface that do not move when the surface vibrates at this frequency. In the typical case, the number of connected components of $[\varphi_n>0]$ grows linearly with $n$, thus the functions $\varphi_n$ become more oscillating (less regular) as $n$ grows. Generally, symmetries of $M$ arise as multiplicities of eigenvalues.
The Laplace-Beltrami spectrum $\{\lambda_1,\lambda_2,\lambda_3,\ldots\}$ is closely related, but not identical, to the geodesic length spectrum, which measures the sequence of lengths of all closed geodesics of $M$. The grand old man of this theory is Yves Colin de Verdière, student of Marcel Berger. Geometry is not in general a spectral invariant, but non-isometric manifolds with the same spectrum are difficult to come by. The first pair of distinct but isospectral manifolds was found in 1964 by John Milnor, in dimension 16. The first example in dimension 2 was found in 1992 by Gordon, Webb and Wolpert, and it answered negatively the famous question of Marc Kac, "Can one hear the shape of a drum?". In 2018, we have many ways to construct discrete and continuous families of isospectral manifolds in dimensions two and above. 2. The standard equations and their explicit solutions The classical linear second order equations (potential, heat, wave and Schrödinger) are all defined in terms of the Laplacian operator in space. Thus, they can be defined readily on an arbitrary Riemannian manifold $M$. If $M$ is compact, the solution can be found explicitly in terms of the Laplace-Beltrami eigenfunctions. Henceforth we will call the expression of a function $f:M\to\R$ as $f=\sum_nf_n\varphi_n$ the Fourier series of $f$, the numbers $f_n$ the Fourier coefficients of $f$ and so on. The simplest case is the Poisson equation $$ \Delta u = f $$ The solution is found by expressing $u$ and $f$ as Fourier series and identifying the coefficients: $$ u(x) = \sum_n\frac{-f_n}{\lambda_n}\varphi_n(x) $$ Notice that since $\lambda_n\to\infty$, the Fourier coefficients of $u$ tend to zero faster than those of $f$, thus $u$ is more regular than $f$ (this is obvious from the equation, since $\Delta u$ is less regular than $u$). Another simple case is the screened Poisson equation $$ \Delta u = \alpha u + f $$ and the solution is found by the same technique: $$ u(x) = \sum_n\frac{-f_n}{\alpha+\lambda_n}\varphi_n(x) $$ This is like the regular Poisson equation, but the regularity is enhanced by $\alpha$. The next case is the heat equation, also called diffusion or smoothing equation: $$ \begin{cases} u_t = \Delta u & (x,t)\in M\times[0,T] \\ u(x,0)=g(x) & x\in M\\ \end{cases} $$ This equation requires an initial condition $g$. The solution is found by separation of variables, which leads to a trivial ODE, resulting in $$ u(x,t)=\sum_ng_ne^{-{\lambda_n}t}\varphi_n(x) $$ It is immediate to check that this expression is a solution of the heat equation with initial condition $g$. Several properties of the solution are visible from this form, most notably that $u(x,\infty)=g_1\varphi_1(x)$ if $\lambda_1=0$, and $0$ otherwise. A pure vibration mode $\varphi_n$ decays exponentially to zero, and the speed of the exponential decay is $\lambda_n$. By combining the heat and Poisson equations, we get the heat equation with source: $$ \begin{cases} u_t = \Delta u + f & (x,t)\in M\times[0,T] \\ u(x,0)=g(x) & x\in M\\ \end{cases} $$ whose solution (the exponential term carries the correction needed to match the initial condition) is $$ u(x,t)=\sum_n\left( \frac{f_n}{\lambda_n}+\left(g_n-\frac{f_n}{\lambda_n}\right)e^{-{\lambda_n}t} \right)\varphi_n(x) $$ The solution of the reverse heat equation $$ \begin{cases} u_t = -\Delta u & (x,t)\in M\times[0,T] \\ u(x,0)=g(x) & x\in M\\ \end{cases} $$ is formally similar $$ u(x,t)=\sum_ng_ne^{{\lambda_n}t}\varphi_n(x) $$ but notice that it blows up, often in a finite time. Both direct and reverse heat equations are of the form $u_t=c\Delta u$, whose solution is $u(x,t)=\sum_n g_n e^{-c\lambda_n t}\varphi_n(x)$.
The constant $c$ is the speed of transmission of heat. An intermediate behaviour between $c>0$ and $c<0$ happens when $c=i$. The linear Schrödinger equation, also called diffraction equation $$ \begin{cases} w_t = i\Delta w & (x,t)\in M\times[0,T] \\ w(x,0)=g(x) & x\in M\\ \end{cases} $$ describes the evolution of a complex-valued function $w$. It can be interpreted as a system of two coupled real equations by writing $w=u+iv$ (here, assuming a real-valued initial condition $g$): $$ \begin{cases} u_t = -\Delta v & (x,t)\in M\times[0,T] \\ v_t = \Delta u & (x,t)\in M\times[0,T] \\ u(x,0)=g(x) & x\in M\\ v(x,0)=0 & x\in M\\ \end{cases} $$ The solution is then $$ w(x,t)=\sum_n g_n e^{-i\lambda_n t}\varphi_n(x) $$ or, in terms of $u$ and $v$: $$ \begin{cases} u(x,t) = \sum_n g_n\cos\left(\lambda_nt\right)\varphi_n(x) \\ v(x,t) = \sum_n-g_n\sin\left(\lambda_nt\right)\varphi_n(x) \\ \end{cases} $$ thus, a pure vibration mode $\varphi_n$ oscillates periodically, at a frequency $\lambda_n$. In terms of $|w|$, this phenomenon is called diffraction. The wave equation is $$ \begin{cases} u_{tt} = \Delta u & (x,t)\in M\times[0,T] \\ u(x,0) = g(x) & x\in M \\ u_t(x,0) = h(x) & x\in M \\ \end{cases} $$ notice that it requires an initial condition and an initial speed. By linearity, we can deal with these separately, and then sum the results. The solution is then $$ u(x,t)=\sum_n\left( g_n\cos\left(\sqrt{\lambda_n} t\right) + \frac{h_n}{\sqrt{\lambda_n}}\sin\left(\sqrt{\lambda_n} t\right) \right)\varphi_n(x) $$ Thus, a pure vibration mode $\varphi_n$ oscillates with frequency $\sqrt{\lambda_n}$. Finally, the wave equation with a force is the most complex case we treat here: $$ \begin{cases} u_{tt} = \Delta u +f& (x,t)\in M\times[0,T] \\ u(x,0) = g(x) & x\in M \\ u_t(x,0) = h(x) & x\in M \\ \end{cases} $$ The solution $$ u(x,t)=\sum_n\left( \frac{f_n}{\lambda_n} + \left(g_n-\frac{f_n}{\lambda_n}\right)\cos\left(\sqrt{\lambda_n} t\right) + \frac{h_n}{\sqrt{\lambda_n}}\sin\left(\sqrt{\lambda_n} t\right) \right)\varphi_n(x) $$ is found by the same methods as above (again the cosine term carries the correction needed to match the initial condition). 3. Discretization and implementation in Octave Except in emblematic cases (rectangle, torus, sphere) the eigenfunctions of an arbitrary manifold $M$ do not have a closed-form expression. For practical computations, we are thus restricted to numerical methods in the discrete case. The most convenient form for this discretization is to represent $M$ as a graph with weights on its edges. In this context, we have the following objects:
The weighted graph $G=(V,E)$, where $V$ is a set of $n$ vertices.
The Laplacian matrix $L$ of this graph, which is of size $n\times n$.
The space $\R^n$, identified with functions $V\to\R$. Thus $\R^n$ is the discrete version of $\mathcal{C}^\infty(M)$.
Typically $L$ will be a matrix of rank $n-1$ with a constant eigenvector of eigenvalue 0. We can find the eigensystem of $L$ by calling eigs(L) in Octave, and transfer the solutions obtained above using the computed eigenvectors and eigenvalues. However, in most cases the solution is more easily obtained by solving a linear problem. To fix the ideas we start with a concrete example: a square domain with flat metric. The following is a complete program that computes the Chladni figures of a square domain.
w = 128; # width and height of the domain
p = sparse(1:w-1, 2:w, 1, w, w); # path graph on w vertices
A = kron(p, speye(w)) + kron(speye(w), p); # Kronecker sum
L = A+A' - diag(sum(A+A')); # graph Laplacian
[f,l] = eigs(L, 64, "sm"); # eigs of smallest magnitude
After running this code, the ith eigenfunction is f(:,i) and the eigenvalues are on diag(l). And now, with Dirichlet boundary conditions (slightly different code)
w = 128; # width and height of the domain
p = sparse(1:w-1, 2:w, 1, w, w) - speye(w); # path graph on w vertices
A = kron(p, speye(w)) + kron(speye(w), p); # Kronecker sum
L = A + A'; # graph Laplacian
[f,l] = eigs(L, 64, "sm"); # eigs of smallest magnitude
For completeness, this is the Octave code that saves the figures above, wrapped in a loop over the modes:
for i = 1:64
  n = sprintf("o/chladni_%03d.png", i); # output filename for the i-th mode
  x = reshape(200*double(0<f(:,i)),w,w); # binarize the sign of the mode
  iio_write(n, x); # write the image
end
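Once the eigendecomposition is available, the explicit solutions of section 2 become one-liners, and the linear-problem route is a single backslash. The following lines are our own addition (a sketch, not part of the original program), reusing w, L, f, l from the Dirichlet block above. Since L is symmetric, the computed eigenvectors are orthonormal, so f'*g gives the Fourier coefficients; the spectral formulas are truncated to the 64 computed modes, and diag(l) holds the negative eigenvalues, i.e. -lambda_n.

g = rand(w*w, 1); # an arbitrary datum g on the grid
c = f' * g; # Fourier coefficients g_n of the 64 lowest modes
t = 0.5; # evolution time
u_heat = f * (exp(diag(l)*t) .* c); # heat: sum_n g_n e^{-lambda_n t} phi_n
u_wave = f * (cos(sqrt(-diag(l))*t) .* c); # wave with zero initial velocity
u_pois = L \ g; # Poisson Delta u = g: direct linear solve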
Neural networks and spatial topology Trump and Brexit Neuro-mathematician Carina Curto has recently published a fascinating paper, 'What can topology tell us about the neural code?' The centrepiece of the paper is a simple and profound exposition of the method by which the neural networks in animal brains can represent the topology of space. As Curto reports, neuroscientists have discovered that there are so-called place cells in the hippocampus of rodents which "act as position sensors in space. When an animal is exploring a particular environment, a place cell increases its firing rate as the animal passes through its corresponding place field - that is, the localized region to which the neuron preferentially responds." Furthermore, a network of place cells, each representing a different position, is collectively capable of representing the topology of the environment. Rather than beginning with the full topological structure of an environmental space X, the approach of such research is to represent the collection of place fields as an open covering, i.e., a collection of open sets $\mathcal{U} = \{U_1,...,U_n \}$ such that $X = \bigcup_{i=1}^n U_i$. A covering is referred to as a good cover if every non-empty intersection $\bigcap_{i \in \sigma} U_i$ for $\sigma \subseteq \{1,...,n \}$ is contractible. i.e., if it can be continuously deformed to a point. The elements of the covering, and the finite intersections between them, define the so-called 'nerve' $\mathcal{N(U)}$ of the cover, (the mathematical terminology is coincidental!): $\mathcal{N(U)} = \{\sigma \subseteq \{1,...,n \}: \bigcap_{i \in \sigma} U_i \neq \emptyset \}$. The nerve of a covering satisfies the conditions to be a simplicial complex, with each subset $U_i$ corresponding to a vertex, and each non-empty intersection of $k+1$ subsets defining a $k$-simplex of the complex. A simplicial complex inherits a topological structure from the imbedding of the simplices into $\mathbb{R}^n$, hence the covering defines a topology. And crucially, the following lemma applies: Nerve lemma: Let $\mathcal{U}$ be a good cover of X. Then $\mathcal{N(U)}$ is homotopy equivalent to X. In particular, $\mathcal{N(U)}$ and X have exactly the same homology groups. The homology (and homotopy) of a topological space provides a group-theoretic means of characterising the topology. Homology, however, provides a weaker, more coarse-grained level of classification than topology as such. Homeomorphic topologies must possess the same homology (thus, spaces with different homology must be topologically distinct), but conversely, a pair of topologies with the same homology need not be homeomorphic. Now, different firing patterns of the neurons in a network of hippocampal place cells correspond to different elements of the nerve which represents the corresponding place field. The simultaneous firing of $k$ neurons, $\sigma \subseteq \{1,...,n \}$, corresponds to the non-empty intersection $\bigcap_{i \in \sigma} U_i \neq \emptyset$ between the corresponding $k$ elements of the covering. Hence, the homological topology of a region of space is represented by the different possible firing patterns of a collection of neurons. As Curto explains, "if we were eavesdropping on the activity of a population of place cells as the animal fully explored its environment, then by finding which subsets of neurons co-fire, we could, in principle, estimate $\mathcal{N(U)}$, even if the place fields themselves were unknown. 
[The nerve lemma] tells us that the homology of the simplicial complex $\mathcal{N(U)}$ precisely matches the homology of the environment X. The place cell code thus naturally reflects the topology of the represented space." This entails the need to issue a qualification to a subsection of my 2005 paper, 'Universe creation on a computer'. This paper was concerned with computer representations of the physical world, and attempted to place these in context with the following general definition: A representation is a mapping $f$ which specifies a correspondence between a represented thing and the thing which represents it. An object, or the state of an object, can be represented in two different ways: $1$. A structured object/state $M$ serves as the domain of a mapping $f: M \rightarrow f(M)$ which defines the representation. The range of the mapping, $f(M)$, is also a structured entity, and the mapping $f$ is a homomorphism with respect to some level of structure possessed by $M$ and $f(M)$. $2$. An object/state serves as an element $x \in M$ in the domain of a mapping $f: M \rightarrow f(M)$ which defines the representation. The representation of a Formula One car by a wind-tunnel model is an example of type-$1$ representation: there is an approximate homothetic isomorphism (a transformation which changes only the scale factor) from the exterior surface of the model to the exterior surface of a Formula One car. As an alternative example, the famous map of the London Underground preserves the topology, but not the geometry, of the semi-subterranean public transport network. Hence in this case, there is a homeomorphic isomorphism. Type-$2$ representation has two sub-classes: the mapping $f: M \rightarrow f(M)$ can be defined by either ($2$a) an objective, causal physical process, or by ($2$b) the decisions of cognitive systems. As an example of type-$2$b representation, in computer engineering there are different conventions, such as ASCII and EBCDIC, for representing linguistic characters with the states of the bytes in computer memory. In the ASCII convention, 01000000 represents the symbol '@', whereas in EBCDIC it represents a space ' '. Neither relationship between linguistic characters and the states of computer memory exists objectively. In particular, the relationship does not exist independently of the interpretative decisions made by the operating system of a computer. In 2005, I wrote that "the primary example of type-$2$a representation is the representation of the external world by brain states. Taking the example of visual perception, there is no homomorphism between the spatial geometry of an individual's visual field, and the state of the neuronal network in that part of the brain which deals with vision. However, the correspondence between brain states and the external world is not an arbitrary mapping. It is a correspondence defined by a causal physical process involving photons of light, the human eye, the retina, and the human brain. The correspondence exists independently of human decision-making." The theorems and empirical research expounded in Curto's paper demonstrate very clearly that whilst there might not be a geometrical isometry between the spatial geometry of one's visual field and the state of a subsystem in the brain, there are, at the very least, isomorphisms between the homological topology of regions in one's environment and the state of neural subsystems.
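As an aside, the nerve construction described above is concrete enough to compute directly. The following toy GNU Octave sketch is our own illustration (not from Curto's paper): three place fields on a discretized one-dimensional track, where a subset of cells spans a simplex of the nerve exactly when the corresponding fields have non-empty intersection, i.e. when those cells can co-fire somewhere.

pos = linspace(0, 1, 100); # discretized positions on a track
U = [pos < 0.4; pos > 0.2 & pos < 0.7; pos > 0.5]; # three place fields U_1, U_2, U_3
n = rows(U);
for k = 1:2^n - 1 # every non-empty subset sigma
  sigma = find(bitget(k, 1:n)); # indices of the cells in sigma
  if any(all(U(sigma, :), 1)) # is the intersection non-empty?
    printf("simplex {%s} is in the nerve\n", num2str(sigma));
  end
end

Here $\{1,3\}$ and $\{1,2,3\}$ are absent, so the nerve is a path of two edges, homotopy equivalent to the contractible track itself, just as the nerve lemma predicts.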
On a cautionary note, this result should be treated as merely illustrative of the representational mechanisms employed by biological brains. One would expect that a cognitive system which has evolved by natural selection will have developed a confusing array of different techniques to represent the geometry and topology of the external world. Nevertheless, the result is profound because it ultimately explains how you can hold a world inside your own head. One of the strangest things about most scientists and academics, and, indeed, most educated middle-class people in developed countries, is their inability to adopt a scientific approach to their own political and ethical beliefs. Such beliefs are not acquired as a consequence of growing rationality or progress. Rather, they are part of what defines the identity of a particular human tribe. A particular bundle of shared ideas is acquired as a result of chance, operating in tandem with the same positive feedback processes which drive all trends and fashions in human society. Alex Pentland, MIT academic and author of 'Social Physics', concisely summarises the situation as follows: "A community with members who actively engage with each other creates a group with shared, integrated habits and beliefs...most of our public beliefs and habits are learned by observing the attitudes, actions and outcomes of peers, rather than by logic or argument," (p25, Being Human, NewScientistCollection, 2015). So it continues to be somewhat surprising that so many scientists and academics, not to mention writers, journalists, and the judiciary, continue to regard their own particular bundle of political and ethical ideas, as in some sense, 'progressive', or objectively true. Never has this been more apparent than in the response to Britain's decision to leave the European Union, and America's decision to elect Donald Trump. Those who voted in favour of these respective decisions have been variously denigrated as stupid people, working class people, angry white men, racists, and sexists. To take one example of the genre, John Horgan has written an article on the Scientific American website which details the objective statistical indicators of human progress over hundreds of years. At the conclusion of this article he asserts that Trump's election "reveals that many Americans feel threatened by progress, especially rights for women and minorities." There are three propositions implicit in Horgan's statement: (i) The political and ethical ideas represented by the US Democratic party are those which can be objectively equated with measurable progress; (ii) Those who voted against such ideas are sexist; (iii) Those who voted against such ideas are racist. The accusation that those who voted for Trump feel threatened by equal rights for women is especially puzzling. As many political analysts have noted, 42% of those who voted for Trump were female, which, if Horgan is to be believed, was equivalent to turkeys voting for Christmas. It doesn't say much for Horgan's view of women that he thinks so many millions of them could vote against equal rights for women. Unless, of course, people largely tend to form political beliefs, and vote, according to patterns determined by the social groups to which they belong, rather than on the basis of evidence and reason. A principle which would, unfortunately, fatally undermine Horgan's conviction that one of those bundles of ethical and political beliefs represents an objective form of progress. 
In the course of his article, Horgan defines a democracy "as a society in which women can vote," and also, as an indicator of progress, points to the fact that homosexuality was a crime when he was a kid. These are two important points to consider when we turn from the issue of Trump to Brexit, and consider the problem of immigration. The past decades have seen the large-scale migration of people into Britain who are enemies of the open society: these are people who reject equal rights for women, and people who consider homosexuality to be a crime. So the question is as follows: Do you permit the migration of people into your country who oppose the open society, or do you prohibit it? If you believe that equal rights for women and the non-persecution of homosexuals are objective indicators of progress, then do you permit or prohibit the migration of people into your country who oppose such progress? It's a well-defined, straightforward question for the academics, the writers, the journalists, the judiciary, and indeed for all those who believe in objective political and ethical progress. It's a question which requires a decision, not merely an admission of complexity or difficulty. Now combine that question with the following European Union policy: "Access to the European single market requires the free migration of labour between participating countries." Hence, Brexit. What unites Brexit and Trump is that both events are a measure of the current relative size of different tribes, under external perturbations such as immigration. It's not about progress, rationality, reactionary forces, conspiracies or conservatism. Those are merely the delusional stories each tribe spins as part of its attempts to maintain internal cohesion and bolster its size. It's more about gaining and retaining membership of particular social groups, and that requires subscription to a bundle of political and ethical ideas. However, the thing about democracy is that it doesn't require the academics, the writers, the journalists, the judiciary, and other middle-class elites to understand any of this. They just need to lose.
Table of Contents
The Cotangent Complex and Derived de Rham Cohomology
- Derived Schemes
  - Simplicial Rings
  - Examples of Simplicial Categories
    - a: Simplicial Sets and Topological Spaces
    - b: Simplicial Abelian Groups
    - c: Simplicial Commutative \(k\)-Algebras
  - The Cotangent Complex
    - 1: A Derived Functor Approach
    - 2: Extending Functors
    - 3: A Universal Property
- Derived de Rham Cohomology
  - The Hodge Filtration
Reference: MSRI Workshop on Derived AG, Birational Geometry, Moduli Spaces. Specific video: https://www.youtube.com/watch?v=zRPa-VAvl6Q
Basic affine objects in AG are commutative rings; we replace them with simplicial commutative rings, which we'll use as a base diagram. Later: derived stacks and geometric derived stacks. Here is an evolution of objects, and how we can think about them: Algebraic schemes/spaces, e.g. \({\mathbb{P}}^n\). Think of these as étale sheaves of sets (think functor of points), identified as discrete spaces: \[\begin{align*} \mathcal{S}_{\leq 0} \mathrel{\vcenter{:}}=\left\{{\text{Discrete spaces}}\right\} .\end{align*}\] Every component is contractible, so there are no higher homotopy groups and we think of these as 0-truncated spaces. For \(X\) a smooth and proper \(k\)-scheme, the Picard stack \[\begin{align*} \underline{{\operatorname{Pic}}}_{X/k} \end{align*}\] is an Artin stack; Artin stacks contain the Deligne-Mumford stacks as a subclass. Note that this still has automorphisms, given by global units on \(X\). Think of these as \[\begin{align*} \mathcal{S}_{\leq 1} \mathrel{\vcenter{:}}=\left\{{\text{Étale sheaves of groupoids}}\right\} ,\end{align*}\] where the notation now suggests 1-truncated spaces, and we can take fundamental groupoids \(\Pi_1\) since there is now 1-homotopy. Note that the Picard stack can be identified as a mapping stack, \[\begin{align*} \underline{{\operatorname{Map}}}(X, K({\mathbb{G}}_m, 1)) .\end{align*}\] \(K({\mathbb{G}}_m, n)\) is a "higher stack," thought of as a sheaf taking values in \(n\)-truncated spaces \(\mathcal{S}_{\leq n}\), i.e. spaces where, based at any point, there are no homotopy groups above degree \(n\): \[\begin{align*} \mathcal{S}_{\leq n} \mathrel{\vcenter{:}}=\left\{{\text{Étale sheaves of $n$-truncated spaces}}\right\} .\end{align*}\] This is a stack with a single point, where the isotropy is \(K({\mathbb{G}}_m, n-1)\). Note that these are all built from affine schemes with a few acceptable moves. (DZG) The definitions of the \(\mathcal{S}_{j}\) above aren't explicitly stated, so these are guesses at slightly more precise definitions. In stack notation, we can write \[\begin{align*} B{\mathbb{G}}_m &= K({\mathbb{G}}_m, 1) \cong [{\{\text{pt}\}}/{\mathbb{G}}_m] \\ K({\mathbb{G}}_m, 2) &= [{\{\text{pt}\}}/ B{\mathbb{G}}_m] ,\end{align*}\] where the latter is a smooth Artin stack. Mapping into this gives the Picard groupoid of a scheme. These are higher geometric stacks that still have some "smoothness" properties. What does it mean to give a map from a scheme \(X\) into a higher stack? Note that the category of étale sheaves taking values in \(\mathcal{S}_{\leq n}\) is enriched in topological spaces. There is a topological space \[\begin{align*} M\mathrel{\vcenter{:}}={\operatorname{Map}}(X, K({\mathbb{G}}_m, n)) \end{align*}\] whose homotopy groups are \[\begin{align*} \pi_i M = \begin{cases} H^{n-i}_{\mathrm{\text{ét}}}(X, {\mathbb{G}}_m) & 0\leq i \leq n \\ 0 & \text{else.} \end{cases} \end{align*}\] So this is a higher geometric stack that says something about higher étale cohomology groups.
We thus have étale sheaves taking values in higher topological spaces, and these have some geometric meaning. They're also built from geometric objects: namely, by iteratively taking quotients by smooth actions. \(K({\mathbb{G}}_m, 1)\) is a quotient by a smooth algebraic group, \(K({\mathbb{G}}_m, 2)\) is now a smooth Artin stack, and we can keep going. This is the fundamental process for building geometric higher stacks. Why derive things? Schemes are equipped with sheaves of commutative rings, so the basic idea is to let the sheaves take values in groupoids, stacks, etc. So we can let the structure sheaf \({\mathcal{O}}_X\) itself be a sheaf of spaces, and this is the fundamental idea of derived algebraic geometry. Consider \(\operatorname{Spec}k\otimes_{k[x]}^L k\), a derived tensor product. This is a simplicial commutative ring, and the basic version of an affine derived scheme. This is a complex \(C^{\,\cdot\,}\) with homology in degrees 0 and 1, where \[\begin{align*} H_1 = \operatorname{Tor}_1^{k[x]}(k, k) .\end{align*}\] So analogously, we'll start with derived schemes and take quotients by smooth groups. In the end, we get derived stacks. An example is \(\mathcal{M}_\phi\), the moduli of objects in some DG category \(\mathcal{C}\). We need to agree on what the local affine models will look like. For our purposes, they'll be simplicial commutative rings. Consider \(\mathcal{C} \mathrel{\vcenter{:}}= D({\mathbb{Z}})_{\geq 0}\), the connective1 objects of the derived category, which are chain complexes \(C_{\,\cdot\,}\) where \(H_{< 0}(C_{\,\cdot\,}) = 0\). There is a derived tensor product \(\otimes^L\) which makes \(\mathcal{C}\) into a symmetric monoidal category. Basic idea: we want to look at commutative algebra objects in this symmetric monoidal category \(({D}({\mathbb{Z}}), \otimes^L)\).2 Note that instead of working in a symmetric monoidal abelian category, we will be looking at connective chain complexes, and simplicial rings are one way of studying commutative algebra objects here. We have some choices for making sense of DAG:
- \(E_\infty{\hbox{-}}\)ring spectra,
- simplicial commutative rings,
- over \({\mathbb{Q}}\), commutative \({\mathbb{Q}}{\hbox{-}}\)DGAs.
Our choice here will be the following: Let \(\Delta\) denote the simplex category, the category of non-empty finite ordered sets with order-preserving maps. We have the following situation: the arrows going up are face maps (or coface maps), and the others are degeneracy maps. If \(\mathcal{C}\) is a category, then \(s\mathcal{C} \mathrel{\vcenter{:}}={\text{Fun}}(\Delta^\text{op}, \mathcal{C})\) is the category of simplicial objects of \(\mathcal{C}\). An analogy: simplicial commutative \(k{\hbox{-}}\)algebras enrich usual \(k{\hbox{-}}\)algebras much like the derived category \(D(k)\) enriches \(k{\hbox{-}}\)modules. \(\operatorname{sSets}\simeq{\text{Top}}\): this is not an equivalence of categories, but rather they have equivalent homotopy theories3, where we have notions of weak equivalence4 on each side.
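To make the example \(\operatorname{Spec}k\otimes_{k[x]}^L k\) concrete, here is the standard Tor computation (our addition, not worked in the video): resolve \(k = k[x]/(x)\) by the Koszul resolution of free \(k[x]{\hbox{-}}\)modules and apply \(k \otimes_{k[x]} -\): \[\begin{align*} 0 \to k[x] \xrightarrow{\cdot x} k[x] \to k \to 0 \quad\rightsquigarrow\quad \left( k \xrightarrow{\;0\;} k \right) ,\end{align*}\] since multiplication by \(x\) becomes zero after tensoring. Hence \(H_0 = \operatorname{Tor}_0^{k[x]}(k,k) = k\) and \(H_1 = \operatorname{Tor}_1^{k[x]}(k,k) = k\), confirming that this simplicial commutative ring has homology concentrated in degrees 0 and 1.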
Here there is an \(n{\hbox{-}}\)simplex on the LHS (\(\operatorname{sSets}\)), \[\begin{align*} \Delta^n = \hom_{\Delta}({\,\cdot\,}, [n]) \end{align*}\] and on the RHS we have \[\begin{align*} \Delta^n_{\text{Top}}\mathrel{\vcenter{:}}=\left\{{{\left[ {x_0, \cdots, x_n} \right]} \in {\mathbb{R}}^n ~{\text{s.t.}}~x_i\geq 0,\, \sum x_i = 1}\right\} \end{align*}\] By Yoneda, the presheaf category \({\operatorname{Presh}}(\Delta) \mathrel{\vcenter{:}}={\text{Fun}}(\Delta^\text{op}, {\text{Set}})\) is generated by representable objects: everything in \(\operatorname{sSets}\) is generated by taking colimits of the \(\Delta^n\). So we can make the assignment \(\Delta^n \mapsto \Delta^n_{\text{Top}}\) and extend by colimits to get a functor \(\operatorname{sSets}\to {\text{Top}}\). We have a notion of weak equivalence for \({\text{Top}}\), and so the notion of weak equivalence on \(\operatorname{sSets}\) is just given by pullback along the functor \(\operatorname{sSets}\to {\text{Top}}\), and this induces an equivalence of homotopy theories. The inverse functor \({\text{Top}}\to \operatorname{sSets}\) is the singular complex construction. Considering \(\Delta^{\,\cdot\,}_{{\text{Top}}}\), this is a cosimplicial object in \({\text{Top}}\). \({\text{Top}}\) will denote the 1-category, while \(\mathcal{T}\text{op}\) will be the corresponding \(\infty{\hbox{-}}\)category. So we have a natural cosimplicial object in \({\text{Top}}\), and thus \[\begin{align*} \text{Sing}(X) \mathrel{\vcenter{:}}={\operatorname{Hom}}_{{\text{Top}}}(\Delta_{{\text{Top}}}^{\,\cdot\,}, X) \end{align*}\] is a simplicial set. As in singular homology, we can get a simplicial abelian group by taking the free abelian group \({\mathbb{Z}}[{\text{Sing}}(X)]\). Note that this is just composing functors \(\Delta^\text{op}\to{\text{Set}}\) and \({\text{Set}}\to {{{\mathbb{Z}}}{\hbox{-}}\operatorname{mod}}\). We can use this to create a chain complex \(C_{\,\cdot\,}({\mathbb{Z}}[\text{Sing}(X)])\), and as expected, we get the singular homology: \[\begin{align*} H_i(C_{\,\cdot\,}) \cong H^{\text{Sing}}_i(X, {\mathbb{Z}}) \end{align*}\] We can take simplicial abelian groups \(s{\text{Ab}}\) and the connective objects \(D({\mathbb{Z}})_{\geq 0}\); these have equivalent homotopy theories. The notion of weak equivalence on the RHS is quasi-isomorphism; on the LHS, it is asking whether the literal underlying spaces are weakly equivalent as spaces. A specific way of doing this is the Dold-Kan correspondence: Suppose we have a simplicial abelian group \(M_{\,\cdot\,}\), with face maps going to the left. We make this into a chain complex by setting the differential to the alternating sum of the face maps, \(d = \sum_{i=0}^{n} (-1)^i d_i\). The homology of this complex turns out to be the same as the homotopy groups of the simplicial abelian group viewed as a topological space. Simplicial commutative \(k{\hbox{-}}\)algebras are defined as \[\begin{align*} s\mathrm{CAlg}_k \mathrel{\vcenter{:}}={\text{Fun}}(\Delta^\text{op},\mathrm{CAlg}_k) ,\end{align*}\] where \(k\) is some commutative ring. This was studied by Quillen, and was an impetus for model categories. Model structures give a notion of weak equivalence and a "right way" of computing: for the usual derived category of a ring, this is taking free/projective/injective resolutions. So the LHS is sometimes called a non-abelian derived category.
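The passage from simplicial data through the alternating-sum differential to homology can be illustrated numerically. The sketch below is ours, not from the lecture: it computes rational Betti numbers of a chain complex from boundary-matrix ranks, using the standard one-vertex, one-edge model of the circle, whose edge boundary is the alternating sum \(d_0 - d_1\) of its faces.

```python
import numpy as np

def betti_numbers(dims, boundaries):
    """Rational Betti numbers of a chain complex.

    dims[i] is dim C_i over Q; boundaries[i] is the matrix of
    d_i : C_i -> C_{i-1} (taken to be 0 when absent).
    """
    def rank(i):
        d = boundaries.get(i)
        return 0 if d is None or d.size == 0 else np.linalg.matrix_rank(d)
    # b_i = dim C_i - rank(d_i) - rank(d_{i+1})
    return [dims[i] - rank(i) - rank(i + 1) for i in range(len(dims))]

# Circle with one vertex v and one edge e:
# d(e) = d_0(e) - d_1(e) = v - v = 0, the alternating sum of faces.
dims = [1, 1]
boundaries = {1: np.array([[0.0]])}
print(betti_numbers(dims, boundaries))  # [1, 1]: H_0 = H_1 = Q for the circle
```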
For \(R\in \mathrm{sCAlg}\), the homotopy groups \(\pi_* R\) have a graded commutative ring structure: \(xy = (-1)^{{\left\lvert {x} \right\rvert} {\left\lvert {y} \right\rvert}}yx\) and \(x^2 =0\) for elements \(x\) with \({\left\lvert {x} \right\rvert}\) odd. This is useful because it gives us some graded ring to associate to \(R\). The category of simplicial abelian groups is equivalent to \(\mathrm{Ch}({\mathbb{Z}})_{\geq 0}\), i.e. chain complexes of abelian groups concentrated in non-negative degree. This also yields an equivalence of homotopy theories. A different perspective on simplicial commutative rings: there is an adjunction from sets to commutative \(k{\hbox{-}}\)algebras \[\begin{align*} {\text{Set}}&\rightleftharpoons{}{} \mathrm{CAlg}_k \\ S &\mapsto k[S] .\end{align*}\] i.e. we send a set to the polynomial ring generated by \(S\). Any time such an adjunction exists, given an \(R\in \mathrm{CAlg}_k\) we can construct a simplicial resolution \(S^{\,\cdot\,}\) and a map \(S^{\,\cdot\,}\to R\). This resolution has the following structure: \[\begin{align*} S^0 = k[R], \,\, \text{the free commutative algebra on }R .\end{align*}\] Using the unit and counit maps of the adjunction, one obtains a canonical simplicial object. Moreover, \(S^{\,\cdot\,}\xrightarrow{\sim} R\) is a homotopy equivalence. So we've taken an arbitrary \(k{\hbox{-}}\)algebra and replaced it with a simplicial \(k{\hbox{-}}\)algebra which is given by polynomial rings in each degree, typically in infinitely many variables, which has the same homology. This is the analog of a projective resolution. Now define \(\mathrm{CAlg}_k^{\text{poly}}\) as the category of finitely generated polynomial rings, and suppose you have a functor \[\begin{align*} \mathrm{CAlg}_k^{\text{poly}} \xrightarrow{F} \mathcal{C} \end{align*}\] where \(\mathcal{C}\) is a "reasonable" category, or possibly an \(\infty{\hbox{-}}\)category. We can consider the category \(\operatorname{Ind}(\mathrm{CAlg}_k^{\text{poly}})\) given by formally adjoining filtered colimits. We have the following diagram, where the bottom inclusion is given by viewing a commutative ring as the constant simplicial commutative ring, and the extension \(\tilde F\) exists by applying \(F\) to any colimit diagram. The functor \(LF\) is a derived functor that exists if \(\mathcal{C}\) has certain colimits. So starting with a functor defined on finitely generated polynomial rings, i.e. affine spaces, we get a derived functor on simplicial commutative rings. For \(R\in \mathrm{CAlg}_k\), using \(S^{\,\cdot\,}\xrightarrow{\sim} R\), we can apply \(F\) level-wise to get a new simplicial object \(F(S^{\,\cdot\,}) \in \mathcal{C}\). Then \(LF(R)\) is defined by taking the colimit over \(\Delta\), which yields the geometric realization, i.e. \[\begin{align*} LF(R) \mathrel{\vcenter{:}}=\mathop{\mathrm{hocolim}}_{\Delta} F(S^{\,\cdot\,}) = {\left\lvert {F(S^{\,\cdot\,})} \right\rvert} \end{align*}\] So we can promote functors on polynomial rings to functors on simplicial commutative rings. This ends up being a Kan extension. There is a nice universal property here, namely that functors out of \(s\mathrm{CAlg}_k\) are equivalent to functors out of \(\mathrm{CAlg}_k^{\text{poly}}\) that satisfy some additional properties. The last example will be the cotangent complex. There are 3 equivalent ways to view the cotangent complex. The first is the derived functor approach: it will essentially be the derived functor of taking \(\Omega^1\).
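Before turning to the cotangent complex, here is the bottom of the canonical resolution unwound (a standard construction from the adjunction; this unwinding is our addition). Writing \(U\) for the underlying set of an algebra, the levels are iterated free algebras \[\begin{align*} S^0 = k[UR], \qquad S^1 = k[U k[UR]], \qquad \cdots \end{align*}\] with the two face maps \(S^1 \rightrightarrows S^0\) given by applying the counit \(k[UR]\to R\) inside the brackets, respectively applying the counit to the outer free algebra, and the degeneracy \(S^0 \to S^1\) induced by the unit \(UR \to Uk[UR]\). Already the augmentation \(S^0 = k[UR] \to R\) is surjective, since every element of \(R\) appears as a generator; the higher levels record the relations among these generators, the relations among relations, and so on.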
Suppose \(k \to R\) is a map of commutative rings, and denote by \(S^{\,\cdot\,}\) again the canonical resolution of \(R\). Take Kähler differentials degree-wise to get \(\Omega^1_{S^{\,\cdot\,}/k}\). Now base-change along \(S^{\,\cdot\,}\to R\) to get a simplicial \(R{\hbox{-}}\)module, \[\begin{align*} \Omega^1_{S^{\,\cdot\,}/k} \otimes_{S^{\,\cdot\,}} R \,\, \in \mathrm{sMod}{\hbox{-}}R .\end{align*}\] We can now use Dold-Kan to view this as a connective object in \(D(R)_{\geq 0}\), the derived category of \(R\), yielding \(\mathrm{sMod}{\hbox{-}}R \simeq D(R)_{\geq 0}\), so we define the cotangent complex of \(R/k\) as \[\begin{align*} L_{R/k} \mathrel{\vcenter{:}}=\Omega^1_{S^{\,\cdot\,}/k} \otimes_{S^{\,\cdot\,}} R \,\, \in D(R)_{\geq 0} .\end{align*}\] This turns out to work even if you take a resolution other than the canonical one. A downside to this definition is that it's not clear how/why it might depend on the resolution. Alternatively, we can define it by taking a map \[\begin{align*} \mathrm{CAlg}_k^{\text{poly}} \xrightarrow{\Omega^1_{{\,\cdot\,}/k}} D(k)_{\geq 0} \end{align*}\] to the connective objects in the derived category of \(k\). We can extend to get a diagram, and it then turns out that \(L_{R/k} \simeq L\Omega^1_{R/k}\). Here it's not quite clear why this lands in \({{k}{\hbox{-}}\operatorname{mod}}\) instead of \({{R}{\hbox{-}}\operatorname{mod}}\), or how the \(R{\hbox{-}}\)module structure works here. Let \(k\to R\to S\) where \(k\) is a commutative ring and \(R, S\) are ordinary (or simplicial) commutative rings. Let \(M\in {{S}{\hbox{-}}\operatorname{mod}}\) in \(D(S)_{\geq 0}\). There is a natural enrichment in topological spaces, so we'll write \({\operatorname{Map}}\) for homs with their topological structure. In particular, these have homotopy groups. The universal property that the cotangent complex has comes from an equivalence \[\begin{align*} {\operatorname{Map}}_R(L_{R/k}, M) \simeq {\operatorname{Map}}_{\mathrm{sCAlg}_{k//S}}(R, S\oplus M) .\end{align*}\] where the latter is in the category of simplicial commutative \(k{\hbox{-}}\)algebras with a fixed map to \(S\), a bit like a comma category. This makes sense because \(R\to S\) is a fixed map, and \(S\oplus M\) has a projection to \(S\), i.e. this is a square zero extension. The point of simplicial commutative rings is that this extension still makes sense, even when \(M\) is a chain complex. It's still possible to make this into a simplicial commutative ring. This may look familiar from the exercises in Hartshorne, since it resembles the definition of Kähler differentials. Exercises using the universal property:
1. Prove that \[\begin{align*} \pi_0 L_{R/k} \cong \Omega^1_{R/k} .\end{align*}\] This is exactly lifting through the square zero extension. This is similar to having a lift where giving \(\tilde f\) is like giving a derivation \(R\to M\). So the RHS comma category mapping space in the previous equivalence is denoted the "space of derivations of \(R\) and \(M\)" (whatever that means).
2. You can base-change the cotangent complex of \(R/k\) to \(S\) and obtain an exact triangle/cofiber sequence. This follows from the universal property above, and is a form of "transitivity."
3. Given a diagram, show that \[\begin{align*} T\otimes_k^L L_{R/k} \cong L_{R\otimes_k^L T / T} .\end{align*}\]
All of these follow from just the mapping space property, but require thinking about what it means to compute maps in a comma category.
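As a concrete instance of exercise 1 (a standard example, not worked in the talk — our addition): for a hypersurface \(R = k[x]/(f)\), no infinite resolution is needed, since \(k[x]\) is already polynomial. The cotangent complex is the two-term complex \[\begin{align*} L_{R/k} \simeq \left( (f)/(f^2) \xrightarrow{\; d \;} \Omega^1_{k[x]/k}\otimes_{k[x]} R \right) \simeq \left( R \xrightarrow{\; f' \;} R\, dx \right) \end{align*}\] in homological degrees 1 and 0. In particular \(\pi_0 L_{R/k} = R\,dx/(f'\,dx) = \Omega^1_{R/k}\), as the exercise predicts, while \(\pi_1 L_{R/k} = \ker(f' : R \to R)\) detects singularities: e.g. for \(R = k[x]/(x^2)\) with \(\operatorname{char} k \neq 2\), \(\pi_1 L_{R/k} = (x) \neq 0\).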
Let \(R\) be a perfect \({\mathbb{F}}_p{\hbox{-}}\)algebra, so the Frobenius is an isomorphism: \[\begin{align*} F: R &\xrightarrow{\sim} R\\ x &\mapsto x^p .\end{align*}\] Show that \(L_{R/{\mathbb{F}}_p} \cong 0\). We can make a specific functor: \[\begin{align*} \mathrm{CAlg}_k^{\text{poly}} &\to D(k)\\ R &\mapsto \mathrm{dR}_{R/k} \mathrel{\vcenter{:}}=\qty{R\to \Omega^1_{R/k} \to \Omega^2_{R/k} \to \cdots} ,\end{align*}\] which takes a polynomial ring \(R\) to its algebraic de Rham complex. If \(R\) is finitely generated, then the complex is bounded. One can formally extend by deriving this functor to obtain derived de Rham cohomology \(L\mathrm{dR}_{{\,\cdot\,}/k}\). The point is that we know de Rham cohomology behaves well for smooth things, and we want to extend to non-smooth things. Suppose \(k \in \mathrm{Alg}_{\mathbb{Q}}\), noting that we're in characteristic zero, and let \(R \in \mathrm{CAlg}_k^\text{poly}\) be a polynomial ring. Then \(\mathrm{dR}_{R/k} \cong k\) by the Poincaré lemma, just the cohomology of affine space. Then \(L\mathrm{dR}_{S/k} \cong k\), which is disappointing. How to fix this: we have the Hodge filtration \(F_H^{\,\cdot\,}\mathrm{dR}_{R/k}\), and the graded pieces are given by \[\begin{align*} {\operatorname{gr}}^i_H \mathrm{dR}_{R/k} \cong \Omega^i_{R/k}[-i] = \left( \Lambda^i \Omega^1_{R/k} \right)[-i] ,\end{align*}\] a shifted version of the \(i\)th exterior power of 1-forms. Taking these gradings is compatible with colimits, so we can remember the filtration in the Kan extension, yielding a canonical Hodge filtration on \(L\mathrm{dR}\), \(F^*_H L \mathrm{dR}_{{\,\cdot\,}/k}\). The graded pieces are given by \[\begin{align*} {\operatorname{gr}}^i_H L\mathrm{dR}_{{\,\cdot\,}/k} \cong L \Lambda^i L_{R/k}[-i] ,\end{align*}\] i.e. you take derived exterior powers of the cotangent complex, shifted by some degrees. This follows because doing the Kan extension on the graded pieces is deriving the functor \({\operatorname{gr}}^i_H \mathrm{dR}_{{\,\cdot\,}/k}\). We can complete with respect to this filtration, which we'll write as \[\begin{align*} \widehat{L\mathrm{dR}_{{\,\cdot\,}/k}} \mathrel{\vcenter{:}}=\varprojlim L\mathrm{dR}_{{\,\cdot\,}/k} / F_H^i ,\end{align*}\] where you quotient out by the \(i\)th piece of the filtration. Note that this definition is a general way of taking completions.5 Suppose \(X/{\mathbb{C}}\) is finite type, then \[\begin{align*} R \Gamma(X, \widehat{ L\mathrm{dR}_{{\mathcal{O}}_X/ {\mathbb{C}}} } ) \cong R \Gamma_{{\text{Sing}}}(X({\mathbb{C}}), {\mathbb{C}}) ,\end{align*}\] where the RHS is the singular cohomology of the \({\mathbb{C}}{\hbox{-}}\)points. This is a generalization of Grothendieck's theorem in the smooth case. A positive feature is that this doesn't depend on a choice of putting \(X\) in an ambient smooth scheme.

Footnotes:
1. Connective means \(H_{<0} = 0\).
2. This is a familiar move: people in the 60s knew one could do AG in some ambient symmetric monoidal abelian category.
3. Theory up to weak equivalence.
4. A weak equivalence is an isomorphism on \(\pi_0\), and for each choice of basepoint, an isomorphism on all \(\pi_{\geq 1}\) on each side.
5. This is like defining the \(p{\hbox{-}}\)adics as the limit of \({\mathbb{Z}}/p^n{\mathbb{Z}}\).
Study on in-plane shear failure mode of cross-laminated timber panel Yuhao Zhou1, Zhaoyu Shen1, Haitao Li2, Yao Lu1 & Zheng Wang1 To explore the in-plane shear failure mode of cross-laminated timber (CLT) panel, this paper carried out stress analysis combined with observation of the crack morphology of the specimens after planar shear. In this study, the load–displacement curve of the hemlock [Tsuga canadensis (L.) Carrière] CLT specimen was obtained by a three-point bending test or an improved planar shear test, and the crack morphology of the CLT vertical layer and the azimuth angle of the crack surface were observed and recorded synchronously. The shear strength values of the CLT specimens under the two tests were obtained by the corresponding calculations. Then the stress analysis of the CLT vertical layer was combined with the azimuth angle of the crack surface to discuss the failure mode of the CLT vertical layer in planar shear. The results showed that the planar shear strength measured by the three-point bending test and the improved planar shear test was in good agreement, and the results measured by the improved planar shear test were more dispersed than those measured by the three-point bending test; considering the approximation that the in-plane shear of the CLT vertical layer could be treated as pure shear, the three-point bending test was better than the improved planar shear test; for the vertical layer of 63.3% of the CLT specimens, the azimuth of the crack surface was near the azimuth of the first principal plane obtained by stress analysis; and there were two failure modes in CLT vertical layer in-plane shear: tensile failure and shear failure. In recent years, in developed countries in Europe and the United States, sustainable bamboo-based materials [1, 2], passively controlled structural systems [3], as well as cross-laminated timber (CLT), a new type of building material made from sawn timber as the basic unit, have been widely used in the construction of mid-rise and high-rise residential and public buildings. CLT overcomes the height limitation of traditional wood structure buildings. It not only lends itself to factory prefabrication and on-site assembly, but also has the advantages of good dimensional stability, sound insulation, good heat preservation performance, good mechanical properties, convenient construction, low carbon, carbon fixation, and environmental protection [4,5,6]. Although traditional CLT structures suffer high seismic damage in timber components (i.e., CLT walls) subjected to seismic actions, CLT structures are easy to repair after an earthquake [7, 8], and new seismic devices devoted to avoiding such damage are being studied [9]. However, wood exhibits orthotropic (anisotropic) material behavior, so its mechanical properties are undoubtedly complicated [10,11,12,13]. When CLT is used as a floor slab, beam, or other component that is subjected to out-of-plane lateral loads, the planar shear strength becomes one of the key factors controlling the mechanical performance of CLT [14]. Therefore, it is particularly important to study the failure mode of CLT in-plane shear. Since 2016, the main research topics on CLT have included the following areas: material structure, test methods, loading methods, and failure mechanisms of the CLT vertical layer. First, the research on material structure applies the following tests: the CLT three-point bending test, the improved planar shear test, and the four-point bending test.
The test content is aimed at studying the influence of the number of CLT layers, the thickness of each layer, and the materials and processing methods of the vertical and parallel layers on the rolling shear strength, shear modulus, and stiffness of CLT [15,16,17,18]. It should be noted that rolling shear refers to the shear strain behavior of timber in its cross-section. Under the action of shear force, cracks easily occur in the transition areas between earlywood and latewood, at wood rays, and at the pith. This failure is called rolling shear failure [19]. Second, the research on the test methods for testing CLT shear properties (rolling shear strength and shear modulus) includes the similarity of the three-point bending test and improved planar shear test, and the span-to-height ratio of the test piece [20, 21]. Third, the research on the loading methods includes specimen damage under fatigue loading and the impact of damage accumulation on CLT rolling shear strength [22]. Fourth, the research on the failure mechanism of the CLT vertical layer includes the torque load test, Monte Carlo simulation, and simulation on the shear block specimen, which showed that CLT rolling shear failure is brittle [23, 24]. In the previous research [25], the three-point bending test or the improved planar shear test was carried out on CLT specimens. The load–displacement curve of the specimen and the peak load value of the curve were obtained. When the test was over, the crack morphology of the damaged CLT vertical layer was recorded, and the azimuth angle of the crack surface was measured. These were used to reveal the appearance characteristics of CLT vertical layer crack initiation and propagation on the load–displacement curve. Based on the above research, the stress, strain, principal stress, and maximum and minimum shear stress of the vertical layer of CLT specimens in the CLT three-point bending test and the improved planar shear test were analyzed in this study. It clarified that the vertical layer of the three-point bending specimen and the planar shear specimen could achieve in-plane shear, and could be approximately treated as pure shear. When considering the in-plane shear of the CLT vertical layer as pure shear, the three-point bending test was better than the improved planar shear test. Since the CLT vertical layer had undergone shear deformation in the cross-section of the vertical layer, the CLT vertical layer deformation is referred to as in-plane shear in this article, and the rolling shear term was not adopted. The corresponding strength value is called the CLT planar shear strength. In brief, this study revealed the failure mechanism of CLT in-plane shear by combining the principal stress and the maximum and minimum shear stress (including value and direction) of the CLT vertical layer with the azimuth of the crack surface, and put forward two failure modes of CLT in-plane shear: shear failure and tensile failure. CLT three-point bending and improved planar shear specimens were sawed from a 3-layered 500 × 1200 × 105 mm hemlock [Tsuga canadensis (L.) Carrière] CLT panel. The component unit of the panel (hemlock timber) was rip-cut. One-component polyurethane (PUR) was used as the adhesive, with a sizing amount of 180 g/m². The modulus of elasticity in the CLT major strength direction was 1.07 × 10^4 MPa, and the bending strength was 35 MPa. The above materials and relevant data were provided by Ningbo Sino-Canada Low-Carbon Technology Research Institute Co., Ltd., China.
Specimen for the CLT three-point bending test The specimens of A-series were as follows: 735 × 305 × 105 mm, 15 pieces, to achieve a three-point bending load with a span-to-height ratio of 6 (Fig. 1). The specimens of B-series were as follows: 735 × 210 × 105 mm, 6 pieces, to achieve a three-point bending load with a span-to-height ratio of 6. The laminar width of the vertical layer of the A, B-series was 140 mm. The average moisture content (MC) of the A and B-series specimens was 12%, and the average density ρ was 475 kg/m³. The widths of the A-series and B-series were 305 mm and 210 mm, respectively, to explore the influence of the width of the three-point bending specimens on the CLT planar shear (rolling shear) strength test value. It should be noted that the fiber orientation of the parallel layer is parallel to the length of the specimen, and the fiber orientation of the vertical layer is parallel to the width of the specimen. Schematic diagram of the CLT three-point bending test loading Specimen for the CLT-improved planar shear test The specimens of C-series were as follows: 270 × 135 × 105 mm (the length of the specimen refers to the length of the interface between the vertical layer and the parallel layer), 9 pieces. The laminar width of the vertical layer of the C-series was 140 mm. The average MC was 14%, and the average density ρ was 431 kg/m³. The design of the CLT-improved planar shear test is shown in Fig. 2. Compared with the planar shear test specified in EN408 [26] and ASTM D2718 [27], the advantage of the improved planar shear test is that there is no need to paste steel plates on the surfaces of the two parallel layers [28]. Schematic diagram of the CLT-improved planar shear test loading Test method A JAW-2000 multi-channel structure test loading system (maximum test force of 300 kN) and an AG-IC electronic universal mechanical testing machine (maximum test force of 100 kN) were used to carry out the three-point bending test and the improved planar shear test (Fig. 3), and the load–displacement curve of each specimen was obtained. The tests were carried out in displacement control by using supporting software, with a loading speed of 0.5 mm/min. Testing machines of the three-point bending test and the improved planar shear test. a JAW-2000 multi-channel structure test loading system. b AG-IC electronic universal mechanical testing machine The process of crack initiation and propagation on the test specimen was observed through video recording synchronized with the load–displacement curve. This was applied to explore the relationship between the initiation and propagation of cracks and the characteristics of the load–displacement curve. The maximum load value from the load–displacement curve was obtained to calculate the CLT planar shear strength τ. According to ASTM D198 [29], the formula used in the CLT three-point bending test is as follows: $$\tau = 0.92\frac{{3P_{\max } }}{4bh}.$$ In Eq. 1: τ is the CLT planar shear strength, MPa; Pmax is the maximum peak load, N; b is the width of the specimen, mm; h is the thickness of the specimen, mm. It should be noted that 0.92 in Eq. 1 is the correction factor, which is determined by the location of the maximum interlayer shear stress of the CLT panel; it is related to the number of layers of the panel [30]. According to EN 408 and reference [28], the formula used in the CLT-improved planar shear test is as follows: $$\tau = \frac{{P_{max} \cos \beta }}{lb}.$$ In Eq.
2: τ is the CLT planar shear strength, MPa; Pmax is the maximum peak load, N; l is the length of the specimen, mm; b is the width of the specimen, mm; β is the inclination angle of the specimen, °. The final crack morphology of the specimen was observed and summarized, and the azimuth angle of the main crack surface was measured. CLT crack morphology The crack morphology of the wood grain on the cross-section of the timber is usually divided into two types: ring shake (Fig. 4a) and heart shake (Fig. 4b). Ring shake includes the crack along the annual ring and the crack along the tangent direction of the annual ring; heart shake refers to the crack along the wood ray. In this research, in addition to ring shake and heart shake, a new crack morphology appeared on the cross-section of the CLT vertical layer under mechanical stress, as shown in Fig. 4c. Crack morphology on the CLT vertical layer. a Ring shake, b heart shake, c new shake which is neither ring shake nor heart shake CLT crack azimuth angle The orientation of the crack surfaces of the CLT three-point bending A, B-series specimens was symmetrical with respect to the middle of the specimens. The azimuth angle of the crack surface refers to the included angle between the crack direction and the length direction of the specimens, ranging from 0 to 90°. Table 1 summarizes the vertical layer crack morphologies and crack azimuth angles of the CLT three-point bending A, B-series specimens and the CLT planar shear C-series specimens. For example, A4 (50°) represents A-series specimen number 4, and the angle in brackets indicates that the crack azimuth angle is 50°. Table 1 CLT vertical layer crack morphologies and crack azimuth angles of three-point bending test and improved planar shear test From Table 1: the specimens with two cracks were A10 (25°, neither ring shake nor heart shake; 50°, heart shake), B1 (35°, heart shake; 50°, ring shake), B5 (45° and 50°, ring shake), and A16 (40°, heart shake; 50°, neither ring shake nor heart shake). The specimen with three cracks was A15 (45°, ring shake; 45°, neither ring shake nor heart shake; and a crack along the wood ray—annual ring—wood ray). The number of specimens with crack azimuth angles between 40° and 50° accounted for 63.3% of the total, and that between 25°–40° or 50°–65° accounted for 23.3%. Hemlock CLT planar shear strength Failure tests on the hemlock CLT A, B, C-series specimens were conducted to obtain the maximum load value of the load–displacement curve, and then the planar shear strength of each hemlock CLT specimen was calculated according to Eq. 1 and Eq. 2. The mean values (peak load and shear strength) are shown in Table 2. Table 2 Hemlock CLT planar shear strength test value Table 2 shows that the average shear strength of the hemlock CLT tested by the A, B-series specimens was almost the same, and the relative error was 0.8%. This result showed that in the three-point bending test, the width of the specimen had virtually no effect on the hemlock CLT shear strength. The planar shear strength of the hemlock CLT tested by the three-point bending test was quite consistent with that tested by the improved planar shear test, and the relative error was only 5.7%. The dispersion of the CLT shear strength tested by the improved planar shear test was much greater than that tested by the three-point bending test: the coefficient of variation of the former was 24.7% while that of the latter was 10.5%.
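Both strength formulas are easy to put into code. The sketch below is ours, not the authors'; it evaluates Eq. 1 and Eq. 2 at the half loads used in the stress analysis below, reproducing the 0.66 MPa and 0.625 MPa values quoted there. The dimensions come from the specimen descriptions above, assuming b = 135 mm is the C-series width.

```python
import math

def tau_bending(p_max, b, h):
    """Planar shear strength from the three-point bending test, Eq. 1 (ASTM D198).

    p_max: load (N); b: specimen width (mm); h: thickness (mm).
    The 0.92 factor corrects for the location of the maximum interlayer
    shear stress in a 3-layer panel.
    """
    return 0.92 * 3.0 * p_max / (4.0 * b * h)

def tau_planar(p_max, l, b, beta_deg):
    """Planar shear strength from the improved planar shear test, Eq. 2 (EN 408).

    p_max: load (N); l: interface length (mm); b: width (mm);
    beta_deg: specimen inclination (degrees).
    """
    return p_max * math.cos(math.radians(beta_deg)) / (l * b)

# A-series at half the mean peak load (P = 30.59 kN), as in the stress analysis:
print(round(tau_bending(30_590, b=305, h=105), 2))            # ~0.66 MPa
# C-series at half the mean peak load (P = 23.575 kN), beta = 15 degrees:
print(round(tau_planar(23_575, l=270, b=135, beta_deg=15), 3))  # ~0.625 MPa
```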
Stress analysis of the CLT vertical layer In the test, the vertical layers of the CLT three-point bending specimen and the CLT shear specimen were subjected to similar forces, which caused the in-plane shear in the cross-section of the vertical layer. To explore the mechanism of planar shear and crack failure of the CLT vertical layer, stress analysis of the vertical layer was applied to obtain the principal stress, the maximum and minimum shear stresses, as well as the normal stress and shear stress on any section. Then the test results of the crack orientation and the stress analysis were combined to reveal the failure mode of the CLT vertical layer. The stress circle is an effective method for stress analysis [31]. The magnitude and direction of the principal stress, the maximum and minimum shear stresses, and the normal stress and shear stress on each section can be accurately obtained from the stress circle. Stress analysis of the vertical layer of the CLT three-point bending specimen Part I: Stress components of the vertical layer of the CLT three-point bending specimen As shown in Fig. 5, a coordinate system O-xy was established for the CLT three-point bending specimen. The origin of the coordinates was taken at the center of the beam section at the left support. The positive directions of the horizontal x-axis and the vertical y-axis are as shown in the figure. Stress components at points on the vertical layer of the CLT three-point bending specimen For the CLT three-point bending specimen, three points were taken on the left half-span: left 1, left 2, and left 3, respectively located on the upper edge, neutral axis, and lower edge of the vertical layer on the section. Similarly, three points were taken on the right half-span: right 1, right 2, and right 3. The normal stresses at the points left 1 and right 1 were the maximum compressive stress on the vertical layer of the CLT three-point bending specimen, and the normal stresses at the points left 3 and right 3 were the maximum tensile stress. The stress state at each point could be obtained by stress analysis in material mechanics: the normal stress was obtained from the bending moment analysis and the shear stress was obtained from the shear force analysis. Provisions on the sign of the stress value: for normal stress, + indicates tensile stress and − indicates compressive stress; for shear stress, + indicates that it tends to rotate the stress element (e.g., at point left 1) counter-clockwise, and − indicates clockwise. Provisions on stress direction (Table 3, Table 4): an angle measured counter-clockwise from the positive x-axis is +, and clockwise is −. The above provisions apply to all stress analyses in this paper. Table 3 Stress state at left 1 Table 4 Stress state at point B To ensure that the stress on the CLT vertical layer was elastic stress, half of the maximum load was taken as the load for calculating the stress, as the load–displacement curve was linear when the load was less than half of the maximum value [25]. Therefore, for the A-series specimens, the maximum tensile (compressive) stress on the vertical layer was calculated with a half load of the average maximum load 61.18 kN (Table 2), which was P = 30.59 kN: $$(\sigma_{x} )_{\max } = 0.03457\frac{Pl}{{2bh^{2} }} = 0.1\,{\text{MPa}}{.}$$ The above formula is from reference [30], in which l, b, h correspond to the relevant parameters of the A-series specimens. So the normal stresses on the x-section of left 1 and right 1 were −0.1 MPa.
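As a quick numerical check of the formula above (our sketch; we take l to be the span, 6h = 630 mm for the A-series, an assumption consistent with the span-to-height ratio given earlier):

```python
# Maximum normal stress in the vertical layer, formula from reference [30].
P, l, b, h = 30_590.0, 630.0, 305.0, 105.0   # N, mm, mm, mm (span = 6 * h assumed)
sigma_max = 0.03457 * P * l / (2.0 * b * h**2)
print(round(sigma_max, 2))  # 0.1 MPa, as stated in the text
```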
The normal stresses on the x-section of left 3 and right 3 were +0.1 MPa. The points left 2 and right 2 were located on the neutral axis of the vertical layer, so the normal stresses on their x-sections were 0. According to Eq. 1, the shear stress of the A-series specimens (at the half load P) was 0.66 MPa. So the shear stresses on the x-section of the points left 1, left 2, and left 3 were +0.66 MPa, and those on the y-section were −0.66 MPa. The shear stresses on the x-section of right 1, right 2, and right 3 were −0.66 MPa, and those on the y-section were +0.66 MPa. In Fig. 6, the stress components at left 1 were: on the x-section: σx = −0.1 MPa, τxy = +0.66 MPa; on the y-section: σy = 0, τyx = −0.66 MPa. Part II: Stress circle of the vertical layer of the CLT three-point bending specimen Based on the stress components on the x, y-sections at the point left 1, the stress circle of left 1 was drawn as shown in Fig. 7. The same method can be used to draw stress circles for left 2, left 3, right 1, right 2, and right 3. Stress circle of left 1 (unit: MPa) Part III: Principal stress of the vertical layer of the CLT three-point bending specimen The values of the first principal stress and the third principal stress could be read from the abscissa of E and E1, which were the intersection points of the stress circle and the σ-axis, and the corresponding principal direction could be read from the angle \(\angle DCE\). The stress circle of left 1 (Fig. 7) shows that E (+0.613, 0), E1 (−0.711, 0), \(\angle DCE = - 94.3^\circ\). Therefore, the principal stresses and their corresponding principal directions at left 1 were as follows (Fig. 8): σ1 = +0.613 MPa, direction: −47.2°; σ3 = −0.711 MPa, direction: +42.8°. From the stress circles of the points left 2, left 3, right 1, right 2, and right 3, the principal stress and corresponding principal direction could be obtained similarly. The first principal stresses at the points left 1, left 2, and left 3 of the three-point bending specimen were tensile, and the angles between their direction (the direction of the outward normal of the first principal plane) and the x-axis were −42.8°, −45°, and −47.2°, respectively. The first principal stresses at the points right 1, right 2, and right 3 were also tensile, and the angles between their direction and the x-axis were +42.8°, +45°, and +47.2°, respectively. The angles between the direction of tensile stress and the x-axis on the left half-span were opposite in sign to those on the right half-span, indicating that the first principal plane orientations were symmetrical with respect to the mid-span. This was consistent with the phenomenon that the azimuth orientation of the crack surface on the vertical layer of the CLT three-point bending specimen was symmetrical with respect to the middle span. In addition, for the points on the vertical layer of the CLT three-point bending specimen, the azimuth angle of the first principal plane differed by only 2.2° from that of the points on the neutral axis, and the difference in the magnitude of the first principal stress between the two was only 7.6%. Therefore, the stress state of the CLT vertical layer in the three-point bending test could be approximately treated as a pure shear stress state. Part IV: Maximum and minimum shear stresses of the vertical layer of the CLT three-point bending specimen In Fig.
9, the values of the maximum and minimum shear stresses could be read from the ordinates of the two points F and F1, the points on the stress circle farthest from the σ-axis; they were +0.662 MPa and −0.662 MPa, respectively. The maximum shear stress τ1 was located on the cross-section rotated 2.2° clockwise from the positive x-axis, so the angle was −2.2°. The minimum shear stress τ3 was located on the cross-section rotated 87.8° counter-clockwise from the positive x-axis, so the angle was +87.8°. Maximum and minimum shear stresses at left 1 Similarly, for the points left 2, left 3, right 1, right 2 and right 3, the maximum and minimum shear stresses could also be obtained by performing the same analysis. The stress state (including the first principal stress σ1, the third principal stress σ3, the maximum shear stress τ1, the minimum shear stress τ3) at left 1 is summarized in Table 3. Stress analysis of the vertical layer of the CLT-improved planar shear specimen Point B was taken at the interface between the parallel layer and the vertical layer of the CLT shear specimen (Fig. 2). The x-section of point B was located at the interface between the parallel layer and the vertical layer. The x-axis was perpendicular to the x-section through point B, and its positive direction was along the outer normal. The y-axis was perpendicular to the x-axis, and the section perpendicular to the y-axis through point B was the y-section of point B. The formulas for calculating the normal stress and shear stress were as follows: \(\sigma = P\sin \beta /bl\), \(\tau = P\cos \beta /bl\). Half of the maximum load 47.15 kN (Table 2) measured by the CLT shear test was taken to calculate the normal stress and shear stress at point B on the CLT vertical layer. According to the corresponding average CLT planar shear stress of 0.625 MPa and the relationship \(\sigma = \tau \tan 15^\circ\) (β = 15°) [25], σ could be obtained as 0.167 MPa. In this case: on the x-section: σx = −0.167 MPa, τxy = −0.625 MPa; on the y-section: σy = 0, τyx = +0.625 MPa. The stress components on the x, y-sections of point B are shown in Fig. 10. Stress components at point B Through drawing the stress circle at point B, the values summarized in Table 4 could be obtained. In summary, the stress circle analysis of the vertical layer of the CLT three-point bending specimen and the CLT shear specimen showed that: Under the influence of the maximum normal stress on the vertical layer, the first principal stress obtained by the CLT three-point bending test was 7.6% different from that under pure shear, and the directions of the two differed by 2.2°. The maximum and minimum shear stresses obtained by the bending test were only 0.3% different from those under pure shear, and the orientations of the surfaces on which they acted differed by 2.2°. The first principal stress obtained by the CLT-improved planar shear test was 12.5% different from that under pure shear, and the directions of the two differed by 3.8°. The maximum and minimum shear stresses obtained by the shear test were only 0.9% different from those under pure shear, and the orientations of the surfaces on which they acted differed by 3.8°. The results of the principal stress and the maximum and minimum shear stresses obtained by the three-point bending test and the improved planar shear test showed that the points on the CLT vertical layer in these two tests could be approximated as being in a pure shear stress state.
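The stress-circle readings above can be reproduced numerically. The sketch below is ours, using the textbook plane-stress (Mohr's circle) relations; shear-sign and angle conventions vary between texts, and with the paper's convention point left 1 enters the standard formulas as τxy = −0.66 MPa.

```python
import math

def principal_stress(sx, sy, txy):
    """Principal stresses and the angle (deg from +x) of the sigma_1 axis
    for a plane stress state, via the Mohr's-circle relations."""
    center = 0.5 * (sx + sy)
    radius = math.hypot(0.5 * (sx - sy), txy)
    s1, s3 = center + radius, center - radius
    theta1 = 0.5 * math.degrees(math.atan2(2.0 * txy, sx - sy))
    return s1, s3, theta1

# Point "left 1" of the three-point bending specimen:
s1, s3, theta1 = principal_stress(sx=-0.10, sy=0.0, txy=-0.66)
print(round(s1, 3), round(s3, 3), round(theta1, 1))
# 0.612 -0.712 -47.2  (the paper reads +0.613, -0.711 from the circle)

# Maximum in-plane shear stress, half the circle diameter:
print(round(0.5 * (s1 - s3), 3))  # ~0.662, matching the reading at F and F1
```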
Considering the accuracy of the approximation, the three-point bending test was better than the improved planar shear test. For both the three-point bending test and the improved planar shear test, the stress circle of the CLT vertical layer in the pure shear stress state (ignoring the normal stress) was drawn, and the abscissa and ordinate of specific points on the circumference were obtained. Figure 11 shows the normal stress σα and shear stress τα at points on the CLT vertical layer on the α section in a specific orientation. Here α is the angle through which the stress element is rotated from the positive x-axis; its sign convention is the same as for the stress directions mentioned above. Stress distribution of the vertical layer on the α section under the CLT in-plane shear Failure mode of the CLT vertical layer in-plane shear In material mechanics, the mechanism analyses of the compression failure of cast-iron cylinders and the torsion failure of cast-iron circular shafts are two classic examples of combining stress analysis with the orientation of the failure section [32]. Although the compression load of the cast-iron cylinder produces compression deformation, the failure caused by it is called shear failure, instead of compression failure. Likewise, although the torsion of the cast-iron shaft produces torsional shear deformation, the failure caused by it is called tensile failure, instead of torsional failure. In other words, the type of deformation cannot determine the type of failure. Therefore, in the CLT three-point bending test and the CLT-improved planar shear test, the in-plane shear deformation of the vertical layer could not simply be assumed to produce shear failure. CLT vertical layer crack azimuth angle in the range of 40°–50° A total of 30 three-point bending specimens and planar shear specimens were tested. After the tests, there were 19 specimens with crack azimuth angles of the vertical layer between 40° and 50°, accounting for 63.3% of the total number (Table 1). The principal stress analysis of the CLT vertical layer showed that when the three-point bending test achieved the CLT vertical layer in-plane shear, the first principal stress was tensile, and the orientation of its action surface (first principal plane) was between −42.8° and −47.2° on the left half-span, and between +42.8° and +47.2° on the right half-span. The signs reflected that the first principal plane of the vertical layer was symmetrical on the left half-span and right half-span. When the improved planar shear test achieved the CLT vertical layer in-plane shear, the first principal stress was also tensile, and the orientation of its action surface was +48.8° or −48.8°. This showed that, whether in the three-point bending test or the improved planar shear test, the azimuth angle of the first principal plane of each point on the CLT vertical layer was between 40° and 50°, which coincided with the azimuth range of most crack surfaces of the CLT vertical layer under planar shear. The agreement between the azimuth angle of the crack surface and the azimuth angle of the first principal plane of the CLT vertical layer reached 63.3%. For the local stresses σα and τα on the crack surface of the vertical layer with the crack angle in the range of 40°–50°, σα/σ1 changed from 0.98 to 1, and τα/τ1 changed from 0.17 to 0 (Fig. 11).
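Under the pure-shear idealization, the stresses on a plane whose normal makes angle α with the x-axis reduce to σα = τ·sin 2α and τα = τ·cos 2α, so the ratios used in the failure-mode argument can be tabulated directly. The short sketch below is ours, not the authors':

```python
import math

def ratios(alpha_deg):
    """(sigma_a/sigma_1, tau_a/tau_1) on the alpha-plane under pure shear,
    where sigma_1 = tau_1 = tau."""
    a = math.radians(alpha_deg)
    return math.sin(2 * a), math.cos(2 * a)

for alpha in (0, 22.5, 40, 45, 50, 67.5, 90):
    s, t = ratios(alpha)
    print(f"alpha={alpha:5.1f}  sigma_a/sigma_1={s:+.2f}  tau_a/tau_1={t:+.2f}")
# alpha in [40, 50]: sigma ratio 0.98..1.00, |tau ratio| <= 0.17 -> tension-dominated
# alpha in [0, 22.5] or [67.5, 90]: |tau ratio| 0.71..1.00      -> shear-dominated
```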
Although the vertical layer was approximately in pure shear and underwent shear deformation, since τα was very small, it could not cause the vertical layer to crack along the α plane, so the corresponding failure could not be called shear failure. On the other hand, σα was very large, almost equal to the first principal stress, and it was also tensile. Based on these two findings, it could be considered that the failure of CLT was related to the stretching of the horizontal wood grain of the vertical layer. Within this angle range, the failure mode of CLT was therefore tensile failure. According to the above analysis, the failure mode of specimen C1 (Fig. 12) could be explained as follows. Crack surface and its failure type on the CLT shear specimen C1 There were two cracks on the vertical layer of the CLT shear specimen C1. One of the cracks was at the interface, and wood chips were clearly visible on the cracked interface, so the crack occurred along the interlayer rather than in the glue layer. The azimuth angle of the other crack was 40°, so it was a tensile failure. CLT vertical layer crack azimuth angle in the range of 0°–22.5° or 67.5°–90° The local stresses σα and τα on the crack surface on the vertical layer with crack angle in the range of 0°–22.5° or 67.5°–90° changed as follows: σα/σ1 changed from 0 to 0.71, and τα/τ1 changed from 1 to 0.71. When τα was large and the vertical layer cracked along the action plane of the shear stress, the CLT failure could be considered shear failure. The corresponding specimens were as follows: A13 (with an azimuth angle of 78°) and B4 (with an azimuth angle of 85°). CLT vertical layer crack azimuth angle in the range of 22.5°–40° or 50°–67.5° The specimens with the vertical layer crack angle within this range were: A7 (25°, along the wood ray), A8 (65°, along the wood ray), A10 (25°, neither along the annual ring nor along the wood ray), A18 (30°, along the annual ring tangent, connecting the wood ray), B1 (35°, the wood grain was not clear), B2 (60°, the annual ring line was blurred), C8 (two cracks: 30°, along the wood ray; 30°, along the annual ring), C9 (two cracks: 60°, along the wood ray; 55°, along the annual ring), 9 pieces in total. It should be noted that the definition of direction here is related to the crack morphology. For example, "along the wood ray" means the same as "heart shake". See the definitions of different crack morphologies for more details. For the local stresses of the vertical layer crack angle within this range, τα/τ1 decreased rapidly from 0.71 to 0.17, and the value was not very large. Therefore, it could not be called shear failure. σα/σ1 changed from 0.71 to 0.98. Although σα was relatively large and slowly increased as the azimuth angle increased, it was not certain that this phenomenon was due to tensile failure. This situation just reflected the limitation of applying stress analysis to the failure of the vertical layer with the crack angle in the range of 22.5°–40° or 50°–67.5°. Perhaps it is necessary to take into account the characteristics of wood as an anisotropic material and further analyze the microstructure of the wood. As the tensile properties of wood rays are worse than those of nearby wood fibers, it can be considered that the cracks of the following six specimens were caused by tensile stress: A7, A8, A18, C8 (30°, along the wood ray), C7, C9 (60°, along the wood ray).
However, the cause of the cracks of the following specimens could not yet be determined: A10, B1, B2, C8 (30°, along the annual ring), C9 (55°, along the annual ring). In summary, the specimens with the azimuth angle of the crack surface of 40°–50° accounted for 63.3% of the total number. The specimens with the azimuth angle of 22.5°–40° or 50°–67.5° along wood rays accounted for 20%. These two cases accounted for 83.3% of the total number of specimens, indicating that the failure of the specimens was related to the transverse tensile property of the cross-section of the vertical layer. The specimens with the azimuth angle of 0°–15° or 75°–90° along wood rays only accounted for 6.7%, and their failure mode was shear failure. According to the crack morphologies and shear strength test results of the three-point bending test and the improved planar shear test, this paper carried out research on the in-plane shear failure mode of the CLT panel. The stress state of the CLT vertical layer was determined by analyzing the principal stress and the maximum and minimum shear stresses of points on the vertical layer through the stress circle, and the failure mechanism of the vertical layer was comprehensively analyzed by combining the planar stress state and the crack azimuth angle. The main conclusions were as follows:
1. The planar shear strength of CLT measured by the three-point bending test was highly consistent with that measured by the improved planar shear test. The dispersion of the CLT shear strength obtained by the improved planar shear test was greater than that measured by the three-point bending test.
2. CLT vertical layer in-plane shear in the three-point bending test and the improved planar shear test could be treated as pure shear. For the accuracy of the approximation, the three-point bending test was better than the improved planar shear test.
3. For the CLT vertical layer in the three-point bending test and the improved planar shear test, the orientation of the crack surface was in good agreement with that of the first principal plane; the coinciding specimens accounted for 63.3% of the total.
4. There were two failure modes in CLT vertical layer in-plane shear: tensile failure and shear failure. In this study, 83.3% of the specimens were under tensile failure and 6.7% of the specimens were under shear failure.
In this paper, the planar shear strength test and failure mechanism analysis of a three-layer CLT panel with equal layer thickness were carried out. In the future, it is planned to carry out relevant research on three-layer CLT panels with unequal layer thickness and on five- and seven-layer CLT panels, so as to make the research on the in-plane shear failure mechanism of CLT panels more systematic and complete. The reason why the coefficient of variation of the three-point bending test was smaller than that of the improved planar shear test needs to be further explored. Moreover, the crack morphologies of some specimens in this paper have not been fully explained. In the follow-up study, they will be further analyzed in combination with the orthotropic characteristics and the mesostructure of wood. The datasets used and analyzed during this study are available from the corresponding author upon reasonable request. Abbreviations — CLT: cross-laminated timber; PUR: one-component polyurethane. References: Zhou K, Li HT, Hong CK, Ashraf M, Sayed U, Lorenzo R, Corbi I, Corbi O, Yang D, Zuo YF (2021) Mechanical properties of large-scale parallel bamboo strand lumber under local compression. Constr Build Mater 271:121572.
Predictive control strategy of a gas turbine for improvement of combined cycle power plant dynamic performance and efficiency

Omar Mohamed1, Jihong Wang2, Ashraf Khalil3 & Marwan Limhabrash4

This paper presents a novel strategy for applying model predictive control (MPC) to a large gas turbine power plant, as part of our ongoing research into improving plant thermal efficiency and load–frequency control performance. A generalized state space model of a large gas turbine covering the whole steady operational range is designed with the subspace identification method, using closed-loop data as input to the identification algorithm. The model is then used to develop an MPC that is integrated into the plant's existing control strategy. The principle of the strategy is to feed the reference signals of the pilot gas valve, the natural gas valve, and the compressor pressure ratio controller with the optimized decisions of the MPC, instead of applying the control signals directly. If the set points for the compressor controller and the turbine valves are sent in a timely manner, more kinetic energy is available in the plant to release faster output responses, and the overall system efficiency is improved. Simulation results illustrate the feasibility of the proposed application: it achieves a significant improvement in frequency variations and load following capability, which also translates into an improvement in the overall combined cycle thermal efficiency of around 1.1 % compared with the existing strategy.

In recent years, gas turbines (GT) have reached a primary position in the thermal power generation field because of their fast delivery of power and the availability of natural gas (NG) (Rayaprolu 2009). The GT, either as a simple cycle or integrated with a heat recovery steam generator (HRSG) to form a combined cycle (CC) power plant, has become a very popular generation technology in many countries. The manufacturers of such machines are making great efforts to achieve improved efficiencies and lower pollutant emissions that compete with coal and clean coal technologies. Nowadays, apart from reliability and fuel cost optimization, novel power generation techniques demand much improved load demand tracking, which lessens the frequency variations of the power system while keeping a correspondingly good plant efficiency. It has been noticed that improving the operational performance of the gas turbine can lead to significantly higher cycle efficiency and better dynamic performance; achieving better compression ratios and maintaining the exhaust gas temperature within certain limits despite stochastic load variations is likely to deliver just that improvement. In the developed countries, the reported efficiency of CC power plants improved from 35 % in 1960 to nearly 60 % by 2000 (Rayaprolu 2009). In general, the thermal power plant literature has often suggested optimal and predictive control theories that meet with wide acceptance in industry and power plants (Lee and Ramirez 2001). In particular, several articles have been published on CC power plant control that sought to optimize combined cycle power plants with regard to efficiency and load following capability (Saez et al. 2007; Matsumoto et al. 1996; Lalor et al. 2005).
A fuzzy predictive control based on genetic algorithms has been developed for power plant gas turbines; it provides the optimal dynamic set point for the regulatory level and contributes to capturing the nonlinearity of the plant (Saez et al. 2007). Start-up process optimization by an expert system has been proposed under NOx emission regulation and management of machine life (Matsumoto et al. 1996). The influence of gas turbine short-term dynamics on the performance of frequency control can also be investigated through suitable modelling, providing advance knowledge of frequency excursions in the grid (Lalor et al. 2005). However, there are other important factors with a direct influence on the system output performance and the grid frequency that are studied in this paper and not considered in many papers. For instance, in the previous literature the emphasis in fuel flow changes was given only to the natural gas (NG) valve position; however, the pilot gas valve position can also be manipulated to stabilize the flames in the premix and ensure steady combustion at all times. In addition, the compressor pressure ratio signal (the ratio of the compressor discharge pressure to the inlet atmospheric pressure) passes through the compressor pressure limit controller to influence the inlet guide vane (IGV) pitch controller, which in turn affects the combustion process and the flow of air to the GT. If the system automation is upgraded with the potential to correct such signals by the MPC, these signals are optimized in advance, which means reduced process variations while keeping a faster load following capability thanks to the higher stored/kinetic energy in the plant. This can also make the CCGT work close to its optimum efficiency by letting the MPC expand the pressure ratio and the firing temperature. These potent motivations have to be investigated on gas turbines through advanced techniques of system identification and control system design. However, without digital simulation, the gap between theory and practice cannot easily be bridged in this task. A model has been developed and verified via an identification technique that is assessed and published in our research (Mohamed et al. 2015). The scientific contribution of this paper is to integrate MPC with the developed identified model representing the real GT, with emphasis on the strategic influences discussed above, for the target of performance enhancement.

The present paper is organized as follows. "Combined (deterministic/stochastic) subspace identification" section clarifies the technique of subspace identification. "The application of subspace identification to gas turbine process" section discusses the model of the real gas turbine developed by the subspace identification method for different operating regions; simulation results of the identification and verification procedure show the model's accuracy and its capability of reflecting the key variables of the turbine. "Predictive controller design and implementation" section presents the designed model predictive controller to be applied to the system. "Simulation results" section then presents simulation scenarios that demonstrate the advantage of the proposed upgraded strategy. Finally, the paper concludes the work with suggested further opportunities for future research.

Combined (deterministic/stochastic) subspace identification

Theoretical foundation for subspace identification

Some applied linear algebra may be necessary to simplify the description of the subspace identification method.
Subspace identification is based on the tools of singular value decomposition and oblique projection. The reader is highly recommended to refer to the text (Meyer 2000) for more details.

Singular value decomposition

Singular value decomposition (SVD) is a matrix analysis tool that facilitates the subspace identification method. It simply states that an m × n matrix M can be dissected into three matrices: two orthogonal matrices and a diagonal matrix that contains the singular values of the main matrix as nonzero diagonal elements. Though it applies to either a real or a complex matrix, it is assumed in our application that the matrices are real. Then, for every M \(\in\) R m×n of rank r, there are orthogonal matrices \(U \in R^{m \times m}\), \(V \in R^{n \times n}\) and a diagonal matrix \(S_{r \times r} = {\text{diag}}(\sigma_{1} ,\sigma_{2} , \ldots ,\sigma_{r} )\) such that

$${\mathbf{M}} = {\mathbf{U}}\left( {\begin{array}{*{20}l} {\mathbf{S}} \hfill &\quad {\mathbf{0}} \hfill \\ {\mathbf{0}} \hfill &\quad {\mathbf{0}} \hfill \\ \end{array} } \right)_{{{\mathbf{m}} \times {\mathbf{n}}}} {\mathbf{V}}^{{\mathbf{T}}}$$

The factorization in Eq. (1) is known as the singular value decomposition of M. The columns of U and V are called the left-hand and right-hand singular vectors of M, respectively. For matrix computations and analysis refer to Meyer (2000) and Mohamed et al. (2014).

Orthogonal projection and oblique projection

Suppose that we have the subspaces \(\mathcal{V}\) and \(\mathcal{W}\); then the orthogonal projection of the row space of \(\mathcal{V}\) onto the row space of \(\mathcal{W}\) is formulated as follows (Meyer 2000; Ruscio 2009; Overschee and Moore 1996):

$$\mathcal{V} /\mathcal{W} = \mathcal{ V W}^{{{\dag }}} \mathcal{W }$$

where † stands for the Moore–Penrose pseudo-inverse, which facilitates the concept of orthogonal projection and is defined as

$$\mathcal{W}^{{\dag }} = (\mathcal{W}^{\rm T} \mathcal{W})^{{{ - 1}}} \mathcal{W}^{\rm T}$$

The oblique projection of the row space of matrix \(\mathcal{V}\) onto the row space of matrix \(\mathcal{M}\) along the row space of matrix \(\mathcal{W}\) is defined as

$$\mathcal{V} /_{\mathcal{W}} \mathcal{M} = \left[ {\mathcal{V} /\mathcal{W}^{ \bot } } \right] \cdot \left[ {\mathcal{M} /\mathcal{W}^{ \bot } } \right]^{{{\dag }}} \cdot \mathcal{M}$$

where \(\mathcal{W}^{ \bot }\) is the orthogonal projection onto the null space of \(\mathcal{W}\), such that \(\mathcal{W}^{ \bot } \cdot \mathcal{W} = 0.\) In the identification of combined systems, the deterministic part is identified by means of projection and singular value decomposition (Meyer 2000). In general, an instrument matrix is multiplied by both sides of the extended state space model to remove the stochastic part and the input vector, so that the extended observability matrix and the state sequence can be obtained. Once the extended observability matrix is known, the system matrices can be found. This is discussed in detail in the next section.

The subspace identification technique

This section presents the algorithm of the subspace identification method. The method emerged in the late 1980s and has resolved many problems regarding the identification of complex industrial processes (Ruscio 2009; Overschee and Moore 1996). It has been proved capable of identifying the key features of gas turbine power plants (Mohamed et al. 2014). The method of subspace identification is based on the advanced matrix linear algebra techniques of singular value decomposition and oblique projection.
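The factorization and projection formulas in Eqs. (1)–(4) are easy to verify numerically. The short script below is our illustration (not from the paper), using plain NumPy with random matrices standing in for data:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 6))

# Eq. (1): SVD factorization M = U S V^T
U, s, Vt = np.linalg.svd(M)
S = np.zeros((4, 6))
np.fill_diagonal(S, s)              # singular values on the diagonal
assert np.allclose(M, U @ S @ Vt)

# Eq. (2): orthogonal projection of the row space of V onto that of W
V = rng.standard_normal((2, 6))
W = rng.standard_normal((3, 6))
proj = V @ np.linalg.pinv(W) @ W

# Projector onto the orthogonal complement of the row space of W
P_perp = np.eye(6) - np.linalg.pinv(W) @ W
assert np.allclose(W @ P_perp, 0)   # rows of W are annihilated

# Eq. (4): oblique projection of V onto M2 along W
M2 = rng.standard_normal((3, 6))
oblique = (V @ P_perp) @ np.linalg.pinv(M2 @ P_perp) @ M2
print(oblique.shape)                # (2, 6): lives in the row space of M2
```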
The problem is described as follows (Ruscio 2009; Overschee and Moore 1996). A set of data is measured for a combined unknown system of order n:

$$x_{k + 1} = Ax_{k} + Bu_{k} + w_{k}$$

$$y_{k} = Cx_{k} + Du_{k} + v_{k}$$

where w and v are zero-mean white noise innovations with covariance matrix

$${\mathbf{E}}\left[ {\left( {\begin{array}{*{20}l} {w_{p} } \\ {v_{p} } \\ \end{array} } \right)\left( {\begin{array}{*{20}l} {w_{q}^{T} } &\quad {v_{q}^{T} } \\ \end{array} } \right)} \right] = \left( {\begin{array}{*{20}l} Q \hfill &\quad S \hfill \\ {S^{T} } \hfill &\quad R \hfill \\ \end{array} } \right)\delta_{pq}$$

With knowledge of the system inputs/outputs \(u_{k}\) and \(y_{k}\), the problem is to determine/identify:

1. The system order n.

2. The system matrices \(A \in R^{n \times n}\), \(B \in R^{n \times m}\), \(C \in R^{l \times n}\), \(D \in R^{l \times m}\) and the matrices \(Q \in R^{n \times n}\), \(S \in R^{n \times l}\), \(R \in R^{l \times l}\), so that the model output agrees with the main variation trends of the output data.

The system extended state space model can be organized as follows:

$$Y_{f} = O_{i} X_{f} + H_{i}^{d} U_{f} + H_{i}^{s} E_{f} + N_{f}$$

where \(Y_{f}\), \(U_{f}\), \(X_{f}\), \(E_{f}\), \(N_{f}\) denote the future outputs, inputs, states, and noise terms. The matrices are defined as follows:

$$\begin{aligned} & O_{i}\, \mathop = \limits^{def}\, \left[ {\begin{array}{*{20}l} C \\ {CA} \\ \vdots \\ {CA^{i - 1} } \\ \end{array} } \right] \in R^{il \times n} ,\quad H_{i}^{d}\, \mathop = \limits^{def}\, \left[ {\begin{array}{*{20}l} D &\quad 0 &\quad 0 &\quad \cdots &\quad 0 \\ {CB} &\quad D &\quad 0 &\quad \cdots &\quad 0 \\ {CAB} &\quad {CB} &\quad D &\quad \cdots &\quad 0 \\ \vdots &\quad \vdots &\quad \vdots &\quad \ddots &\quad \vdots \\ {CA^{i - 2} B} &\quad {CA^{i - 3} B} &\quad {CA^{i - 4} B} &\quad \cdots &\quad D \\ \end{array} } \right], \\ & H_{i}^{s}\, \mathop = \limits^{def}\,\left[ {\begin{array}{*{20}l} 0 &\quad 0 & \quad 0 &\quad \cdots &\quad 0 \\ C &\quad 0 &\quad 0 &\quad \cdots &\quad 0 \\ {CA} &\quad C &\quad 0 &\quad \cdots &\quad 0 \\ \vdots &\quad \vdots &\quad \vdots &\quad \ddots &\quad \vdots \\ {CA^{i - 2} } &\quad {CA^{i - 3} } &\quad {CA^{i - 4} } &\quad \cdots &\quad 0 \\ \end{array} } \right] \\ \end{aligned}$$

\(H_{i}^{d}\) is known as the deterministic Toeplitz matrix, while \(H_{i}^{s}\) is the stochastic Toeplitz matrix. The data are sampled and organized as block Hankel matrices. The input data matrix for past and future samples is

$$U_{\left. 0 \right|2i - 1}\, \mathop = \limits^{def}\, \left( {\frac{{\begin{array}{*{20}l} {u_{0} } &\quad {u_{1} } &\quad {u_{2} } &\quad \cdots &\quad {u_{j - 1} } \\ {u_{1} } &\quad {u_{2} } &\quad {u_{3} } &\quad \cdots &\quad {u_{j} } \\ \cdots &\quad \cdots &\quad \cdots &\quad \cdots &\quad \cdots \\ {u_{i - 1} } &\quad {u_{i} } &\quad {u_{i + 1} } &\quad \cdots &\quad {u_{i + j - 2} } \\ \end{array} }}{{\begin{array}{*{20}l} {u_{i} } &\quad {u_{i + 1} } &\quad {u_{i + 2} } &\quad \cdots &\quad {u_{i + j - 1} } \\ {u_{i + 1} } &\quad {u_{i + 2} } &\quad {u_{i + 3} } &\quad \cdots &\quad {u_{i + j} } \\ \cdots &\quad \cdots &\quad \cdots &\quad \cdots &\quad \cdots \\ {u_{2i - 1} } &\quad {u_{2i} } &\quad {u_{2i + 1} } &\quad \cdots &\quad {u_{2i + j - 2} } \\ \end{array} }}} \right)\mathop = \limits^{def} \left( {\frac{{U_{p} }}{{U_{f} }}} \right)$$

and the output data matrix is
$$Y_{\left. 0 \right|2i - 1} \mathop = \limits^{def} \left({\frac{{\begin{array}{*{20}l} {y_{0} } &\quad {y_{1} } &\quad {y_{2} } &\quad \cdots &\quad {y_{j - 1} } \\ {y_{1} } &\quad {y_{2} } &\quad {y_{3} } &\quad \cdots &\quad {y_{j} } \\ \cdots &\quad \cdots &\quad \cdots &\quad \cdots &\quad \cdots \\ {y_{i - 1} } &\quad {y_{i} } &\quad {y_{i + 1} } &\quad \cdots &\quad {y_{i + j - 2} } \\ \end{array} }}{{\begin{array}{*{20}l} {y_{i} } &\quad {y_{i + 1} } &\quad {y_{i + 2} } &\quad \cdots &\quad {y_{i + j - 1} } \\ {y_{i + 1} } &\quad {y_{i + 2} } &\quad {y_{i + 3} } &\quad \cdots &\quad {y_{i + j} } \\ \cdots &\quad \cdots &\quad \cdots &\quad \cdots &\quad \cdots \\ {y_{2i - 1} } &\quad {y_{2i} } &\quad {y_{2i + 1} } &\quad \cdots &\quad {y_{2i + j - 2} } \\ \end{array} }}} \right)\mathop = \limits^{def} \left( {\frac{{Y_{p} }}{{Y_{f} }}} \right)$$

where the subscripts p and f denote the past and the future, respectively. The same can be done for the matrix \(E_{i}\). The state vector \(X_{i}\) is defined as

$$X_{i}\, \mathop = \limits^{def}\, \left( {\begin{array}{*{20}l} {x_{i} } &\quad {x_{i + 1} } &\quad {x_{i + 2} } &\quad \ldots &\quad {x_{i + j - 1} } \\ \end{array} } \right).$$

Proof of extended state space model

Looking at the general state space model in (5) and (6), the extended state space model that contains the data matrices can easily be derived:

$$y_{k + 1} = Cx_{k + 1} + Du_{k + 1} + v_{k + 1}$$

Substituting (5) into (8) we get

$$y_{k + 1} = CAx_{k} + CBu_{k} + Cw_{k} + Du_{k + 1} + v_{k + 1}$$

$$x_{k + 2} = Ax_{k + 1} + Bu_{k + 1} + w_{k + 1}$$

Then from (9) and (10) we get

$$y_{k + 2} = CAx_{k + 1} + CBu_{k + 1} + Cw_{k + 1} + Du_{k + 2} + v_{k + 2}$$

Substituting (5) into (11), we get

$$\begin{aligned} y_{k + 2} & = CA^{2} x_{k} + CABu_{k} + CAw_{k} + CBu_{k + 1} \\ & \quad + Cw_{k + 1} + Du_{k + 2} + v_{k + 2} \\ \end{aligned}$$

Organizing the above equations as a matrix equation, with extended data vectors y, u, w and v:

$$\left[ {\begin{array}{*{20}l} {y_{k} } \\ {y_{k + 1} } \\ {y_{k + 2} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}l} C \\ {CA} \\ {CA^{2} } \\ \end{array} } \right]x_{k}\,+ \left[ {\begin{array}{*{20}l} D \hfill &\quad 0 \hfill &\quad 0 \hfill \\ {CB} \hfill &\quad D \hfill &\quad 0 \hfill \\ {CAB} \hfill &\quad {CB} \hfill &\quad D \hfill \\ \end{array} } \right]\left[ {\begin{array}{*{20}l} {u_{k} } \\ {u_{k + 1} } \\ {u_{k + 2} } \\ \end{array} } \right] + \left[ {\begin{array}{*{20}l} 0 \hfill &\quad 0 \hfill &\quad 0 \hfill \\ C \hfill &\quad 0 \hfill &\quad 0 \hfill \\ {CA} \hfill &\quad C \hfill &\quad 0 \hfill \\ \end{array} } \right]\left[ {\begin{array}{*{20}l} {w_{k} } \\ {w_{k + 1} } \\ {w_{k + 2} } \\ \end{array} } \right] + \left[ {\begin{array}{*{20}l} {v_{k} } \\ {v_{k + 1} } \\ {v_{k + 2} } \\ \end{array} } \right]$$

For i instants (block rows) and j experiments (block columns), we obtain the past counterpart of Eq. (7), with the inputs, outputs and states defined as above:

$$Y_{p} = O_{i} X_{p} + H_{i}^{d} U_{p} + H_{i}^{s} E_{p} + N_{p} .$$

Subspace identification algorithms

N4SID stands for Numerical algorithms for Subspace State Space System IDentification (Ruscio 2009). We shall now define the block Hankel matrix that contains the past inputs and outputs, \(W_{p}\):

$$W_{p} = \left( {\frac{{U_{p} }}{{Y_{p} }}} \right),$$

The general steps of subspace identification are (Ruscio 2009; Overschee and Moore 1996):

Calculate the oblique projection. This algorithm is based on oblique projection and singular value decomposition.
The tool of oblique projection is mainly used to extract the term containing the extended observability matrix and the sequence of states [i.e. the term \(O_{i} X_{f}\) in (7)]. Project the row space of the future outputs \(Y_{f}\) onto \(W_{p}\) along the future inputs \(U_{f}\):

$$\zeta_{i} = Y_{f} /_{{U_{f} }} W_{p} \quad {\text{and}}\quad \zeta_{i + 1} = Y_{f}^{ + } /_{{U_{f}^{ + } }} W_{p}^{ + }$$

$$Y_{f} /_{{U_{f} }} W_{p} = \left[ {Y_{f} /U_{f}^{ \bot } } \right] \cdot \left[ {W_{p} /U_{f}^{ \bot } } \right]^{{\dag }} W_{p}$$

where \(U_{f}^{ \bot }\) is the orthogonal complement of the row space of \(U_{f} .\) According to the elementary linear algebra given in Overschee and Moore (1996),

$$\begin{aligned} Y_{f} /U_{f} & = Y_{f} U_{f}^{{\dag }} U_{f} \\ Y_{f} /U_{f}^{ \bot } & = Y_{f} - Y_{f} /U_{f} \\ \end{aligned}$$

There are weighting matrices W 1 and W 2 to be multiplied with the oblique projection to remove the stochastic part (i.e. \(W_{1} \cdot (H_{i}^{s} M_{i} + N_{i} ) \cdot W_{2} = 0\)). The choice of these matrices is relatively arbitrary and differs from one algorithm to another (Ruscio 2009; Overschee and Moore 1996); however, they are chosen to satisfy the equation mentioned.

Calculate the singular value decomposition (SVD) of the weighted oblique projection:

$$W_{1} \xi_{i} W_{2} = USV^{T} = \left( {\begin{array}{*{20}l} {U_{1} } &\quad {U_{2} } \\ \end{array} } \right)\left( {\begin{array}{*{20}l} {S_{1} } &\quad 0 \\ 0 &\quad 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}l} {V_{1}^{T} } \\ {V_{2}^{T} } \\ \end{array} } \right)$$

Estimate the system order by counting the nonzero singular values of S, and partition the SVD accordingly to obtain U 1 and S 1.

Calculate the extended observability matrices \(O_{i}\) and \(O_{i - 1}\) from:

$$O_{i} = W_{1}^{ - 1} U_{1} S_{1}^{1/2}$$

Determine the sequences of states \(X_{i}\) and \(X_{i + 1}\):

$$\begin{aligned} \tilde{X}_{i} & = O_{i}^{{{\dag }}} \zeta_{i} \\ \tilde{X}_{i + 1} & = O_{i - 1}^{{{\dag }}} \zeta_{i + 1} \\ \end{aligned}$$

The superscript † means the Moore–Penrose pseudo-inverse. Up to this step, the system states are known along with the processed input/output data. Then, solve the following linear equation for the system matrices A, B, C and D:

$$\left( {\begin{array}{*{20}l} {\tilde{X}_{i + 1} } \\ {Y_{i\left| i \right.} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}l} A &\quad B \\ C &\quad D \\ \end{array} } \right)\left( {\begin{array}{*{20}l} {\tilde{X}_{i} } \\ {U_{i\left| i \right.} } \\ \end{array} } \right) + \left( {\begin{array}{*{20}l} {\rho_{w} } \\ {\rho_{v} } \\ \end{array} } \right)$$

For the stochastic part, estimate Q, R, and S from the residuals:

$$\left( {\begin{array}{*{20}l} Q \hfill &\quad S \hfill \\ {S^{T} } \hfill &\quad R \hfill \\ \end{array} } \right) = {\mathbf{E}}_{{\mathbf{j}}} \left[ {\left( {\begin{array}{*{20}l} {\rho_{w} } \\ {\rho_{v} } \\ \end{array} } \right) \cdot \left( {\begin{array}{*{20}l} {\rho_{w}^{T} } & {\rho_{v}^{T} } \\ \end{array} } \right)} \right]$$

For more details about the subspace identification method, refer to Overschee and Moore (1996).

The application of subspace identification to gas turbine process

This section discusses the gas turbine process, the preparation of the data signals, and the simulation results of the subspace technique for both the identification and verification phases (IEEE Power System Dynamic Performance Committee 2013; Modau and Pourbeik 2008). The need to develop gas turbine models by alternative advanced techniques is one of the main motivations behind this paper.
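Before turning to the plant data, the steps listed above can be condensed into a short numerical sketch. The code below is our simplified, purely deterministic illustration in Python/NumPy, not the authors' implementation (the paper used MATLAB's System Identification Toolbox): the weighting matrices W 1 and W 2 are taken as identities, the stochastic matrices Q, S and R are not estimated, and the shifted state sequence is used in place of the separate ζ_{i+1} projection:

```python
import numpy as np

def block_hankel(data, rows, j):
    """Block Hankel matrix with `rows` block rows and j columns; data is (N, m)."""
    m = data.shape[1]
    H = np.zeros((rows * m, j))
    for k in range(rows):
        H[k * m:(k + 1) * m, :] = data[k:k + j, :].T
    return H

def subspace_id(u, y, i, n):
    """Deterministic N4SID-style identification of x+ = Ax + Bu, y = Cx + Du."""
    m, l = u.shape[1], y.shape[1]
    j = u.shape[0] - 2 * i + 1
    U = block_hankel(u, 2 * i, j)
    Y = block_hankel(y, 2 * i, j)
    Up, Uf = U[:i * m, :], U[i * m:, :]
    Yp, Yf = Y[:i * l, :], Y[i * l:, :]
    Wp = np.vstack([Up, Yp])                      # past inputs and outputs
    # Projector onto the null space of Uf (so that Uf @ Pi = 0)
    Pi = np.eye(j) - np.linalg.pinv(Uf) @ Uf
    # Oblique projection zeta_i = Yf /_{Uf} Wp
    zeta = (Yf @ Pi) @ np.linalg.pinv(Wp @ Pi) @ Wp
    # SVD; the model order n is read off the singular value spectrum
    Uu, S, _ = np.linalg.svd(zeta, full_matrices=False)
    Oi = Uu[:, :n] @ np.diag(np.sqrt(S[:n]))      # extended observability matrix
    X = np.linalg.pinv(Oi) @ zeta                 # state sequence estimate
    # Least-squares regression for the system matrices
    Xp, Xf = X[:, :-1], X[:, 1:]
    Ui = u[i:i + j - 1, :].T                      # one block row of inputs
    Yi = y[i:i + j - 1, :].T                      # one block row of outputs
    Theta, *_ = np.linalg.lstsq(np.vstack([Xp, Ui]).T,
                                np.vstack([Xf, Yi]).T, rcond=None)
    Theta = Theta.T
    A, B = Theta[:n, :n], Theta[:n, n:]
    C, D = Theta[n:, :n], Theta[n:, n:]
    return A, B, C, D
```

Feeding such a routine the three recorded input channels and three output channels, and inspecting the singular value spectrum to pick the order, mirrors the spirit of the procedure used to obtain the plant model described next.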
The main components of the gas turbine are shown in Fig. 1: a compressor, a combustion chamber, and a turbine. The air required for combustion is supplied by the compressor (process 1–2); in the combustion chamber the air is mixed with the fuel and combusted (process 2–3). In the ideal situation, process 1–2 is isentropic, while process 2–3 is isobaric, i.e. a constant-pressure process. The expansion of the hot combusted gases in the turbine (process 3–4) is isentropic and produces useful work sufficient to drive the rotor of the synchronous generator. Finally, the heat rejection process (4–1) takes place at constant pressure. The exhaust gas from the gas turbine is used to energize the HRSG, which supplies a steam turbine with the necessary superheated steam. The remaining electricity is produced by the generator mechanically coupled to the steam turbine supplied by the HRSG (IEEE Power System Dynamic Performance Committee 2013; Sonntag and Borgnakke 1998).

The data points were collected as discrete-time signals by the industrial team at the General Electricity Company of Libya, at the plant control centre of the North Benghazi Power Plant in the eastern part of Libya. Three sets of data were organized: one set is used for the identification phase, and the other two sets are used for verification. From a control point of view, the inputs of the system to be identified have been selected as the natural gas (NG) control valve position (%), the pilot gas valve position (%), and the compressor outlet pressure (bar). These are later regarded as the manipulated inputs of the MPC, fed as set points to the subsystems of the process. The output signals are the power output of the turbine (MW), the exhaust temperature of the turbine (°C), and the frequency of the grid (Hz). The System Identification Toolbox has been utilized (Ljung 2010). Identification results and a sample of the verification results are presented in Figs. 2, 3, 4, 5, 6 and 7. The model responses agree well with the main trends of the real plant responses. The model parameters appear in the "Appendix".

Fig. 2 Identification of power output signal
Fig. 3 Verification of output power signal
Fig. 4 Identification of exhaust temperature signal
Fig. 5 Verification of exhaust temperature signal
Fig. 6 Identification of grid frequency signal
Fig. 7 Verification of grid frequency signal

Predictive controller design and implementation

Description of a portion of the current automation system

The concept proposed in this work is applied to a specific portion of the existing control unit, namely the portion responsible for the variables of interest. The current situation of the GT from a control point of view was investigated through site visits and the plant operation documentation (Daewoo E&C, Siemens 2009 Approval). A functional block diagram showing the critical components of the control system to be upgraded is given in Fig. 8. It should be mentioned that there are many other control circuits performing other control tasks, but this research considers only the control of the load, the frequency, and the turbine exhaust temperature. The frequency of the generator is presented to the turbine controller through three channels; the average of these three values is selected via 1-out-of-3 logic and considered to be the actual frequency, or speed, for the controller. The load set point is amendable within certain limits by the operation and monitoring (OM) system for the purposes of coordinating unit load.
Through the OM system, the mode of control can be selected: whether power is to be controlled in load operation by the speed controller or in load operation by the load controller. This regulates the gas given to the turbine for the required production of power. Natural gas consumption is measured by flow meters installed upstream of the terminal point of supply. The NG premix control valve is positioned by the valve lift controller of the natural gas premix.

Fig. 8 A portion of interest of the GT control system at North Benghazi Power Plant

The valve lift is read directly into the gas turbine controller. The pilot gas valve position is changed by the valve lift controller of the pilot gas. Both valves have electro-hydraulic actuators, which are operated via two hardware outputs to the two coils of the electro-hydraulic actuators. Undesirable compressor operation is prevented via the compressor pressure ratio limit controller (also known as the π controller). The function of the cooling air limit controller is to rule out modes of operation that lead to an inadequate flow of cooling air to the turbine blades. The exhaust temperature is controlled by the IGVs, which vary the air mass flow into the combustion chamber. The exhaust temperature is measured immediately downstream of the gas turbine via 24 triple-element thermocouples (MBA26CT101A/B/C to MBA26CT124A/B/C) placed around the circumference of the exhaust diffuser. All B and C signals from the 24 triple-element thermocouples are used to calculate the mean turbine outlet temperature. The IGV signal is influenced by two signals: one from the exhaust temperature control and the other from the compressor pressure ratio limit controller (Daewoo E&C, Siemens 2009 Approval). The portion of interest of the automation unit has now been described; the next section presents the proposed upgrade of the control system for the purpose of performance enhancement.

Generalized predictive controller (GPC) design and implementation

Model predictive control is a well-recognized control system technology for controlling power plants and many industrial processes (Bittanti and Poncia 2003; Mohamed et al. 2012; Oluwande 2001; Badgwell and Qin 2003). Although there are many other modern control techniques in the previously published literature (Lee and Ramirez 2001), the state space formulation of multivariable model-based predictive control has been selected for this specific application for several reasons. First of all, the practical constraints on the control signals and on the output signals of the model can easily be considered in the computation algorithm of the controller. In addition, the influence of noises, reflecting the nature of the power plant, can be included in the control system's responsibility. Finally, the world's leading electric utilities use this technology in power plant control. The use of MPC is thus justified. In addition, the simplicity of using linear MPC with provisions for noises and disturbances is valued over the complexity of using nonlinear model predictive control based on a deterministic nonlinear model, because the higher computational demands of nonlinear MPC have led to its rare industrial application in comparison with linear state space MPC (Mohamed et al. 2012). A model-based predictive controller is developed with provisions for unmeasured disturbances and measurement noises, to be used for compensation around the investigated operating conditions.
Here, the linear time-invariant model developed by the subspace method in the second section is used inside the model prediction algorithm. Many models were developed beforehand and tested against each other to select the one giving the most feasible controller performance. Thereafter, the model has been augmented as follows:

$$x(k + 1) = Ax(k) + B_{u} u(k) + B_{v} v(k) + B_{w} w(k)$$

$$\begin{aligned} y_{m} (k) & = y(k) + z(k) \\ & = Cx(k) + D_{u} u(k) + D_{v} v(k) + D_{w} w(k) + z(k) \\ \end{aligned}$$

where v is the measured disturbance vector, w is the unmeasured disturbance vector, and z is the measurement noise on the measured output \(y_{m}\). The adopted predictive control algorithm is quite analogous to the Linear Quadratic Gaussian (LQG) procedure, but with the operational constraints included. The prediction is made over a specific prediction horizon. Then, the optimization program is executed on-line to calculate the optimal values of the manipulated variables that minimize the objective function below:

$$\xi (k) = \sum\limits_{{i = H_{w} }}^{{H_{p} }} {\left\| {y(k + \left. i \right|k) - r(k + \left. i \right|k)} \right\|_{Q}^{2} } + \sum\limits_{{i = 0}}^{{H_{c} - 1}} {\left\| {\Delta u(k + \left. i \right|k)} \right\|_{R}^{2} }$$

The weighting coefficients (Q and R), control interval (H w), prediction horizon (H p) and control horizon (H C) of the performance objective function affect the performance of the controller and the computation time demands. The term r represents the demanded output trajectory used as a reference for the MPC model, and Δu is the change in the control values over the H C steps. The zero-order hold method is then used to convert the control signal from discrete to continuous form before it is fed to the plant. The constraints on the inputs are expressed as minimum and maximum permissible inputs and input moves:

$$u_{\rm min } \le u \le u_{\rm max }$$

$$\Delta u_{\rm min } \le \Delta u \le \Delta u_{\rm max }$$

The optimized control signal is generated by the control law

$$\mathop {\hbox{min} }\limits_{{\Delta u(k), \ldots ,\Delta u(k + H_{c} - 1)}} \xi (k)\quad {\text{subject to }}\left( {22} \right){\text{ and }}\left( {23} \right)$$

Traditionally, a quadratic programming (QP) solver, with an interior point method or an active set method, is used to solve the optimization problem of the MPC. The package of the proposed system is shown in Fig. 9. A quantified description of the upgraded strategy of control should be given in words. In the proposed strategy, one important signal is the NG valve position reference, necessary to supply the fuel energy to the combustion chamber and satisfy the concept of energy balance in the plant thermodynamics. The second signal is the pilot gas valve position reference, which is very important for stabilizing the premix flames. The third is the best compressor pressure ratio, which is corrected by the MPC and fed into the compressor pressure ratio limit controller, and eventually has a positive impact on the IGV pitch controller, the actual compressor outlet pressure, and the necessary air flow. It thereby supplies a higher amount of air flow to the combustion chamber, reduces the fuel consumption, and finally improves the efficiency. Large pressure ratios may cause compressor surge; however, there will not be any such problems, because the practical safe limits (constraints) on the pressure ratio are naturally included in the MPC optimization algorithm and can also be enforced by the pressure ratio limit controller. The integrated system is tested in the next section by simulations in a personal computer environment.
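To make the control law concrete, the sketch below computes one receding-horizon move for the model above (with D = 0, as in the identified plant model). It is our simplified stand-in rather than the plant controller: the state is augmented with the previous input so that the moves Δu become the decision variables, and the constraints are imposed by clipping the unconstrained optimum instead of calling a QP solver:

```python
import numpy as np

def mpc_move(A, B, C, x, u_prev, r, Hp, Hc, Q, R, du_max, u_min, u_max):
    """One GPC step: minimize sum ||y - r||^2_Q + sum ||du||^2_R, apply first move."""
    n, m = B.shape
    l = C.shape[0]
    # Augmented model: z = [x; u_{k-1}], driven by the input move du
    Az = np.block([[A, B], [np.zeros((m, n)), np.eye(m)]])
    Bz = np.vstack([B, np.eye(m)])
    Cz = np.hstack([C, np.zeros((l, m))])
    # Prediction over the horizon: Y = F z + Phi dU
    F = np.vstack([Cz @ np.linalg.matrix_power(Az, k + 1) for k in range(Hp)])
    Phi = np.zeros((Hp * l, Hc * m))
    for i in range(Hp):
        for c in range(min(i + 1, Hc)):
            Phi[i * l:(i + 1) * l, c * m:(c + 1) * m] = \
                Cz @ np.linalg.matrix_power(Az, i - c) @ Bz
    Qb = np.kron(np.eye(Hp), Q)                   # block weighting matrices
    Rb = np.kron(np.eye(Hc), R)
    z = np.concatenate([x, u_prev])
    e = np.tile(r, Hp) - F @ z                    # predicted tracking error
    H = Phi.T @ Qb @ Phi + Rb                     # Hessian of the QP
    dU = np.linalg.solve(H, Phi.T @ Qb @ e)       # unconstrained optimum
    du = np.clip(dU[:m], -du_max, du_max)         # receding horizon: first move only
    return np.clip(u_prev + du, u_min, u_max)
```

With the tuning reported in the next section (a prediction horizon of 40 steps, a control horizon of 5, Q = diag(1, 1, 1) and R = diag(0.2, 0.2, 0.2)), such a routine would return the three optimized set points: the pilot gas valve position, the NG valve position, and the compressor pressure ratio reference.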
Fig. 9 The integration of model predictive control into the associated control of the GT

Simulation results

The MPC tuning is finalized by selecting appropriate values for the prediction horizon H p, the control horizon H C, and the weighting matrices Q and R. The control interval, prediction horizon, and control horizon are found to be 1, 40, and 5 s, respectively, with Q = [1 1 1] and R = [0.2 0.2 0.2]. This selection resulted from simulating different scenarios. In the scenario reported here, a load demand signal extracted from the data recorded during classical closed-loop control is used as one of the set-point signals injected into the MPC, with a higher exhaust temperature set point of 565 °C, while the frequency should be maintained at 50 Hz. From the simulations it can readily be seen that when the plant control strategy is integrated with model predictive control (MPC ON state), the frequency response is smoother than in the MPC OFF case (Fig. 10). Smaller frequency variations are found in the MPC case, which should also be expected from the power response, since faster load following means smaller excursions (Fig. 11). Since gas turbines in general are very sensitive to frequency deviations, and a cascaded trip may occur in case of large frequency disturbances, it is now assured that the possibility of relay malfunctioning for trip signalling is reduced for the gas turbine in the MPC ON state. The plant has a faster load following capability with less settling time, as shown in Fig. 11. This is believed to result from predicting the control signals of the NG valve, the pressure ratio, and the pilot gas valve in advance, which helps in achieving stable and rapid combustion. The response of the temperature is depicted in Fig. 12; a higher temperature is maintained during system operation, which means that more thermal energy is supplied to the HRSG. This set point can also be amended by the operator in case of changes in the ambient conditions, to set a suitable reference temperature for the exhaust gas. The proposed signals of the manipulated variables have relatively different trends in comparison with the classical control system. These corrected signals are produced by the optimization routine in the MPC, and it can be seen that they are feasible and within the operating restrictions reported by the plant manufacturer. Figures 13 and 14 show the fuel preparation signals for the demanded dynamic set points; the MPC gives constrained signals within shorter periods of time. From the signal of the compressor outlet pressure in Fig. 15, the pressure ratio is higher in the case of high load demands with the MPC ON state, and therefore the GT efficiency is higher in the large-load operating regions in which the plant is very likely to operate. As an estimate of how much improvement has been achieved, the reader is recommended to inspect the thermodynamic curves of pressure ratio depicted in Simões-Moreira (2012) and Ibrahim and Rahman (2012). It can be deduced that upgrading the control system with the MPC gives promising performance with regard to better system operation and improved responses. This can be expressed more precisely in the following points. First, the general curve relating the pressure ratio to the overall thermal efficiency is redrawn using MATLAB.
Second, the pressure ratio is found from the signal arrays as the ratio of the compressor outlet pressure to the atmospheric pressure, while the total efficiency (the ratio of the total work done by the CCGT unit to the heat input) is given by its thermodynamic relation with the pressure ratio; this equation is used to plot the thermal efficiency curve for three distinct periods in which the efficiency with MPC may be higher or lower than that without the MPC. The approach used to calculate the average efficiency over the working hours is very simple: the compressor outlet pressure response is used to find the pressure ratio, which in turn is used to plot the efficiency, and the average efficiency is then found for three different periods of plant operation. Curve 1 is shown in Fig. 16. It can readily be seen that the red part of the curve represents the deviation in efficiency from minute 700 to minute 950 (around 250 working minutes): the compressor outlet pressure (and hence the pressure ratio, for 1 bar atmospheric pressure) rose from 14 to 19.3 in the MPC ON state compared with the normal strategy (MPC OFF), with corresponding efficiencies of 52.9 and 56.88 % respectively (a 3.98 % increase in efficiency). Another investigated period of operation is shown in Fig. 17: there is a small decrease in the compressor outlet pressure from 11.7 to 11.5, corresponding to an efficiency change from 50 to 49.7 % (i.e. −0.3 %). Similarly, in Fig. 18 there is a decline in the compressor outlet pressure from 13 to 11.5 bar from the existing to the new strategy, which decreases the efficiency from 51 to 49.9 % (i.e. −1.1 %). The average thermal efficiency for the three equal intervals of operation is ηav = (3.98 − 0.3 − 1.1)/3 = 1.1 %.
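The efficiency figures quoted above can be roughly cross-checked with a few lines of code. The paper does not reproduce its exact efficiency equation, so the standard air-standard Brayton relation η = 1 − r^(−(γ−1)/γ) with γ = 1.4 is assumed here; the operating points reported for the three periods then give values close to the quoted ones (about 53 → 57 % for the first period, against the reported 52.9 → 56.88 %):

```python
# Air-standard Brayton efficiency as a function of the pressure ratio r
gamma = 1.4

def eta(r):
    return 1.0 - r ** (-(gamma - 1.0) / gamma)

periods = [("period 1", 14.0, 19.3), ("period 2", 11.7, 11.5), ("period 3", 13.0, 11.5)]
for name, r_off, r_on in periods:
    d = 100 * (eta(r_on) - eta(r_off))
    print(f"{name}: MPC OFF {100*eta(r_off):.2f} %, "
          f"MPC ON {100*eta(r_on):.2f} %, change {d:+.2f} %")
```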
However, this argument can also be supported by time-based simulations. The NG control valve position is also an obvious indicator of reduced fuel consumption (Fig. 14). The plant situation with the integration of the MPC is more improved than with the existing control strategy without MPC, and it is more suitable for satisfying the grid obligations.

Fig. 11 Power response
Fig. 12 Exhaust temperature responses
Fig. 13 Pilot valve position response
Fig. 14 NG valve response
Fig. 15 Compressor outlet pressure
Fig. 16 Thermal efficiency for the GT cycle, including the efficiency for period 1

A feasible application of model predictive control to the GT and its associated controls is newly proposed. The suggested configuration acts as a corrector for the key controllers and their actuators that affect the system efficiency through the compression ratio, the dynamic power response contribution to the grid, and the heat sent to the HRSG. Simulation studies have shown encouraging results that stimulate further research and practical implementation. As a future recommendation, it is suggested that attention be turned to the HRSG for further enhancement of the system performance. It is also intended to handle some practical issues that are not undertaken by the model and its associated control system. These are mainly the two points on which this research is to be extended:

1. The composition of the natural gas produced in Libya varies from time to time, as shown by the following examples supplied by our industrial partners in the Sirte Oil Company for Production and Manufacturing of Oil & Gas, Technical Dept/Process Engineering & Labs Division. The compositions are from two different dates in different years. On 16/02/2011: methane (81.47 %), ethane (11.15 %), propane (2.7 %), iso-butane (0.5 %), n-butane (0.61 %), iso-pentane (0.19 %), n-pentane (0.13 %), nitrogen (0.52 %), carbon dioxide (2.7 %), hexane (0.03 %). On 19/01/2012: methane (82.98 %), ethane (6.82 %), propane (1.97 %), iso-butane (0.38 %), n-butane (0.48 %), iso-pentane (0.24 %), n-pentane (0.16 %), nitrogen (0.41 %), carbon dioxide (6.5 %), hexane (0.06 %). These variations affect the fuel calorific value and consequently the efficiency of the plant. However, including all the factors that affect the thermal efficiency and/or the dynamic responses is quite complex and difficult to achieve in the present industry.

2. The MPC performance is not very ambitious in some small intervals, such as those mentioned in the previous section, although these small intervals are not likely to persist and the average overall efficiency over the 800 min is higher. These limitations in the controller performance can be handled by adapting the control parameters and/or using nonlinear model predictive control. The value of using nonlinear model predictive control is that it handles the uncertainty associated with the nonlinearity of the plant, thereby improving the control performance but increasing the computational burden on the centralized computer used for control. This practical issue, along with the first one, is the authors' interest for the long-term research work.

Nomenclature

A: system matrix \(\in R^{n \times n}\)
B: system matrix \(\in R^{n \times m}\)
C: system matrix \(\in R^{l \times n}\)
D: system matrix \(\in R^{l \times m}\)
\(E_{f}\): future noise
\(H_{i}^{d}\): deterministic Toeplitz matrix
\(H_{i}^{s}\): stochastic Toeplitz matrix
\(H_{P}\): prediction horizon
\(H_{C}\): control horizon
\(H_{w}\): control interval
\(N_{f}\): future measurement noise
n: system order
\(O_{i}\): extended observability matrix
Q: system matrix \(\in R^{n \times n}\) or weighting coefficient in the MPC
R: system matrix \(\in R^{l \times l}\) or weighting coefficient in the MPC
S: system matrix \(\in R^{n \times l}\)
\(u_{k}\): input at instant k
U: orthogonal matrix
\(U_{f}\): future input
\(U_{p}\): past input
\(v_{k}\): zero-mean white noise innovation at instant k
V: orthogonal matrix
\(\mathcal{V}\): subspace
\(W_{1}\): weighting matrix
\(\mathcal{W}\): subspace
\(x_{k + 1}\): system state at instant k + 1
\(x_{k}\): system state at instant k
\(X_{i}\): sequence-of-states vector
\(X_{f}\): future states
\(y_{k + 1}\): system output at instant k + 1
\(y_{k}\): system output at instant k
\(Y_{f}\): future output
\(Y_{p}\): past output
ξ(k): objective function to be minimized by the MPC

Badgwell TA, Qin SJ (2003) A survey of industrial model predictive control technology. Control Eng Pract 11(7):733–764

Bittanti S, Poncia G (2003) Multivariable model predictive control of a thermal power plant with built-in classical regulations. Int J Control 74(11):1118–1130

Ibrahim TK, Rahman MM (2012) Effect of compression ratio on performance of combined cycle gas turbine. Int J Energy Eng 2(1):9–14

Lalor G, Ritchie J, Flynn D, O'Malley MJ (2005) The impact of combined-cycle gas turbine short-term dynamics on frequency control. IEEE Trans Power Syst 20(3):1456–1464

Lee KY, Ramirez GR (2001) Overall control of fossil-fuel power plants. In: Proceedings of the IEEE Power & Energy Society Winter Meeting. IEEE, Piscataway, pp 1209–1214

Ljung L (2010) System Identification Toolbox: user's guide. The MathWorks Inc.

Matsumoto H, Ohaswa Y, Takahasi S, Akiyama T, Ishiguro O (1996) An expert system for startup optimization of combined cycle power plants under NOx emission regulation and machine life management. IEEE Trans Energy Convers 11(2):414–422

Meyer C (2000) Matrix analysis and applied linear algebra.
Society of Industrial and Applied Mathematics (SIAM), Philadelphia

Modau F, Pourbeik P (2008) Model development and field testing of a heavy duty gas-turbine generator. IEEE Trans Power Syst 23(2):664–672

Mohamed O, Wang J, Al-Duri B, Lv J, Gao Q (2012) Predictive control of coal mills for improving supercritical power plant dynamic responses. In: Proceedings of the 51st IEEE conference on decision and control. IEEE, Hawaii, pp 1709–1714

Mohamed O, Younis D, Abdulwahab H, Anizei A, Elobidei B (2014) Comparative study between subspace method and prediction error method for identification of gas turbine power plant. In: Proceedings of the 6th international congress on ultra modern telecommunications and control systems and workshops, ICUMT 2014. IEEE, St. Petersburg, pp 521–528

Mohamed O, Wang J, Khalil A, Limhabrash M (2015) The application of system identification via canonical variate algorithm to North Benghazi gas turbine power generation system. In: Proceedings of the IEEE Jordan conference on applied electrical engineering and computing technology, AEECT 2015. Dead Sea, pp 1–6

Oluwande GA (2001) Exploitation of advanced control techniques in power generation. Comput Control Eng J 12(2):63–67

Overschee PV, Moore BD (1996) Subspace identification for linear systems: theory, implementation, application. Kluwer, Dordrecht

IEEE Power System Dynamic Performance Committee (2013) Dynamic models of turbine-governors in power system studies. IEEE Power & Energy Society, Technical Report PES-TR1. http://sites.ieee.org/fw-pes/files/2013/01/PES_TR1.pdf

Rayaprolu K (2009) Boilers for power and process. CRC Press, Boca Raton

Ruscio D (2009) Subspace identification: theory and application. Lecture notes, 6th edn. Telemark Institute of Technology

Saez D, Milla F, Vargas LS (2007) Fuzzy predictive supervisory control based on genetic algorithms for gas turbines of combined cycle power plants. IEEE Trans Energy Convers 22(3):689–696

Siemens, Daewoo E&C (2009) Combined cycle gas turbine project: operation manual: Econopac. Document No: MB-V-GT-92-900

Simões-Moreira JR (2012) Fundamentals of thermodynamics applied to thermal power plants. In: Thermal power plant performance analysis. Springer, Berlin

Sonntag RE, Borgnakke C (1998) Fundamentals of thermodynamics. Wiley, New York

OM applied the research methodologies, produced the results and findings, and wrote the paper. JW revised the paper and provided valuable advice for improving the research quality. AK revised the paper to correct technical and scientific errors. ML provided the power plant documents and data and facilitated the permissions to visit the power plant central control room. All authors read and approved the final manuscript.

The authors would like to express their thanks to the Control and Production Department of the General Electricity Company of Libya for providing the necessary documents and data for the power plant under study.

Department of Electrical Engineering, Princess Sumaya University for Technology, P.O. Box 1438, Al-Jubaiha, 11941, Jordan
Omar Mohamed
School of Engineering, University of Warwick, Coventry, CV4 7AL, UK
Jihong Wang
Department of Electrical Engineering, University of Benghazi, P.O. Box 9476, Benghazi, Libya
Ashraf Khalil
Production Department, General Electricity Company of Libya, Tripoli, Libya
Marwan Limhabrash

Correspondence to Omar Mohamed.

Appendix

$$A = \left[ {\begin{array}{*{20}r} 0.662 & -0.622 & -0.054 & 0.104 & 0.004 & 0.137 & -0.037 & 0.082 & 0.028 & 0.026 \\ 0.518 & 0.038 & 0.116 & 0.046 & -0.162 & 0.313 & -0.204 & 0.113 & -0.049 & 0.092 \\ -0.309 & -0.523 & -0.411 & -0.180 & -0.327 & 0.422 & 0.413 & 0.293 & -0.250 & 0.136 \\ 0.046 & 0.117 & -0.682 & 0.447 & -0.090 & 0.425 & -0.137 & 0.438 & 0.337 & 0.324 \\ -0.001 & -0.169 & 0.157 & -0.420 & 0.506 & -0.262 & -0.117 & 0.355 & 0.162 & 0.551 \\ -0.191 & -0.070 & 0.150 & 0.143 & 0.315 & 0.989 & -0.284 & -0.298 & 0.072 & -0.159 \\ -0.007 & 0.077 & 0.208 & -0.120 & 0.029 & -0.274 & 0.133 & -0.249 & 0.813 & -0.029 \\ 0.289 & 0.347 & -0.385 & -0.614 & -0.153 & 0.184 & 0.225 & 0.039 & 0.053 & 0.046 \\ 0.279 & 0.218 & -0.460 & -0.039 & 0.022 & -0.524 & -0.894 & 0.036 & -0.299 & -0.151 \\ 0.058 & 0.060 & -0.008 & 0.084 & 0.275 & 0.505 & 0.239 & 0.566 & -0.358 & 0.367 \\ \end{array} } \right]$$
$$B = \left[ {\begin{array}{*{20}r} -0.378 & -0.167 & 0.485 \\ -0.367 & -0.242 & 0.305 \\ -1.053 & -0.463 & 2.611 \\ -0.784 & -0.343 & 1.675 \\ -0.051 & -0.219 & -1.084 \\ 1.055 & 0.623 & -2.236 \\ 1.907 & 1.074 & -4.922 \\ 3.210 & 1.796 & -6.506 \\ 1.435 & 0.401 & -2.751 \\ -2.971 & -1.530 & 7.583 \\ \end{array} } \right]$$

$$D = \left[ {\begin{array}{*{20}r} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} } \right]$$

$$C = \left[ {\begin{array}{*{20}r} -32.31 & 15.18 & 1.541 & 2.440 & -14.68 & -21.06 & 3.322 & 2.060 & -0.722 & -0.926 \\ 14.42 & -2.019 & -5.686 & 3.799 & 7.479 & -15.42 & 1.735 & 1.480 & 2.418 & -2.350 \\ 1.330 & -0.2145 & -0.603 & 0.508 & -0.709 & -1.336 & 0.186 & 0.148 & 0.126 & -0.210 \\ \end{array} } \right]$$
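As a quick sanity check on the identified model above, the spectral radius of A can be computed; a discrete-time model is stable only if all eigenvalues of A lie strictly inside the unit circle. The snippet below is our check, typed in from the matrix listed above:

```python
import numpy as np

A = np.array([
    [ 0.662, -0.622, -0.054,  0.104,  0.004,  0.137, -0.037,  0.082,  0.028,  0.026],
    [ 0.518,  0.038,  0.116,  0.046, -0.162,  0.313, -0.204,  0.113, -0.049,  0.092],
    [-0.309, -0.523, -0.411, -0.180, -0.327,  0.422,  0.413,  0.293, -0.250,  0.136],
    [ 0.046,  0.117, -0.682,  0.447, -0.090,  0.425, -0.137,  0.438,  0.337,  0.324],
    [-0.001, -0.169,  0.157, -0.420,  0.506, -0.262, -0.117,  0.355,  0.162,  0.551],
    [-0.191, -0.070,  0.150,  0.143,  0.315,  0.989, -0.284, -0.298,  0.072, -0.159],
    [-0.007,  0.077,  0.208, -0.120,  0.029, -0.274,  0.133, -0.249,  0.813, -0.029],
    [ 0.289,  0.347, -0.385, -0.614, -0.153,  0.184,  0.225,  0.039,  0.053,  0.046],
    [ 0.279,  0.218, -0.460, -0.039,  0.022, -0.524, -0.894,  0.036, -0.299, -0.151],
    [ 0.058,  0.060, -0.008,  0.084,  0.275,  0.505,  0.239,  0.566, -0.358,  0.367],
])
rho = max(abs(np.linalg.eigvals(A)))
print(f"spectral radius of A: {rho:.4f} -> {'stable' if rho < 1 else 'unstable'}")
```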
Mohamed, O., Wang, J., Khalil, A. et al. Predictive control strategy of a gas turbine for improvement of combined cycle power plant dynamic performance and efficiency. SpringerPlus 5, 980 (2016). doi:10.1186/s40064-016-2679-2

Model predictive control (MPC)
Natural gas industry
Subspace identification